LVM Basic Operations

Overview of Logical Volume Management (LVM)

Figure 1. LVM logical volume components.

Logical volume management (LVM) creates a layer of abstraction over physical storage, which helps you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly.

In addition, the hardware storage configuration is hidden from the software, so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs.

The following are the components of LVM – they are illustrated in the diagram shown in Figure 1:

  • Physical volume: A physical volume (PV) is a partition or whole disk designated for LVM use. For more information, see the section Physical Volumes (PV) below.
  • Volume group: A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out of which logical volumes can be allocated. For more information, see Managing LVM volume groups.
  • Logical volume: A logical volume (LV) is allocated from the pool of extents of a volume group and is the device on which a file system is created and mounted. For more information, see the section Logical Volumes (LV) below.

Physical Volumes (PV)

The physical volume (PV) is a partition or whole disk designated for LVM use. To use the device for an LVM logical volume, the device must be initialized as a physical volume.

If you are using a whole disk device for your physical volume, the disk must have no partition table. Any existing partition table must be erased, which will effectively destroy all data on that disk. You can remove an existing partition table using the wipefs -a <PhysicalVolume> command as root. For DOS (MBR) disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent.
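
For example, a minimal sketch of preparing a whole disk, assuming the target device is /dev/sdb (illustrative name; the command destroys all data on it):

lsblk /dev/sdb            # double-check that this is really the intended device
sudo wipefs -a /dev/sdb   # wipe all partition-table and filesystem signatures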

Display PV

pvdisplay – Display various attributes of physical volume(s). pvdisplay shows the attributes of PVs, like size, physical extent size, space used for the VG descriptor area, etc.

sudo pvdisplay
--- Physical volume ---
PV Name               /dev/nvme0n1p3
VG Name               vg_name
PV Size               <930.54 GiB / not usable 4.00 MiB
Allocatable           yes 
PE Size               4.00 MiB
Total PE              238216
Free PE               155272
Allocated PE          82944
PV UUID               BKK3Cm-CAzY-rNah-o3Ox-L7FY-aNIY-ysUSIL

pvs – Display information about physical volumes. pvs is a preferred alternative to pvdisplay that shows the same information and more, using a more compact and configurable output format.

sudo pvs
PV             VG        Fmt  Attr PSize   PFree  
/dev/nvme0n1p3 vg_name   lvm2 a--  930.53g 626.53g

The command lsblk also outputs useful information about PV-VG-LV relations.

lsblk -M
       NAME                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
       nvme0n1                               259:0   0 931.5G  0 disk 
       ├─nvme0n1p1                           259:1   0   512M  0 part /boot/efi
       ├─nvme0n1p2                           259:2   0   488M  0 part /boot
       └─nvme0n1p3                           259:3   0 930.5G  0 part 
   ┌─>   ├─vg-lv_1-real                      254:0   0   60G  0 lvm  
   │     │ └─vg-lv_1                         254:1   0   60G  0 lvm  /
┌─>│     ├─vg-lv_2-real                      254:2   0   60G  0 lvm  
│  │     │ └─vg-lv_2                         254:7   0   60G  0 lvm  /home
└┬>│     ├─vg-lv_2_ss_at_date_220912-cow     254:3   0   60G  0 lvm  
 │ └┬>   ├─vg-lv_1_ss_at_date_220908-cow     254:4   0   60G  0 lvm  
 │  │    └─vg-swap_1                         254:6   0    4G  0 lvm  [SWAP]
 │  └──vg-lv_1_ss_at_date_220908             254:5   0   60G  0 lvm  
 └─────vg-lv_2_ss_at_date_220912             254:9   0   60G  0 lvm

pvscan – List all physical volumes. The pvscan command scans all supported LVM block devices in the system for physical volumes. You can define a filter in the lvm.conf file so that this command avoids scanning specific physical volumes.

sudo pvscan
PV /dev/nvme0n1p3   VG vg_name         lvm2 [930.53 GiB / 566.53 GiB free]
Total: 1 [930.53 GiB] / in use: 1 [930.53 GiB] / in no VG: 0 [0   ]
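
As a rough sketch (the device path is illustrative), such a filter is defined in the devices section of /etc/lvm/lvm.conf:

# /etc/lvm/lvm.conf (fragment): accept only the NVMe partition used as a PV, reject everything else
devices {
    filter = [ "a|^/dev/nvme0n1p3$|", "r|.*|" ]
}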

Create PV

The physical volume can be created on an entire drive or on a partition. It is preferable to partition the disk – create a single partition that covers the whole disk and label it as an LVM physical volume – otherwise some tools like CloneZilla won't handle the drive correctly when LVM is installed directly on the drive (without partitioning). How to partition the disk from the CLI is described in the article Linux Basic Partitioning.
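
As a minimal sketch, assuming the new disk is /dev/sdb (see the referenced article for the details), a single LVM partition covering the whole disk can be created with parted:

sudo parted --script /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 lvm on
lsblk /dev/sdb   # the new partition should appear as /dev/sdb1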

Create a Physical volume on the partition /dev/nvme0n1p3.

sudo pvcreate /dev/nvme0n1p3
Physical volume "/dev/nvme0n1p3" successfully created.

Another example: Create Physical volumes on the partitions /dev/vda1 and /dev/vdb1.

sudo pvcreate /dev/vda1 /dev/vdb1
Physical volume "/dev/vda1" successfully created.
Physical volume "/dev/vdb1" successfully created.

Remove PV

If a device is no longer required for use by LVM, you can remove the LVM label by using the pvremove command. Executing the pvremove command zeroes the LVM metadata on an empty physical volume.

sudo pvremove /dev/nvme0n2p1
Labels on physical volume "/dev/nvme0n2p1" successfully wiped.

Then use pvs to verify that the volume has been removed. If the physical volume you want to remove is currently part of a volume group, you must remove it from the volume group with the vgreduce command. For more information, see Removing physical volumes from a volume group.
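
A minimal sketch, assuming the physical volume /dev/nvme0n2p1 is still a member of vg_name but holds no data:

sudo vgreduce vg_name /dev/nvme0n2p1   # remove the empty PV from the volume group
sudo pvremove /dev/nvme0n2p1           # then wipe its LVM label
sudo pvs                               # verify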

Volume Groups (VG)

A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out of which logical volumes (LVs) can be allocated.

Within a volume group, the disk space available for allocation is divided into fixed-size units called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents.

A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
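
The extent size is fixed when the volume group is created. A rough sketch, assuming a 16 MiB extent size is wanted instead of the default 4 MiB (values and device are illustrative):

sudo vgcreate -s 16M vg_name /dev/nvme0n1p3   # -s|--physicalextentsize sets the PE size
sudo vgdisplay vg_name | grep 'PE Size'       # verify the extent size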

Display VG

vgdisplay – Display volume group information. The vgdisplay command displays volume group properties such as size, extents, number of physical volumes, and other options in a fixed form. If you do not specify a volume group, all existing volume groups are displayed.

sudo vgdisplay
--- Volume group ---
VG Name               vg_name
System ID             
Format                lvm2
Metadata Areas        1
Metadata Sequence No  19
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                6
Open LV               4
Max PV                0
Cur PV                1
Act PV                1
VG Size               930.53 GiB
PE Size               4.00 MiB
Total PE              238216
Alloc PE / Size       93184 / 364.00 GiB
Free  PE / Size       145032 / 566.53 GiB
VG UUID               NWWfr6-xxfN-0d22-muPn-Rgqj-vAcA-z5P5jV

The following example displays information only for the volume group vg_name.

sudo vgdisplay vg_name

vgs – Display information about volume groups. It is a preferred alternative to vgdisplay that shows the same information and more, using a more compact and configurable output format.

sudo vgs
VG        #PV  #LV  #SN  Attr    VSize    VFree  
vg_name     1    6    2  wz--n-  930.53g  566.53g

vgscan – Search for all volume groups. vgscan scans all supported LVM block devices in the system for VGs.

sudo vgscan
Found volume group "vg_name" using metadata type lvm2

Create VG

Create a Volume group named vg_name by using the /dev/nvme0n1p3 Physical volume.

sudo vgcreate vg_name /dev/nvme0n1p3
Volume group "vg_name" successfully created.

Another example: Create a Volume group named another_vg by using the /dev/vda1 and /dev/vdb1 Physical volumes.

sudo vgcreate another_vg /dev/vda1 /dev/vdb1
Volume group "another_vg" successfully created.

Extend VG

vgextend – Add physical volumes to a volume group. vgextend adds one or more PVs to a VG. This increases the space available for LVs in the VG. Also, PVs that have gone missing and then returned, e.g. due to a transient device failure, can be added back to the VG without re-initializing them (see --restoremissing).

If the specified PVs have not yet been initialized with pvcreate, vgextend will initialize them. In this case pvcreate options can be used, e.g. --labelsector, --metadatasize, --metadataignore, --pvmetadatacopies, --dataalignment, --dataalignmentoffset.
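
For instance, a minimal sketch of re-adding a PV that temporarily disappeared (the device name is illustrative):

sudo vgextend --restoremissing vg_name /dev/sdc1   # re-add the returned PV without re-initializing it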

Example 1: Occupy a partition that is not yet initialized as a Physical volume.

sudo vgextend vg_name /dev/nvme0n2p1
Physical volume "/dev/nvme0n2p1" successfully created.
Volume group "vg_name" successfully extended.

Example 2: Create a new Physical volume and then extend an existing Volume group named another_vg.

lsblk                               # Check the devices
sudo pvcreate /dev/sdb              # Create a Physical volume at /dev/sdb
sudo pvdisplay                      # Check the list of the Physical volumes
sudo vgextend another_vg /dev/sdb   # Extend the existing VG to occupy the new PV
df -hT                              # Check the list of the available devices
sudo vgdisplay                      # Check the VGs information

Merge VG: Combine VGs

vgmerge – Merge volume groups. To combine two volume groups into a single volume group, use the vgmerge command. You can merge an inactive "source" volume group into an active or an inactive "destination" volume group if the physical extent sizes of the volume groups are equal and the physical and logical volume summaries of both volume groups fit into the destination volume group's limits.

Merge the inactive volume group another_vg into the active or inactive volume group vg_name, giving verbose runtime information.

sudo vgmerge -v vg_name another_vg
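
If the source volume group is still active, a rough sketch of the whole sequence could look like this (assuming the LVs in another_vg are not in use):

sudo vgchange -an another_vg         # deactivate all LVs in the source VG, making it inactive
sudo vgmerge -v vg_name another_vg   # merge it into vg_name
sudo vgs                             # verify: only vg_name should remain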

Reduce VG: Remove PVs from a VG

vgreduce – Remove physical volume(s) from a volume group. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system.

1. If the Physical volume is still being used, migrate the data to another Physical volume from the same Volume group using pvmove.

sudo pvmove /dev/vdb3
/dev/vdb3: Moved: 2.0%
...v/vdb3: Moved: 79...
/dev/vdb3: Moved: 100.0%

2. If there are not enough free extents on the other physical volumes in the existing volume group, create and add a new physical volume, then move the data to it.

sudo pvcreate /dev/vdb4           # Create a new physical volume from /dev/vdb4
sudo vgextend vg_name /dev/vdb4   # Add the newly created PV to the vg_name VG
sudo pvmove /dev/vdb3 /dev/vdb4   # Move the data from /dev/vdb3 to /dev/vdb4

3. Remove the physical volume /dev/vdb3 from the volume group.

sudo vgreduce vg_name /dev/vdb3

4. Verify that the /dev/vdb3 physical volume has been removed from the vg_name volume group with the help of pvs.
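
A minimal sketch of this check (the exact output depends on your setup):

sudo pvs -o pv_name,vg_name,pv_size,pv_free   # /dev/vdb3 should no longer be listed as part of vg_name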

Split VG

vgsplit – Move physical volumes into a new or existing volume group. More details at Red Hat's manual – section Splitting a LVM volume group.

sudo vgsplit myvg yourvg /dev/vdb3
Volume group "yourvg" successfully split from "myvg"

Rename VG

vgrename – Rename a volume group. vgchange – Change volume group attributes.

sudo vgchange --activate n vg_name            # Deactivate the volume group
sudo vgrename vg_name new_vg_name             # Rename an existing volume group
sudo vgrename /dev/vg_name /dev/new_vg_name   # Rename by specifying the full paths to the devices

Remove VG

vgremove – Remove volume group(s). vgremove removes one or more VGs. If LVs exist in the VG, a prompt is used to confirm LV removal.

If one or more PVs in the VG are lost, consider vgreduce --removemissing to make the VG metadata consistent again. Repeat the force option (-ff) to forcibly remove LVs in the VG without confirmation.
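
A rough sketch of removing a whole volume group, assuming the VG is named another_vg and its LVs are no longer needed:

sudo vgchange -an another_vg   # deactivate all LVs in the VG first
sudo vgremove another_vg       # remove the VG (prompts if LVs still exist; -ff forces)
sudo vgs                       # verify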

Logical Volumes (LV)

Create an LVM logical volume in the volume group lvm-vm-group:

sudo lvcreate -n vm-win-01 -L 60g lvm-vm-group
Logical volume "vm-win-01" created.
  • vm-win-01 is the name of the logical volume; it is a matter of your choice.

Check the result:

 lsblk | grep -P 'sdb|lvm'
├─sdb1                           8:17   0     1G  0 part
└─sdb2                           8:18   0 231.9G  0 part
  └─lvm--vm--group-vm--win--01 253:0    0    60G  0 lvm

## Extend a logical volume - note there must be enough free space in the volume group where the existing LV is located
sudo lvextend -L +10G /dev/mapper/vg_name-lv_name
sudo resize2fs /dev/mapper/vg_name-lv_name
df -h
### for a swap volume: sudo mkswap /dev/mapper/xxxx-swap_1
### then edit /etc/fstab and change the UUID if it is mounted by it...

## Extend to all available space and resize the FS at the same time
sudo lvextend --resizefs -l +100%FREE /dev/mapper/vg_name-lv_name
df -h

# Shrink a logical volume - reduce the LV and resize its filesystem in one step
lsblk                                            # check the devices
sudo apt update && sudo apt install gnome-disk-utility   # optional: a GUI disk tool for inspection
ll /dev/kali-x-vg/root                           # inspect the LV device path (a symlink to /dev/dm-0)
sudo lvreduce --resizefs -L 60G kali-x-vg/root   # shrink the LV and its filesystem to 60G
lsblk                                            # check
df -h                                            # check


## Create a new volume group on the newly attached block device /dev/sdc
lsblk                                        # check the devices
sudo pvcreate /dev/sdc                       # create a physical volume
sudo vgcreate vg_name /dev/sdc               # create a volume group
sudo vgdisplay                               # check

## Create a new logical volume
sudo lvcreate -L 5G -n lv_name vg_name       # create a 5GB logical volume
sudo lvdisplay                               # check
sudo mkfs.ext4 /dev/mapper/vg_name-lv_name   # format the logical volume as Ext4

# Mount the new logical volume
sudo mkdir -p /mnt/volume                            # create a mount point
sudo mount /dev/mapper/vg_name-lv_name /mnt/volume   # mount the volume
df -h                                                # check

# Mount it permanently via /etc/fstab (we can mount it by the mapped path, but it is preferable to use the UUID)
sudo blkid /dev/mapper/vg_name-lv_name   # get the UUID (universally unique identifier) of the logical volume
> /dev/mapper/vg_name-lv_name: UUID="b6ddc49d-...-...c90" BLOCK_SIZE="4096" TYPE="ext4"

sudo cp /etc/fstab{,.bak}   # backup the fstab file
sudo umount /mnt/volume     # unmount the new volume
df -h                       # check

sudo nano /etc/fstab
># <file system>            <mount point>  <type>  <options>  <dump>  <pass>
> UUID=b6ddc49d-...-...c90   /mnt/volume    ext4    defaults   0       2

# Test /etc/fstab for errors by remounting everything listed inside
sudo mount -a   # no output means everything is correctly mounted
df -h           # check

Snapshots of Logical Volumes

Create a Snapshot

lvcreate – Create a logical volume.

#Explanation
  • -s, --snapshot – Create a snapshot. Snapshots provide a "frozen image" of an origin LV. The snapshot LV can be used, e.g. for backups, while the origin LV continues to be used. This option can create a COW (copy on write) snapshot, or a thin snapshot (in a thin pool). Thin snapshots are created when the origin is a thin LV and the size option is NOT specified.
  • -n, --name – Specifies the name of a new LV. When unspecified, a default name of lvol# is generated, where # is a number generated by LVM.

Create a 5GB snapshot of a logical volume named lv_name from the volume group named vg_name.

sudo lvcreate /dev/mapper/vg_name-lv_name -L 5G -s -n lv_name_ss_at_date_$(date +%y%m%d)
sudo lvs
LV                         VG        Attr       LSize   Pool Origin    Data%  Meta%  Move Log Cpy%Sync Convert
lv_name                    vg_name owi-aos---  60.00g                                                    
lv_name_ss_at_date_220908  vg_name swi-a-s---   5.00g        lv_name   0.01

Mount a Snapshot

The snapshot can be mounted in order to fetch files in their snapshot state. In this case it is good to mount it in read-only mode.

sudo mkdir /mnt/snapshot
sudo mount -r /dev/mapper/vg_name-lv_name_ss_at_date_220908 /mnt/snapshot

You can use a mounted snapshot (of the root filesystem) to create a backup while the OS is running.
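
As a minimal sketch (the archive path is illustrative), the frozen state can be archived with tar:

sudo tar -czpf /backup/rootfs_at_220908.tar.gz -C /mnt/snapshot .   # back up the snapshot content, preserving permissions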

Restore a Snapshot

Note that after restoring a snapshot with the merge option in use, the snapshot will be removed, so you must create another snapshot for later use of the same state of the logical volume.

For restoring a snapshot we will need to use the commands lvconvert and lvchange.

LvConvert: Convert-merge the snapshot with the source LV

lvconvert – Change logical volume layout.

#Explanation
  • --merge – An alias for --mergethin, --mergemirrors, or --mergesnapshot, depending on the type of LV (logical volume).
  • --mergesnapshot – Merge COW snapshot LV into its origin. When merging a snapshot, if both the origin and snapshot LVs are not open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot LV are activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root filesystem, is deferred until the next time the origin volume is activated. When merging starts, the resulting LV will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as being directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. Multiple snapshots may be specified on the command line or a @tag may be used to specify multiple snapshots be merged to their respective origin.
sudo umount  /lv_name-mountpoint
sudo lvconvert --merge /dev/mapper/vg_name-lv_name_ss_at_date_220908

If it is a root filesystem, and the source logical volume is not operational at all, we need to boot into a live Linux session in order to perform the steps, because it must be unmounted. If it is not a root filesystem, we need to unmount the logical volume first.

LvChange: Deactivate, reactivate and remount the LV

lvchange – Change the attributes of logical volume(s).

#Explanation
  • -a, --activate y|n|ay – Change the active state of LVs. An active LV can be used through a block device, allowing data on the LV to be accessed. y makes LVs active, or available. n makes LVs inactive, or unavailable…
sudo lvchange -an /dev/mapper/vg_name-lv_name      # deactivate the logical volume
sudo lvchange -ay /dev/mapper/vg_name-lv_name      # reactivate the logical volume
sudo lvchange -ay /dev/mapper/vg_name-lv_name -K   # reactivate, ignoring the activation skip flag (set on some snapshots)

If we are working within a live Linux session – i.e. we are restoring a root filesystem – these steps will be performed at the next reboot. Otherwise we need to run the following steps in order to apply the changes without reboot.

sudo mount -a   # remount via /etc/fstab
df -h           # list the mounted devices

Remove a Snapshot

lvremove – Remove logical volume(s) from the system.

sudo lvremove /dev/vg_name/lv_name_ss_at_date_220908
sudo lvremove /dev/mapper/vg_name-lv_name_ss_at_date_220908

References