LVM Basic Operations


The following sections provide information about the most commonly used LVM operations and basic concepts. The Logical Volume Manager has many more capabilities – you can get familiar with them by reading the main source of this article: Red Hat Docs (RHEL 9) > Configuring and managing logical volumes.

Overview of Logical Volume Management (LVM)

Figure 1. LVM logical volume components.

Logical volume management (LVM) creates a layer of abstraction over physical storage, which helps you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly.

In addition, the hardware storage configuration is hidden from the software, so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs.

The following are the components of LVM – they are illustrated on the diagram shown in Figure 1:

  • Physical volume: A physical volume (PV) is a partition or whole disk designated for LVM use. For more information, see the Physical Volumes (PV) section below.
  • Volume group: A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out of which logical volumes can be allocated. For more information, see Managing LVM volume groups.
  • Logical volume: A logical volume (LV) is a virtual block storage device, allocated from a volume group, that a file system, database, or application can use. For more information, see the Logical Volumes (LV) section below.

Requirements: the package lvm2 must be installed.

apt install lvm2

Physical Volumes (PV)

The physical volume (PV) is a partition or whole disk designated for LVM use. To use the device for an LVM logical volume, the device must be initialized as a physical volume.

If you are using a whole disk device for your physical volume, the disk must have no partition table. For DOS disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent. Any existing partition table must be erased, which will effectively destroy all data on that disk. You can remove an existing partition table using the wipefs -a <PhysicalVolume> command as root.
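
A minimal sketch of wiping an existing partition table before initializing a whole disk as a PV – the device name /dev/sdb below is only an assumption; double-check it with lsblk first, since wipefs destroys all data on the disk:

lsblk                # identify the correct disk first
wipefs -a /dev/sdb   # wipe the partition-table signatures (destroys all data on /dev/sdb)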

Display PV

pvdisplay – Display various attributes of physical volume(s). pvdisplay shows the attributes of PVs, like size, physical extent size, space used for the VG descriptor area, etc.

pvdisplay
--- Physical volume ---
PV Name               /dev/nvme0n1p3
VG Name               vg_name
PV Size               <930.54 GiB / not usable 4.00 MiB
Allocatable           yes 
PE Size               4.00 MiB
Total PE              238216
Free PE               155272
Allocated PE          82944
PV UUID               BKK3Cm-CAzY-rNah-o3Ox-L7FY-aNIY-ysUSIL

pvs – Display information about physical volumes. pvs is a preferred alternative to pvdisplay that shows the same information and more, using a more compact and configurable output format.

pvs
PV             VG        Fmt  Attr PSize   PFree  
/dev/nvme0n1p3 vg_name   lvm2 a--  930.53g 626.53g
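
Since the pvs output is configurable, here is a small sketch of selecting specific report columns – the fields are standard LVM report fields, but the exact selection and units are only an illustrative assumption:

pvs -o pv_name,vg_name,pv_size,pv_free --units g   # choose the report columns and units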

The command lsblk also outputs useful information about PV-VG-LV relations.

lsblk -M
       NAME                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
       nvme0n1                               259:0   0 931.5G  0 disk 
       ├─nvme0n1p1                           259:1   0   512M  0 part /boot/efi
       ├─nvme0n1p2                           259:2   0   488M  0 part /boot
       └─nvme0n1p3                           259:3   0 930.5G  0 part 
   ┌─>   ├─vg-lv_1-real                      254:0   0   60G  0 lvm  
   │     │ └─vg-lv_1                         254:1   0   60G  0 lvm  /
┌─>│     ├─vg-lv_2-real                      254:2   0   60G  0 lvm  
│  │     │ └─vg-lv_2                         254:7   0   60G  0 lvm  /home
└┬>│     ├─vg-lv_2_ss_at_date_220912-cow     254:3   0   60G  0 lvm  
 │ └┬>   ├─vg-lv_1_ss_at_date_220908-cow     254:4   0   60G  0 lvm  
 │  │    └─vg-swap_1                         254:6   0    4G  0 lvm  [SWAP]
 │  └──vg-lv_1_ss_at_date_220908             254:5   0   60G  0 lvm  
 └─────vg-lv_2_ss_at_date_220912             254:9   0   60G  0 lvm

pvscan – List all physical volumes. The pvscan command scans all supported LVM block devices in the system for physical volumes. You can define a filter in the lvm.conf file so that this command avoids scanning specific physical volumes.

pvscan
PV /dev/nvme0n1p3   VG vg_name         lvm2 [930.53 GiB / 566.53 GiB free]
Total: 1 [930.53 GiB] / in use: 1 [930.53 GiB] / in no VG: 0 [0   ]
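
A hedged sketch of such a filter in /etc/lvm/lvm.conf (inside the devices section) – the accept/reject patterns below are only an example and should be adjusted to your own devices:

devices {
    # accept the NVMe partition used above, reject everything else
    filter = [ "a|^/dev/nvme0n1p3$|", "r|.*|" ]
}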

Create PV

The physical volume could be created on the entire drive or on a partition. It is preferable to partition the disk – create a single partition that covers the whole disk and label it as an LVM physical volume – otherwise some tools like CloneZilla won't handle the drive correctly when LVM is installed directly on the drive (without partitioning). How to partition the disk from the CLI is described in the article Linux Basic Partitioning.

Create a physical volume on the partition /dev/nvme0n1p3.

pvcreate /dev/nvme0n1p3
Physical volume "/dev/nvme0n1p3" successfully created.

Another example: Create physical volumes on the partitions /dev/vda1 and /dev/vdb1.

pvcreate /dev/vda1 /dev/vdb1
Physical volume "/dev/vda1" successfully created.
Physical volume "/dev/vdb1" successfully created.

Remove PV

If a device is no longer required for use by LVM, you can remove the LVM label by using the pvremove command. Executing the pvremove command zeroes the LVM metadata on an empty physical volume.

pvremove /dev/nvme0n2p1
Labels on physical volume "/dev/nvme0n2p1" successfully wiped.

Then use pvs and verify that the required volume is removed. If the physical volume you want to remove is currently part of a volume group, you must remove it from the volume group with the vgreduce command. For more information, see Removing physical volumes from a volume group.
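
A minimal sketch of that case – the volume group and device names are assumptions carried over from the examples above:

vgreduce vg_name /dev/nvme0n2p1   # detach the PV from the volume group first
pvremove /dev/nvme0n2p1           # then wipe its LVM label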

Volume Groups (VG)

A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out of which logical volumes (LVs) can be allocated.

Within a volume group, the disk space available for allocation is divided into fixed-size units called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents.

A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
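
A quick worked example of the extent arithmetic, plus a hedged way to inspect the extent size and counts (the report fields are standard LVM fields, the exact command line is just a sketch):

# With a 4.00 MiB extent size, a 60 GiB LV occupies 60 * 1024 / 4 = 15360 extents.
vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count   # show the PE size and counts per VG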

Display VG

vgdisplay – Display volume group information. The vgdisplay command displays volume group properties such as size, extents, number of physical volumes, and other options in a fixed form. If you do not specify a volume group, all existing volume groups are displayed.

vgdisplay
--- Volume group ---
VG Name               vg_name
System ID             
Format                lvm2
Metadata Areas        1
Metadata Sequence No  19
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                6
Open LV               4
Max PV                0
Cur PV                1
Act PV                1
VG Size               930.53 GiB
PE Size               4.00 MiB
Total PE              238216
Alloc PE / Size       93184 / 364.00 GiB
Free  PE / Size       145032 / 566.53 GiB
VG UUID               NWWfr6-xxfN-0d22-muPn-Rgqj-vAcA-z5P5jV

The following example shows how to display only a specific volume group by passing its name to vgdisplay – here the volume group vg_name.

vgdisplay vg_name

vgs – Display information about volume groups. It is a preferred alternative to vgdisplay that shows the same information and more, using a more compact and configurable output format.

vgs
VG        #PV  #LV  #SN  Attr    VSize    VFree  
vg_name     1    6    2  wz--n-  930.53g  566.53g

vgscan – Search for all volume groups. vgscan scans all supported LVM block devices in the system for VGs.

vgscan
Found volume group "vg_name" using metadata type lvm2

Create VG

Create a Volume group named vg_name by using the /dev/nvme0n1p3 Physical volume.

vgcreate vg_name /dev/nvme0n1p3
Volume group "vg_name" successfully created.

Another example: Create a Volume group named another_vg by using the /dev/vda1 and /dev/vdb1 Physical volumes.

vgcreate another_vg /dev/vda1 /dev/vdb1
Volume group "another_vg" successfully created.

Another example: Create a new physical volume and then a new volume group on a newly attached block device – in this case /dev/sdc.

lsblk                       # check the available block devices
pvcreate /dev/sdc           # create a physical volume
vgcreate vg_name /dev/sdc   # create a volume group on it
vgdisplay                   # check

Extend VG

vgextend – Add physical volumes to a volume group. vgextend adds one or more PVs to a VG. This increases the space available for LVs in the VG. Also, PVs that have gone missing and then returned, e.g. due to a transient device failure, can be added back to the VG without re-initializing them (see --restoremissing).

If the specified PVs have not yet been initialized with pvcreate, vgextend will initialize them. In this case pvcreate options can be used, e.g. --labelsector, --metadatasize, --metadataignore, --pvmetadatacopies, --dataalignment, --dataalignmentoffset.

Example 1: Occupy a partition that is not yet defined as a Physical volume.

vgextend vg_name /dev/nvme0n2p1
Physical volume "/dev/nvme0n2p1" successfully created.
Volume group "vg_name" successfully extended.
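
A hedged sketch of the --restoremissing case mentioned above – re-adding a PV that temporarily disappeared and came back (the device name is an assumption):

vgextend --restoremissing vg_name /dev/nvme0n2p1   # re-add the returned PV without re-initializing it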

Example 2: Create a new Physical volume and then extend an existing Volume group named another_vg.

lsblk                         # Check the devices
pvcreate /dev/sdb             # Create a Physical volume at /dev/sdb
pvdisplay                     # Check the list of the Physical volumes
vgextend another_vg /dev/sdb  # Extend the existing VG to occupy the new PV
df -hT                        # Check the list of the available devices
vgdisplay                     # Check the VGs information

Merge VG: Combine VGs

vgmerge – Merge volume groups. To combine two volume groups into a single volume group, use the vgmerge command. You can merge an inactive "source" volume group into an active or inactive "destination" volume group if the physical extent sizes of the volume groups are equal and the physical and logical volume summaries of both volume groups fit into the destination volume group's limits.

Merge the inactive volume group another_vg into the active or inactive volume group vg_name, giving verbose runtime information.

vgmerge -v vg_name another_vg

Reduce VG: Remove PVs from a VG

vgreduce – Remove physical volume(s) from a volume group. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system.

1. If the Physical volume is still being used, migrate the data to another Physical volume from the same Volume group with pvmove.

pvmove /dev/vdb3
/dev/vdb3: Moved: 2.0%
...v/vdb3: Moved: 79...
/dev/vdb3: Moved: 100.0%

2. If there are not enough free extents on the other physical volumes in the existing volume group, add a new physical volume to it first.

pvcreate /dev/vdb4           # Create a new physical volume from /dev/vdb4
vgextend vg_name /dev/vdb4   # Add the newly created PV to the vg_name VG
pvmove /dev/vdb3 /dev/vdb4   # Move the data from /dev/vdb3 to /dev/vdb4

3. Remove the physical volume /dev/vdb3 from the volume group.

vgreduce vg_name /dev/vdb3

4. Verify that the /dev/vdb3 physical volume has been removed from the vg_name volume group with the help of pvs.
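
A minimal sketch of that check (the column selection is only an assumption):

pvs -o pv_name,vg_name   # /dev/vdb3 should now show no VG (or be absent after a pvremove)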

Split VG

vgsplit – Move physical volumes into a new or existing volume group. More details at Red Hat's manual – section Splitting a LVM volume group.

vgsplit myvg yourvg /dev/vdb3
Volume group "yourvg" successfully split from "myvg"

Rename VG

vgrename – Rename a volume group. vgchange – Change volume group attributes.

vgchange --activate n vg_name            # Deactivate the volume group
vgrename vg_name new_vg_name             # Rename an existing volume group
vgrename /dev/vg_name /dev/new_vg_name   # Rename by specifying the full paths to the devices

Rename a volume group by UUID: This is useful when you have attached to the system a device that has a VG with the same name as one that already exists. Use vgdisplay to identify the target volume group and its UUID.

vgdisplay
vgrename ZTIHAQ-rFck-3IEK-V2mx-mckk-R1zJ-nNKWox old_pve
WARNING: VG name "pve" is used by VGs "ZTIHAQ-rFck-3IEK-V2mx-mckk-R1zJ-nNKWox" and "hXgkkq-MdYE-dobE-VPrC-0c6f-p5Oi-L4XEZu".
Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
Processing VG pve because of matching UUID ZTIHAQ-rFck-3IEK-V2mx-mckk-R1zJ-nNKWox
Volume group "ZTIHAQ-rFck-3IEK-V2mx-mckk-R1zJ-nNKWox" successfully renamed to "old_pve"

Remove VG

vgremove – Remove volume group(s). vgremove removes one or more VGs. If LVs exist in the VG, a prompt is used to confirm LV removal.

If one or more PVs in the VG are lost, consider vgreduce --removemissing to make the VG metadata consistent again. Repeat the force option (-ff) to forcibly remove LVs in the VG without confirmation.
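
A minimal usage sketch, assuming the volume group is named vg_name as in the earlier examples:

vgremove vg_name       # prompts for confirmation if LVs still exist in the VG
vgremove -ff vg_name   # force removal without confirmation (destroys the contained LVs)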

Logical Volumes (LV)

A logical volume is a virtual block storage device that a file system, database, or application can use. To create an LVM logical volume, the physical volumes (PVs) are combined into a volume group (VG). This creates a pool of disk space out of which LVM logical volumes (LVs) can be allocated.

An administrator can grow or shrink logical volumes without destroying data, unlike standard disk partitions. If the physical volumes in a volume group are on separate drives or RAID arrays, then administrators can also spread a logical volume across the storage devices.

The following are the different types of logical volumes (more details); a short creation sketch for some of them is shown after this list:

  • Linear volumes
  • Striped logical volumes
  • RAID logical volumes
  • Thin-provisioned logical volumes (thin volumes) – Using thin-provisioned logical volumes, you can create logical volumes that are larger than the available physical storage…
  • Snapshot volumes
  • Thin-provisioned snapshot volumes
  • Cache volumes
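
A hedged sketch of creating a striped LV and a thin pool with a thin LV – the names, sizes and stripe parameters below are assumptions for illustration only:

# Striped LV across 2 PVs with a 64 KiB stripe size (hypothetical name and size)
lvcreate -i 2 -I 64 -L 100G -n striped_lv vg_name
# Thin pool, then a thin LV whose virtual size may exceed the pool size
lvcreate -L 100G -T vg_name/tpool
lvcreate -V 200G -T vg_name/tpool -n thin_lv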

Create LV

lvcreate – Create an LVM logical volume in the volume group lvm-vm-group:

lvcreate -n vm-win-01 -L 60g lvm-vm-group
Logical volume "vm-win-01" created.
  • vm-win-01 is the name of the logical volume (device); it is a matter of your choice.

Check the result:

 lsblk | grep -P 'sdb|lvm'
├─sdb1                           8:17   0     1G  0 part
└─sdb2                           8:18   0 231.9G  0 part
  └─lvm--vm--group-vm--win--01 253:0    0    60G  0 lvm

Another example: Recovering PVE from a grub "disk not found" error when booting from LVM, by creating a 4 MiB LV.

vgscan
lvcreate -L 4M pve -n grubtemp
  • Assume you have a pve volume group.
  • grubtemp is the name of the new logical volume (LV).

Another example: Create a new logical volume, format it as Ext4, and mount it permanently via /etc/fstab by its UUID.

lvcreate vg_name -L 50G -n lv_name    # create a 50GB logical volume
lvdisplay                             # check
mkfs.ext4 /dev/mapper/vg_name-lv_name # format the logical volume as Ext4
mount /dev/mapper/vg_name-lv_name /mnt/volume # Mount it manually
df -h /mnt/volume                             # Check
umount /mnt/volume
blkid /dev/mapper/vg_name-lv_name
/dev/mapper/vg_name-lv_name: UUID="6cb60ea8-d45e..." BLOCK_SIZE="4096" TYPE="ext4"
nano /etc/fstab
UUID=6cb60ea8-d4... /mnt/volume  ext4  defaults  0 2

Another example: Create a new logical volume, format it as Swap, and mount it permanently via /etc/fstab by vg_name/lv_name.

lvcreate vg_name -L 16G -n swap_1 # create a 16GB logical volume
lvdisplay                         # check
mkswap /dev/mapper/vg_name-swap_1 # format the logical volume as Swap
swapon /dev/mapper/vg_name-swap_1
nano /etc/fstab
/dev/vg_name/swap_1  none  swap  sw  0 0

Rename LV

lvrename – Rename a logical volume.

umount /mnt/my-lv-mount-point
lvrename my-volume-group my-lv-prev my-lv-new

Alternatively, you can rename the LV by its full path.

lvrename /dev/my-volume-group/my-lv-prev /dev/my-volume-group/my-lv-new

Remove LV

lvremove – Remove logical volume(s) from the system.

If the logical volume is currently mounted, unmount the volume.

umount /mnt/my-lv-name-mount-point

If the logical volume exists in a clustered environment, deactivate the logical volume on all nodes where it is active. Use the following command on each such node.

lvchange --activate n my-volume-group/my-lv-name

Remove the logical volume using the lvremove utility.

lvremove /dev/my-volume-group/my-lv-name
Do you really want to remove active logical volume my-volume-group/my-lv-name? [y/n]: y
  Logical volume "my-lv-name" successfully removed

Resize LV (extend|reduce)

lvresize – Resize a logical volume. lvresize resizes an LV in the same way as lvextend and lvreduce.

Example 0: Resize (Extend) an LV by 16MB using specific physical extents.

lvresize -L+16M vg1/lv1 /dev/sda:0-1 /dev/sdb:0-1

Example 1: Extend a logical volume by 10GiB and resize the file-system manually (note: there must be enough free space in the volume group where the existing LV is located).

lvextend -L +10G /dev/mapper/vg_name-lv_name
resize2fs /dev/mapper/vg_name-lv_name
df -h # To see the changes if the LV is mounted

Example 2: Extend a logical volume to occupy 100% of the available space in the VG and resize the file-system at the same time.

lvextend --resizefs -l +100%FREE /dev/mapper/vg_name-lv_name
df -h # To see the changes if the LV is mounted

Example 3: Reduce (Resize|Shrink) the LV and its file-system (from a larger size) to exactly 60GiB. Note this may be dangerous and you may need to take a backup of the file-system before doing it.

lvreduce --resizefs -L 60G vg_name/lv_name
df -h # To see the changes if the LV is mounted

Snapshots of Logical Volumes

Using the LVM snapshot feature, you can create virtual images of a volume, for example, /dev/sda, at a particular instant without causing a service interruption.

When you modify the original volume (the origin) after you take a snapshot, the snapshot feature makes a copy of the modified data area as it was prior to the change so that it can reconstruct the state of the volume. When you create a snapshot, full read and write access to the origin stays possible…

Create a Snapshot

lvcreate – Create a logical volume.

Explanation:
  • -s, --snapshot – Create a snapshot. Snapshots provide a "frozen image" of an origin LV. The snapshot LV can be used, e.g. for backups, while the origin LV continues to be used. This option can create a COW (copy on write) snapshot, or a thin snapshot (in a thin pool). Thin snapshots are created when the origin is a thin LV and the size option is NOT specified.
  • -n, --name – Specifies the name of a new LV. When unspecified, a default name of lvol# is generated, where # is a number generated by LVM.

Create a 5GB snapshot of a logical volume named lv_name from the volume group named vg_name.

lvcreate /dev/mapper/vg_name-lv_name -L 5G -s -n lv_name_ss_at_date_$(date +%y%m%d)
lvs
LV                         VG        Attr       LSize   Pool Origin    Data%  Meta%  Move Log Cpy%Sync Convert
lv_name                    vg_name owi-aos---  60.00g                                                    
lv_name_ss_at_date_220908  vg_name swi-a-s---   5.00g        lv_name   0.01

Mount a Snapshot

The snapshot can be mounted in order to fetch files in their snapshot state. In this case it is good to mount it in read-only mode.

mkdir /mnt/snapshot
mount -r /dev/mapper/vg_name-lv_name_ss_at_date_220908 /mnt/snapshot

You can use a mounted snapshot (of the root filesystem) to create a backup while the OS is running.
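
A minimal sketch of such a backup taken from the mounted snapshot – the archive path and the snapshot name are assumptions carried over from the example above:

tar -czpf /backup/rootfs-snapshot-220908.tar.gz -C /mnt/snapshot .   # archive the frozen state
umount /mnt/snapshot                                                 # unmount the snapshot when done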

Restore a Snapshot

Note that after restoring a snapshot with the merge option, the snapshot will be removed, so you must create another snapshot for a later use of the same state of the logical volume.

To restore a snapshot we will need to use the commands lvconvert and lvchange.

LvConvert: Convert-merge the snapshot with the source LV

lvconvert – Change logical volume layout.

Explanation:
  • --merge – An alias for --mergethin, --mergemirrors, or --mergesnapshot, depending on the type of LV (logical volume).
  • --mergesnapshot – Merge a COW snapshot LV into its origin. When merging a snapshot, if both the origin and snapshot LVs are not open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot LV is activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root filesystem, is deferred until the next time the origin volume is activated. When merging starts, the resulting LV will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as being directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. Multiple snapshots may be specified on the command line or a @tag may be used to specify multiple snapshots to be merged to their respective origin.

umount  /lv_name-mountpoint
lvconvert --merge /dev/mapper/vg_name-lv_name_ss_at_date_220908

If it is a root filesystem, and the source logical volume is not operational at all, we need to boot into a live Linux session in order to perform the steps, because the volume must be unmounted. If it is not a root filesystem, we just need to unmount the logical volume first.

LvChange: Deactivate, reactivate and remount the LV

lvchange – Change the attributes of logical volume(s).

Explanation:
  • -a, --activate y|n|ay – Change the active state of LVs. An active LV can be used through a block device, allowing data on the LV to be accessed. y makes LVs active, or available. n makes LVs inactive, or unavailable…

lvchange -an /dev/mapper/vg_name-lv_name
lvchange -ay /dev/mapper/vg_name-lv_name
lvchange -ay /dev/mapper/vg_name-lv_name -K

If we are working within a live Linux session – i.e. we are restoring a root filesystem – these steps will be performed at the next reboot. Otherwise we need to run the following steps in order to apply the changes without a reboot.

mount -a   # remount via /etc/fstab
df -h      # list the mounted devices

Remove a Snapshot

lvremove – Remove logical volume(s) from the system.

lvremove /dev/vg_name/lv_name_ss_at_date_220908
lvremove /dev/mapper/vg_name-lv_name_ss_at_date_220908

Thin Volumes, Enable Caching and Create Methods…
