LVM Basic Operations

<noinclude>{{ContentArticleHeader/Linux_Server|toc=off}}{{ContentArticleHeader/Linux_Desktop}}</noinclude>
== Overview of logical volume management (LVM) ==
* '''''Source: [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/overview-of-logical-volume-management_configuring-and-managing-logical-volumes Red Hat Docs > <nowiki>[9]</nowiki> > Configuring and managing logical volumes > Chapter 1]'''''
Logical volume management (LVM) creates a layer of abstraction over physical storage, which helps you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly.
In addition, the hardware storage configuration is hidden from the software so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs.
The following are the components of LVM - they are illustrated on the diagram shown at {{Media-cite|f|1}}:
* '''Physical volume''': A physical volume (PV) is a partition or whole disk designated for LVM use. For more information, see [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/managing-lvm-physical-volumes_configuring-and-managing-logical-volumes Managing LVM physical volumes].
* '''Volume group''': A volume group (VG) is a collection of physical volumes (PVs), which creates a pool of disk space out of which logical volumes can be allocated. For more information, see [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/managing-lvm-volume-groups_configuring-and-managing-logical-volumes Managing LVM volume groups].
* '''Logical volume''': A logical volume represents a mountable storage device. For more information, see [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/managing-lvm-logical-volumes_configuring-and-managing-logical-volumes Managing LVM logical volumes].
{{Media
| n = 1
| img = Basic-lvm-volume-components-redhat-docs.png
| sz = 1200
| pos = center
| label = f
}}
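
The three layers map directly onto the basic LVM commands covered in the rest of this article; a minimal sketch (the device /dev/sdb1 and the names vg_name and lv_name are illustrative):

sudo pvcreate /dev/sdb1                     # physical volume: mark the partition for LVM use
sudo vgcreate vg_name /dev/sdb1             # volume group: pool the PV's space
sudo lvcreate -L 10G -n lv_name vg_name     # logical volume: carve out a mountable 10 GiB volume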


== Physical volumes (PV) Display ==

pvdisplay – Display various attributes of physical volume(s). pvdisplay shows the attributes of PVs, like size, physical extent size, space used for the VG descriptor area, etc.

sudo pvdisplay
--- Physical volume ---
PV Name               /dev/nvme0n1p3
VG Name              vg_name
PV Size               <930.54 GiB / not usable 4.00 MiB
Allocatable           yes 
PE Size               4.00 MiB
Total PE              238216
Free PE               155272
Allocated PE          82944
PV UUID               BKK3Cm-CAzY-rNah-o3Ox-L7FY-aNIY-ysUSIL

pvs – Display information about physical volumes. pvs is a preferred alternative to pvdisplay that shows the same information and more, using a more compact and configurable output format.

sudo pvs
PV             VG        Fmt  Attr PSize   PFree  
/dev/nvme0n1p3 vg_name   lvm2 a--  930.53g 626.53g
# `lvdisplay` - Display information about a logical volume. Shows the attributes of LVs, like size, read/write status, snapshot information, etc.
# `lvs` is a preferred alternative that shows the same information and more, using a more compact and configurable output format.
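# A minimal usage sketch of the two commands above (output omitted; selecting
# specific columns with `lvs -o` is optional):
sudo lvdisplay                           # verbose, per-LV attribute listing
sudo lvs                                 # compact, column-oriented listing
sudo lvs -o lv_name,vg_name,lv_size      # show only selected columns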

## Extend a VG which is located on /dev/sda so that it also encompasses /dev/sdb
lsblk                              # check the block devices
sudo pvcreate /dev/sdb             # create a physical volume at /dev/sdb
sudo pvdisplay                     # check
sudo vgextend vg_name /dev/sdb     # extend the existing volume group
df -hT                             # check
sudo vgdisplay                     # check
# At this point you can create a new logical volume or extend an existing one

## Extend a logical volume - note that there must be enough free space in the volume group where the existing LV is located
sudo lvextend -L +10G /dev/mapper/vg_name-lv_name
sudo resize2fs /dev/mapper/vg_name-lv_name
df -h
### for a swap LV, run: sudo mkswap /dev/mapper/xxxx-swap_1
### then edit /etc/fstab and change the UUID if it is mounted by UUID...
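### A minimal sketch of extending a swap LV (the name xxxx-swap_1 is illustrative, and the
### swap device must be deactivated while its signature is recreated):
sudo swapoff /dev/mapper/xxxx-swap_1            # deactivate the swap volume
sudo lvextend -L +2G /dev/mapper/xxxx-swap_1    # grow the LV
sudo mkswap /dev/mapper/xxxx-swap_1             # recreate the swap signature (this generates a new UUID)
sudo swapon /dev/mapper/xxxx-swap_1             # reactivate it
sudo blkid /dev/mapper/xxxx-swap_1              # get the new UUID and update /etc/fstab if it is mounted by UUID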

## Extend to all available space and resize the FS at the same time
sudo lvextend --resizefs -l +100%FREE /dev/mapper/vg_name-lv_name
df -h

## Shrink a logical volume (--resizefs shrinks the filesystem first, then the LV)
lsblk                                            # check the devices and the LV layout
sudo apt update && sudo apt install gnome-disk-utility   # optional: a GUI tool to inspect the disks
ls -l /dev/kali-x-vg/root                        # the LV appears as a symlink to /dev/dm-0
sudo lvreduce --resizefs -L 60G kali-x-vg/root   # shrink the LV and its filesystem to 60G
lsblk                                            # check
df -h                                            # check
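### Note: an ext4 filesystem can only be shrunk while it is not mounted, so shrinking the
### root LV as above is typically done from a live Linux session.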


## Create a new volume group on the newly attached block device /dev/sdc
lsblk                              # check
sudo pvcreate /dev/sdc             # create a physical volume
sudo vgcreate vg_name /dev/sdc     # create a volume group
sudo vgdisplay                     # check

## Create a new logical volume
sudo lvcreate vg_name -L 5G -n lv_name        # create a 5 GB logical volume
sudo lvdisplay                                # check
sudo mkfs.ext4 /dev/mapper/vg_name-lv_name    # format the logical volume as Ext4

## Mount the new logical volume
sudo mkdir -p /mnt/volume                           # create a mount point
sudo mount /dev/mapper/vg_name-lv_name /mnt/volume  # mount the volume
df -h                                               # check

## Mount it permanently via /etc/fstab (we can mount it by using the mapped path, but it is preferable to use the UUID)
sudo blkid /dev/mapper/vg_name-lv_name      # get the UUID (universally unique identifier) of the logical volume
> /dev/mapper/vg_name-lv_name: UUID="b6ddc49d-...-...c90" BLOCK_SIZE="4096" TYPE="ext4"

sudo cp /etc/fstab{,.bak}                   # backup the fstab file
sudo umount /mnt/volume                     # unmount the new volume
df -h                                       # check

sudo nano /etc/fstab
># <file system>              <mount point>   <type>  <options>   <dump>  <pass>
> UUID=b6ddc49d-...-...c90    /mnt/volume     ext4    defaults    0       2

## Test /etc/fstab for errors by remounting everything listed inside
sudo mount -a         # no output means everything is correctly mounted
df -h                 # check
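## Optionally, lint /etc/fstab without mounting anything (assumes findmnt from a reasonably recent util-linux)
sudo findmnt --verify     # parse /etc/fstab and report syntax or device problems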

== Snapshots of Logical Volumes ==

=== Create a Snapshot ===

lvcreate – Create a logical volume.

# Explanation
* '''-s, --snapshot''' – Create a snapshot. Snapshots provide a "frozen image" of an origin LV. The snapshot LV can be used, e.g. for backups, while the origin LV continues to be used. This option can create a COW (copy on write) snapshot, or a thin snapshot (in a thin pool). Thin snapshots are created when the origin is a thin LV and the size option is NOT specified.
* '''-n, --name''' – Specifies the name of a new LV. When unspecified, a default name of lvol# is generated, where # is a number generated by LVM.

Create a 5 GB snapshot of the logical volume named lv_name from the volume group named vg_name.

sudo lvcreate /dev/mapper/vg_name-lv_name -L 5G -s -n lv_name_ss_at_date_$(date +%y%m%d)
sudo lvs
LV                         VG        Attr       LSize   Pool Origin    Data%  Meta%  Move Log Cpy%Sync Convert
lv_name                    vg_name owi-aos---  60.00g                                                    
lv_name_ss_at_date_220908  vg_name swi-a-s---   5.00g        lv_name   0.01

=== Mount a Snapshot ===

The snapshot can be mounted in order to fetch files in their snapshot state. In this case it is good to mount it in read-only mode.

sudo mkdir /mnt/snapshot
sudo mount -r /dev/mapper/vg_name-lv_name_ss_at_date_220908 /mnt/snapshot

You can use a mounted snapshot (of the root filesystem) to create a backup while the OS is running.
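
For example, a minimal sketch of archiving the mounted snapshot with tar (the destination directory /backup is illustrative):

sudo mkdir -p /backup
sudo tar -czpf /backup/root-snapshot-$(date +%y%m%d).tar.gz -C /mnt/snapshot .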

=== Restore a Snapshot ===

Note that after restoring a snapshot with the merge option, the snapshot itself is removed, so you must create another snapshot for later use of the same state of the logical volume.

For restoring a snapshot we will need to use the commands lvconvert and lvchange.

==== LvConvert: Convert-merge the snapshot with the source LV ====

lvconvert – Change logical volume layout.

# Explanation
* '''--merge''' – An alias for --mergethin, --mergemirrors, or --mergesnapshot, depending on the type of LV (logical volume).
* '''--mergesnapshot''' – Merge COW snapshot LV into its origin. When merging a snapshot, if both the origin and snapshot LVs are not open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot LV is activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root filesystem, is deferred until the next time the origin volume is activated. When merging starts, the resulting LV will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as being directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. Multiple snapshots may be specified on the command line, or a @tag may be used to specify that multiple snapshots be merged to their respective origin.
sudo umount  /lv_name-mountpoint
sudo lvconvert --merge /dev/mapper/vg_name-lv_name_ss_at_date_220908

If it is a root filesystem, and the source logical volume is not operational at all, we need to boot into a live Linux session in order to perform the steps, because it must be unmounted. If it is not a root filesystem, we just need to unmount the logical volume first.

==== LvChange: Deactivate, reactivate and remount the LV ====

lvchange – Change the attributes of logical volume(s).

# Explanation
* '''-a, --activate y|n|ay''' – Change the active state of LVs. An active LV can be used through a block device, allowing data on the LV to be accessed. y makes LVs active, or available. n makes LVs inactive, or unavailable…
sudo lvchange -an /dev/mapper/vg_name-lv_name
sudo lvchange -ay /dev/mapper/vg_name-lv_name

If we are working within a live Linux session – i.e. we are restoring a root filesystem – these steps will be performed at the next reboot. Otherwise we need to run the following steps in order to apply the changes without a reboot.

sudo mount -a   # remount via /etc/fstab
df -h           # list the mounted devices

=== Remove a Snapshot ===

lvremove – Remove logical volume(s) from the system.

sudo lvremove /dev/vg_name/lv_name_ss_at_date_220908           # by the /dev/<vg>/<lv> path
sudo lvremove /dev/mapper/vg_name-lv_name_ss_at_date_220908    # or, equivalently, by the /dev/mapper path

== Create LVM on an Entire Disk ==

Assuming we want to create an LVM setup and occupy the entire disk space at /dev/sdb. You can use fdisk or gdisk to create a new GPT partition table (which will wipe all existing partitions) and then create the new partitions you need:

* If the device /dev/sdb will be used as a boot device, you can create two partitions: one for /boot – Linux ext4 – and one for the root fs / – Linux LVM.
* If the device /dev/sdb won't be used as a boot device, you can create only one partition for LVM.

Create the partition table:

sudo fdisk /dev/sdb
#fdisk
# create a new empty GPT partition table
Command (m for help): g
# add a new partition
Command (m for help): n
Partition number (1-128, default 1): 1
First sector (2048-488397134, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-488397134, default 488397134): +1G

Created a new partition 1 of type 'Linux filesystem' and of size 1 GiB.
# add a new partition
Command (m for help): n
Partition number (2-128, default 2):
First sector (2099200-488397134, default 2099200):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-488397134, default 488397134):

Created a new partition 2 of type 'Linux filesystem' and of size 231.9 GiB.
# change a partition type
Command (m for help): t
Partition number (1,2, default 2): 2
Partition type (type L to list all types): 31

Changed type of partition 'Linux filesystem' to 'Linux LVM'.
# print the partition table
Command (m for help): p
Disk /dev/sdb: 232.91 GiB, 250059350016 bytes, 488397168 sectors
Disk model: CT250MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 005499CA-7834-434B-9C36-5306537C8CF1

Device       Start       End   Sectors   Size Type
/dev/sdb1     2048   2099199   2097152     1G Linux filesystem
/dev/sdb2  2099200 488397134 486297935 231.9G Linux LVM

Filesystem/RAID signature on partition 1 will be wiped.
#  verify the partition table
Command (m for help): v
No errors detected.
Header version: 1.0
Using 2 out of 128 partitions.
A total of 0 free sectors is available in 0 segments (the largest is (null)).
# write table to disk and exit
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
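
The same layout can also be created non-interactively with sgdisk, the scriptable companion of gdisk (a sketch; it assumes /dev/sdb may be wiped, so double-check the device name first):

sudo sgdisk --zap-all /dev/sdb              # destroy any existing partition tables (destructive!)
sudo sgdisk -n 1:0:+1G -t 1:8300 /dev/sdb   # partition 1: 1 GiB, type Linux filesystem (for /boot)
sudo sgdisk -n 2:0:0 -t 2:8E00 /dev/sdb     # partition 2: the rest of the disk, type Linux LVM
sudo sgdisk -p /dev/sdb                     # print the resulting partition table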

Format the first partition /dev/sdb1 to ext4:

sudo mkfs.ext4 /dev/sdb1
#mke2fs
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: a6a72cfe-46f1-4caa-b114-6bf03f1efe7f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Create an LVM physical volume at the second partition /dev/sdb2:

sudo pvcreate /dev/sdb2
#lvm2 pvc
Physical volume "/dev/sdb2" successfully created.

Create an LVM volume group on /dev/sdb2:

sudo vgcreate lvm-vm-group /dev/sdb2
#lvm2 vgc
Volume group "lvm-vm-group" successfully created
* lvm-vm-group is the name of the group; it is a matter of your choice.

Create an LVM logical volume in lvm-vm-group:

sudo lvcreate -n vm-win-01 -L 60g lvm-vm-group
#lvm2 lvc
Logical volume "vm-win-01" created.
* vm-win-01 is the name of the logical device; it is a matter of your choice.

Check the result:

 lsblk | grep -P 'sdb|lvm'
# Output
├─sdb1                           8:17   0     1G  0 part
└─sdb2                           8:18   0 231.9G  0 part
  └─lvm--vm--group-vm--win--01 253:0    0    60G  0 lvm
