LVM Basic Operations

Revision as of 15:10, 8 September 2022

#### https://www.youtube.com/watch?v=MeltFN-bXrQ&t=2090s
# `pvdisplay` - Display various attributes of physical volume(s): size, physical extent size, space used for the VG descriptor area, etc.
# `pvs` is a preferred alternative that shows the same information and more, using a more compact and configurable output format.
sudo pvdisplay

# `lvdisplay` - Display information about a logical volume: size, read/write status, snapshot information, etc.
# `lvs` is a preferred alternative that shows the same information and more, using a more compact and configurable output format.
sudo lvdisplay

## Extend a VG located on /dev/sda to encompass /dev/sdb as well
lsblk                           # check the devices
sudo pvcreate /dev/sdb          # create a physical volume at /dev/sdb
sudo pvdisplay                  # check
sudo vgextend vg_name /dev/sdb  # extend the existing volume group
df -hT                          # check
sudo vgdisplay                  # check
# At this point you can create a new logical volume or extend an existing one

## Extend a logical volume - note there must be enough free space in the volume group where the existing LV is located
sudo lvextend -L +10G /dev/mapper/vg_name-lv_name
sudo resize2fs /dev/mapper/vg_name-lv_name
df -h
### for a swap volume use: sudo mkswap /dev/mapper/xxxx-swap_1
### then edit /etc/fstab and change the UUID if it is mounted by it...
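The swap note above can be exercised safely against a throwaway file instead of the real swap LV; the file name and size below are illustrative assumptions, and on a real system the target would be /dev/mapper/xxxx-swap_1:

```shell
# Demo on a regular file so no root access or real LV is needed
truncate -s 16M swapdemo.img               # throwaway backing file
mkswap swapdemo.img >/dev/null             # format it as swap (warns about file permissions)
uuid=$(blkid -s UUID -o value swapdemo.img)  # the UUID to put into /etc/fstab
echo "new swap UUID: $uuid"
```

After `mkswap` on the real LV, this is the UUID that replaces the old one in /etc/fstab.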

## Extend to all available space and resize the FS at the same time
sudo lvextend --resizefs -l +100%FREE /dev/mapper/vg_name-lv_name
df -h

# Shrink a logical volume
lsblk
sudo apt update
sudo apt install gnome-disk-utility
ll /dev/kali-x-vg
ll /dev/kali-x-vg/root
ll /dev/dm-0
sudo lvreduce --resizefs -L 60G kali-x-vg/root
lsblk
df -h
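Shrinking is the one resize direction that can destroy data, so it is worth confirming that the target size still fits the data before running lvreduce. A minimal sketch of that check, with assumed figures (38 GiB used, 60 GiB target):

```shell
# Assumed values for illustration; on a real system take used_gib from
# something like: df -BG --output=used /mountpoint
used_gib=38      # space currently used on the filesystem
target_gib=60    # the size that will be passed to lvreduce -L
if [ "$used_gib" -lt "$target_gib" ]; then
    echo "OK: shrinking to ${target_gib}G keeps all data"
else
    echo "ABORT: target smaller than used space" >&2
fi
```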


## Create a new volume group on a newly attached block device /dev/sdc
lsblk                           # check
sudo pvcreate /dev/sdc          # create a physical volume
sudo vgcreate vg_name /dev/sdc  # create a volume group
sudo vgdisplay                  # check

## Create a new logical volume
sudo lvcreate vg_name -L 5G -n lv_name      # create a 5GB logical volume
sudo lvdisplay                              # check
sudo mkfs.ext4 /dev/mapper/vg_name-lv_name  # format the logical volume as Ext4

# Mount the new logical volume
sudo mkdir -p /mnt/volume                           # create a mount point
sudo mount /dev/mapper/vg_name-lv_name /mnt/volume  # mount the volume
df -h                                               # check

# Mount it permanently via /etc/fstab (we can mount it by using the mapped path, but it is preferable to use the UUID)
sudo blkid /dev/mapper/vg_name-lv_name  # get the UUID (universally unique identifier) of the logical volume
> /dev/mapper/vg_name-lv_name: UUID="b6ddc49d-...-...c90" BLOCK_SIZE="4096" TYPE="ext4"

sudo cp /etc/fstab{,.bak}  # back up the fstab file
sudo umount /mnt/volume    # unmount the new volume
df -h                      # check

sudo nano /etc/fstab
># <file system>             <mount point>  <type>  <options>  <dump>  <pass>
>  UUID=b6ddc49d-...-...c90  /mnt/volume    ext4    defaults   0       2

# Test /etc/fstab for errors by remounting everything listed inside
sudo mount -a  # no output means everything is correctly mounted
df -h          # check
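The fstab entry above can also be assembled in the shell. The UUID below is a made-up placeholder, not the real one; substitute the value actually printed by blkid:

```shell
# Hypothetical UUID for illustration - use the one reported by blkid
uuid="b6ddc49d-0000-0000-0000-00000000c090"
line="UUID=${uuid} /mnt/volume ext4 defaults 0 2"
echo "$line"
# e.g. append it with: echo "$line" | sudo tee -a /etc/fstab
```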

Snapshots of Logical Volumes

Create a Snapshot

Create a 5GB snapshot of a logical volume named lv_name from a volume group named vg_name.

sudo lvcreate /dev/mapper/vg_name-lv_name -L 5G -s -n lv_name_ss_at_date_$(date +%y%m%d)
sudo lvs
  LV                         VG       Attr        LSize   Pool Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  lv_name                    vg_name  owi-aos---  60.00g
  lv_name_ss_at_date_220908  vg_name  swi-a-s---   5.00g       lv_name   0.01
# The snapshot can be mounted in order to fetch files in their snapshotted state. In this case it is good to mount it in read-only mode
sudo mkdir /mnt/snapshot
sudo mount -r /dev/mapper/vg_name-lv_name_ss_at_date_220908 /mnt/snapshot
# You can use the mounted snapshot (of the root fs) to create a backup while the OS is running...
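The dated suffix in the snapshot name above comes from `date +%y%m%d` (two-digit year, month, day); a quick sketch of how the name is composed:

```shell
# Compose the snapshot name used with lvcreate -s -n above
snap_name="lv_name_ss_at_date_$(date +%y%m%d)"
echo "$snap_name"    # e.g. lv_name_ss_at_date_220908 on 8 September 2022
```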

# Restore. Note that after restoring with the merge option the snapshot will be removed, so you could create another one
# for later usage of the same state of the logical volume.
sudo umount /mnt/volume
sudo lvconvert --merge /dev/mapper/vg_name-lv_name_ss_at_date_220908
# Deactivate and reactivate the logical volume
sudo lvchange -an /dev/mapper/vg_name-lv_name
sudo lvchange -ay /dev/mapper/vg_name-lv_name
# Remount via /etc/fstab
sudo mount -a
df -h

# From man lvconvert:
--merge
       An alias for --mergethin, --mergemirrors, or --mergesnapshot, depending on the type of LV.

--mergesnapshot
       Merge COW snapshot LV into its origin. When merging a snapshot, if both the origin and snapshot LVs are not open, the merge will start
       immediately. Otherwise, the merge will start the first time either the origin or snapshot LV are activated and both are closed. Merging a
       snapshot into an origin that cannot be closed, for example a root filesystem, is deferred until the next time the origin volume is
       activated. When merging starts, the resulting LV will have the origin's name, minor number and UUID. While the merge is in progress, reads
       or writes to the origin appear as being directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed.
       Multiple snapshots may be specified on the command line or a @tag may be used to specify multiple snapshots be merged to their respective
       origin.


# Remove snapshots
sudo lvremove /dev/kali-x-vg/root_at_20220822

Assuming we want to create LVM and to occupy the entire disk space at /dev/sdb. You can use fdisk or gdisk to create a new GPT partition table, which will wipe all partitions, and then create the new partitions you need:

  • If the device /dev/sdb will be used as a boot device, you can create two partitions: one for /boot - Linux ext4, and one for the root fs / - Linux LVM.
  • If the device /dev/sdb won't be used as a boot device, you can create only one partition for LVM.

Create the partition table:

sudo fdisk /dev/sdb
#fdisk
# create a new empty GPT partition table
Command (m for help): g
# add a new partition
Command (m for help): n
Partition number (1-128, default 1): 1
First sector (2048-488397134, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-488397134, default 488397134): +1G

Created a new partition 1 of type 'Linux filesystem' and of size 1 GiB.
# add a new partition
Command (m for help): n
Partition number (2-128, default 2):
First sector (2099200-488397134, default 2099200):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-488397134, default 488397134):

Created a new partition 2 of type 'Linux filesystem' and of size 231.9 GiB.
# change a partition type
Command (m for help): t
Partition number (1,2, default 2): 2
Partition type (type L to list all types): 31

Changed type of partition 'Linux filesystem' to 'Linux LVM'.
# print the partition table
Command (m for help): p
Disk /dev/sdb: 232.91 GiB, 250059350016 bytes, 488397168 sectors
Disk model: CT250MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 005499CA-7834-434B-9C36-5306537C8CF1

Device       Start       End   Sectors   Size Type
/dev/sdb1     2048   2099199   2097152     1G Linux filesystem
/dev/sdb2  2099200 488397134 486297935 231.9G Linux LVM

Filesystem/RAID signature on partition 1 will be wiped.
#  verify the partition table
Command (m for help): v
No errors detected.
Header version: 1.0
Using 2 out of 128 partitions.
A total of 0 free sectors is available in 0 segments (the largest is (null)).
# write table to disk and exit
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
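The interactive fdisk dialog above can also be scripted non-interactively with sfdisk. The sketch below runs against a throwaway disk image file, so it is safe to try; point it at /dev/sdb (and adjust the sizes) for the real thing. E6D6D379-F507-44C2-A23C-238F2A3DF928 is the GPT partition type GUID for "Linux LVM"; the bare L shortcut means "Linux filesystem".

```shell
# Recreate the same two-partition GPT layout on a throwaway image file
truncate -s 100M disk.img
sfdisk disk.img <<'EOF'
label: gpt
,50MiB,L
,,E6D6D379-F507-44C2-A23C-238F2A3DF928
EOF
sfdisk -d disk.img   # dump the resulting partition table to verify
```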

Format the first partition /dev/sdb1 to ext4:

sudo mkfs.ext4 /dev/sdb1
#mke2fs
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: a6a72cfe-46f1-4caa-b114-6bf03f1efe7f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Create an LVM physical volume at the second partition /dev/sdb2:

sudo pvcreate /dev/sdb2
#lvm2 pvc
Physical volume "/dev/sdb2" successfully created.

Create an LVM volume group at /dev/sdb2:

sudo vgcreate lvm-vm-group /dev/sdb2
#lvm2 vgc
Volume group "lvm-vm-group" successfully created
  • lvm-vm-group is the name of the group; it is a matter of your choice.

Create an LVM logical volume at lvm-vm-group:

sudo lvcreate -n vm-win-01 -L 60g lvm-vm-group
#lvm2 lvc
Logical volume "vm-win-01" created.
  • vm-win-01 is the name of the logical volume; it is a matter of your choice.

Check the result:

 lsblk | grep -P 'sdb|lvm'
# Output
├─sdb1                           8:17   0     1G  0 part
└─sdb2                           8:18   0 231.9G  0 part
  └─lvm--vm--group-vm--win--01 253:0    0    60G  0 lvm
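The doubled dashes in the lsblk output are not a typo: device-mapper escapes every '-' in a VG or LV name by doubling it, then joins the two names with a single '-'. A small sketch of that mapping (pure string manipulation, nothing is touched on disk):

```shell
# Reconstruct the /dev/mapper name for VG "lvm-vm-group" / LV "vm-win-01"
vg="lvm-vm-group"
lv="vm-win-01"
dm="/dev/mapper/$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$dm"   # -> /dev/mapper/lvm--vm--group-vm--win--01
```

This is why `/dev/mapper/vg_name-lv_name` works directly earlier in the page: neither vg_name nor lv_name contains a dash, so no doubling occurs there.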
