PVE Adopt a Native Ubuntu Server


This article documents the steps I've performed to convert a native installation of Ubuntu Server 20.04 (installed on a physical SSD drive) to a Proxmox virtual machine. The article is divided into two main parts. The first part covers how to adopt the server by running the OS from the existing SSD installation. The second part describes how to convert the physical installation to a ProxmoxVE LVM image.

Part 1: Adopt the Physical Installation

Figure 1. Proxmox VM initial configuration for Ubuntu Server native installation on a physical SSD.

In this part we will attach one SSD and three HDDs to a Proxmox virtual machine. The operating system is installed on the SSD, so we will attach this drive first.

Create a Proxmox VM

The references below provide a few resources explaining how to create a virtual machine capable of running a Linux OS within ProxmoxVE, so the entire process is not covered here.

Initially, a Proxmox VM with Id:100 was created via the GUI; the final result of this operation is shown in Figure 1.
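
For reference, a rough CLI equivalent of that GUI setup might look like the sketch below – the values mirror the VM's configuration file shown later in this part, and the command is an illustration rather than an exact replay of the GUI dialog:

# Hedged sketch: create a VM roughly matching the configuration used here
qm create 100 --name Ubuntu.SSD --machine q35 --bios ovmf \
  --ostype l26 --cores 8 --sockets 2 --memory 12288 --balloon 4096 \
  --net0 virtio,bridge=vmbr0,firewall=1 --scsihw virtio-scsi-pci --agent 1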

Attach the Physical Drives to the Proxmox VM

The physical drives – 1x SSD and 3x HDD – will be attached to the VM by their Ids. The serial numbers of the devices will also be passed to the VM. The following command should be executed on the Proxmox host's console; it will provide the necessary information for the next step.

lsblk -d -o NAME,SIZE,SERIAL -b | grep -E '^(sdb|sde|sdf|sdg)' | \
awk '{print $3}' | xargs -I{} find /dev/disk/by-id/ -name '*{}' | \
awk -F'[-_]' \
'{if($0~"SSD"){opt=",discard=on,ssd=1"}else{opt=""} {printf "%s\t%s \t \t%s\n",$(NF),$0,opt}}'
2006E28969E3    /dev/disk/by-id/ata-CT250MX500SSD1_2006E28969E3                 ,discard=on,ssd=1
WCC4M1YCNLUP    /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP
WCC4M3SYTXLJ    /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ
JP1572JE180URK  /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK
# Ver.2 of the inline script:
QM=3015 IDX=10 TYPE="sata" DEVICES="(sda|sdb|sdc|sdf)"
lsblk -d -o NAME,SIZE,SERIAL -b | \
grep -E "^${DEVICES}" | \
awk '{print $3}' | \
xargs -I{} find /dev/disk/by-id/ -name '*{}' | \
awk -F'[_]' -v qm=$QM -v idx=$IDX -v type=$TYPE \
'{if($0~"SSD"){opt=",discard=on,ssd=1"}else{opt=""} {printf "qm set %d -%s%d %s,serial=%s%s\n",qm,type,NR-1+idx,$0,$(NF),opt}}'
qm set 3015 -sata10 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ,serial=WD-WCC4M3SYTXLJ,
qm set 3015 -sata11 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP,serial=WD-WCC4M1YCNLUP,
qm set 3015 -sata12 /dev/disk/by-id/ata-TOSHIBA_DT01ACA200_Y4BY48MAS,serial=Y4BY48MAS,
qm set 3015 -sata13 /dev/disk/by-id/ata-SSDPR-CX400-256_GV2024436,serial=GV2024436,discard=on,ssd=1
  • Note: In both cases of the inline script, the auto-detection of the discard=on,ssd=1 options is a naive workaround – it simply matches the string "SSD" within the device Id – so verify the result before applying it.

In the command above, (sdb|sde|sdf|sdg) are the names of the four devices on the host's side, and the output follows the same order. The first column of the output contains the serial numbers of the devices, the second column their Ids, and the third column indicates whether the device is an SSD.
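
Since the auto-detection only matches the string "SSD" in the device Id, it may be safer to double-check which devices are really non-rotational before adding the discard=on,ssd=1 options:

# ROTA=0 marks a non-rotational (SSD) device
lsblk -d -o NAME,ROTA,MODEL,SERIAL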

The next four commands will attach these devices to the virtual machine with Id:100; they use the second column of the above output.

qm set 100 -scsi0 /dev/disk/by-id/ata-CT250MX500SSD1_2006E28969E3
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ
qm set 100 -scsi3 /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK

Tweak the Proxmox VM

Next, some additional tweaks are made directly in the configuration file of the virtual machine with Id:100. The most notable changes are the added entries for the serial numbers of the devices – i.e. ,serial=JP1572JE180URK.

nano /etc/pve/qemu-server/100.conf
agent: 1
balloon: 4096
bios: ovmf
boot: order=scsi0
cores: 8
cpu: kvm64
machine: q35
memory: 12288
meta: creation-qemu=6.1.0,ctime=1647164728
name: Ubuntu.SSD
net0: virtio=DA:0F:18:F7:23:7F,bridge=vmbr0,firewall=1
numa: 1
onboot: 1
ostype: l26
scsi0: /dev/disk/by-id/ata-CT250MX500SSD1_2006E28969E3,discard=on,size=244198584K,ssd=1,serial=2006E28969E3
scsi1: /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP,size=1953514584K,serial=WCC4M1YCNLUP
scsi2: /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ,size=1953514584K,serial=WCC4M3SYTXLJ
scsi3: /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK,size=488386584K,serial=JP1572JE180URK
scsihw: virtio-scsi-pci
shares: 1001
smbios1: uuid=70dc4258-04d7-4255-bec8-0c7d0cd9a0c1
sockets: 2
vcpus: 16
vmgenid: 39632f8b-b9d2-4263-87c3-f6ef0814ab71
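
Alternatively, the same serial= entries could presumably be added with qm set instead of editing the file by hand, for example:

qm set 100 -scsi3 /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK,serial=JP1572JE180URK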

Run the Proxmox VM

Our VM with Id:100 is ready to be started. This could be done either from the GUI or from the command line.

qm start 100

In order to force-stop the VM from the command line, one can do the following. Usually, when the "guest tools" are installed, we do not need to remove the lock file.

rm /var/lock/qemu-server/lock-100.conf
qm stop 100
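
The same can be achieved with the qm unlock subcommand, which removes the lock without touching the file directly:

qm unlock 100
qm stop 100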

Setup the Guest OS Ubuntu Server

Once the guest OS is running, the very first step is to install the QEMU/KVM guest tools.

sudo apt update
sudo apt install qemu-guest-agent
sudo apt install spice-vdagent
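
It may also be worth making sure the agent service is enabled and running, so that ProxmoxVE can query the guest:

sudo systemctl enable --now qemu-guest-agent
systemctl status qemu-guest-agent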

The next step is to reconfigure the network, because the network adapter has changed from a physical one to a virtual one and is not configured. You may use the command ip a to find the adapter's name. Then update the network configuration and apply the new settings, or reboot the guest system entirely. In my case I've done the following.

sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp6s18:
      dhcp4: false
      dhcp6: false
      addresses: [172.16.1.100/24]
      gateway4: 172.16.1.1
      nameservers:
        addresses: [172.16.1.1, 8.8.8.8, 8.8.4.4]
sudo netplan try
sudo netplan apply
sudo netplan --debug apply  # in order to debug a problem
sudo systemctl restart systemd-networkd.service networking.service
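
A quick way to verify the result is to check the assigned address and the gateway reachability:

ip -br a                # the address should appear on enp6s18
ping -c 3 172.16.1.1    # the gateway from the netplan config above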

The next step is to check whether the attached HDDs are recognized correctly. In my case the partitions are mounted by UUID and everything was fine.

lsblk -o NAME,SERIAL,UUID | grep '^.*sd.'
sda    2006E28969E3
├─sda1                84D2-3327
└─sda2                09e7c8ed-fb55-4a44-8be4-18b1696fc714
sdb    JP1572JE180URK
├─sdb1                A793-2FD3
└─sdb2                b6ec78e6-bf2e-4cf0-bb1e-0042645af13b
sdc    WCC4M3SYTXLJ
└─sdc1                9aa5e7bb-a0ca-463c-856b-86a3b9b4944a
sdd    WCC4M1YCNLUP
└─sdd1                5d0b4c70-8936-41ef-8ac5-eba6f3142d9d
sudo nano /etc/fstab
/dev/disk/by-uuid/84D2-3327 /boot/efi vfat defaults 0 0
/dev/disk/by-uuid/09e7c8ed-fb55-4a44-8be4-18b1696fc714 / ext4 discard,async,noatime,nodiratime,errors=remount-ro 0 1
/dev/disk/by-uuid/9aa5e7bb-a0ca-463c-856b-86a3b9b4944a /mnt/2TB-1 auto nosuid,nodev,nofail 0 0
/dev/disk/by-uuid/5d0b4c70-8936-41ef-8ac5-eba6f3142d9d /mnt/2TB-2 auto nosuid,nodev,nofail 0 0
/dev/disk/by-uuid/b6ec78e6-bf2e-4cf0-bb1e-0042645af13b /mnt/500GB auto nosuid,nodev,nofail 0 0
/swap.4G.img   none    swap    sw  0   0

Finally, I've removed the IOMMU kernel options, which are no longer needed on this instance.

sudo nano /etc/default/grub
#GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 irqpoll vfio-pci.ids=10de:1021"
GRUB_CMDLINE_LINUX=""
sudo nano /etc/modprobe.d/blacklist.conf
# Blacklist NVIDIA Modules: 'lsmod', 'dmesg' and 'lspci -nnv'
#blacklist nvidiafb
#blacklist nouveau
#blacklist nvidia_drm
#blacklist nvidia
#blacklist rivatv
#blacklist snd_hda_intel
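
Note that these edits take effect only after regenerating the Grub configuration and the initramfs:

sudo update-grub
sudo update-initramfs -u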

In addition, you can inspect the system logs for any errors or unnecessary modules – like drivers for Wi-Fi adapters and so on.

cat /var/log/syslog | grep -Pi 'failed|error'
sudo journalctl -xe

Part 2: Convert the Physical Installation to a ProxmoxVE Image

This section uses the "Clonezilla Live CD" migration approach described in the ProxmoxVE documentation (see the references below). All performed steps are shown in Video 1 and the essential parts are explained within the sub-sections below the video. The migration will be done from the VM with Id:100 to a VM with Id:201.

Video 1. Convert an existing Ubuntu Server native installation into a ProxmoxVE image.

Prepare the Source OS for Migration

In this step, the swap file of the source installation of Ubuntu Server 20.04 is shrunk and some unnecessary services are removed. Finally, as shown in Video 1[0:15], a backup copy of the entire source SSD is created with the help of Clonezilla, running on a dedicated virtual machine to which the SSD is attached along with one additional HDD that stores the backup.
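
As a minimal sketch – assuming the /swap.4G.img path from the fstab shown earlier and a 4G target size – shrinking the swap file could be done like this:

sudo swapoff /swap.4G.img
sudo rm /swap.4G.img
sudo fallocate -l 4G /swap.4G.img   # recreate it with the smaller size
sudo chmod 600 /swap.4G.img
sudo mkswap /swap.4G.img
sudo swapon /swap.4G.img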

Shrink the Source Partition before the Migration

For this task the tool gparted is used on another virtual machine running Ubuntu Desktop – Video 1[0:45]. You can use a live session of Ubuntu Desktop instead. Again, the SSD is attached to the VM as shown in the previous section.

Test Whether the Source Instance is Operational

Power up the virtual machine with Id:100 and check whether everything works fine after the shrinking of the main partition – Video 1[3:30].

Prepare an Existing Drive as a New LVM Pool

Actually, this step is not part of the migration, but it wasn't done in advance, so I've decided to include it here – Video 1[4:00]. Our new VM's image will be created on local-lvm-CX400.
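
For completeness, a minimal sketch of turning a drive into an LVM-thin pool and registering it in ProxmoxVE – assuming a hypothetical device /dev/sdX, a volume group named cx400 and the storage name local-lvm-cx400 used by this VM:

pvcreate /dev/sdX
vgcreate cx400 /dev/sdX
# leave some headroom for the thin pool metadata
lvcreate -l 95%FREE -T cx400/data
pvesm add lvmthin local-lvm-cx400 --vgname cx400 --thinpool data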

Create the Destination Virtual Machine

In this step a new virtual machine with Id:201 is created, which will be used further – Video 1[4:55]. The (image) drive of this VM is the destination point of the transfer that will be performed in the next steps. An important point: the new (virtual) disk should be larger than the partition shrunk in the step above.

Note that the new VM Id:201 uses SeaBIOS while the source VM Id:100 runs in UEFI mode, so after the partition transfer we will need to do a boot repair in order to run the OS in BIOS mode. The final configuration file of the new VM with Id:201 is shown below – some of the listed options are not configured yet.

nano /etc/pve/qemu-server/201.conf
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0
cipassword: user$password
ciuser: user
cores: 8
cpu: kvm64,flags=+pcid
ide0: local-lvm-cx400:vm-201-cloudinit,media=cdrom
ipconfig0: ip=172.16.1.100/24,gw=172.16.1.1
machine: q35
memory: 12288
meta: creation-qemu=6.1.0,ctime=1647864994
name: Ubuntu.Server
net0: virtio=E2:88:AC:E7:FD:C2,bridge=vmbr0,firewall=1
numa: 1
onboot: 1
ostype: l26
scsi0: local-lvm-cx400:vm-201-disk-0,discard=on,iothread=1,size=84G,ssd=1
scsi1: /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP,size=1953514584K,serial=WCC4M1YCNLUP
scsi2: /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ,size=1953514584K,serial=WCC4M3SYTXLJ
scsi3: /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK,size=488386584K,serial=JP1572JE180URK
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=21f444b4-0222-402c-9b88-36h397021aa4
sockets: 2
sshkeys: ssh-rsa...
vmgenid: a9183f95-00a4-40be-8bad-59ed6164d144

Install Ubuntu Server as Placeholder

Install Ubuntu Server on the new VM with Id:201 as a placeholder – use the entire disk, and don't use the LVM option – Video 1[6:45].

Setup Clonezilla for the Destination VM

This step is where the actual transfer setup begins – Video 1[8:15]. For this purpose, Clonezilla's ISO image is attached to the (virtual) CD drive of VM Id:201 and marked as the primary boot device.
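
Assuming the ISO has already been uploaded to the local storage, this could also be done from the host's console – the ISO file name below is just an example:

qm set 201 -ide2 local:iso/clonezilla-live-stable-amd64.iso,media=cdrom
qm set 201 -boot order=ide2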

Within Clonezilla's wizard the following options are chosen:

  • Choose Language > Keep the default keyboard layout > Start Clonezilla >
  • remote-dest – Enter destination mode of remote device cloning >
  • dhcp – after that, Clonezilla will ask you for the IP address of the remote-source server (VM Id:100).

Setup Clonezilla for the Source VM

On the source VM Id:100 we should load Clonezilla's ISO image as a CD drive too and set it as the boot drive. We also need to switch to SeaBIOS in order to boot Clonezilla's image with no additional headaches. Then we can run the VM – Video 1[9:35].
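
The equivalent host-side commands might look like this (again, the ISO file name is just an example):

qm set 100 -ide2 local:iso/clonezilla-live-stable-amd64.iso,media=cdrom
qm set 100 -bios seabios
qm set 100 -boot order=ide2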

Within Clonezilla's wizard the following options are chosen:

  • Choose Language > Keep the default keyboard layout > Start Clonezilla >
  • remote-source – Enter source mode of remote device cloning > Beginner >
  • part_to_remote_part – Local partition to remote partition clone >
  • dhcp > then choose the partition shrunk in the step above – sdd2 > -fsck > -p >
  • Once the IP address of the source instance is obtained, we must go back to the remote-destination instance, VM Id:201.

Start the Remote Source-Destination Transfer

Go back to the remote-destination instance VM Id:201 – Video 1[12:15] – and complete the wizard:

  • Enter the IP address of the source server >
  • restoreparts – Restore an image onto a disk partition >
  • Choose the partition – in this case it is sda2 > then hit Enter and confirm the Warning! questions.

Shutdown the Virtual Machines

The transfer will take a few minutes. After that, you have to shut down the virtual machines – Video 1[13:15].

Inspect the Size of the New Drive

Use the ProxmoxVE GUI and check whether everything looks correct – Video 1[13:25]. The new image's size will be smaller than the given virtual disk's size; this is because we are using LVM-Thin.
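
The actual allocation can also be inspected from the host's console – for a thin volume, the Data% column of lvs shows how much of the virtual size is really in use:

lvs   # look at the Data% column of vm-201-disk-0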

Repair the Boot Loader: Part 1

In this step an Ubuntu Desktop live session is used in order to repair Grub – this is because we are migrating the OS from UEFI to BIOS mode – Video 1[13:55]. However, this step and the next one could be simplified and done within the Clonezilla session…

This approach uses the tool boot-repair (check the references), so within the live session it is installed by the following commands:

sudo parted -l # test GPT
sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt update
sudo apt install -y boot-repair
boot-repair

Repair the Boot Loader: Part 2

It is time to boot the new VM Id:201 from the already cloned OS. Remove the CD images and set the right boot order of the VM – Video 1[18:15]. First we will go into Recovery mode – if the Grub menu is not enabled, hit the Esc button at the beginning of the boot process. Choose the option Drop to root shell prompt and use the following commands to tweak the Grub options, and finally power off the VM.

sudo nano /etc/default/grub
sudo update-grub
sudo poweroff

Final: Attach the Physical Hard Drives

In this step the same commands are used as in the section above, but we will attach only the HDDs, because the OS is already migrated to a ProxmoxVE image – Video 1[21:25].

qm set 201 -scsi1 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M1YCNLUP
qm set 201 -scsi2 /dev/disk/by-id/ata-WDC_WD20PURZ-85GU6Y0_WD-WCC4M3SYTXLJ
qm set 201 -scsi3 /dev/disk/by-id/ata-Hitachi_HDS721050CLA662_JP1572JE180URK

Now we can boot the VM as normal and test whether everything works fine :)
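
For example, starting it from the command line and logging in over the network (user and address as set in the cloud-init options of the configuration above):

qm start 201
ssh user@172.16.1.100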

References