QEMU/KVM on AMD Ryzen 9 Desktop with Dual-boot and Passthrough
I needed to access Windows 10 from Kali Linux on my dual-booted, AMD Ryzen 9 based desktop PC.
| Hardware | Commands to detect |
|---|---|
| AMD Ryzen 9 5900X 12-Core Zen Processor | `lscpu`, `sudo dmidecode -t processor`, `sudo lshw -class cpu` |
| 32 GB RAM | `free -h`, `sudo dmidecode -t memory`, `sudo lshw -class memory` |
| ASRock X570 Phantom Gaming 4 Motherboard | `sudo dmidecode -t baseboard`, `sudo lshw -class bus` |
| NVIDIA TU117GL Quadro T600 and NVIDIA GF119 NVS 315 | `lspci \| grep -i 'vga'`, `sudo lshw -class display` |
| SSD Crucial CT250MX 250GB and NVMe Samsung 980 1TB | `sudo fdisk -l`, `lsblk -d`, `lsblk -dD`, `ls -l /dev/disk/by-id` |
I will pass through one of the physical storage devices, where Windows 10 is already installed (the Crucial 250GB SSD), and one of the video cards (the NVS 315). The SSD will be passed as a block device. Here are the things I've done to achieve that.
Test the Virtualization Capabilities of the System
Check whether the system supports virtualization and whether it is enabled in the UEFI/BIOS. The following command must return at least 1:
egrep -c '(vmx|svm)' /proc/cpuinfo
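A couple of optional extra checks for the same thing:
lscpu | grep -i 'virtualization'   # should report AMD-V on this CPU
ls -l /dev/kvm                     # the device node appears once the kvm and kvm_amd modules are loaded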
Install QEMU, KVM, LIBVIRT
Within older versions of Debian-based OSes, such as Ubuntu 20.04, we needed to install the packages `qemu` and `qemu-kvm`, but in more recent operating systems, such as Kali 2022, we need to install `qemu-system-x86` instead.
sudo apt install qemu-system-x86 libvirt-daemon bridge-utils
sudo apt install libvirt-clients virtinst libosinfo-bin ovmf
sudo apt install virt-manager virt-viewer remmina # For desktop user
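Before going further it is worth verifying that the libvirt daemon is enabled and running:
sudo systemctl enable --now libvirtd
systemctl status libvirtd --no-pager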
In order to get rid of the password dialog for `virt-manager` – "System policy prevents management of local virtualization systems" – I've added my Linux user to the `libvirt` group.
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
grep "$USER" /etc/group
Setting up the PCI Passthrough
This section is dedicated to AMD-based systems. For Intel-based systems, check the article QEMU/KVM and GPU Passthrough in Details.
Enabling IOMMU
In order to enable the IOMMU feature, we must edit the configuration file `/etc/default/grub` as follows:
sudo nano /etc/default/grub # cat /etc/default/grub | grep 'GRUB_CMDLINE_LINUX_DEFAULT'
# For AMD CPU
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
- Note that the host hardware is quite modern, so within the UEFI (BIOS) settings there is an IOMMU option; when it is enabled, the above kernel options are not needed.
Update the boot manager configuration and reboot the system.
sudo update-grub
sudo systemctl reboot
After the reboot, verify that IOMMU is enabled:
sudo dmesg | grep -i 'IOMMU'
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-5.18.0-kali5-amd64 root=/dev/mapper/kali--x--vg-root ro quiet amd_iommu=on iommu=pt splash
[ 0.009753] Kernel command line: BOOT_IMAGE=/vmlinuz-5.18.0-kali5-amd64 root=/dev/mapper/kali--x--vg-root ro quiet amd_iommu=on iommu=pt splash
[ 0.590958] iommu: Default domain type: Passthrough (set via kernel command line)
[ 0.607751] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.607771] pci 0000:00:01.0: Adding to iommu group 0
[ 0.607776] pci 0000:00:01.2: Adding to iommu group 1
[ 0.607781] pci 0000:00:01.3: Adding to iommu group 2
[ 0.607788] pci 0000:00:02.0: Adding to iommu group 3
[ 0.607795] pci 0000:00:03.0: Adding to iommu group 4
[ 0.607800] pci 0000:00:03.1: Adding to iommu group 5
[ 0.607806] pci 0000:00:04.0: Adding to iommu group 6
[ 0.607812] pci 0000:00:05.0: Adding to iommu group 7
[ 0.607818] pci 0000:00:07.0: Adding to iommu group 8
[ 0.607823] pci 0000:00:07.1: Adding to iommu group 9
[ 0.607829] pci 0000:00:08.0: Adding to iommu group 10
[ 0.607834] pci 0000:00:08.1: Adding to iommu group 11
[ 0.607842] pci 0000:00:14.0: Adding to iommu group 12
[ 0.607846] pci 0000:00:14.3: Adding to iommu group 12
[ 0.607863] pci 0000:00:18.0: Adding to iommu group 13
[ 0.607867] pci 0000:00:18.1: Adding to iommu group 13
[ 0.607872] pci 0000:00:18.2: Adding to iommu group 13
[ 0.607875] pci 0000:00:18.3: Adding to iommu group 13
[ 0.607879] pci 0000:00:18.4: Adding to iommu group 13
[ 0.607883] pci 0000:00:18.5: Adding to iommu group 13
[ 0.607887] pci 0000:00:18.6: Adding to iommu group 13
[ 0.607891] pci 0000:00:18.7: Adding to iommu group 13
[ 0.607895] pci 0000:01:00.0: Adding to iommu group 14
[ 0.607944] pci 0000:02:02.0: Adding to iommu group 15
[ 0.607972] pci 0000:02:06.0: Adding to iommu group 16
[ 0.607977] pci 0000:02:08.0: Adding to iommu group 17
[ 0.607981] pci 0000:02:09.0: Adding to iommu group 18
[ 0.607987] pci 0000:02:0a.0: Adding to iommu group 19
[ 0.608018] pci 0000:03:00.0: Adding to iommu group 20
[ 0.608045] pci 0000:03:00.1: Adding to iommu group 20
[ 0.608073] pci 0000:04:00.0: Adding to iommu group 21
[ 0.608075] pci 0000:05:00.0: Adding to iommu group 17
[ 0.608077] pci 0000:05:00.1: Adding to iommu group 17
[ 0.608079] pci 0000:05:00.3: Adding to iommu group 17
[ 0.608081] pci 0000:06:00.0: Adding to iommu group 18
[ 0.608083] pci 0000:07:00.0: Adding to iommu group 19
[ 0.608088] pci 0000:08:00.0: Adding to iommu group 22
[ 0.608096] pci 0000:09:00.0: Adding to iommu group 23
[ 0.608102] pci 0000:09:00.1: Adding to iommu group 23
[ 0.608108] pci 0000:0a:00.0: Adding to iommu group 24
[ 0.608114] pci 0000:0b:00.0: Adding to iommu group 25
[ 0.608121] pci 0000:0b:00.1: Adding to iommu group 26
[ 0.608127] pci 0000:0b:00.3: Adding to iommu group 27
[ 0.608133] pci 0000:0b:00.4: Adding to iommu group 28
[ 0.608569] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.609034] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 0.890747] AMD-Vi: AMD IOMMUv2 loaded and initialized
sudo dmesg | grep 'AMD-Vi'
[ 0.607751] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.608569] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.608569] AMD-Vi: Extended features (0x58f77ef22294a5a): PPR NX GT IA PC GA_vAPIC
[ 0.608572] AMD-Vi: Interrupt remapping enabled
[ 0.890747] AMD-Vi: AMD IOMMUv2 loaded and initialized
Identification of the Group Controllers
In order to generate a tidy list of your grouped devices, create a script as follows.
sudo nano /usr/local/bin/get_iommu_groups.sh && sudo chmod +x /usr/local/bin/get_iommu_groups.sh
#!/bin/bash
# https://mathiashueber.com/pci-passthrough-ubuntu-2004-virtual-machine/
# change the 9999 if needed
shopt -s nullglob
for d in /sys/kernel/iommu_groups/{0..9999}/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
Run the script and filter the output.
get_iommu_groups.sh | grep -iP 'VGA compatible controller|SATA controller|USB controller|NVIDIA'
IOMMU Group 17 05:00.1 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 17 05:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 18 06:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 19 07:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 20 03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [NVS 315] [10de:107c] (rev a1)
IOMMU Group 20 03:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
IOMMU Group 23 09:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117GL [T600] [10de:1fb1] (rev a1)
IOMMU Group 23 09:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
IOMMU Group 27 0b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
In this case I need to isolate the NVS 315 video controller: IOMMU Group 20, which contains PCI bus 03:00.0 [device ID `10de:107c`] and 03:00.1 [device ID `10de:0e08`].
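The functions and IDs can be cross-checked with a single lspci query against the PCI bus address:
lspci -nn -s 03:00
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [NVS 315] [10de:107c] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)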
Enabling VFIO Kernel modules
To load VFIO and other required modules at boot, edit the `/etc/initramfs-tools/modules` file. If you run Ubuntu 20.04, Linux Mint 20 or similar, then the following modules have been integrated into the kernel and you do not need to load them (reference).
sudo nano /etc/initramfs-tools/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net
sudo update-initramfs -u -k all
Blacklist the default `nouveau` driver. Note that in my case it was already disabled because of the other NVIDIA driver.
sudo nano /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
sudo update-initramfs -u
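After the next reboot you can confirm that the nouveau module stays unloaded:
lsmod | grep nouveau    # no output is expected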
Isolation of the Guest GPU
In order to isolate the GPU we have two options: select the devices by PCI bus address or by device ID. Both options have pros and cons. Here we will bind the devices to the vfio-pci driver by device ID. This option should only be used when the graphics cards (or other devices to be isolated) in the system are not exactly the same model; otherwise we need to isolate by PCI bus address, because identical devices share the same IDs.
sudo nano /etc/default/grub # cat /etc/default/grub | grep 'GRUB_CMDLINE_LINUX_DEFAULT'
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 irqpoll vfio-pci.ids=10de:107c,10de:0e08 vfio-pci.disable_vga=1"
- An explanation of the options used can be found in the articles listed within the references.
Update the boot manager configuration and reboot the system.
sudo update-grub
sudo systemctl reboot
After this reboot the isolated GPU will be ignored by the host OS, so you have to use the other GPU for the host. Verify the isolation of the guest GPU by analyzing the output of the following command:
sudo lspci -nnv -s 03:00
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [NVS 315] [10de:107c] (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation GF119 [NVS 315] [10de:102f]
Flags: bus master, fast devsel, latency 0, IRQ 255, IOMMU group 20
Memory at f9000000 (32-bit, non-prefetchable) [size=16M]
Memory at d8000000 (64-bit, prefetchable) [size=128M]
Memory at e0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [size=128]
Expansion ROM at fa000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [b4] Vendor Specific Information: Len=14 <?>
Capabilities: [100] Virtual Channel
Capabilities: [128] Power Budgeting <?>
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Kernel driver in use: vfio-pci
Kernel modules: nouveau, nvidia_drm, nvidia
03:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
Subsystem: NVIDIA Corporation GF119 HDMI Audio Controller [10de:102f]
Flags: fast devsel, IRQ 255, IOMMU group 20
Memory at fa080000 (32-bit, non-prefetchable) [disabled] [size=16K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
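A convenient one-liner that lists every device currently claimed by vfio-pci – in this setup only the two NVS 315 functions are expected:
for d in /sys/bus/pci/drivers/vfio-pci/0000:*; do lspci -nns "${d##*/}"; done
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [NVS 315] [10de:107c] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)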
Congratulations, the hardest part is done!
Setup the Virtual Machines
My current setup has two virtual machines. Both have access to the same physical SSD, where Windows 10 was previously installed and is fully operational via the dual-boot option, and I want to keep that way of accessing Windows 10 as well.
The first "special" thing according to my setup is that both operating systems are installed in UEFI mode, so the virtual machine should have UEFI firmware and with chipset Q35.
Identify the SSD
There are two ways to pass the SSD: as a block device, as described in the article QEMU/KVM on ThinkPad X230T Laptop with Dual-boot, or by passing through the SATA controller, as described in this section.
At first glance there is no significant performance difference. However, with the block device approach you can enable the write cache option for the device, which speeds up handling of large files. On the other hand, with the SATA controller passthrough approach, Windows 10 uses the same driver within the VM environment as within the native boot. In either case the SSD shouldn't be mounted on the host's side.
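For reference, here is a minimal sketch of how the block device approach would look in the libvirt domain XML – it is not what the VMs below use; the target name sdb is just an example, and the cache attribute of the driver element is where the write cache mentioned above is controlled:
# Describe the whole SSD (/dev/sda) as a raw SATA disk and attach it to a VM definition (here the Win10 VM created later in this article)
cat > /tmp/ssd-disk.xml <<'EOF'
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/sda'/>
  <target dev='sdb' bus='sata'/>
</disk>
EOF
virsh --connect qemu:///system attach-device Win10 /tmp/ssd-disk.xml --config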
The SATA ports are identified as controllers and each of them has its own IOMMU group, so we don't need any additional setup beyond checking which port the device is attached to. This can be done with the commands below. Note that in this case we are looking for the controller to which `/dev/sda` is attached.
lspci | grep SATA # Just a check – list the available SATA controllers (ports)
06:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
07:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
udevadm info -q path -n /dev/sda
/devices/pci0000:00/0000:00:01.2/0000:01:00.0/0000:02:09.0/0000:06:00.0/ata2/host1/target1:0:0/1:0:0:0/block/sda
udevadm info -q path -n /dev/sda | awk -F'/' '{print $7}'
0000:06:00.0
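The controller's IOMMU group can be cross-checked directly in sysfs; the result should match IOMMU Group 18 from the listing above:
basename "$(readlink /sys/bus/pci/devices/0000:06:00.0/iommu_group)"
18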
Create and Configure the Virtual Machines
Create a virtual machine via the wizard of the `virt-manager` GUI tool:
- Step 1/5: Choose "Manual Install".
- Step 2/5: Type `win10` and choose the entry "Microsoft Windows 10".
- Step 3/5: Provide an amount of memory and a number of CPUs suitable for your system.
- Step 4/5: Don't enable any storage devices.
- Step 5/5: Tick the checkbox Customize configuration before install and choose the type of the network connection – I prefer to use a bridged device.
Before proceeding by clicking the button Begin install (upper left corner – see Video 1), choose the following options:
- Chipset: `Q35`
- Firmware: `UEFI x86_64: /usr/share/OVMF/OVMF_CODE_4M.ms.fd`
- Then use the button Add Hardware and add the passthrough devices – see Video 1 and the command-line sketch below.
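If you prefer the command line over the GUI, roughly the same result can be achieved with `virsh attach-device`. Here is a minimal sketch for the GPU's VGA function, using the PCI address identified earlier (the temporary file name is just an example):
# Describe the host PCI device 03:00.0 (NVS 315 VGA function) as a libvirt hostdev
cat > /tmp/nvs315-vga.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
# Attach it permanently to the VM definition; repeat with function='0x1' for 03:00.1
virsh --connect qemu:///system attach-device Win10PT /tmp/nvs315-vga.xml --config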
On Video 1, at the right side is shown the VM `Win10PT`, which uses GPU, USB and SATA passthrough. At the left side – the VM `Win10`, which uses only SATA passthrough. Both cannot be started at the same time, because they share the same SATA controller and physical SSD.
I'm accessing the left one, `Win10`, via `virt-viewer`, and the right one, `Win10PT`, via a dedicated monitor, keyboard and mouse (or via RDP).
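For reference, the viewer can be started from the host like this (a basic example, using the VM name defined above):
virt-viewer --connect qemu:///system --wait Win10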
For the VM `Win10PT`, which uses GPU, USB and SATA passthrough, I wasn't able to remove all SPICE devices (as is recommended for best performance with GPU passthrough) via `virt-manager`. So I removed them with `virsh`, by editing the configuration after the VM was created.
virsh --connect qemu:///system edit Win10PT
The Final Configuration of the Virtual Machines
The final configuration of both virtual machines, `Win10` and `Win10PT`, is available at github.com/metalevel-tech/qemu-kvm-scripts/vm-dumpxml, where the files that start with `kali-x_date` are those related to the current article.
The repository github.com/metalevel-tech/qemu-kvm-scripts also contains a script that allows me to manage the two virtual machines via .desktop shortcuts. More information about the script is provided within the repository itself.
Install the Guest Tools and Troubleshooting
Once the guest OS is running successfully, the final step of the setup is installing the QEMU/KVM Guest tools for Windows. This way the screen will be automatically resized within the SPICE client of `virt-manager` and `virt-viewer`, and it also allows you to gracefully shut down the guest.
I experienced the problem Guest HDMI Audio Crackling and was able to fix it by enabling the MSISupported option for the device via RegEdit in the Windows 10 guest.
It is not mentioned anywhere above, but you need to Enable the QEMU Guest Agent within the virtual machine configuration; otherwise the agent will not work and, for example, you will not be able to shut down the VM from the host's side.
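With the agent channel enabled in the configuration and the guest tools running inside Windows, the host can talk to the agent, for example:
virsh --connect qemu:///system qemu-agent-command Win10 '{"execute":"guest-ping"}'
{"return":{}}
virsh --connect qemu:///system shutdown Win10 --mode agent    # graceful shutdown through the agent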
Additional Guides
- Setup Linux Network Bridge
- Virt-manager Setting-up Windows Virtual Machines
- Install QEMU/KVM Guest tools for Windows
- QEMU/KVM GPU Passthrough Setup Windows Guest for RDP
- RDP use Remmina with Windows Account
- QEMU/KVM and GPU Passthrough Troubleshooting
Kernel 6.0.x Troubleshooting
After the upgrade to Kernel 6.0.x the system booted, but there was no screen output. Initially I thought it was an NVIDIA driver issue, but finally I found it is an issue with VFIO. The same trouble is reported in the following blog and forum posts:
- Heiko's Blog: Kernel 6.0 and VFIO
- Level 1 Tech: Linux Kernel 6 seems to be incompatible with the vfio_pci module needed for PCI passthrough
There, a workaround is proposed: using the "driver_override" feature. It is also reported that with Kernel 6.1.x everything works well with the GRUB method.
In my particular case the passed-through video card is the NVS 315, which is not compatible with the NVIDIA driver used for the T600 and is ignored at boot time. So I just modified `GRUB_CMDLINE_LINUX_DEFAULT` in the following way to get the system operational.
#GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 irqpoll vfio-pci.ids=10de:107c,10de:0e08"
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 irqpoll"
At this point, when I run the virtual machine, `vfio-pci` takes control over the NVS 315 and the guest OS works as expected – strange :) So I will not apply the "driver_override" feature and will wait for Kernel 6.1.x to become the default on my distro.
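For completeness, the "driver_override" workaround mentioned above boils down to something like the following sketch (run as root, e.g. from a systemd unit or an initramfs hook as the referenced posts describe; the addresses are those of the NVS 315 from this article):
# Tell the kernel that vfio-pci should claim the two functions of the card...
echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
echo vfio-pci > /sys/bus/pci/devices/0000:03:00.1/driver_override
# ...then load vfio-pci and (re)probe the devices
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe
echo 0000:03:00.1 > /sys/bus/pci/drivers_probe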
References
- QEMU/KVM and GPU Passthrough in Details
- QEMU/KVM on ThinkPad X230T Laptop with Dual-boot
- Kernel.org: VFIO – Virtual Function I/O
- Super User: How to tell which SATA controller a drive is using?
- Arch Linux Wiki: Passing through other devices > USB controller
- Chris Titus Tech Site: Setting up Windows inside Linux > Enable QEMU Guest Agent
- Unix and Linux: How to find the PCI slot of an USB controller in Linux?
- Unix and Linux: Can I pass through a USB Port via qemu Command Line?
- Heiko's Blog: Running Windows 10 on Linux using KVM with VGA Passthrough
- Heiko's Blog: Creating a Windows 10 kvm VM on the AMD Ryzen 9 3900X using VGA Passthrough
- Arch Wiki: PCI passthrough via OVMF > Setting up IOMMU
- Arch BBS: libvirsh 8.4.0 can't start vm as spicevmc needs spice graphic [posts #5 and #6]
- Debian Wiki: VGA Passthrough
- Mathias Hueber: All you need to know about PCI passthrough in Windows 11 virtual machines on Ubuntu 22.04 based distributions
- Mathias Hueber: Storage setup for virtual machines
- Mathias Hueber: Comprehensive guide to performance optimizations for gaming on virtual machines with KVM/QEMU and PCI passthrough