PVE IOMMU Isolation for Passthrough
Revision as of 07:29, 26 September 2022
IOMMU Hardware Isolation at the Proxmox host
Here we will set up IOMMU on an Intel-based host system. More details (and information about the host hardware in use) are provided in the manual KVM and GPU Passthrough to Windows VM; here we list only the steps used to isolate one NVIDIA Tesla K20Xm, which will later serve as the GPU of a Windows guest, but that is another story. So let's begin. One important point in this case is that the Tesla K20Xm is a PCI‑E 2.0 card, so to guarantee stable operation of the server I had to go into the BIOS and set the link speed of the slot in use to PCI‑E 2.0. The BIOS option Above 4G decoding is also enabled.
Enable IOMMU
Enable IOMMU isolation by performing the following steps.
nano /etc/default/grub
#GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
systemctl reboot
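After the reboot, it is worth confirming that the kernel actually enabled IOMMU remapping before continuing. A quick check sketch; the exact messages vary by kernel version and platform:

```shell
# Confirm that the kernel enabled IOMMU/DMAR remapping after the reboot.
# On Intel systems, look for lines such as "DMAR: IOMMU enabled".
dmesg | grep -e DMAR -e IOMMU
```

If no such lines appear, re-check the GRUB command line and the BIOS virtualization (VT-d) settings.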
Find the devices to be isolated
Find the devices to be isolated by using the following script (the source link is preserved in the script's header comment).
nano /usr/local/bin/get_iommu_groups.sh && chmod +x /usr/local/bin/get_iommu_groups.sh
#!/bin/bash
# https://mathiashueber.com/pci-passthrough-ubuntu-2004-virtual-machine/
# change the 9999 if needed
shopt -s nullglob
for d in /sys/kernel/iommu_groups/{0..9999}/devices/*; do
n=${d#*/iommu_groups/*}; n=${n%%/*}
printf 'IOMMU Group %s ' "$n"
lspci -nns "${d##*/}"
done
Run the script and filter the output.
get_iommu_groups.sh | grep -iP 'Ethernet controller|NVIDIA'
IOMMU Group 40 02:00.0 3D controller [0302]: NVIDIA Corporation GK110GL [Tesla K20Xm] [10de:1021] (rev a1)
IOMMU Group 43 08:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
IOMMU Group 44 09:00.0 Ethernet controller [0200]: Intel Corporation I210 Gigabit Network Connection [8086:1533] (rev 03)
We will isolate IOMMU Group 40, which contains PCI bus 02:00.0, device ID 10de:1021.
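Since a device can only be passed through together with everything else in its IOMMU group, it is useful to double-check that group 40 contains no other devices. A minimal sketch, reading the group's device directory straight from sysfs (the group number 40 is taken from the output above):

```shell
# List every PCI device that shares IOMMU group 40 with the Tesla K20Xm.
# The group should contain only the GPU (and possibly PCIe bridges).
ls /sys/kernel/iommu_groups/40/devices/
```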
VFIO Modules
You'll need to add a few VFIO modules to your Proxmox system.
nano /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
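The first option allows passthrough on platforms without full interrupt remapping support; the second stops KVM from injecting faults into the guest on accesses to unhandled MSRs, which Windows guests are known to trigger. After the next reboot, both settings can be read back from sysfs, a sketch assuming the kvm and vfio_iommu_type1 modules are loaded:

```shell
# Read the module parameters back after reboot to confirm they took effect.
# Both files print Y when the options were applied, N otherwise.
cat /sys/module/kvm/parameters/ignore_msrs
cat /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
```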
Blacklisting Drivers
nano /etc/modprobe.d/blacklist.conf
# Blacklist NVIDIA Modules - 'lsmod', 'dmesg' and 'lspci -nnv'
blacklist nvidiafb
blacklist nouveau
blacklist nvidia_drm
blacklist nvidia
blacklist rivafb
blacklist rivatv
blacklist snd_hda_intel
blacklist radeon
options nouveau modeset=0
update-initramfs -u
reset
reboot
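After the reboot, confirm that none of the blacklisted drivers were loaded anyway (for example, from an initramfs that was not regenerated). A quick check sketch:

```shell
# Should print the OK message: none of the blacklisted GPU drivers
# may be loaded before the card is bound to vfio-pci.
lsmod | grep -E 'nouveau|nvidia|radeon' || echo "OK: no GPU drivers loaded"
```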
Adding the GPU to VFIO
lspci -n -s 02:00
02:00.0 0302: 10de:1021 (rev a1)
nano /etc/modprobe.d/vfio.conf
softdep nouveau pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
options vfio-pci ids=10de:1021 disable_vga=1
update-initramfs -u
reset
systemctl reboot
Verify the Isolation
Verify the isolation of the GPU with the help of the following command. Note the line Kernel driver in use: vfio-pci.
sudo lspci -nnv -s 02:00
02:00.0 3D controller [0302]: NVIDIA Corporation GK110GL [Tesla K20Xm] [10de:1021] (rev a1)
Subsystem: NVIDIA Corporation GK110GL [Tesla K20Xm] [10de:097d]
Physical Slot: 1
Flags: fast devsel, IRQ 255, NUMA node 0, IOMMU group 40
Memory at fa000000 (32-bit, non-prefetchable) [disabled] [size=16M]
Memory at 23fe0000000 (64-bit, prefetchable) [disabled] [size=256M]
Memory at 23ff0000000 (64-bit, prefetchable) [disabled] [size=32M]
Expansion ROM at fb000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [100] Virtual Channel
Capabilities: [128] Power Budgeting <?>
Capabilities: [420] Advanced Error Reporting
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
References
- Proxmox PVE: Nested Virtualization
- Proxmox PVE: Documentation
- Proxmox Wiki: Pci passthrough
- Proxmox HHS at YouTube: Proxmox VE 6.0 Beginner Tutorial – Installing Proxmox & Creating a virtual machine
- Craft Computing at YouTube: Plex on ProxMox Tutorial WITH nVidia Hardware Encoding
- Reddit: The Ultimate Beginner's Guide to GPU Passthrough (Proxmox, Windows 10)
- The Passthrough Post: Binding a GPU to vfio-pci in Debian Buster
- GitHub Gist: Davesilva / KVM GPU Passthrough on Debian