Getting started with Virtualization#
AMD’s virtualization solution, MxGPU, specifically leverages SR-IOV (Single Root I/O Virtualization) to enable sharing of GPU resources with multiple virtual machines (VMs). This technology allows VMs direct access to GPU resources, significantly improving workload performance while maintaining high levels of resource efficiency.
AMD’s MxGPU approach unlocks additional capabilities for a wide range of applications, from high-performance computing (HPC) and artificial intelligence (AI) to machine learning (ML) and graphics-intensive tasks. The SR-IOV architecture, facilitated by MxGPU, supports fine-grained resource allocation and isolation, enabling efficient sharing of GPU resources among multiple workloads. This not only enhances performance for compute-heavy applications but also allows for optimal scalability in multi-tenant environments.
In this guide, we will explore how to implement AMD’s MxGPU technology in QEMU/KVM environments. We will cover the architecture, configuration steps, and best practices for leveraging these advanced virtualization solutions to achieve superior performance and efficiency in your workloads.
Understanding SR-IOV#
To expand the capabilities of AMD GPUs, this guide focuses on enabling SR-IOV, a standard developed by the PCI-SIG (PCI Special Interest Group) that facilitates efficient GPU virtualization. AMD’s MxGPU technology utilizes SR-IOV to allow a single GPU to appear as separate devices on the PCIe bus, presenting virtual functions (VFs) to the operating system and applications. This implementation enables direct access to GPU resources without the need for software emulation, thereby enhancing performance.
The term “Single Root” indicates that SR-IOV operates within a single PCI Express root complex, connecting all PCI devices in a tree-like structure. A key goal of SR-IOV is to streamline data movement by minimizing the hypervisor’s involvement, providing each VM with independent copies of memory space, interrupts, and Direct Memory Access (DMA) streams. This direct communication with hardware allows VMs to achieve near-native performance.
The SR-IOV standard is maintained by PCI-SIG, whose cross-industry collaboration keeps the specification relevant as virtualization technology evolves.
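On a host where the GPU is already installed, a quick way to confirm that the device advertises the SR-IOV capability is to inspect its PCIe capability list with lspci (substitute your GPU's PF BDF address for the placeholder below):
# sudo lspci -s <GPU_PF_BDF> -vvv | grep -i -A 3 "SR-IOV"
If the capability is present, the output includes a Single Root I/O Virtualization (SR-IOV) line followed by the VF counts the device supports.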
KVM and QEMU#
KVM (Kernel-based Virtual Machine) and QEMU (Quick Emulator) are integral components of the virtualization stack that will be used in conjunction with the MxGPU. KVM transforms the Linux kernel into a hypervisor, enabling the creation and management of VMs, while QEMU provides the necessary user-space tools for device emulation and management. Together, they facilitate the effective use of SR-IOV, allowing multiple VMs to share AMD GPUs efficiently, enhancing resource utilization and performance.
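Before proceeding, it can be useful to confirm that hardware virtualization is available and that the KVM modules are present on the host; for example (generic checks, not specific to MxGPU):
# grep -c -E 'svm|vmx' /proc/cpuinfo
# lsmod | grep kvm
# ls -l /dev/kvm
A non-zero count from the first command indicates the CPU exposes AMD-V (svm) or Intel VT-x (vmx), and the presence of the kvm modules and /dev/kvm confirms the kernel side is ready.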
Overview of Instinct GPUs#
AMD Instinct MI210X, MI300X and MI350X GPUs officially support the MxGPU technology, enabling enhanced GPU virtualization capabilities. Additionally, AMD plans to announce support for more architectures in the future, further expanding the versatility and application of its GPU solutions.
Supported AMD Instinct GPU Models#
MI210X
MI300X
MI350X
AMD Instinct MI210X Architecture#
The AMD Instinct MI210X series accelerators, built on the 2nd Gen AMD CDNA architecture, excel in high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) tasks, particularly in double-precision (FP64) computations. These accelerators leverage AMD Infinity Fabric technology to provide high-bandwidth data transfer, supporting PCIe Gen4 for efficient connectivity across multiple GPUs. Equipped with 64GB of HBM2e memory at 1.6 GHz, the MI210X ensures effective handling of large data sets. AMD’s Matrix Core technology enhances mixed-precision capabilities, making the MI210X ideal for deep learning and versatile AI applications.
AMD Instinct MI300X Architecture#
The AMD Instinct MI300X series accelerators, based on the advanced AMD CDNA 3 architecture, offer substantial improvements in AI and HPC workloads. With 304 high-throughput compute units and cutting-edge AI-specific functions, the MI300X integrates 192 GB of HBM3 memory and utilizes die stacking for enhanced efficiency. Delivering significantly higher performance, it features 13.7x peak AI/ML workload improvement using FP8 and a 3.4x advantage for HPC with FP32 calculations compared to previous models. AMD’s 4th Gen Infinity Architecture provides superior I/O efficiency and scalability, with PCIe Gen 5 interfaces and robust multi-GPU configurations. The MI300X also incorporates SR-IOV capabilities for effective GPU partitioning, providing coherent shared memory and caches to support data-intensive machine-learning models across GPUs, with 5.3 TB/s bandwidth and 128 GB/s inter-GPU connectivity.
AMD Instinct MI350X Architecture#
The AMD Instinct MI350X series accelerators, leveraging the latest CDNA 3 architecture, are engineered for the most demanding AI and HPC workloads. Featuring up to 288 compute units, the MI350X harnesses a massive 252 GB of HBM3 memory to manage extensive data loads efficiently. It achieves impressive bandwidth, facilitating up to 7.1 TB/s data throughput, ideal for complex, data-driven applications. The MI350X integrates advanced interconnects through next-generation AMD Infinity Fabric, supporting up to 64 GB/s per link, enhancing multi-GPU scalability and performance. With a focus on energy efficiency, the design supports a 750W thermal envelope suitable for high-performance settings while maintaining robust computational output. With comprehensive support for various precision modes and data types, the MI350X offers unparalleled computational flexibility and efficiency for a wide range of scientific and AI-driven tasks.
MxGPU Software Stack#
Recommended Linux Distributions#
For recommendations on Host and Guest OS distributions, configuration details, and the latest updates, please visit the AMD MxGPU Virtualization GitHub releases page. The following documentation provides detailed setup instructions for Ubuntu 22.04.5 and RHEL 9.4 Linux distributions. While additional distributions may function effectively, Ubuntu 22.04.5 and RHEL 9.4 are continuously validated by AMD for both host and guest installations.
AMD Software Stack Components#
To set up the MxGPU solution and ensure seamless operation, the following software stack components are needed:
PF Driver - The AMD host driver for virtualized environments, which enables MxGPU technology and manages GPU resources on the host.
AMD SMI - The AMD SMI (System Management Interface) library and tool, bundled with the PF driver package, provide management and monitoring capabilities for AMD virtualization-enabled GPUs. This version of AMD SMI is designed specifically for SR-IOV host environments and works as a cross-platform utility. The library is thread-safe and extensible, and exposes both C and Python API interfaces through which users can query static GPU information such as ASIC and framebuffer details, as well as data on firmware, virtual functions, temperature, clocks, and GPU usage, making it straightforward to build applications in C/C++ or Python. The accompanying AMD SMI command line tool uses the library APIs to monitor GPU status and can display or save output in plain text, JSON, or CSV formats. Note that this SR-IOV-specific version of AMD SMI differs from the ROCm-specific tool intended for non-virtualized setups. For a detailed description of its capabilities and the complete documentation, please visit this page.
VF Driver - The ROCm AMDGPU driver, part of the latest ROCm stack, serves as the guest driver; with proper configuration it allows virtual machines to access GPU resources efficiently.
Host Configuration#
This section provides step-by-step instructions for configuring SR-IOV to enable device assignment to QEMU/KVM guests. Proper configuration is essential for ensuring that the GPU device can be passed through to VMs effectively.
System BIOS Setting#
Virtualization extensions must be enabled in the System BIOS. The exact menus depend on the BIOS vendor and version; typical settings look like this:
SR-IOV Support: Enable this option in the Advanced → PCI Subsystem Settings page.
Above 4G Decoding: Enable this option in the Advanced → PCI Subsystem Settings page.
PCIe ARI Support: Enable this option in the Advanced → PCI Subsystem Settings page.
IOMMU: Enable this option in the Advanced → NB Configuration page.
ACS Enabled: Enable this option in the Advanced → NB Configuration page.
Note: AER must be enabled for ACS enablement to work. PCI AER Support can be enabled in the Advanced → ACPI Settings page.
GRUB File Update#
After configuring the BIOS settings, you need to modify the GRUB configuration to apply the necessary changes.
To assign a device to a QEMU/KVM guest, the device must be managed by the VFIO (Virtual Function I/O) kernel driver. By default, however, the device binds to its native driver, which is not a VFIO driver. Therefore, the device must be kept from binding to its native driver (by blacklisting the amdgpu driver at boot) before it can be handed to libvirt for assignment to the guest.
Additionally, PCI SR-IOV functionality requires virtualization extensions and the IOMMU to be enabled. All of these settings can be added to /etc/default/grub and then applied to the boot-time GRUB configuration.
Use the following commands to update the GRUB settings:
Edit GRUB Configuration File:
Use a text editor to modify the /etc/default/grub file (the following example uses nano). Open a terminal and run:
# sudo nano /etc/default/grub
Modify the GRUB_CMDLINE_LINUX Line:
Look for the line that begins with GRUB_CMDLINE_LINUX. It should look like this initially:
GRUB_CMDLINE_LINUX=""
Modify it to include the following parameters:
GRUB_CMDLINE_LINUX="modprobe.blacklist=amdgpu iommu=on amd_iommu=on"
If there are already parameters in the quotes, append your new parameters separated by spaces.
Save the changes by pressing CTRL + O, press Enter, then exit with CTRL + X.
After modifying the configuration file, you need to update the GRUB settings by running the following command:
Ubuntu:
# sudo update-grub
RHEL:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot Your System:
For the changes to take effect, reboot your system using the following command:
# sudo reboot
Verifying changes:
After the system reboots, confirm that the GRUB parameters were applied successfully by running:
# cat /proc/cmdline
When you run the command above, you should see a line that includes:
modprobe.blacklist=amdgpu iommu=on amd_iommu=on
This indicates that your changes have been applied correctly.
Note: If the host machine has an Intel CPU, replace amd_iommu with intel_iommu.
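Beyond checking the kernel command line, you can verify that the IOMMU is actually active by inspecting the kernel log and the IOMMU groups created at boot (exact messages vary by platform; on AMD hosts look for AMD-Vi entries, on Intel hosts for DMAR):
# sudo dmesg | grep -i -e iommu -e amd-vi
# ls /sys/kernel/iommu_groups/ | wc -l
A non-zero number of IOMMU groups indicates that the IOMMU is enabled and devices can be isolated for passthrough.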
Installing Libvirt, KVM, QEMU Packages on Host#
Follow the steps below to install the libvirt, KVM, and QEMU packages on the host:
Install the required packages:
Ubuntu:
# sudo apt update
# sudo apt install qemu-kvm virtinst libvirt-daemon virt-manager -y
RHEL:
# sudo dnf install qemu-kvm libvirt virt-install virt-manager -y
# sudo systemctl start libvirtd
# sudo systemctl enable libvirtd
Configure a non-root user to access virt-manager by adding the user account to the libvirt group:
Ubuntu:
# sudo groupadd --system libvirt
# sudo usermod -a -G libvirt $(whoami)
# newgrp libvirt
RHEL:
# sudo usermod -a -G libvirt $(whoami)
Log out of your current user session. Then, reopen the terminal or log back into your user account. This refreshes your session, ensuring the group changes take effect for access to libvirt resources.
Edit libvirtd configuration file:
# sudo nano /etc/libvirt/libvirtd.conf
Set the UNIX domain socket group ownership to libvirt (around line 81):
unix_sock_group = "libvirt"
Set the UNIX socket permissions for the R/W socket (around line 104):
unix_sock_rw_perms = "0770"
Restart the libvirt daemon after making the changes:
# sudo systemctl restart libvirtd.service
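To confirm that the daemon is running and that your non-root user can reach it through the group-owned socket, you can run the following (no VMs are defined yet, so an empty list is expected):
# sudo systemctl status libvirtd
# virsh list --all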
Host Driver Setup#
To enable the AMD GPUs to operate in SR-IOV mode, the MxGPU PF driver is essential. This guide presents two effective approaches for installing the PF driver: via a pre-packaged .deb/.rpm file or by building from source code.
Using the package offers a convenient and streamlined installation process, making it accessible even for users with limited technical experience. This method allows for easy dependency management but may not always include the most up-to-date features available in the source code.
On the other hand, building the PF driver from source provides users with the flexibility to customize their driver installation and access the latest enhancements. This approach, however, requires more technical expertise and manual handling of dependencies, which might present a steeper learning curve for some users.
Both methods will effectively prepare your system for GPU virtualization, and the choice between them depends on your specific needs and familiarity with the installation process. The following sections will detail each approach to guide you through the setup.
Installing the PF Driver via Package#
On Ubuntu systems, the PF driver can be installed using the latest .deb package.
On RHEL systems, it can be installed using the latest .rpm package.
Here are the steps:
Install dependencies:
Ubuntu:
# sudo apt update
# sudo apt install build-essential dkms autoconf automake
RHEL:
# sudo dnf install @development-tools kernel-devel dkms autoconf automake
To install the package, use the following command:
Ubuntu:
# sudo dpkg -i gim_driver_package.deb
RHEL:
# sudo rpm -ivh gim_driver_package.rpm
Once the PF driver package is installed, reboot the system and then load the driver with the following command:
# sudo modprobe gim
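Optionally, if you prefer the gim module to be loaded automatically on every boot instead of loading it manually, the standard systemd modules-load.d mechanism can be used (a generic Linux approach, not specific to the PF driver package):
# echo gim | sudo tee /etc/modules-load.d/gim.conf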
The PF driver enables one VF per GPU by default, so on a platform with eight GPUs this results in a total of eight VFs.
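As a cross-check, the kernel's standard SR-IOV sysfs attributes can be read to see how many VFs each physical function currently exposes (this assumes the PF driver uses the standard sriov_numvfs interface):
# grep . /sys/bus/pci/devices/*/sriov_numvfs
Each line of output shows a PF address and the number of VFs it currently provides.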
To ensure that the PF driver has loaded correctly, check if the Instinct GPU VF devices are visible by running:
MI210X:
# lspci -d 1002:7410
03:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
26:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
43:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
63:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
83:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
a3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
c3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
e3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 7410 (rev 02)
MI300X:
# lspci -d 1002:74b5
05:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
26:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
46:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
65:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
85:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
a6:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
c6:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
e5:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5
MI350X:
# lspci -d 1002:75b0
05:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
15:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
65:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
75:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
85:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
95:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
e5:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
f5:02.0 Processing accelerators: Advanced Micro Devices, Inc. [AMD/ATI] Device 75b0
These commands were executed on MI210X/MI300X/MI350X systems with eight physical GPUs each. When the PF driver is loaded, the system presents eight additional virtual GPUs (one VF per GPU).
To check whether the driver loaded successfully, run:
# sudo dmesg | grep GIM.*Running
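The AMD SMI tool bundled with the PF driver package can also be used to confirm that the GPUs and their virtual functions are enumerated on the host; for example (assuming amd-smi is installed and available on your PATH):
# sudo amd-smi list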
Building the PF Driver from Source#
Alternatively, you can build the PF driver from the source code if you prefer to have the latest features or customize the installation. Here are the steps:
Install dependencies:
Ubuntu:
# sudo apt update
# sudo apt install build-essential autoconf
RHEL:
# sudo dnf groupinstall "Development Tools"
# sudo dnf install kernel-devel autoconf
Obtain the Source Code: Clone or download the source code for the PF driver.
Navigate to the Source Directory: Change to the directory containing the PF driver source code.
Build the PF Driver:
To build and install the driver from the source, run the following commands:
# make clean
# make all -j
# sudo make install
Insert the Driver: Once the PF driver is installed, load it using the following command:
# sudo modprobe gim
To verify that the PF driver is loaded properly, use the same verification commands mentioned in the previous section.
Note: Newer Ubuntu kernel versions require gcc and g++ version 12. If needed, install gcc-12 and g++-12, then update the alternatives system to point to the correct versions:
# sudo apt install gcc-12 g++-12
# sudo update-alternatives --config gcc
# sudo update-alternatives --config g++
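If gcc-12 has not yet been registered with the alternatives system, the --config commands above will have nothing to select. One way to register it (a generic update-alternatives pattern; adjust the priority to your preference) is:
# sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12 --slave /usr/bin/g++ g++ /usr/bin/g++-12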
Guest VM Initial Setup#
VM Setup#
The initial VM setup can be performed using the QEMU/libvirt command line utilities. Creating each guest OS VM is similar, and the steps are largely the same for all of them. This chapter uses an Ubuntu 22.04 guest as an example.
Install dependencies:
# sudo apt update
# sudo apt install cloud-utils
Download the Ubuntu base image:
# sudo wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
Set a password for the new VM:
# cat >user-data1.txt <<EOF
# > #cloud-config
# > password: user1234
# > chpasswd: { expire: False }
# > ssh_pwauth: True
# > EOF
# sudo cloud-localds user-data1.img user-data1.txt
Create a disk for the new VM:
# sudo qemu-img create -b ubuntu-22.04-server-cloudimg-amd64.img -F qcow2 -f qcow2 ubuntu22.04-vm1-disk.qcow2 100G
Install the new VM and log in to check the IP:
# sudo virt-install --name ubuntu22.04-vm1 --virt-type kvm --memory 102400 --vcpus 20 --boot hd,menu=on --disk path=ubuntu22.04-vm1-disk.qcow2,device=disk --disk path=user-data1.img,format=raw --graphics none --os-variant ubuntu22.04
# Login: ubuntu
# Password: user1234
# ip addr
# sudo passwd root (set root password as `user1234`)
# sudo usermod -aG sudo ubuntu
# sudo vi /etc/default/grub
# GRUB_CMDLINE_LINUX="modprobe.blacklist=amdgpu"
# sudo update-grub
# sync
# sudo shutdown now
# sudo virsh start ubuntu22.04-vm1
# sudo virsh domifaddr ubuntu22.04-vm1
# ssh ubuntu@<VM_IP> (password: user1234) - verify access
# exit
GPU VF device nodes can be added to the VM XML configuration using the sudo virsh edit <VM_NAME> command and modifying the devices section:
# sudo virsh list --all
# sudo virsh shutdown ubuntu22.04-vm1
# sudo virsh edit ubuntu22.04-vm1 (add hostdev entry under devices section)
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x<DEVICE_BUS_ID>' slot='0x<DEVICE_SLOT>' function='0x0'/>
</source>
</hostdev>
Repeat this step for every virtual GPU being added to the VM (one hostdev node per virtual device). The DEVICE_BUS_ID and DEVICE_SLOT of each targeted device can be obtained from the output of the lspci -d 1002:74b5 command (use the VF device ID that matches your GPU model, e.g. 1002:7410 for MI210X or 1002:75b0 for MI350X), which prints each device's VF BDF address in the format DEVICE_BUS_ID:DEVICE_SLOT.function.
As an example, this is how all eight GPU VF device nodes can be added to the VM config. If this is the output of the command:
# lspci -d 1002:74b5
03:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
26:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
43:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
63:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
83:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
a3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
c3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
e3:02.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Device 74b5 (rev 02)
The VF BDF address appears at the beginning of every line in the format DEVICE_BUS_ID:DEVICE_SLOT.function.
Based on that data, the GPU VF device nodes should be added to the VM XML configuration under the devices section as follows:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x03' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x26' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x43' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x63' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x83' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0xa3' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x0'/>
</source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0xe3' slot='0x02' function='0x0'/>
</source>
</hostdev>
Check that the added GPUs are visible in the guest:
# sudo virsh start ubuntu22.04-vm1
# sudo virsh domifaddr ubuntu22.04-vm1
# ssh ubuntu@<VM_IP> (password: user1234)
# lspci
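Back on the host, you can also confirm that libvirt bound each assigned VF to the vfio-pci driver while the VM is running (using one of the example VF addresses from above):
# lspci -k -s 03:02.0
The output should include a line such as Kernel driver in use: vfio-pci.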
Guest Driver Setup#
Connect to the VM to install the ROCm AMDGPU VF driver:
# sudo virsh start ubuntu22.04-vm1
# sudo virsh domifaddr ubuntu22.04-vm1
# ssh ubuntu@<VM_IP> (password: user1234)
The ROCm™ software stack and other Radeon™ software for Linux components are installed using the amdgpu-install script, which helps you install a coherent set of stack components. For installation steps and post-install verification, please refer to the Radeon software for Linux with ROCm installation guide.
Note: Load the AMDGPU VF driver with the following command:
# sudo modprobe amdgpu
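To confirm that the guest driver loaded and bound to the virtual GPUs, you can check the loaded modules and the kernel log inside the VM:
# lsmod | grep amdgpu
# sudo dmesg | grep -i amdgpu | tail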
Post-install verification check#
To confirm that the entire setup is functioning correctly and that the VM can execute tasks on the GPU, check the output of the rocminfo and clinfo tools inside the VM.
# sudo rocminfo
The output should look similar to the following:
[...]
*******
Agent 2
*******
Name: gfx942
Uuid: GPU-664b52e347835f94
Marketing Name: AMD Instinct MI300X
Vendor Name: AMD
Feature: KERNEL_DISPATCH
[...]
Also try the following:
# sudo clinfo
The output should look similar to the following:
[...]
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (3649.0)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback
Platform Extensions function suffix AMD
Platform Host timer resolution 1ns
[...]
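If installed as part of the ROCm stack, the rocm-smi utility can also be run inside the VM to view basic GPU status such as temperature, utilization, and memory usage:
# rocm-smi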
This marks the final step in setting up the AMD GPUs with MxGPU in KVM/QEMU environments. By following the outlined steps, users can effectively allocate GPU resources across virtual machines, optimizing performance and resource utilization for demanding workloads.
With your environment now configured, consider deploying high-performance computing applications, artificial intelligence models, or machine learning tasks that can fully leverage the compute capabilities of the AMD GPUs. These applications can benefit significantly from the enhanced resource allocation that MxGPU provides.
Removing MxGPU#
To remove MxGPU support, unload the PF driver first:
# sudo modprobe -r gim
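To confirm that the module is no longer loaded (the command should produce no output):
# lsmod | grep gim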
If the driver was installed from a package, remove it using:
Ubuntu:
# sudo dpkg -r <gim-driver-package>
RHEL:
# sudo rpm -e <gim-driver-package>
If the driver was built from source, navigate to the source directory and run:
# make clean
To delete VMs, use virsh to list all defined virtual machines:
# virsh list --all
For each VM you want to delete, run:
# virsh destroy <vm-name>
# virsh undefine <vm-name> --remove-all-storage
If the disk image (.qcow2 file) associated with the VM is not managed by libvirt, remove it manually:
# rm -f <path_to_qcow2_disk_image>
The host configuration can be reverted by restoring the GRUB file to its previous state (deleting the added parameters) and returning the System BIOS settings to their initial values. Doing so disables the SR-IOV environment.
For any further support or questions, please don't hesitate to raise a GitHub PR.