Use GPUs with Clustered VMs through Discrete Device Assignment
In the rapidly evolving landscape of artificial intelligence (AI), the demand for more powerful and efficient computing resources is ever-increasing. Microsoft is at the forefront of this technological revolution, empowering customers to harness the full potential of their AI workloads on GPUs. GPU virtualization makes it possible to process massive amounts of data quickly and efficiently. Using GPUs with clustered VMs through Discrete Device Assignment (DDA) is particularly significant in failover clusters, because it gives VMs direct access to physical GPUs.
Using GPUs with clustered VMs through DDA allows you to assign one or more entire physical GPUs to a single virtual machine (VM). Because DDA gives the VM direct access to the physical GPU, latency is reduced and the GPU's capabilities are fully utilized, which is crucial for compute-intensive tasks.
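For context, the basic DDA assignment flow on a single Hyper-V host can be sketched in PowerShell roughly as below. The VM name "MyVM", the PCIe location path, and the MMIO sizes are placeholder values for illustration only; consult the "Deploy graphics devices by using Discrete Device Assignment" article linked at the end of this post and your GPU vendor's guidance for the exact values and steps.

```powershell
# Sketch only: "MyVM", the location path, and the MMIO sizes are placeholders.
$vmName       = "MyVM"
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"   # find your GPU's path in Device Manager

# Stop the VM and prepare it for DDA (stop action, cache types, MMIO space).
Stop-VM -Name $vmName -Force
Set-VM -Name $vmName -AutomaticStopAction TurnOff
Set-VM -Name $vmName -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Dismount the GPU from the host (disable it on the host first, for example
# in Device Manager), then hand it directly to the VM.
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName

Start-VM -Name $vmName
```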
Figure 1: This diagram shows users running GPU workloads on clustered VMs via DDA, where full physical GPUs are assigned to VMs.
Using GPUs with clustered VMs enables these high-compute workloads to be executed within a failover cluster. A failover cluster is a group of independent nodes that work together to increase the availability of clustered roles. If one or more of the cluster nodes fail, the other nodes take over and continue to provide service, which is how failover clusters deliver high availability. By integrating GPUs with clustered VMs, these clusters can now support high-compute workloads on VMs. Failover clusters use GPU pools, which are managed by the cluster: an administrator creates a pool with the same name on each node, adds GPUs to it, and declares each VM's GPU needs. Once GPUs and VMs are added to the pools, the cluster manages VM placement and GPU assignment, as sketched below. Although live migration is not supported, in the event of a server failure, workloads can automatically restart on another node, minimizing downtime and ensuring continuity.
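At a high level, the pooled workflow can be sketched in PowerShell as follows. This is a hedged outline only: "GpuChildPool", "MyClusteredVM", and the location path are placeholder names, the GPU must already be dismounted from each host as shown earlier, and the exact cmdlet parameters should be confirmed against the "Use GPUs with clustered VMs on Hyper-V" article linked below.

```powershell
# Sketch only: pool name, VM name, and location path are placeholders.
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# On every node: add the dismounted GPU to a pool with the SAME name,
# so the cluster sees an equivalent "GpuChildPool" on each server.
Add-VMHostAssignableDevice -LocationPath $locationPath -ResourcePoolName "GpuChildPool"

# Declare the VM's GPU needs by assigning it a device from the pool,
# then make the VM highly available so the cluster manages placement.
Add-VMAssignableDevice -VMName "MyClusteredVM" -ResourcePoolName "GpuChildPool"
Add-ClusterVirtualMachineRole -VirtualMachine "MyClusteredVM"
```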
Using GPUs with clustered VMs through DDA will be available in Windows Server 2025 Datacenter and was initially enabled in Azure Stack HCI 22H2.
To use GPUs with clustered VMs, you need a failover cluster running Windows Server 2025 Datacenter edition, with the cluster functional level at the Windows Server 2025 level. Each node in the cluster must have the same setup and the same GPUs to enable this failover cluster functionality. DDA does not currently support live migration, and not every GPU supports DDA; to verify whether your GPU works with DDA, contact your GPU manufacturer. Ensure you follow the GPU manufacturer's setup guidelines, which include installing the manufacturer-specific drivers on each server in the cluster and obtaining manufacturer-specific GPU licensing where applicable. A quick way to check the cluster functional level and locate your GPUs is sketched below.
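As a pre-flight check, something like the following PowerShell (run on each node) can confirm the cluster functional level and help locate the PCIe location paths of the GPUs. Treat it as an illustrative sketch rather than a complete validation: only your GPU manufacturer can confirm DDA support for a specific GPU.

```powershell
# Confirm the cluster is at the expected functional level and list its nodes.
Get-Cluster | Select-Object Name, ClusterFunctionalLevel
Get-ClusterNode | Select-Object Name, State

# List present display-class devices with their PCIe location paths,
# which DDA needs when dismounting and assigning a GPU.
Get-PnpDevice -Class Display -PresentOnly | ForEach-Object {
    [pscustomobject]@{
        Name         = $_.FriendlyName
        InstanceId   = $_.InstanceId
        LocationPath = (Get-PnpDeviceProperty -InstanceId $_.InstanceId `
                        -KeyName DEVPKEY_Device_LocationPaths).Data -join ', '
    }
}
```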
For more information on using GPUs with clustered VMs, please review our documentation below:
Use GPUs with clustered VMs on Hyper-V | Microsoft Learn
Deploy graphics devices by using Discrete Device Assignment | Microsoft Learn