Step-by-step instructions to deploy GPU-enabled VMs in AVD (Azure Virtual Desktop)

In real-world VDI deployments, higher graphics performance makes a lot of sense for the thousands of users who need it, as graphics-heavy workloads are becoming more common in many industries.

Because of the cost involved, the design decision to include GPUs should be driven by the user base's requirements (graphics applications).

In an on-premises design, if GPUs are sold to the customer by the pre-sales team, they are generally a major cost driver in the deal. If they make it into the design, the implementation phase can become a nightmare for the delivery team, because creating and configuring the GPU infrastructure correctly takes considerable effort. I have deployed GPU infrastructure in multiple on-premises Citrix VDI environments and have recently deployed a few GPU-enabled VMs in Azure, so I can share some first-hand guidance on high-GPU VMs.

For users with higher performance needs, graphics card acceleration is required; it significantly improves the user experience and reduces server processor load.

Certain limitations and additional components must be accounted for when deploying this capability on-premises. The requirements are:

  • You have a server platform that can host your chosen hypervisor and NVIDIA GPUs that support NVIDIA vGPU software. For a list of validated server platforms, refer to NVIDIA GRID Certified Servers.
  • One or more NVIDIA GPUs that support NVIDIA vGPU software are installed in your server platform.
  • A supported virtualization software stack is installed according to the instructions in the software vendor’s documentation.
  • A virtual machine (VM) running a supported Windows guest operating system (OS) is configured in your chosen hypervisor.
  • An NVIDIA license server is required, with primary and secondary nodes for high availability.
  • Certain features and performance depend on the endpoint device.

For example, the supported configurations for Citrix XenServer are as follows:

Citrix Hypervisor Support

Driver package: NVIDIA vGPU for XenServer 8.25
Hypervisor: Citrix Hypervisor 8.2
Software product deployment:
  • NVIDIA vGPU
  • GPU pass-through
Hardware supported:
  • A2, A10, A16
  • A30, A30X
  • A40, A100, A100X
  • M6, M10, M60
  • P4, P6, P40, P100, P100 12GB
  • T4
  • V100
  • RTX A5000, RTX A6000
  • RTX 6000, RTX 6000 passive, RTX 8000, RTX 8000 passive
Supported guest OS:
  • Windows 10 21H2
  • Windows Server 2016 1607, 1709
  • Windows Server 2019
  • Windows Server 2022
  • Red Hat Enterprise Linux 8.2, 8.4, 8.5
  • Red Hat Enterprise Linux 7.9
  • Ubuntu 14.04 LTS / 16.04 / 18.04 LTS / 20.04 LTS

Driver package: NVIDIA vGPU for XenServer 7.15
Hypervisor: Citrix XenServer 7.1
Software product deployment:
  • NVIDIA vGPU
  • GPU pass-through
Hardware supported:
  • M6, M10, M60
  • P4, P6, P40, P100, P100 12GB
  • V100
  • RTX 6000, RTX 8000
Supported guest OS:
  • Windows 10 21H2
  • Windows Server 2016 1607, 1709
  • Windows Server 2019
  • Windows Server 2022
  • Red Hat Enterprise Linux 7.9
  • Ubuntu 14.04 LTS / 16.04 / 18.04 LTS

Depending on hardware procurement and logistics, this entire on-premises process can take a very long time.

Compared with an on-premises solution, one key benefit of Azure Virtual Desktop (AVD) is the ability to spin up Windows 10 instances with graphics cards (GPUs) on the fly, whereas it traditionally takes significantly longer to deliver GPU graphics to on-premises VDI solutions because of procurement, logistics, and implementation timelines.

Recently I have tested multiple NV-series VMs with Windows 10 multi-session images. The most recent test was with the promotional Standard_NV6_Promo size (6 vCPUs, 56 GiB memory), using the NVIDIA drivers available through the Azure VM extension.

I have divided the installation and configuration into seven steps, as follows:

  • Select an appropriate GPU optimized Azure virtual machine size
  • Install the NVIDIA Graphics Driver in the VM
  • Restart the VM
  • Verify the driver installation status
  • Configure GPU-accelerated app rendering
  • Enable the WDDM graphics display driver for Remote Desktop Connections
  • Restart the VM

The first step in this process is to deploy a Windows 10 multi-session instance on a VM size with the appropriate GPU.

Step 1: Select an appropriate GPU optimized Azure virtual machine size

Select one of Azure’s NV-series, NVv3-series, NVv4-series, or NCasT4_v3-series VM sizes. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Also take the planned retirement of the original NV-series into account when selecting a size; Microsoft has published details on NV retirement.
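
If you manage Azure with PowerShell, you can list the GPU-capable sizes available in your region before deploying. This is a minimal sketch using the Az module; the region name is only an example, so substitute your own.

  # List GPU-enabled VM sizes (NV, NVv3, NVv4, NCasT4_v3) offered in a region
  Get-AzVMSize -Location "westeurope" |
      Where-Object { $_.Name -match "_NV|T4_v3" } |
      Select-Object Name, NumberOfCores, MemoryInMB |
      Sort-Object Name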

Step 2: Install the NVIDIA Graphics Driver in the VM

Once you have deployed the NV-series Azure instance, you will need to install the graphics drivers before you can use the GPU. In my case I deployed an NV6 VM. The next step is to install supported graphics drivers in your virtual machine.

Let’s see how we can do that. First, open Device Manager from Control Panel and expand Display adapters; you can see that the GPU display adapter is missing.

Next, log in to the Azure portal. I decided to install the drivers using the Azure VM extension, which installs the GRID drivers automatically for these VM sizes.
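
If you prefer to script this instead of using the portal, the same extension can be added from Azure PowerShell. The sketch below is only an outline: the resource group, VM, and location names are placeholders, and while the publisher and extension type shown here are the ones Microsoft documents for the NVIDIA GPU driver extension on Windows, check the current documentation for the latest type handler version.

  # Add the NVIDIA GPU driver extension to an existing NV-series session host (names are placeholders)
  Set-AzVMExtension -ResourceGroupName "rg-avd-gpu" `
      -VMName "avd-nv6-01" `
      -Location "westeurope" `
      -Publisher "Microsoft.HpcCompute" `
      -ExtensionType "NvidiaGpuDriverWindows" `
      -Name "NvidiaGpuDriverWindows" `
      -TypeHandlerVersion "1.3"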

Step 3: Restart the VM
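
If you are scripting the rollout, the restart can also be triggered from Azure PowerShell; the resource group and VM names below are placeholders.

  # Reboot the session host so the newly installed driver is loaded
  Restart-AzVM -ResourceGroupName "rg-avd-gpu" -Name "avd-nv6-01"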

Step 4: Verify the driver installation status

Connect to the desktop of the VM using the Azure Virtual Desktop client. Click the NVIDIA icon in the taskbar and open NVIDIA Setup to check whether the driver has been installed correctly.

Check the detailed information about the NVIDIA hardware.

Go back to Device Manager; the GPU now shows up in the list of display adapters.

To query the GPU device state, run the nvidia-smi command-line utility installed with the driver.

Open a command prompt and change to the C:\Program Files\NVIDIA Corporation\NVSMI directory.

Run nvidia-smi. If the driver is installed, the command prints a summary of the GPU, driver version, memory usage, and utilization. GPU-Util shows 0% unless you are currently running a GPU workload on the VM, and your driver version and GPU details may differ from mine.
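
From an elevated PowerShell prompt on the session host you can run the utility like this. The sketch assumes the driver placed nvidia-smi in the default NVSMI folder; newer driver packages may also add it to the system PATH.

  # Query the GPU state with the utility shipped alongside the driver
  Set-Location "C:\Program Files\NVIDIA Corporation\NVSMI"
  .\nvidia-smi.exe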

Step 5: Configure GPU-accelerated app rendering

By default, apps and desktops running on Windows Server are rendered with the CPU and do not leverage available GPUs for rendering.

This policy setting enables system administrators to change the graphics rendering for all Remote Desktop Services sessions.

If you enable this policy setting, all Remote Desktop Services sessions use the hardware graphics renderer instead of the Microsoft Basic Render Driver as the default adapter.

If you disable this policy setting, all Remote Desktop Services sessions use the Microsoft Basic Render Driver as the default adapter.

If you do not configure this policy setting, Remote Desktop Services sessions on the RD Session Host server use the Microsoft Basic Render Driver as the default adapter. In all other cases, Remote Desktop Services sessions use the hardware graphics renderer by default.

To configure Group Policy on the session host and enable GPU-accelerated rendering:

  1. Connect to the desktop of the VM using an account with local administrator privileges.
  2. Open the Start menu and type “gpedit.msc” to open the Group Policy Editor.
  3. Navigate the tree to Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment.
  4. Select policy Use hardware graphics adapters for all Remote Desktop Services sessions and set this policy to Enabled to enable GPU rendering in the remote session.

If you want to automate this, the registry key details are below.

Registry Hive: HKEY_LOCAL_MACHINE
Registry Path: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Value Name: bEnumerateHWBeforeSW
Value Type: REG_DWORD
Enabled Value: 1
Disabled Value: 0
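
A minimal PowerShell sketch for setting this value is shown below; it assumes an elevated session on the session host and creates the policy key if it does not already exist.

  # Enable "Use hardware graphics adapters for all Remote Desktop Services sessions"
  $path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
  New-Item -Path $path -Force | Out-Null
  New-ItemProperty -Path $path -Name "bEnumerateHWBeforeSW" -PropertyType DWord -Value 1 -Force | Out-Null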

Step 6: Enable the WDDM graphics display driver for Remote Desktop Connections

This policy setting lets you enable WDDM graphics display driver for Remote Desktop Connections.

If you enable or do not configure this policy setting, Remote Desktop Connections will use WDDM graphics display driver.

If you disable this policy setting, Remote Desktop Connections will NOT use WDDM graphics display driver. In this case, the Remote Desktop Connections will use XDDM graphics display driver.

For this change to take effect, you must restart Windows.

If you want to automate this, the registry key details are below.

Registry Hive: HKEY_LOCAL_MACHINE
Registry Path: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Value Name: fEnableWddmDriver
Value Type: REG_DWORD
Enabled Value: 1
Disabled Value: 0
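
The same pattern works here; the sketch below assumes an elevated PowerShell session on the session host.

  # Enable the WDDM graphics display driver for Remote Desktop Connections
  $path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
  New-Item -Path $path -Force | Out-Null
  New-ItemProperty -Path $path -Name "fEnableWddmDriver" -PropertyType DWord -Value 1 -Force | Out-Null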

Step 7: Restart the VM

That’s it, you are done. Now let’s perform some testing.

For the testing I used the Geeks3D FurMark application to measure GPU performance, along with Affinity published as a remote application (RemoteApp) and an asteroid video.

The stress test results averaged 57 frames per second.

Here are the results of another test.

My testing showed that GPUs do provide significant improvements in graphics quality and user experience for AVD users. However, given the cost, the need for a GPU remains subject to the user base's requirements (graphics applications). In my experience, an on-premises graphics-enabled VM is still much cheaper than the equivalent AVD cost today, but GPU VM pricing may come down in the future as demand keeps growing across enterprises and Microsoft brings more GPU-enabled Azure VM sizes to market.

I hope you have enjoyed this post. Thanks for reading, and have a great day ahead.
