Dec 28, 2023
 
Synology DS923+

Today we’re going to cover a powerful little NAS being used with VMware: the Synology DS923+, its VMware vSphere use case, and its configuration.

This little (but powerful) NAS is perfect for your VMware vSphere homelab and numerous other scenarios and uses. Let’s go over this specific use case, and how to best configure it with your VMware environment so you can fully take advantage of it.

Keep in mind that this post reviews only one of many potential uses, specifically with VMware vSphere (and ESXi). I’m hoping with time to review some other uses for this NAS.

Synology DS923+ VMware vSphere Use Case

The Synology DS923+ is a tiny yet powerful 4-Bay NAS, offering 2 x 1Gb NICs built-in, with the ability to add a user-installable 10Gb NIC module. You can also add 2 x NVMe drives for NVMe SSD cache, giving you the perfect iSCSI target, in our case particularly for VMware vSphere and ESXi.

Synology DS923+ w/ 10Gb NIC, Disks, and 2 x NVMe SSD for Cache

The highlights of this specific unit and configuration:

  • NVMe SSD Cache – Provides high-speed storage (also good at random I/O)
  • Redundant NICs – 1 x 10Gig (add-on) and 2 x 1Gig (built-in)

Looking at the networking capabilities, we have 3 NICs when the optional 10Gb NIC is installed. This gives us a number of different potential configurations, but for VMware vSphere, we’ll map out the following:

  • NIC #1 – 10Gig: iSCSI Primary (and SMB if using VLAN interfaces)
  • NIC #2 – 1Gig: Management (and SMB w/o VLAN interfaces)
  • NIC #3 – 1Gig: iSCSI Fallback

Note: You could add VLAN interfaces to your Synology device on the 10Gig interface, and use VLANs to provide SMB and other services over the 10Gig link as well. Please note that adding VLAN interfaces is unsupported and may cause issues (including when performing upgrades).

What’s particularly nice about this NAS is that for the price point you’re able to provide 10Gb iSCSI to your ESXi hosts, while also having a fallback connection for redundancy. While the fallback NIC is limited to 1Gig, which is substantially slower, it allows your workload to continue to run, and most importantly it avoids the corruption or loss of data that could result from an iSCSI all-paths-down situation.

Synology DS923+ iSCSI Configuration for VMware vSphere

So now that we’ve established the use case for the Synology DS923+, let’s go over how to best configure it for your VMware vSphere environment, and get it connected to your ESXi hosts.

HPE ProLiant Server running VMware ESXi with Synology DS923+

There are a few things to note about the design of the configuration:

  • iSCSI should be using Jumbo Frames (a shell sketch for configuring and verifying this follows this list)
    • Both the ESXi vmk iSCSI adapters and the iSCSI NIC on the Synology NAS
    • All iSCSI networking (switches) should have jumbo frames enabled
  • iSCSI Multi-pathing policy will be VMW_PSP_FIXED (Fixed Pathing)
    • We will NOT be using Round-Robin MPIO (VMW_PSP_RR)
    • Fixed pathing will be used with the 10Gig link being preferred, and 1Gig link acting as fallback
  • The Synology NAS iSCSI target should only be configured to listen and advertise on the iSCSI NICs (primary active and fallback)
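
If you want to configure and verify jumbo frames from the ESXi shell, here’s a minimal sketch. The vSwitch name (vSwitch1), vmk adapter (vmk1), and Synology iSCSI IP (10.0.0.10) are assumptions for illustration, so substitute your own values:

# Set MTU 9000 on the storage vSwitch and the iSCSI vmk adapter
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Test end-to-end with don't-fragment set (8972 = 9000 minus IP/ICMP headers)
vmkping -I vmk1 -d -s 8972 10.0.0.10

If the vmkping fails with the don’t-fragment flag set, jumbo frames aren’t enabled somewhere along the path (vmk adapter, vSwitch, physical switch, or the NAS interface).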

Configure iSCSI on the Synology DS923+

To configure iSCSI on your Synology:

  1. Perform Basic Configuration
    • Configure NAS
    • Configure Static IP for Management on 1Gb NIC Interface
  2. Enable the 10Gb NIC Interface (For use with iSCSI Primary)
    • Configure a Static IP
    • Configure Jumbo Frames
  3. Enable the 1Gb NIC Interface (For use as fallback iSCSI)
    • Configure a Static IP
    • Configure Jumbo Frames
  4. Use the Synology “SAN Manager” to Configure the iSCSI Target
    • Create an iSCSI LUN and Target
      • Configure a LUN with your preferences (Thin provisioned, etc)
      • Configure the iSCSI Target
        • Enable “Allow multiple sessions from one or more iSCSI initiators” to allow multiple initiators to access (both from single hosts and/or multiple hosts)
          Allow multiple sessions from one or more iSCSI initiators
        • Configure “Network Binding” to the 10Gig Primary link and 1Gb fallback NIC. We do not want it to advertise on the management interface.
          Synology iSCSI Target Network Binding
      • Configure “Host” initiator settings
        • This is where you will add your iSCSI host initiator IQNs and provide “Read/Write” access (a shell sketch for finding your host IQN follows this list)
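
If you need to find an ESXi host’s initiator IQN for the step above, it can be pulled from the ESXi shell (as well as from the vSphere Client, covered in the next section). A quick sketch, assuming the software iSCSI adapter is vmhba64 (yours may differ):

# List iSCSI adapters, then show details (including the initiator IQN)
esxcli iscsi adapter list
esxcli iscsi adapter get --adapter=vmhba64 | grep -i name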

Overall, this is a basic iSCSI target configuration, with the only exception being that we are only using select interfaces for iSCSI connections. While we can use both the 10Gb and 1Gb connections, we’ll use the host-side settings to prefer the primary and keep the secondary as a fallback.

Note that the networks (and IPs) used above for iSCSI are on a network dedicated to iSCSI. We do not want to use our data networks for storage related traffic. They are separated not only for security, but also because they are using different frame/MTU sizes.

Configure ESXi to connect to the Synology DS923+

To configure the Synology NAS iSCSI Target on your ESXi hosts:

  1. Configure your ESXi host networking on your iSCSI Network
    • Configure Networking on your hosts
      • Configure your storage vSwitch and create a portgroup for each physical NIC
      • Configure a vmk adapter with IP for each portgroup you have
        iSCSI Port Groups for iSCSI VMK adapters
      • Configure each portgroup to only use one physical NIC as active, the rest unused
        • Each physical NIC should be used by only one portgroup
  2. Configure your iSCSI Initiator
    • If not already enabled, “Add Software Adapter” under “Storage Adapters” to add the iSCSI Software Adapter initiator.
    • Note the “iSCSI Name”. This is your initiator IQN, and needs to be added to the Synology iSCSI Target “Host” settings to provide access and add permissions (last item listed in the previous section configuring the Synology NAS).
    • Add your Synology’s Primary iSCSI interface and Secondary Fallback iSCSI interface IP addresses to your ESXi host’s “Dynamic Discovery” list. Do not use “Static Discovery”; it will auto-populate on its own.
    • If you’re using the same IP subnet for all your iSCSI vmk adapters, enable iSCSI Port Binding.
      • Under “Network Port Binding”, click “Add”, and select all your iSCSI vmk adapters, which should auto-bind to the physical NIC owned by the port group they are using. They will not show as active until you have completed all steps in this guide.
        iSCSI Port Binding
  3. Configure your LUN
    • Rescan your storage adapters
      • If you already have a VMFS volume, it should auto-mount and be added to the host.
    • If you haven’t already, create a new datastore by right clicking on the host, “Storage”, and “New Datastore”. Follow the wizard to create a new VMFS volume on your Synology iSCSI target.
  4. Configure proper fallback for the 10Gb and 1Gb link
    • On your ESXi hosts, under “Configure”, navigate to the “Storage Devices” tab, and identify all your “SYNOLOGY iSCSI Disk” devices.
    • For each “SYNOLOGY iSCSI Disk” device, under “Properties”, go to “Multipathing Policies”, “ACTIONS”, “Edit Multipathing”, and set it to “Fixed (VMware)”, while also selecting the 10Gb path under “Select the preferred path for this policy” (an esxcli equivalent is sketched after this list).
  5. Repeat steps for each ESXi host.
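
For reference, the discovery, port binding, and multipathing steps above can also be performed from the ESXi shell. Below is a hedged sketch, assuming the software iSCSI adapter is vmhba64 and the Synology iSCSI IPs are 10.0.0.10 and 10.0.0.11; the device and path identifiers are placeholders you’d replace with your own values (see “esxcli storage nmp device list”):

# Add the Synology primary and fallback iSCSI IPs to dynamic discovery
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.10
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.11

# Bind each iSCSI vmk adapter to the software iSCSI initiator (port binding)
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Set Fixed multipathing and prefer the path over the 10Gb link
esxcli storage nmp device set --device=naa.PLACEHOLDER --psp=VMW_PSP_FIXED
esxcli storage nmp psp fixed deviceconfig set --device=naa.PLACEHOLDER --path=vmhba64:C0:T0:L1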

As always, I recommend doing a “Rescan Storage” after any storage related changes. You may need to restart the host after enabling iSCSI Port binding.

Conclusion

You have now configured your VMware ESXi host(s) to connect to your Synology DS923+ with multiple paths for redundancy while favoring the faster 10Gb connection.

Dec 08, 2023
 
Installing the vSphere vCenter Root Certificate

Today we’ll go over how to install the vSphere vCenter Root Certificate on your client system.

Certificates are designed to verify the identity of the systems, software, and/or resources we are accessing. If we aren’t able to verify and authenticate what we are accessing, how do we know that the resource we are sending information to is really who they claim to be?

Installing the vSphere vCenter Root Certificate on your client system allows you to verify the identity of your VMware vCenter server, VMware ESXi hosts, and other resources, all while getting rid of those pesky certificate errors.

Certificate warning when connecting to vCenter vCSA

I see too many VMware vSphere administrators simply dismiss the certificate warnings, when instead they (and you) should be installing the Root CA on your system.

Why install the vCenter Server Root CA

Installing the vCenter Server’s Root CA allows your computer to trust, verify, and validate any certificates issued by the vSphere Root Certification Authority running on your vCenter appliance (vCSA). Essentially, this translates to the following:

  • Your system will trust the Root CA and all certificates issued by the Root CA
    • This includes: VMware vCenter, vCSA VAMI, and ESXi hosts
  • When connecting to your vCenter server or ESXi hosts, you will not be presented with certificate issues
  • You will no longer have vCenter OVF Import and Datastore File Access Issues
    • This includes errors when deploying OVF templates
    • This includes errors when uploading files directly to a datastore
File Upload in vCenter to ESXi host operation failed

In addition to all of the above, you will start to take advantage of certificate-based validation. Your system will verify and validate that, when you connect to your vCenter or ESXi hosts, you are indeed connecting to the intended system. When things are working, you won’t be prompted with certificate errors, whereas if something is wrong, you will be notified of a possible security event.
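
If you’d like to inspect the certificate chain your vCenter presents before trusting it, one quick way is OpenSSL (a sketch; “vcenter.lab.local” is a placeholder for your vCenter FQDN):

# Show the certificate chain presented by vCenter; the issuing CA at the top of
# the chain should be the vSphere Root CA you'll install below
openssl s_client -connect vcenter.lab.local:443 -showcerts </dev/null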

How to install the vCenter Root CA

To install the vCenter Root CA on your system, perform the following:

  1. Navigate to your VMware vCenter “Getting Started” page.
    • This is the IP or FQDN of your vCenter server without the “/ui” after the address. We only want to access the base address.
    • Do not click on “Launch vSphere Client”.
  2. Right click on “Download trusted root CA certificates”, and click on “Save link as”.
    Link to download vCenter trusted root CA Certificates
  3. Save this ZIP file to your computer, and extract the archive file
    • You must extract the ZIP file; do not just open it by double-clicking on it.
  4. Open and navigate through the extracted folders (certs/win in my case) and locate the certificates.
    VMware vCenter Root Certificates
  5. For each file that has the type of “Security Certificate”, right click on it and choose “Install Certificate”.
  6. Change “Store Location” to “Local Machine”
    • This makes your system trust the certificate, not just your user profile
  7. Choose “Place all certificates in the following store”, click Browse, and select “Trusted Root Certification Authorities”.
    Screenshot to Place in Trusted Root Certification Authorities
  8. Complete the wizard. If successful, you’ll see: “The import was successful.”
  9. Repeat this for each file in that folder with the type of “Security Certificate”.

Alternatively, you can use a GPO with Active Directory or other workstation management techniques to deploy the Root CAs to multiple systems or all the systems in your domain.
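
If you’d rather script the import (for example, as part of a deployment task pushed out alongside a GPO), Windows’ built-in certutil can add the certificates to the Local Machine Trusted Root store. A sketch, assuming the extracted .crt files sit in C:\certs\win; run from an elevated command prompt:

REM Import each extracted root certificate into the Local Machine Trusted Root store
for %f in (C:\certs\win\*.crt) do certutil -addstore -f Root "%f"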

Dec 01, 2023
 
Microsoft Teams Phone running on VMware Horizon

Every organization is looking for ways to equip their mobile workforce, whether remote employees, travelling sales staff/representatives, or just providing more ways employees can work efficiently. Today I want to talk about Microsoft Teams Phone and VDI – a match made in the Cloud.

I’m one of those people who travel frequently and rely not only on having a reliable working environment, but also on having access to telecommunications.

Running Teams Phone on VDI is a clear win in these regards!

VDI and VoIP, a common struggle

As most of you know, VDI and VoIP applications can be a major struggle, with 3rd party applications not providing audio optimizations for environments that use VDI. This commonly results in sluggish, jolty, delayed, and/or poor audio quality, in addition to audio processing occurring in your VDI environment, which consumes resources on your VDI cluster.

For years, the most common applications, including Microsoft Teams, Zoom, and even Skype for Business, have provided VDI optimizations to allow high quality (optimized) audio processing, resulting in almost perfect video/audio telecommunications via VDI sessions when implemented properly.

Microsoft Teams Phone running on VMware Horizon
Teams Phone running on a VMware Horizon VDI Session

I was tired of using a 3rd party VoIP app, and wanted a more seamless experience, so I migrated over to Teams Phone for my organization, and I’m using it on VDI with VMware Horizon.

Microsoft Teams Phone

While I’ve heard a lot about Teams Phone, Microsoft’s Phone System, and its PSTN capabilities, I’ve only ever seen it deployed once in a client’s production environment. That put it on my list of curiosities to investigate a few years back.

This past week I decided to migrate over to Microsoft Teams Phone for my organization’s telephony and PSTN connectivity requirements. Not only did this eliminate the VoIP app on my desktops and laptops, but it also removed the requirement for a problematic VoIP client on my smartphone.

Teams Phone Benefits

  • Single app for team collaboration and VoIP
  • Single phone number (eliminates multiple extensions for multiple computers and devices)
  • Microsoft Phone System provides PBX capabilities
  • Cloud Based – No on-premises infrastructure required (except a device & internet for the client app)

I regularly use Microsoft Teams on all my desktops, laptops, and VDI sessions, along with my mobile phone, so built-in VoIP capabilities in an already fairly reliable app were a win-win!

I’ll go into further detail on Teams Phone in a future blog post.

Teams Phone on VDI

Microsoft Teams already has VDI optimizations for video and audio in the original client and the new client. This provides an amazing high quality experience for users, while also offloading audio and video processing from your VDI environment to Microsoft Teams (handled by the endpoints and Microsoft’s servers).

When implementing Teams Phone on VDI, you take advantage of these capabilities providing an optimized and enhanced audio session for voice calls to the PSTN network.

This means you can have Teams running on a number of devices including your desktop, laptop, smartphone, VDI session, and have a single PSTN phone number that you can make and receive calls from, seamlessly.

Pretty cool, hey?

The Final Result

In my example, the final result will:

  • Reduce my corporate telephony costs by 50%
  • Eliminate the requirement for an on-prem PBX system
  • Remove the need for a 3rd party VoIP app on my workstations and mobile phone
  • Provide a higher quality end-user experience
  • Utilize existing VDI audio optimizations for a better experience

Oct 07, 2023
 
Installing VDI optimized New Teams client application on Windows VDI

In this guide we will deploy and install the new Microsoft Teams for VDI (Virtual Desktop Infrastructure) client, and enable Microsoft Teams Media Optimization on VMware Horizon.

This guide replaces and supersedes my old guide, “Microsoft (Classic) Teams VDI Optimization for VMware Horizon”, which covered the old Classic Teams client and its VDI optimizations. The new Microsoft Teams app requires the same special considerations on VDI, along with special installation instructions, to function with VMware Horizon and other VDI environments.

You can run the old and new Teams applications side by side in your environment as you transition users.

New Teams client with toggle for old version running on VMware Horizon VDI with optimization
Switch between New Teams and old Teams on VDI

Let’s cover what the new Microsoft Teams app is about, and how to install it in your VDI deployment.

Please note: VDI (Virtual Desktop Infrastructure) support for the new Teams client went GA (Generally Available) on December 05, 2023. Additionally, Classic Teams will go end of support on June 30, 2024.

The New Microsoft Teams App

On October 05, 2023, Microsoft announced the availability of the new Microsoft Teams application for Windows and Mac computers. This application is a complete rebuild of the old client, and provides numerous enhancements in performance, resource utilization, and memory management.

New Microsoft Teams app VDI optimized with Toggle for new/old version

Ultimately, it’s way faster, and consumes way less memory. And fortunately for us, it supports media optimizations for VDI environments.

My close friend and colleague, mobile jon, did a fantastic in-depth Deep Dive into the New Microsoft Teams and its inner workings that I highly recommend reading.

Interestingly enough, it uses the same media optimization channels for VDI as the old client used, so enablement and/or migrating from the old version is very simple if you’re running VMware Horizon, Citrix, AVD, and/or Windows 365.

Install New Microsoft Teams for VDI

While installing the new Teams is fairly simple for non-VDI environments (simply enable the new version in the Teams Admin portal, or use your application manager to deploy the installer), a special method is required to deploy it on your VDI images, whether persistent or non-persistent.

Do not include and bundle the Microsoft Teams install with your Microsoft 365 (Office 365) deployment as these need to be installed separately.

Please Note: If you have deployed non-persistent VDI (Instant Clones), you’ll want to make sure you disable auto-updates, as these should be performed manually on the base image. For persistent VDI, you will want auto-updates enabled. See below for more information on configuring auto-updates.

You will also need to enable Microsoft Teams Media Optimization for the VDI platform you are using (in my case and example, VMware Horizon).

Considerations for New Teams on VDI

  • Auto-updates can be disabled via a registry key
  • The New Teams client app uses the same VDI media optimization channels as the old Teams (for VMware Horizon, Citrix, AVD, and W365)
    • If you have already enabled Media Optimization for Teams on VDI for the old version, you can simply install the client using the special bulk installer for all users as shown below, as the new client uses the existing media optimizations.
  • While it is recommended to uninstall the old client and install the new client, you can choose to run both versions side by side together, providing an option to your users as to which version they would like to use.

Enable Media Optimization for Microsoft Teams on VDI

If you haven’t previously for the old client, you’ll need to enable the Teams Media Optimizations for VDI for your VDI platform.

For VMware Horizon, we’ll create a GPO and set “Enable HTML5 Features” and “Enable Media Optimization for Microsoft Teams” to “Enabled”. If you have already done this for the old Teams app, you can skip this.

Please see below for the GPO setting locations:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> Enable VMware HTML5 Features
Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> VMware WebRTC Redirection Features -> Enable Media Optimization for Microsoft Teams

When installing the VMware Horizon client on Windows computers, you’ll need to make sure you check and enable the “Media Optimization for Microsoft Teams” option on the installer if prompted. Your install may automatically include Teams Optimization and not prompt.

Screenshot of VMware View Client Install with Microsoft Teams Optimization
VMware Horizon Client Install with Media Optimization for Microsoft Teams

If you are using a thin client or zero client, you’ll need to make sure you have the required firmware version installed, and any applicable vendor plugins installed and/or configurables enabled.

Install New Microsoft Teams client on VDI

We will now install the new Teams app on both non-persistent images and persistent VDI VM guests. This method performs a live download and provisions as Administrator. If run un-elevated, an elevation prompt will appear:

  1. Download the new Microsoft Teams Bootstrapper: https://go.microsoft.com/fwlink/?linkid=2243204&clcid=0x409
  2. On your persistent or non-persistent VM, run the following command as an administrator: teamsbootstrapper.exe -p
  3. Restart the VM (and/or seal your image for deployment)
Install the new Teams for VDI (Virtual Desktop Infrastructure) with teamsbootstrapper.exe

See below for an example of the deployment:

C:\Users\Administrator.DOMAIN\Downloads>teamsbootstrapper.exe -p
{
  "success": true
}

You’ll note that running the command returns success equals true, and Teams is now installed for all users on this machine.

Install New Microsoft Teams client on VDI (Offline Installer using MSIX package)

Additionally, you can perform an offline installation by also downloading the MSIX package and running the following command:

teamsbootstrapper.exe -p -o "C:\LOCATION\MSTeams-x64.msix"
New Teams admin provisioned offline install for VDI

For the offline installation, you’ll need to download the appropriate MSIX file in addition to the bootstrapper above.

Disable New Microsoft Teams Client Auto Updates

For non-persistent environments, you’ll want to disable the auto update feature and install updates manually on your base image.

To disable auto-updates for the new Teams client, configure the registry key below on your base image:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams

Create a DWORD value called “disableAutoUpdate”, and set its value to “1”.
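
If you’re scripting your golden image build, the same registry change can be applied from an elevated command prompt (a sketch of the equivalent command):

REM Disable New Teams auto-updates on the base image
reg add "HKLM\SOFTWARE\Microsoft\Teams" /v disableAutoUpdate /t REG_DWORD /d 1 /f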

New Teams app disappears after Optimization with OSOT

If you are using the VMware Operating System Optimization Tool (OSOT), you may notice that after installing New Teams in your base or golden image, it disappears when you publish and push the image to your desktop pool.

The New Teams application is a Windows Store app, and organizations commonly choose to remove all Windows Store apps inside the golden image using the OSOT tool when optimizing the image. Doing this will remove New Teams from your image.

To work around this issue, you’ll need to choose “Keep all Windows Store Applications” in the OSOT common options, which won’t remove Teams.

Using New Microsoft Teams with FSLogix Profile Containers

When using the new Teams client with FSLogix Profile Containers on non-persistent VDI, you must upgrade to FSLogix version 2.9.8716.30241 to support the new Teams client.

Confirm New Microsoft Teams VDI Optimization is working

To confirm that VDI Optimization is working on New Teams, open New Teams, click the “…” in the top right next to your user icon, click “Settings”, then click on “About Teams” on the far bottom of the Settings menu.

New Teams showing “VMware Media Optimized”

You’ll notice “VMware Media Optimized”, which indicates VDI optimization for VMware Horizon is functioning. Other VDI platforms will show their own equivalent text.

Uninstall New Microsoft Teams on VDI

The Teams bootstrapper utility can also remove Teams for all users on the machine by using the “-x” flag. Please see below for all the options for “teamsbootstrapper.exe”:

C:\Users\Administrator.DOMAIN\Downloads>teamsbootstrapper.exe --help
Provisioning program for Microsoft Teams.

Usage: teamsbootstrapper.exe [OPTIONS]

Options:
  -p, --provision-admin    Provision Teams for all users on this machine.
  -x, --deprovision-admin  Remove Teams for all users on this machine.
  -h, --help               Print help

Install New Microsoft Teams on VMware App Volumes / Citrix App Layering

As of April 9th, 2024, you can now deploy the New Teams (Teams 2.0) via VMware App Volumes, using the workflow provided at Capturing new teams as a package in App Volumes 4.x (97141) (vmware.com).

Previously, the New Teams bootstrapper appeared to evade (and not work with) app packaging and app-attach technologies such as VMware App Volumes and Citrix App Layering; however, following the instructions in KB97141 will work.

The New Teams bootstrapper downloads and installs an MSIX app package to the computer running the bootstrapper.

Conclusion

It’s great news that we finally have a better performing Microsoft Teams client that supports VDI optimizations. With new Teams support for VDI reaching GA, and with the extensive testing I’ve performed in my own environment, I’d highly recommend switching over at your convenience!

Jul 28, 2023
 
NVIDIA GPU Manager

In May of 2023, NVIDIA released the NVIDIA GPU Manager for VMware vCenter. This appliance allows you to manage your NVIDIA vGPU Drivers for your VMware vSphere environment.

Since the release, I’ve had a chance to deploy it, test it, and use it, and want to share my findings.

In this post, I’ll cover the following (click to skip ahead):

  1. What is the NVIDIA GPU Manager for VMware vCenter
  2. How to deploy and configure the NVIDIA GPU Manager for VMware vCenter
    • Deployment of OVA
    • Configuration of Appliance
  3. Using the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts

Let’s get to it!

What is the NVIDIA GPU Manager for VMware vCenter

The NVIDIA GPU Manager is an (OVA) appliance that you can deploy in your VMware vSphere infrastructure (using vCenter and ESXi) to act as a driver (and update) repository for vLCM (vSphere Lifecycle Manager).

In addition to acting as a repo for vLCM, it also installs a plugin on your vCenter that provides a GUI for browsing, selecting, and downloading NVIDIA vGPU host drivers to the local repo running on the appliance. These updates can then be deployed using LCM to your hosts.

In short, this allows you to easily select, download, and deploy specific NVIDIA vGPU drivers to your ESXi hosts using vLCM baselines or images, simplifying the entire process.

Supported vSphere Versions

The NVIDIA GPU Manager supports the following vSphere releases (vCenter and ESXi):

  • VMware vSphere 8.0 (and later)
  • VMware vSphere 7.0U2 (and later)

The NVIDIA GPU Manager supports vGPU driver releases 15.1 and later, including the new vGPU 16 release version.

How to deploy and configure the NVIDIA GPU Manager for VMware vCenter

To deploy the NVIDIA GPU Manager Appliance, we have to download an OVA (from NVIDIA’s website), then deploy and configure it.

See below for the step by step instructions:

Download the NVIDIA GPU Manager

  1. Log on to the NVIDIA Application Hub, and navigate to the “NVIDIA Licensing Portal” (https://nvid.nvidia.com).
  2. Navigate to “Software Downloads” and select “Non-Driver Downloads”
  3. Change the filter to “VMware vCenter” (there is both VMware vSphere and VMware vCenter; pay attention to select the correct one).
  4. To the right of “NVIDIA GPU Manager Plug-in 1.0.0 for VMware vCenter”, click “Download” (see below screenshot).
Screenshot of download link for NVIDIA GPU Manager for VMware vCenter
NVIDIA GPU Manager Download Page

After downloading the package and extracting, you should be left with the OVA, along with Release Notes, and the User Guide. I highly recommend reviewing the documentation at your leisure.

Deploy and Configure the NVIDIA GPU Manager

We will now deploy the NVIDIA GPU Manager OVA appliance:

  1. Deploy the OVA to either a cluster with DRS, or a specific ESXi host. In vCenter, either right click a cluster or host, and select “Deploy OVF Template”. Choose the GPU Manager OVA file, and continue with the wizard.
    NVIDIA GPU Manager OVA Deploy
  2. Configure Networking for the Appliance
    • You’ll need to assign an IP Address, and relevant networking information.
    • I always recommend creating DNS (forward and reverse entries) for the IP.
      NVIDIA GPU Manager OVA Network Configuration
  3. Finally, power on the appliance.

We must now create a role and service account that the GPU Manager will use to connect to the vCenter server.

While the vCenter Administrator account will work, I highly recommend creating a service account specifically for the GPU Manager that only has the required permissions that are necessary for it to function.

  1. Log on to your vCenter Server
  2. Click on the hamburger menu item on the top left, and open “Administration”.
  3. Under “Access Control”, select “Roles”.
    vCenter Roles
  4. Select New to create a new role. We can call it “NVIDIA Update Services”.
  5. Assign the following permissions:
    • Extension Privileges
      • Register Extension
      • Unregister Extension
      • Update Extension
    • VMware vSphere Lifecycle Manager Configuration Privileges
      • Configure Service
    • VMware vSphere Lifecycle Manager Settings Privileges
      • Read
    • Certificate Management Privileges
      • Create/Delete (Admins priv)
      • Create/Delete (below Admins priv)
    • PLEASE NOTE: The above permissions were provided in the documentation, but did not work for me (they resulted in an insufficient privileges error). To resolve this, I chose “Select All” for “VMware vSphere Lifecycle Manager”, which resolved the issue.
  6. Save the Role
  7. On the left hand side, navigate to “Users and Groups” under “Single Sign On”
  8. Change the domain to your local vSphere SSO domain (vsphere.local by default)
  9. Create a new user account for the NVIDIA appliance, as an example you could use “nvidia-svc”, and choose a secure password.
  10. Navigate to “Global Permissions” on the left hand side, and click “Add” to create a new permission.
  11. Set the domain, choose the new “nvidia-svc” service account we created, set the role to “NVIDIA Update Services”, and check “Propagate to Children”.
  12. You have now configured the service account.

Now, we will perform the initial configuration of the appliance. To configure the application, we must do the following:

  1. Access the appliance using your browser and the IP (or FQDN) you configured above.
    GPU Manager Account Creation
  2. Create a new password for the administrative “vcp_admin” account. This account will be used to manage the appliance.
    • A secret key will be generated that will allow the password to be reset, if required. Save this key somewhere safe.
  3. We must now register the appliance (and plugin) with our vCenter Server. Click on “REGISTER”.
    NVIDIA GPU Manager Register
  4. Enter the FQDN or IP of your vCenter server, the NVIDIA Service account (“nvidia-svc” from example), and password.
  5. Once the GPU Manager is registered with your vCenter server, the remainder of the configuration will be completed from the vCenter GUI.
    • The registration process will install the GPU Manager Plugin in to VMware vCenter
    • The registration process will also configure a repository in LCM (this repo is being hosted on the GPU manager appliance).

We must now configure an API key on the NVIDIA Licensing portal, to allow your GPU Manager to download updates on your behalf.

  1. Open your browser and navigate to https://nvid.nvidia.com. Then select “NVIDIA LICENSING PORTAL”. Login using your credentials.
  2. On the left hand side, select “API Keys”.
  3. On the upper right hand, select “CREATE API KEY”.
  4. Give the key a name, and for access type choose “Software Downloads”. I would recommend extending the key validity period, or disabling key expiration.
    NVIDIA Download API Create Key
  5. The key should now be created.
  6. Click on “view api key”, and record the key. You’ll need to enter it later into the vCenter GPU Manager plugin.

And now we can finally log on to the vCenter interface, and perform the final configuration for the appliance.

  1. Log on to the vCenter client, click on the hamburger menu, and select “NVIDIA GPU Manager”.
  2. Enter the API key you created above in to the “NVIDIA Licensing Portal API Key” field, and select “Apply”.
  3. The appliance should now be fully configured and activated.
    GPU Manager Activated API Key
  4. Configuration is complete.

We have now fully deployed and completed the base configuration for the NVIDIA GPU Manager.

Using the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts

In this section, I’ll provide an overview of how to use the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts. But first, let’s go over the workflow…

The workflow is a simple one:

  1. Using the vCenter client plugin, you choose the drivers you want to deploy. These get downloaded to the repo on the GPU Manager appliance, and are made available to Lifecycle Manager.
  2. You then use Lifecycle Manager to deploy the vGPU Host Drivers to the applicable hosts, using baselines or images.

As you can see, there’s not much to it, despite all the configuration we had to do above. While it is very simple, it simplifies management quite a bit, especially if you’re using images with Lifecycle Manager.

To choose and download the drivers, load up the plugin, use the filters to filter the list, and select your driver to download.

GPU Manager downloading vGPU Driver
NVIDIA GPU Manager downloading vGPU Driver

As you can see in the example, I chose to download the vGPU 15.3 host driver. Once completed, it’ll be made available in the repo being hosted on the appliance.

Once LCM has had a chance to sync with the updated repo, the driver is made available to be deployed. You can then deploy it using baselines or host images.

LCM Image Update with NVIDIA vGPU Driver from NVIDIA GPU Manager

In the example above, I added the vGPU 16 (535.54.06) host driver to my clusters update image, which I will then remediate and deploy to all the hosts in that cluster. The vGPU driver was made available from the download using GPU Manager.
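
After remediating, you can confirm the new driver directly on a host from the ESXi shell. A quick sketch (the VIB naming varies by vGPU release, so the grep pattern here is an assumption):

# List installed NVIDIA VIBs to confirm the vGPU host driver version
esxcli software vib list | grep -i nvd

# Confirm the driver is loaded and sees the host's GPUs
nvidia-smi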