Jan 11 2024
 
ESXi Host Decommission

While most of us frequently deploy new ESXi hosts, a task that is not often discussed is how to properly decommission a VMware ESXi host.

Some might be surprised to learn that you cannot simply power down and remove the host from the vCenter server, as there are a number of steps that must be taken beforehand to ensure a successful decommission. Properly decommissioning the ESXi host avoids orphaned objects in the vCenter database, which can sometimes cause problems in the future.

Today we’ll go over how to properly decommission a VMware ESXi host in an environment with VMware vCenter Server.

The Process – How to decommission ESXi

We will detail the process and considerations to decommission an ESXi host. We will assume that you have already migrated all your VMs, templates, and files from the host, and that it contains no data that requires backup or migration.

VMware ESXi Host Decommission Procedures

Process in Short:

  1. Enter Maintenance Mode
  2. Remove Host from vDS Switches
  3. Unmount and Detach iSCSI LUNs
  4. Move host from cluster to datacenter as standalone host
  5. Remove Host from Inventory

Please read further for extended procedures and more information.

Enter Maintenance Mode

We enter maintenance mode to confirm that no VMs are running on the host. You can simply right-click the host and enter maintenance mode.
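
If you prefer to script this step, the same action is available through the vSphere API. Below is a minimal pyVmomi sketch (my tooling choice, not a requirement from this guide); the vCenter address, credentials, and host name are placeholders you would substitute for your own.

# Minimal sketch: put an ESXi host into maintenance mode with pyVmomi.
# Assumes: pip install pyvmomi; hostnames/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; use proper certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    view.Destroy()

    if not host.runtime.inMaintenanceMode:
        # timeout=0 means no timeout; DRS must be able to evacuate any running VMs
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    print("In maintenance mode:", host.runtime.inMaintenanceMode)
finally:
    Disconnect(si)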

Remove Host from vDS Switches

You must gracefully remove the host from any vDS switches (VMware Distributed Switches) before removing the host from vCenter Server.

You can create a standard vSwitch and migrate the VMkernel (vmk) adapters from the vDS switch to the standard vSwitch to maintain communication with the vCenter server and other networks.

Please Note: If you are using vDS switches for iSCSI connectivity, you must plan for this beforehand, either by unmounting/detaching the iSCSI LUNs on the vDS before removing the switch, or by gracefully migrating the vmk adapters to a standard vSwitch, using MPIO to avoid losing connectivity during the process.
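
For those who like to automate the fallback networking, here is a rough pyVmomi sketch of the standard vSwitch approach described above. It assumes a spare uplink “vmnic1”, a management VMkernel adapter “vmk0”, and placeholder vCenter/host names, and should be tested in a lab first; the vSphere Client migration wizard remains the more common way to do this.

# Rough sketch: create a temporary standard vSwitch and re-point vmk0 at it.
# All names below are placeholders; vmnic1 must not already be a vDS uplink.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
net = host.configManager.networkSystem

# 1. Create a standard vSwitch backed by the spare uplink.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
net.AddVirtualSwitch(vswitchName="vSwitch-Decom", spec=vss_spec)

# 2. Add a standard portgroup for the management VMkernel adapter.
pg_spec = vim.host.PortGroup.Specification(
    name="Mgmt-Decom", vlanId=0, vswitchName="vSwitch-Decom",
    policy=vim.host.NetworkPolicy())
net.AddPortGroup(portgrp=pg_spec)

# 3. Re-point vmk0 at the standard portgroup (detaching it from the vDS).
vnic = next(v for v in host.config.network.vnic if v.device == "vmk0")
spec = vnic.spec
spec.portgroup = "Mgmt-Decom"
spec.distributedVirtualPort = None  # clear the vDS port backing
net.UpdateVirtualNic("vmk0", spec)

Disconnect(si)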

Unmount and Detach iSCSI LUNs

You can now proceed to unmount and detach iSCSI LUNs from the selected system:

  1. Unmount the iSCSI LUN(s) from the host
  2. Detach the iSCSI LUN(s) from the host

You will unmount the LUNs only on the host being decommissioned, and then detach them (again, only on the host you are decommissioning).
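
If you want to script the unmount and detach for a single host, here is a minimal pyVmomi sketch; the datastore name “Synology-DS01” and the vCenter/host names are placeholders for illustration.

# Minimal sketch: unmount a VMFS datastore and detach its backing device on one host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
ss = host.configManager.storageSystem

# Locate the VMFS datastore on this host and the device(s) backing it.
ds = next(d for d in host.datastore if d.name == "Synology-DS01")
vmfs = ds.info.vmfs
device_names = [extent.diskName for extent in vmfs.extent]

# 1. Unmount the VMFS volume (only on this host).
ss.UnmountVmfsVolume(vmfsUuid=vmfs.uuid)

# 2. Detach the backing SCSI device(s) (again, only on this host).
for lun in ss.storageDeviceInfo.scsiLun:
    if lun.canonicalName in device_names:
        ss.DetachScsiLun(lunUuid=lun.uuid)

Disconnect(si)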

Move Host from Cluster to Datacenter as Standalone Host

While this may not be required, I usually do this to let vSphere Cluster Services (HA/DRS) adjust for the host removal, and also deal with reconfiguration of the HA agent on the ESXi Host. You can simply move the host from the cluster to the parent datacenter level.
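
If you are scripting the decommission, the move can be done with the Folder API; a rough pyVmomi sketch follows, with “DC01” and the host name as placeholders. The host must already be in maintenance mode for the move out of the cluster to succeed.

# Rough sketch: move the maintenance-mode host under the datacenter as a standalone host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
dc = next(d for d in content.rootFolder.childEntity
          if isinstance(d, vim.Datacenter) and d.name == "DC01")
view = content.viewManager.CreateContainerView(dc, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

# Moving a HostSystem into the datacenter's host folder makes it a standalone host.
WaitForTask(dc.hostFolder.MoveIntoFolder_Task([host]))
Disconnect(si)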

Remove Host from Inventory

Once the host has been moved and a moment or two have elapsed, you can now proceed to remove the host from inventory.

While the host is powered on and still connected to vCenter, right-click on the host and choose “Remove from Inventory”. This will gracefully remove objects from vCenter, and also uninstall the HA agent from the ESXi host.
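
To script this final removal, a ManagedEntity destroy task does the same thing; below is a rough pyVmomi sketch (host and vCenter names are placeholders) that handles both a clustered and a standalone host.

# Rough sketch: remove the decommissioned host from the vCenter inventory.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")

if isinstance(host.parent, vim.ClusterComputeResource):
    # Host still in a cluster: destroy just the HostSystem object.
    WaitForTask(host.Destroy_Task())
else:
    # Standalone host: its parent ComputeResource wraps the host, so destroy that.
    WaitForTask(host.parent.Destroy_Task())
Disconnect(si)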

Host Repurposing

At this point, you can now log directly on to the ESXi host using the local root password, and shut down the host.

Jan 07 2024
 

This guide will outline the instructions to disable the Omnissa Horizon (formerly VMware Horizon) Session Bar. These instructions can be used to disable the Horizon Session Bar (also known as the Horizon Client Menu Bar or Shade Bar) for full-screen Horizon VDI sessions.

Horizon Client Menu Bar (Shade)

The Horizon Client Menu Bar, or “Shade”, is the session bar at the top of full-screen VMware Horizon VDI sessions.

This Menu Bar provides information on the connection, as well as the ability to send key sequences, connect USB devices, restart the VDI guest VM, and more.

In some cases, users or administrators may want to disable the Shade.

Disable the Horizon Client Menu Bar (Shade)

There are multiple ways that you can disable the shade including using GPOs as well as the registry on client systems. Please note that if you are setting up clients in Kiosk mode, the shade will be automatically disabled and these instructions aren’t required.

Disable Horizon Shade using GPO

To disable the Shade with GPOs, create a Group Policy Object (or edit the local group policy on the client system), and navigate to the following location:

User Configuration -> Policies -> Administrative Templates -> VMware Horizon Client Configuration

Here, we will set “Enable the shade” to “Disabled”, as shown below:

Disable VMware Horizon Shade using GPO to set “Enable the Shade” to Disabled

Disable Horizon Shade using Registry

To disable the Shade using registry on the client system, navigate to the following registry key:

HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware VDM\Client\

Here, we can create a String (REG_SZ) value called EnableShade and set it to False, which will disable the Shade.
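
If you would rather script this on the client (for example, as part of an image build), here is a minimal Python sketch using the standard winreg module; it must be run elevated, and the value data “False” mirrors the manual step above.

# Minimal sketch: set EnableShade=False for the Horizon client (run elevated on the client).
import winreg

key_path = r"Software\VMware, Inc.\VMware VDM\Client"
# CreateKeyEx creates the key if it doesn't exist and opens it writable.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnableShade", 0, winreg.REG_SZ, "False")
print("EnableShade set to False - the Shade will be disabled for new sessions.")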

Jan 06 2024
 
vMotion with vGPU

Normally, any NVIDIA vGPU-enabled VMs have to be migrated with a manual vMotion to evacuate the host when it is placed into maintenance mode. While we may have grown accustomed to this, there is a better way, with vGPU-enabled VM DRS evacuation during maintenance mode!

A new feature introduced with vSphere 7.0 U3f is the ability to configure and allow automatic vMotion of VMs with vGPUs, meaning that DRS can now migrate your VDI and AI/ML vGPU-enabled workloads when hosts are placed into maintenance mode. This also allows you to streamline remediation with vLCM when updating vGPU-enabled hosts running vGPU-enabled VMs.

Additionally, as of vSphere 8.0 U2, DRS can now estimate the STUN times required for vMotion of vGPU-enabled VMs, and control whether automatic DRS vMotions are allowed. This STUN time limit can be set by an administrator.

Enable automatic vMotion evacuation of vGPU enabled VMs

To enable the automatic vMotion of vGPU enabled VMs on your vSphere Cluster:

  1. Navigate to your vSphere Cluster.
  2. Click on the “Configure” Tab, then select “vSphere DRS” and click “Edit”.
  3. Navigate to the “Advanced Options” tab.
  4. Add “VgpuMMAutomationTimeoutSecs” and set it to “-1”.

After performing the above, when you place a host with vGPU-enabled Virtual Machines into Maintenance Mode, vSphere DRS will evacuate and migrate the VMs to other hosts in the cluster that have the required hardware.
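
If you manage multiple clusters, you can also set this advanced option programmatically. Below is a minimal pyVmomi sketch; the cluster name “vGPU-Cluster” and vCenter details are placeholders, and depending on your environment you may need to re-supply any existing DRS advanced options in the same reconfigure call.

# Minimal sketch: add the VgpuMMAutomationTimeoutSecs advanced option to a DRS cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vGPU-Cluster")

# DRS advanced options are key/value pairs on the cluster's DRS configuration.
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        option=[vim.option.OptionValue(key="VgpuMMAutomationTimeoutSecs", value="-1")]))
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
Disconnect(si)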

If you attempt to place a host into Maintenance Mode without enabling automatic vMotion of vGPU-enabled VMs, it will fail with the error: “DRS failed to generate a vMotion recommendation for a virtual machine on a host entering Maintenance Mode”.

Enable and Configure vGPU STUN Time Estimate and Limits

If you are running vSphere 8.0 U2 or higher, you can enable vGPU STUN time estimation and limits for DRS on the vGPU-enabled cluster. Similar to the instructions above, we can add and configure two variables in the vSphere DRS cluster “Advanced Options”.

To enable STUN time estimation, add PassthroughDrsAutomation and set it to “1”.

To override the default vMotion STUN time limit of 100 seconds, add VmDevicesStunTimeTolerated and set it to your preferred maximum number of seconds. Alternatively, you can set this limit per VM by navigating to the VM in vSphere and adding this variable under “VM Options” in the “Advanced Settings” section.
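
As with the previous option, these values can be set programmatically. The following pyVmomi sketch sets both cluster-level options and then applies a per-VM override; the cluster name, VM name, and the 300-second value are placeholders for illustration.

# Minimal sketch: vSphere 8.0 U2+ STUN-time options at cluster level, plus a per-VM override.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Cluster-level: enable STUN estimation and set a 300-second tolerated STUN time.
clusters = content.viewManager.CreateContainerView(content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in clusters.view if c.name == "vGPU-Cluster")
spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(option=[
        vim.option.OptionValue(key="PassthroughDrsAutomation", value="1"),
        vim.option.OptionValue(key="VmDevicesStunTimeTolerated", value="300"),
    ]))
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))

# Per-VM override instead: add the same key to the VM's advanced configuration parameters.
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == "VDI-Workstation-01")
vm_spec = vim.vm.ConfigSpec(
    extraConfig=[vim.option.OptionValue(key="VmDevicesStunTimeTolerated", value="300")])
WaitForTask(vm.ReconfigVM_Task(vm_spec))
Disconnect(si)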

Jan 05 2024
 
NVIDIA vGPU Installed in VMware ESXi Host

You may experience GPU issues with the VMware Horizon Indirect Display Driver in your environment when using 3rd party applications which utilize the incorrect display adapter. This results in the inability to use and/or run GPU-accelerated workloads including VDI, AI, and ML.

This issue affects NVIDIA vGPU (both vGPU and vDGA passthrough), AMD MxGPU, and Intel Data Center GPU Flex GPUs using SR-IOV, in any deployment where the VMware Indirect Display Driver is installed.

When this issue occurs, the application will incorrectly query the capabilities of the VMware Indirect Display Adapter instead of the GPU that is presented to the VM. The application is therefore unaware of the capabilities of the GPU you are utilizing and fails to use the GPU and its hardware acceleration, such as hardware encoding (NVENC) and hardware decoding.

What is the VMware Horizon Indirect Display Driver?

The VMware Horizon Indirect Display Driver, also known as the VMware Indirect Display Driver, is a “virtual” display driver that isn’t bound to a specific hypervisor, and works with many deployments because of the lack of that limitation.

GPU Issues with the VMware Horizon Indirect Display Driver Enabled

This driver is installed with the VMware Horizon agent, and can work in conjunction with hardware acceleration, including GPUs (such as NVIDIA vGPU, AMD MxGPU, and Intel Data Center GPUs using SR-IOV).

Under normal circumstances, the VMware Horizon Indirect Display Driver is prioritized as a fallback driver for remoting protocols, except in environments where no hypervisor or GPU display drivers are available (like Horizon Cloud on Azure), in which case it becomes the priority.

The Problem

Applications designed to use a GPU may not be able to correctly identify which display adapter to use on the VM. While you may have a GPU, vGPU, or 3D acceleration in your environment, the application may be unaware of the device and/or its capabilities.

This is caused by the application either not correctly using the preferred primary display adapter (GPU and/or vGPU), or not being designed to handle multiple display adapters (and drivers).

Example Scenario:

When using CyberLink PowerDirector 360 in a VMware Horizon environment with an NVIDIA vGPU, the application will query the VM’s Windows instance for hardware acceleration capabilities, specifically hardware encoding, hardware decoding, and use of APIs like NVIDIA’s NVENC encoder. In this scenario, while the VM does have an NVIDIA vGPU workstation profile attached with a valid NVIDIA RTX Virtual Workstation (vWS) license, the application is only aware of the VMware Indirect Display Driver and its capabilities. This results in all hardware-accelerated encoding and decoding capabilities being disabled.

Example Symptoms

  • 3D Acceleration not detected by application
  • CUDA Cores not available for application
  • OpenCL not available
  • DirectX and Direct3D usage unavailable

In all scenarios, the VM will appear to have 3D acceleration; however, one or more applications won’t have access to it.
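
A quick way to see what applications see is to enumerate the display adapters inside the VDI guest. The following Python sketch uses the Windows EnumDisplayDevices API via ctypes; on an affected VM you would typically see both the GPU and the VMware Horizon Indirect Display Driver listed (the exact adapter strings vary by driver version).

# Diagnostic sketch: list Windows display adapters inside the VDI guest to confirm
# which adapters (GPU vs. VMware Indirect Display Driver) are visible to applications.
import ctypes
from ctypes import wintypes

class DISPLAY_DEVICEW(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("DeviceName", wintypes.WCHAR * 32),
        ("DeviceString", wintypes.WCHAR * 128),
        ("StateFlags", wintypes.DWORD),
        ("DeviceID", wintypes.WCHAR * 128),
        ("DeviceKey", wintypes.WCHAR * 128),
    ]

def list_display_adapters():
    user32 = ctypes.windll.user32
    i = 0
    while True:
        dev = DISPLAY_DEVICEW()
        dev.cb = ctypes.sizeof(dev)
        if not user32.EnumDisplayDevicesW(None, i, ctypes.byref(dev), 0):
            break
        yield dev.DeviceString, dev.StateFlags
        i += 1

for name, flags in list_display_adapters():
    # DISPLAY_DEVICE_PRIMARY_DEVICE (0x4) marks the adapter driving the primary display.
    primary = " (primary)" if flags & 0x4 else ""
    print(f"{name}{primary}")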

The Solution

Thanks to the design of the VMware Indirect Display Driver, it should be prioritized in such a fashion that it’s used only when other display drivers (including NVIDIA vGPU) or system resources aren’t available; however, some 3rd party applications may not be able to reference the prioritization, or support multi-GPU (multiple display drivers), resulting in the incorrect display adapter being used.

As a workaround, you can remove the VMware Indirect Display Driver from the Windows instance running in the VM.

NVIDIA vGPU with VMware Horizon Indirect Display Driver Removed

Please note that simply disabling the “VMware Horizon Indirect Display Driver” will not suffice. A full removal (right-click, “Uninstall Device”) is required to work around this issue. Additionally, upgrading or re-installing the VMware Horizon Agent will re-install the VMware Indirect Display Driver.

Dec 28 2023
 
Synology DS923+

Today we’re going to cover a powerful little NAS being used with VMware: the Synology DS923+ VMware vSphere use case and configuration.

This little (but powerful) NAS is perfect for your VMware vSphere homelab and numerous other scenarios and uses. Let’s go over this specific use case, and how to best configure it with your VMware environment so you can fully take advantage of it.

Keep in mind that this post reviews only one of many potential uses, specifically with VMware vSphere (and ESXi). I’m hoping with time to review some other uses for this NAS.

Synology DS923+ VMware vSphere Use Case

The Synology DS923+ is a tiny yet powerful 4-bay NAS, offering 2 x 1Gb NICs built in, with the ability to add a user-installable 10Gb NIC module. You can also add 2 x NVMe drives for NVMe SSD cache, giving you the perfect iSCSI target, in our case particularly for VMware vSphere and ESXi.

Synology DS923+ w/ 10Gb NIC, Disks, and 2 x NVME SSD for Cache

The highlights of this specific unit and configuration:

  • NVMe SSD Cache – Provides high speed storage (also good at random I/O)
  • Redundant NICs – 1 x 10Gig (add-on) and 2 x 1Gig (built-in)

Looking at the networking capabilities, we have 3 NICs when the optional 10Gb NIC is installed. This gives us a number of different potential configurations, but for VMware vSphere, we’ll map out the following:

  • NIC #1 – 10Gig: iSCSI Primary (and SMB if using VLAN interfaces)
  • NIC #2 – 1Gig: Management (and SMB w/o VLAN interfaces)
  • NIC #3 – 1Gig: iSCSI Fallback

Note: You could add VLAN interfaces to your Synology device on the 10Gig interface, and use VLANs to provide SMB and other services over the 10Gig link as well. Please note that adding VLAN interfaces is unsupported and may cause issues (including when performing upgrades).

What’s particularly nice about this NAS is that for the price point you’re able to provide 10Gb iSCSI to your ESXi hosts, while also having a fallback connection for redundancy. While the fallback NIC is limited to 1Gig, which is substantially slower, it does allow your workload to continue to run, and most importantly without corruption or loss of data due to an iSCSI all-paths-down situation.

Synology DS923+ iSCSI Configuration for VMware vSphere

So now that we’ve established the use case for the Synology DS923+, let’s go over how to best configure it for your VMware vSphere environment, and get it connected to your ESXi hosts.

HPE Proliant Server running VMware ESXi with Synology DS923+

There are a few things to note for the design of the configuration:

  • iSCSI should be using Jumbo Frames
    • Both the ESXi vmk iSCSI adapters and the iSCSI NIC on the Synology NAS
    • All iSCSI networking (switches) should have jumbo frames enabled
  • iSCSI Multi-pathing policy will be VMW_PSP_FIXED (Fixed Pathing)
    • We will NOT be using Round-Robin MPIO (VMW_PSP_RR)
    • Fixed pathing will be used with the 10Gig link being preferred, and 1Gig link acting as fallback
  • The Synology NAS iSCSI target should only be configured to listen and advertise on the iSCSI NICs (primary active and fallback)

Configure iSCSI on the Synology DS923+

To configure iSCSI on your Synology:

  1. Perform Basic Configuration
    • Configure NAS
    • Configure Static IP for Management on 1Gb NIC Interface
  2. Enable the 10Gb NIC Interface (For use with iSCSI Primary)
    • Configure a Static IP
    • Configure Jumbo Frames
  3. Enable the 1Gb NIC Interface (For use as fallback iSCSI)
    • Configure a Static IP
    • Configure Jumbo Frames
  4. Use the Synology “SAN Manager” to Configure the iSCSI Target
    • Create an iSCSI LUN and Target
      • Configure a LUN with your preferences (Thin provisioned, etc)
      • Configure the iSCSI Target
        • Enable “Allow multiple sessions from one or more iSCSI initiators” to allow multiple initiators to access the target (from single and/or multiple hosts)
        • Configure “Network Binding” to the 10Gig primary link and 1Gb fallback NIC. We do not want it to advertise on the management interface
      • Configure “Host” initiator settings
        • This is where you will add your iSCSI host initiator IQNs, and provide “Read/Write” access

Overall, this is a basic iSCSI target configuration, with the only exception being that we are only using select interfaces for iSCSI connections. While we can use both the 10Gb and 1Gb connections, we’ll use the host multipathing settings to prefer the primary and keep the secondary as a fallback.

Note that the networks (and IPs) used above for iSCSI are on a network dedicated to iSCSI. We do not want to use our data networks for storage related traffic. They are separated not only for security, but also because they are using different frame/MTU sizes.

Configure ESXi to connect to the Synology DS923+

To configure the Synology NAS iSCSI Target on your ESXi hosts:

  1. Configure your ESXi host networking on your iSCSI Network
    • Configure Networking on your hosts
      • Configure your storage vSwitch and create a portgroup for each physical NIC
      • Configure a vmk adapter with an IP for each portgroup you have
      • Configure each portgroup to only use one physical NIC as active, the rest unused
        • Each physical NIC should be used by only one portgroup
  2. Configure your iSCSI Initiator
    • If not already enabled, use “Add Software Adapter” under “Storage Adapters” to add the iSCSI Software Adapter initiator.
    • Note the “iSCSI Name”. This is your initiator IQN, and needs to be added to the Synology iSCSI Target “Host” settings to provide access and add permissions (last item listed in the previous section configuring the Synology NAS).
    • Add your Synology’s Primary iSCSI interface and Secondary Fallback iSCSI interface IP addresses to your ESXi host’s “Dynamic Discovery” list. Do not use “Static Discovery” as this will auto-populate.
    • If you’re using the same IP subnet for all your iSCSI vmk adapters, enable iSCSI Port Binding.
      • Under “Network Port Binding”, click “Add” and select all your iSCSI vmk adapters, which should auto-bind to the physical NIC owned by the port group they are using. They will not show as active until you have completed all steps in this guide.
  3. Configure your LUN
    • Rescan your storage adapters
      • If you already have a VMFS volume, it should auto-mount and be added to the host.
    • If you haven’t already, create a new datastore by right clicking on the host, “Storage”, and “New Datastore”. Follow the wizard to create a new VMFS volume on your Synology iSCSI target.
  4. Configure proper fallback for the 10Gb and 1Gb link
    • On your ESXi hosts, under “Configure”, navigate to the “Storage Devices” tab, and identify all your “SYNOLOGY iSCSI Disk” devices.
    • For each “SYNOLOGY iSCSI Disk” device, under “Properties”, go to “Multipathing Policies”, “ACTIONS”, “Edit Multipathing”, and set it to “Fixed (VMware)”, while also setting the 10Gb path below under “Select the preferred path for this policy”.
  5. Repeat steps for each ESXi host.

As always, I recommend doing a “Rescan Storage” after any storage related changes. You may need to restart the host after enabling iSCSI Port binding.
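
For larger environments, steps 2 through 4 above can also be scripted per host. Below is a rough pyVmomi sketch that adds the Synology targets to dynamic discovery, rescans, and sets Fixed pathing preferring the 10Gb path; the target IPs (192.168.100.10 primary, 192.168.100.11 fallback), host name, and vCenter details are placeholders, and it assumes the software iSCSI adapter is already enabled.

# Rough sketch: dynamic discovery targets, rescan, and Fixed pathing on one ESXi host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

PRIMARY_10G = "192.168.100.10"   # placeholder: Synology 10Gb iSCSI interface
FALLBACK_1G = "192.168.100.11"   # placeholder: Synology 1Gb fallback iSCSI interface

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
ss = host.configManager.storageSystem

# 1. Add both Synology iSCSI interfaces to the software adapter's dynamic discovery.
hba = next(a for a in host.config.storageDevice.hostBusAdapter
           if isinstance(a, vim.host.InternetScsiHba) and a.isSoftwareBased)
ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[
    vim.host.InternetScsiHba.SendTarget(address=PRIMARY_10G, port=3260),
    vim.host.InternetScsiHba.SendTarget(address=FALLBACK_1G, port=3260),
])

# 2. Rescan so the LUN and its paths show up.
ss.RescanAllHba()
ss.RescanVmfs()

# 3. Set Fixed pathing on each Synology LUN, preferring the path to the 10Gb target.
syno_keys = {l.key for l in ss.storageDeviceInfo.scsiLun if l.vendor.strip() == "SYNOLOGY"}
for lun in ss.storageDeviceInfo.multipathInfo.lun:
    if lun.lun not in syno_keys:
        continue
    preferred = next(p.name for p in lun.path
                     if any(PRIMARY_10G in addr
                            for addr in (getattr(p.transport, "address", None) or [])))
    ss.SetMultipathLunPolicy(
        lunId=lun.id,
        policy=vim.host.MultipathInfo.FixedLogicalUnitPolicy(
            policy="VMW_PSP_FIXED", prefer=preferred))
Disconnect(si)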

Conclusion

You have now configured your VMware ESXi host(s) to connect to your Synology DS923+ with multiple paths for redundancy while favoring the faster 10Gb connection.