May 06, 2019
 
10ZiG 5948qv Zero Client VMware Horizon View

You have VMware Horizon View deployed along with Duo multi-factor authentication (2FA/MFA), and you're running into user experience issues on 10ZiG Zero Clients: multiple login dialog boxes, and questions about how to handle the MFA logins.

I spent some time experimenting with numerous settings, trying to find the cleanest workaround that wouldn't bother users or degrade the experience. I'll share what I came up with below.

If you're interested in 10ZiG products and looking to buy, don't hesitate to reach out to me for information and/or a quote! We can configure and sell 10ZiG Zero Clients (and thin clients), help with solution design and deployment, and provide consulting services! We sell and ship to Canada and the USA!

The Issue

When you have DUO MFA deployed on VMware Horizon, you may experience login issues when using a 10ZiG Zero Client to access the View Connection Server. This is because the authentication string (username, password, and domain) isn't passed along correctly from the 10ZiG login dialog box to the VMware Horizon View Client application.

Additionally, when DUO is enabled on VMware View (as RADIUS authentication), no domain is passed along inside the DUO login prompt on the View client.

This issue is due to limitations in the VMware Horizon View Linux client. It can occur on any system, thin client, or Zero Client that uses a command string to initialize a VMware View session where DUO is configured on the View Connection Server.
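To illustrate, here is a rough sketch of the kind of command string a thin or Zero Client passes to the Linux Horizon client to start a session. The flag names come from the Horizon Linux client's command-line options, but the server name, username, and domain below are placeholders, and the exact string 10ZiG builds internally is an assumption on my part:

# Hypothetical launch string for the Linux Horizon client (placeholders only).
# When DUO/RADIUS is enabled on the Connection Server, the client still throws
# its own RADIUS prompt, and the domain below is not carried into that prompt.
vmware-view --serverURL=view.example.com --userName=jsmith --domainName=EXAMPLE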

Kevin Greenway, the CTO at 10ZiG, reached out to say that they have previously brought this up with VMware as a feature request (to support the required functionality), and are hopeful it gets committed.

At this point in time, we recommend that everyone reach out to VMware and ask for this functionality as a feature request. Numerous simultaneous requests will help gain attention and hopefully move it up VMware's priority list.

The Workaround

After troubleshooting this, and realizing that the 10ZiG VMware login details are completely ignored and not passed along to the VMware View client, I started experimenting with different settings to find the smoothest possible login experience.

At first I attempted to use the Kiosk mode, but had issues with some settings not being passed from the 10ZiG Client to the View Client.

Ultimately I found a combination of settings that creates a seamless login experience for users.

The Settings

On the 10ZiG Zero Client, open the "Login" tab of the "VMware Horizon Settings" dialog box.

10ZiG Zero Client VMware Horizon Settings Login Settings
  • Login Mode: Default
  • Username: PRESS LOGIN
  • Password: 1234
  • Domain: YourDomain

Please Note: In the above, because DUO MFA is enabled, the “Username”, “Password” and “Domain” values aren’t actually passed along to the VMware View application on the Zero Client.

We then navigate to the “Advanced” tab, and enable the “Connect once” option. This will force a server disconnection (and require re-authentication) on a desktop pool logoff or disconnection.

10ZiG Zero Client VMware Horizon Settings Advanced Settings

Please Note: This option is required so that when a user logs off, disconnects, or gets cut off by the server, the Zero Client fully disconnects from the View Connection Server, which forces re-authentication (a new password prompt).

The Login User Experience

So now that we've made the modifications to the Zero Client, I want to outline what the user experience looks like from boot, to connection, to disconnection, to re-authentication.

  1. Turning on the 10ZiG Zero Client, you are presented with the DUO Login Prompt on the View Connection Server.
    DUO Security Login VMware View Client Dialog Box
  2. You then must pass 2FA/MFA authentication.
    DUO Security MFA authenticate VMware View Client dialog box
  3. You are then presented with the desktop pools available to the user.
  4. Upon logging off, disconnecting, or getting kicked off the server, the session is closed and you are presented with the 10ZiG VDI Login Window.
    10ZiG Zero Client VMware View Login Dialog Window
  5. To re-establish a connection, click "Login" as instructed by the "Username" field.
  6. You are presented with the DUO Login Window.
    DUO Security Login VMware View Client Dialog Box
  7. And the process repeats.

As you can see, it's a simple loop that requires almost no training on the end user's side. You only need to tell users to click "Login" where the prompt advises them to do so.

Once you configure this, you can add it to a configuration template (or generate a configuration template), and then deploy it to a large number of 10ZiG Zero Clients using 10ZiG Manager.

Let me know if this helps, and/or if you find a better way to handle the DUO integration!

May 02, 2019
 
Nvidia GRID Logo

I can't tell you how excited I am that after many years, I've finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn and demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I'll outline the details, installation, configuration, and thoughts. And of course I'll have plenty of pictures below!

The focus will be to use this card both with vGPU and with 3D-accelerated vSGA, inside an HPE server running ESXi 6.5 and VMware Horizon View 7.8.

Please Note: As of late (late 2020), hardware h.264 offloading no longer functions with VMware Horizon and VMware BLAST with NVidia Grid K1/K2 cards. More information can be found at https://www.stephenwagner.com/2020/10/10/nvidia-vgpu-grid-k1-k2-no-h264-session-encoding-offload/

Please Note: Some, most, or all of what I’m doing is not officially supported by Nvidia, HPE, and/or VMware. I am simply doing this to learn and demo, and there was a real possibility that it may not have worked since I’m not following the vendor HCL (Hardware Compatibility lists). If you attempt to do this, or something similar, you do so at your own risk.

Nvidia GRID K1 Image

For some time I’ve been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons for the time it took was I didn’t want to spend too much on it, especially with the chances it may not even work.

Essentially, I have 3 Servers:

  1. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  2. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  3. HPE ML310e Gen8 v2 (Single Proc, 32GB RAM)

For the DL360p servers, while they are beefy enough and have plenty of power (dual redundant power supplies) and resources, unfortunately the PCIe slots are half-height. In order to use a dual-height card, I'd need to rig something up to run an eGPU (external GPU) outside of the server.

As for the ML310e, it's an entry-level tower server. While it does support dual-height (dual-slot) PCIe cards, it only has a single 350 W power supply, lacks some fancy server technologies (I've had issues with VT-d, etc.), and has only a single processor. I should be able to install the card, however I'm worried about powering it (the server has no 6-pin PCIe power connector), and about whether ESXi will be able to use it.

Finally, I was worried about cooling. The GRID K1 and GRID K2 are typically passively cooled and meant to be installed in to rack servers with fans running at jet engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that cooling would be enough. The ML310e did have the plastic air baffles, but only had one fan for the expansion cards area, and of course not all the air would pass through the GRID K1 card.

The Purchase

Because of a limited budget, and the possibility I may not even be able to get it working, I didn’t want to spend too much. I found an eBay user local in my city who had a couple Grid K1 and Grid K2 cards, as well as a bunch of other cool stuff.

We spoke and he decided to give me a wicked deal on the GRID K1 card. This worked out well, since the K1's power requirements are significantly lower (making it more likely to work in the ML310e): 130 W max power for the K1 versus 225 W max power for the K2.

NVIDIA GRID K1 and K2 Specification Table

The above chart is a capture from:
https://www.nvidia.com/content/cloud-computing/pdf/nvidia-grid-datasheet-k1-k2.pdf

We set a time and a place to meet. Preemptively I ran out to a local supply store to purchase an LP4 power adapter splitter, as well as a LP4 to 6pin PCIe power adapter. There were no available power connectors inside of the ML310e server so this was needed. I still thought the chances of this working were slim…

These are the adapters I purchased:

Preparation and Software Installation

I also decided to go ahead and download the Nvidia GRID Software Package. This includes the release notes, user guide, ESXi vib driver (includes vSGA, vGPU), as well as guest drivers for vGPU and pass through. The package also includes the GRID vGPU Manager. The driver I used was from:
https://www.nvidia.com/Download/driverResults.aspx/144909/en-us

To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib” to a datastore, enabled SSH, and then ran the following command to install:

esxcli software vib install -v /path/to/file/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib

The command completed successfully and I shut down the host. Now I waited to meet.
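If you want to sanity-check the install before and after the reboot, a couple of standard commands from an SSH session will do it (a quick sketch):

# Confirm the NVIDIA host driver VIB is registered
esxcli software vib list | grep -i nvidia

# After the host boots with the card installed, confirm the NVIDIA kernel module is loaded
vmkload_mod -l | grep -i nvidia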

We finally met and the transaction went smoothly in a parking lot (people were staring at us as I handed him cash, and he handed me a big brick of something folded inside of grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I'll definitely be purchasing some more hardware from him.

Hardware Installation

Installing the card in the ML310e was difficult and required some care. First I had to remove the plastic air baffle. Then I had issues getting the card inside the case, as the back bracket was about 1 cm too long to drop it straight in. I had to finesse it and slide it in on an angle, but finally got it installed. The bracket on the other end (front side of the case) slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold the card up.

For power I disconnected the DVD-ROM (who uses those anyways, right?), and connected the LP4 splitter and the LP4 to 6-pin power adapter. I then hooked it up to the card.

I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.

Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.

Host Configuration

Powering on the server was a tense moment for me. A few things could have happened:

  1. Server won’t power on
  2. Server would power on but hang & report health alert
  3. Nvidia GRID card could overheat
  4. Nvidia GRID card could overheat and become damaged
  5. Nvidia GRID card could overheat and catch fire
  6. Server would boot but not recognize the card
  7. Server would boot, recognize the card, but not work
  8. Server would boot, recognize the card, and work

With great suspense, the server powered on as per normal. No errors or health alerts were presented.

I logged in to iLO on the server, and watched the server perform a BIOS POST and start its boot to ESXi. Everything was looking well and normal.

After ESXi booted and the server came online in vCenter, I went to the host and confirmed the GRID K1 was detected. I then went ahead and configured 2 GPUs for vGPU, and 2 GPUs for 3D vSGA.

ESXi Graphics Settings for Host Graphics and Graphics Devices
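If you prefer to confirm from the shell that ESXi actually sees the card (rather than relying only on the vSphere UI), something like the following should work; treat the exact output formatting as approximate:

# Confirm the GRID K1's GPUs are visible on the PCIe bus
lspci | grep -i nvidia

# List the graphics devices ESXi has enumerated
esxcli graphics device list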

VM Configuration

I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a "Shared PCI Device", selecting "NVIDIA GRID vGPU", and choosing a profile; I chose the highest profile available on the K1 card, "grid_k180q".
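For reference, the X.org restart mentioned above can be done from an SSH session on the host (it can also be restarted from the host's Services list in the vSphere client); a minimal sketch:

# Restart the X.Org service on the ESXi host after changing the host graphics settings
/etc/init.d/xorg stop
/etc/init.d/xorg start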

Virtual Machine Edit Settings with NVIDIA GRID vGPU and grid_k180q profile selected

After adding the device and clicking "OK", you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click "OK" and continue.

Power On and Testing

I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.

After restarting, I logged in via Horizon, and could instantly tell it was working. The next step was to disable the VMware vSGA display adapter in "Device Manager" and restart the guest again.

Upon restarting again, to see if I had full 3D acceleration, I opened DirectX diagnostics by clicking on “Start” -> “Run” -> “dxdiag”.

DirectX Diagnostic Tool (dxdiag) showing the NVIDIA GRID K1 on VMware Horizon using the k180q vGPU profile

It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.

"nvidia-smi" command on the ESXi host showing GRID K1 information, vGPU information, temperatures, and power usage

According to this, the different GPUs ranged from 33C to 50C, which was PERFECT! With further testing under stress, I haven't gotten a core to go above 56C. The ML310e still has an option in the BIOS to increase fan speed, which I may test in the future if the temps get higher.
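If you want to keep an eye on the temperatures while stress testing, a simple polling loop from the ESXi shell works; the query fields below are standard nvidia-smi options, though exact availability can vary by driver build:

# Print GPU index, temperature, and power draw every 10 seconds
while true; do
  nvidia-smi --query-gpu=index,temperature.gpu,power.draw --format=csv,noheader
  sleep 10
done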

With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU manager for the card. There are some other flags you can use for relevant information.

nvidia-smi with vgpu flag for vgpu information
“nvidia-smi vgpu” for vGPU Information
nvidia-smi with vgpu -q flag
“nvidia-smi vgpu -q” to Query more vGPU Information

Final Thoughts

Overall I'm very impressed, and it's working great. While I haven't tested any games, it's working perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I'm using 2 displays, both running at a resolution of 1920x1080.

I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.

The two coolest parts of this project are:

  • 3D Acceleration and Hardware h.264 Encoding on VMware Horizon
  • Getting a GRID K1 working on an HPE ML310e Gen8 v2

Highly recommend getting a setup like this for your own homelab!

Uses and Projects

Well, I'm writing this "Uses and Projects" section after I wrote the original article (it's now March 8th, 2020). I have to say I couldn't be more impressed with this setup; I'm using it as my daily driver.

Since I’ve set this up, I’ve used it remotely while on airplanes, working while travelling, even for video editing.

Some of the projects (and posts) I’ve done, can be found here:

Leave a comment and let me know what you think! Or leave a question!

May 01, 2019
 
VMware Horizon View Logo

One really cool feature released in VMware Horizon View 7.7 (and carried forward into Horizon 8) is the ability to install the Horizon Agent on a physical PC or physical workstation and use the Blast Extreme protocol. It even supports 3D acceleration via a GPU, and the direct-connect plugin (so you don't need to have/use a View Connection Server)!

Update July 20, 2022: With the release of Horizon 8 2206, you can now install the Horizon Agent on Windows 10 Pro and Edu editions. Previous versions of Horizon required Windows 10 Enterprise.

As a system admin, I see value in having some Physical PCs managed by the View connection server. Also, if you have the licensing, this will allow you to set this up as a remote access solution for your business and employees.

I’ll be detailing some information about doing this, what’s required, what works, and what doesn’t below…

The details…

From the “What’s new in Horizon 7.7” doc at https://blogs.vmware.com/euc/2018/12/whats-new-horizon-7-7.html

You can now use the Blast Extreme display protocol to access physical PCs and workstations. Some limitations apply.

Additional information from the “Horizon 7.7 Release Notes” at https://docs.vmware.com/en/VMware-Horizon-7/7.7/rn/horizon-77-view-release-notes.html

Physical PCs and workstations with Windows 10 1803 Enterprise or higher can be brokered through Horizon 7 via Blast Extreme protocol.

Requirements

So here’s what’s required to get going:

  • Windows 10 Enterprise (for Horizon versions before Horizon 8 2206)
  • Windows 10 Pro or Edu (for Horizon 8 2206 and later)
  • Physical PC or Workstation
  • VMware Horizon Licensing
  • VMware Horizon 7.7 or higher (and Horizon 8) Connection Server
  • VMware Horizon 7.7 or higher (and Horizon 8) Agent on Physical PC/Workstation
  • Manual Desktop Pool (Manual is required for Physical PCs to be added)

What Works

  • Blast Extreme
  • 3D Acceleration (via GPU with drivers)
  • 3D Acceleration with Consumer GPUs
  • Multiple Displays
  • Multiple GPUs
  • VMware View Agent Direct-Connection Plug-In

What Doesn’t Work

  • GPU Hardware h.264 encoding on consumer GPUs (h.264 encoding is still handled by the CPU)
  • GPU Hardware h.264/h.265 offload may work in later versions (I still need to test this)

Thoughts

I've been really enjoying this feature. Not only have I moved my desktop into my server room and started remoting in using Blast, but I can think of many use cases for this (machine shops, sharing software licenses, remote access, etc.).

I’ve had numerous discussions with customers of mine who also say they see tremendous value in this after I brought it to their attention. I’ll update this post later on once I hear back about how some of my customers have deployed it.

Update – March 14th 2020 – I’ve been using this on 3 different systems since I wrote this article and love this feature!

3D Acceleration

One thing that is really cool is the fact that 3D acceleration is enabled and working if the computer has a GPU installed (along with drivers). And no, you don't need a fancy enterprise GPU. In my setup I'm running a GeForce GTX 550 Ti and a GeForce GT 640.

Horizon 3D Acceleration Enabled via dxdiag

While 3D acceleration is working, I have to note that the h.264 encoding for the Blast Extreme session is still being handled by the CPU. So while you are getting some great 3D-accelerated graphics, depending on your CPU and screen resolution, you may notice some choppiness. If you have a higher-end CPU, you should be able to run some pretty high resolutions. I'm currently running 2 displays at 1920x1080 on an extremely old Core 2 Quad processor.

H.264 encoding

I spent some time trying to enable the hardware h.264 encoder on the GPUs. Even when using the “NvFBCEnable.exe” (located in C:\Program Files\VMware\VMware Blast\) application to enable hardware encoding, I still notice that the encoding is being done on the CPU. I’m REALLY hoping they change this in future releases.
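For anyone who wants to try it themselves, the tool is run from an elevated command prompt on the physical PC. This is roughly the invocation I used; verify the exact switches against the tool's own usage output for your agent version:

cd "C:\Program Files\VMware\VMware Blast\"

rem Attempt to enable hardware (NvFBC) capture; a reboot is typically required afterwards
NvFBCEnable.exe -enable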

Hacks?

Another concept this opens the door to is consumer GPUs providing 3D acceleration without all the driver issues. Technically you could use the VM's CPU settings to hide the fact that the machine is being virtualized, and then install the Horizon Agent as if it were a physical PC (even though it's a VM). This should allow you to use a GPU you're passing through, although you still won't get h.264 encoding on the GPU. It should also stop the pesky black screen issue that's normally seen when using this workaround.
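For clarity, the "CPU setting" I'm referring to is the well-known VMX flag that hides the hypervisor from the guest. A minimal sketch of the parameter you would add under the VM's advanced configuration parameters (Edit Settings > VM Options > Advanced), or directly to the .vmx file, keeping in mind this is unsupported:

hypervisor.cpuid.v0 = "FALSE"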

Bugs

When upgrading to Horizon View 8 or higher, as part of the process to upgrade the agent on the physical machine, you may notice this functionality stops working. To resolve this, simply uninstall the agent and then re-install it.

Also, on a final note… I did find a bug where, if any of the physical PCs are powered down or unavailable on the network, logins from users entitled to that pool will time out and fail. When this issue occurs, a WoL (Wake-on-LAN) packet is sent to the desktop during login, and the login freezes until the physical PC becomes available. This occurs during the login phase, and happens even if you don't plan on using that pool. More information can be found here:
https://www.stephenwagner.com/2019/03/19/vmware-horizon-view-stuck-authenticating-logging-in/

More Information

Since the date of this post, VMware Tech Zone released a useful post outlining details on Using Horizon to Access Physical Windows Machines. I highly recommend you check it out!

Mar 19, 2019
 
VMware Horizon View Logo

I noticed after upgrading to VMware Horizon View 7.8 and VMware Unified Access Gateway 3.5, when attempting to log in to a VMware Horizon View Connection Server via the Horizon Client, I would get stuck on “Authenticating”. If using the HTML client, it would get stuck on “Logging in”.

This will either time out, or eventually (after several minutes) finally load. This occurs with standard authentication as well as with 2FA/MFA/RADIUS authentication.

The Problem

Originally, I thought this issue was related to 2FA and/or RADIUS, however after disabling both, the issue was still present. In the VDM debug logs, you may find something similar to below:

2019-03-19T16:07:44.971-06:00 INFO  (1064-181C)  UnManagedMachineInformation Wake-on-LAN packet sent to machine comp.domain.com

2019-03-19T16:07:34.296-06:00 INFO (1064-17F0) UnManagedMachineInformation wait ended for startup update, returning false
2019-03-19T16:07:34.296-06:00 INFO (1064-17F0) UnManagedMachineInformation Could not wake up PM comp.domain.com within timeout
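If you want to check your own environment for these entries, the Connection Server debug logs are typically under ProgramData; a quick search from a command prompt on the Connection Server (the path assumes the default install location):

rem Search the View Connection Server debug logs for Wake-on-LAN entries
findstr /i /c:"Wake-on-LAN" "C:\ProgramData\VMware\VDM\logs\debug-*.txt"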

The Fix

The apparent delay at "Authenticating" or "Logging in" is caused by a Wake-on-LAN packet being sent to an unmanaged physical workstation that has the VMware View Agent installed but is powered off.

After powering on all physical computers running the unmanaged View Agent, the issue should be resolved.

Aug 26, 2018
 
Fedora Logo

One of the coolest things I love about running VMware Horizon View and VDI is that you can repurpose old computers, laptops, or even netbooks into perfect VDI clients running Linux! This is extremely easy to do and gives life to old hardware you may have lying around (and we all know there's nothing wrong with that).

I generally use Fedora and the VMware Horizon View Linux client to accomplish this. See below to see how I do it!

 

Quick Guide

  1. Download the Fedora Workstation install or netboot ISO from here.
  2. Burn it to a DVD/CD if you have a DVD/CD drive, or you can write it to a USB stick using this method here.
  3. Install Fedora on to your laptop/notebook/netbook using the workstation install.
  4. Update your Fedora Linux install using the following command
    dnf -y upgrade
  5. Install the prerequisites for the VMware Horizon View Linux client using these commands
    dnf -y install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    dnf -y install gstreamer-plugins-ugly gstreamer-plugins-bad gstreamer-ffmpeg xine-lib-extras-freeworld libssl* libcrypto* openssl-devel libpng12 systemd-devel libffi-devel
    
  6. To fix an issue with package versions and dependencies, run the following commands
    ln -s /usr/lib64/libudev.so.1 /usr/lib64/libudev.so.0
    ln -s /usr/lib64/libffi.so.6 /usr/lib64/libffi.so.5
  7. Download the VMware Horizon View Linux client from here
  8. Make the VMware bundle executable and then run the installer using these commands (your file name may be different depending on build version number)
    chmod 777 VMware-Horizon-Client-4.8.0-8518891.x64.bundle
    sudo ./VMware-Horizon-Client-4.8.0-8518891.x64.bundle
  9. Complete the installation wizard
  10. You’re done!

To run the client, you can find it in the GUI applications list as “VMware Horizon Client”, or you can launch it by running “vmware-view”.
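If you want the client to go straight to your Connection Server when it launches, the Linux client also accepts a server URL on the command line (the server name below is just a placeholder):

# Launch the Horizon client pointed at a specific Connection Server
vmware-view --serverURL=view.example.com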

VMware Horizon View on Linux in action

Here is the VMware Horizon View Linux client running on an HP Mini 220 netbook.

Additional Notes:

-If you’re comfortable, instead of the workstation install, you can install the Fedora LXQt Desktop spin, which is a lightweight desktop environment perfect for low performance hardware or netbooks. More information and the download link for Fedora LXQt Desktop Spin can be found here: https://spins.fedoraproject.org/en/lxqt/

-If you installed Fedora Workstation and would like to install the LXQt window manager afterwards, you can do so by running the following command (after installing, at login prompt, click on the gear to change window managers):

dnf install @lxqt-desktop-environment

-Some of the prerequisites above in the guide may not be required, however I have installed them anyways for compatibility.