May 06, 2019
 
10ZiG 5948q Zero Client

Let’s say you manage numerous 10ZiG Zero Clients and your users all have similar USB hardware that needs to be redirected to the VDI session. In most cases the hardware will be redirected without any configuration necessary, but what about when that doesn’t happen? You need to push a configuration template with the device information to your 10ZiG Zero Clients.

In my case, I use a YubiKey Security Key. I regularly use this for logins in Chrome, and I noticed that it wasn’t being redirected to the VDI session.

If you’re interested in 10ZiG products and looking to buy, don’t hesitate to reach out to me for information and/or a quote! We can configure and sell 10ZiG Zero Clients (and thin clients), help with solution design and deployment, and provide consulting services! We sell and ship to Canada and the USA!

This post is part two of a three-part 10ZiG Manager Tutorial series:

Now there are two ways to do this:

  1. On the 10ZiG Zero Client, go to Settings, USB Redirection, and change the preference from “Default” to “Include”. This must be done manually on every Zero Client in your infrastructure (time consuming).
  2. Add the USB hardware ID to your configuration template inside of the 10ZiG Manager and then push this to all your 10ZiG Zero Clients that you manage (super fast, can be deployed to thousands of devices in seconds).

In this post we’re going to cover the latter, and show you how to add this to a config template. In my example, we’ll be manually adding the YubiKey Security Key, which has a USB hardware identifier of 1050/0120 (Vendor ID: 1050, Product ID: 0120), to the config template.

Please Note: You can also add the settings on a 10ZiG Zero Client, and generate a template by pulling the config from that client. You can then push this to others as well.

To find the Vendor ID/Product ID, you can either use “Device Manager” on Windows, or plug the device in to a 10ZiG Zero Client, go to Settings, USB Redirection, and you should see the device name along with the VID/PID info.
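If you have a Linux machine handy, you can also read the IDs straight off the USB bus with lsusb (part of the standard usbutils package). A quick sketch: the exact device string will vary, but the “1050:0120” pair is the Vendor ID and Product ID:

lsusb
# Look for a line like the following:
#   Bus 001 Device 004: ID 1050:0120 Yubico.com U2F Security Key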

Instructions

  1. Open the 10ZiG Manager.
    10ZiG Manager Logged In Main Window
  2. Choose any 10ZiG Zero Client from the list and right-click on it to open the menu. Expand “Configuration” -> select “Manage templates”.
    10ZiG Manager Configuration Menu via Right Click
  3. In the “Configuration Templates” window, right-click on your existing template (or create a new one), and select “Edit”.
    10ZiG Manager Configuration Templates Right Click Menu Shown
  4. In the “Template Configuration – Template Name” window, double-click on “USB Device Redirection”.
    10ZiG Manager Template Configuration Window Shown
  5. In the “USB Device Redirection” window, click on “Add”.
  6. Enter a friendly name, and enter your Vendor ID and Product ID into the fields. For a YubiKey Security Key, I did the following.
    10ZiG Manager Configuration USB Redirection Settings Window and Add Window Selected
  7. Click OK in each window, then save the template. The configuration has now been saved to the configuration template.

You’re done! You can now deploy this template to a single 10ZiG Zero Client, or deploy it as a batch to many 10ZiG Zero Clients.

May 06, 2019
 
10ZiG 5948q Zero Client

So you’ve purchased some 10ZiG Zero Clients, configured the 10ZiG Manager, and want to create a configuration template to deploy to all your devices.

In this post, we’ll be going over how to create a configuration template from a manually configured 10ZiG Zero Client, so that you can edit it, and then deploy it to other 10ZiG Zero Clients (whether it’s a single unit, or 10,000).

Once you have a configuration template, you can add certificates, modify the VDI configuration, configure keyboard/mouse input, USB Redirection, and more! Doing all this with a configuration template allows you to manage and maintain a large amount of 10ZiG Zero Clients with ease.

If you’re interested in 10ZiG products and looking to buy, don’t hesitate to reach out to me for information and/or a quote! We can configure and sell 10ZiG Zero Clients (and thin clients), help with solution design and deployment, and provide consulting services! We sell and ship to Canada and the USA!

This post is part one of a three-part 10ZiG Manager Tutorial series:

Please Note: We are going to assume that you have manually configured at least one of your 10ZiG Zero Clients as the base configuration that you want to generate a template from. If not, make sure you do this before generating a template. We are also assuming that you have configured the 10ZiG Manager software so that the Zero Clients can connect to it.

Instructions

  1. Open the 10ZiG Manager.
    10ZiG Manager Logged In Main Window
  2. Choose the 10ZiG Zero Client that you have already configured in the list and right-click on the unit.
  3. In the menu, expand “Configuration” -> select “Generate Template”.
    10ZiG Manager Configuration Menu via Right Click
  4. A warning explaining how the configuration is merged is presented; please read and understand it.
    Configuration Template Note on configuration merge
  5. In the “Configuration Templates” window, type a template name into the “Template Name” field, and then select “OK”. I’m calling mine “DA-MainTemplate”.
    Create Configuration Template Name Dialog
  6. A warning explaining the changes is presented; please read and understand it.
    Retrieve Device Configuration Warning Dialog Window
  7. You will be brought back to the 10ZiG Manager and will see the “Generate configuration template” task in the task list at the bottom of the window. It should eventually complete and be marked as successful.
    Generate configuration template task list
  8. The configuration template has been created.

You have now created a configuration template inside of 10ZiG Manager! You can edit this, and eventually deploy it to other 10ZiG Zero Clients on your network.

May 05, 2019
 
Ubuntu Orange Logo

After upgrading a computer from Ubuntu 16.04 LTS to Ubuntu 18.04 LTS or Ubuntu 18.04 LTS to Ubuntu 20.04 LTS, during boot the screen goes blank (turns black), all HD disk activity halts, and the system becomes frozen. This event can also occur on a fresh installation or when updates are installed.

This is due to a video mode issue that causes the system to halt or freeze. It’s much like the issue I described here on a Fedora Linux system.

Temporary Fix

To get the system to boot:

  1. After turning on your PC, hold the right SHIFT key to get to the GRUB bootloader if your computer uses a BIOS. If your computer uses EFI or UEFI, continuously tap the “ESC” (escape) key after turning on your PC.
  2. Once GRUB is open, press the “e” key to edit the first highlighted entry “Ubuntu”.
  3. Move your cursor down to the line that starts with “linux”, and use the right arrow key to find the section with the words “ro quiet splash”.
  4. Add “nomodeset” after these words (see the example kernel line after this list).
    nomodeset
  5. Feel free to remove “quiet” and “splash” for more verbosity to troubleshoot the boot process.
  6. Press “CTRL + X” or “F10” to boot.
  7. The system should now boot.
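For reference, here’s roughly what the edited “linux” line looks like in the GRUB editor. This is just a sketch; the kernel version and root device below are placeholders and will differ on your system:

linux /boot/vmlinuz-<version> root=UUID=<your-root-uuid> ro quiet splash nomodeset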

Permanent Fix

To permanently resolve the issue:

  1. Once the system has booted using the temporary fix, log in.
  2. Open a terminal window (Applications -> Terminal, or press the “Start” button and type terminal).
  3. Either “su” to root, or use “sudo” to open your favorite text editor and edit the file “/etc/default/grub” (I use nano, which can be installed by running “apt install nano”):
    nano /etc/default/grub
  4. Locate the line with the variable “GRUB_CMDLINE_LINUX_DEFAULT”, and add “nomodeset” to the variables. Feel free to remove “splash” and “quiet” if you’d like a text boot. Here’s an example of my line after editing (yours may look different):
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
  5. Save the file and exit the text editor (CTRL+X to quit, then press “y” and Enter to save).
  6. At the bash prompt, execute the following command to regenerate the grub.cfg file on the /boot partition from your new defaults file:
    update-grub
  7. Restart your system; it should now boot!
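Once the system is back up, you can confirm the parameter actually made it onto the kernel command line by reading the standard /proc interface:

cat /proc/cmdline
# The output should now include "nomodeset", for example:
#   BOOT_IMAGE=/boot/vmlinuz-<version> root=UUID=<your-root-uuid> ro quiet splash nomodeset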

Please Note: Always make sure you have a full system backup before modifying any system files!

May 04, 2019
 
Ubuntu Orange Logo

You’re trying to install Ubuntu on your computer, but it freezes due to lack of resources, specifically memory. This can happen when you’re trying to re-purpose old laptops, netbooks, etc.

This recently happened to me as I tried to install Ubuntu on an old HP Netbook. Originally I used Fedora, but I had to switch to Ubuntu due to library issues (I wanted to use the VMware Horizon Client on it).

Unfortunately, when I’d kick off the USB installer, the OS would completely freeze (mouse either unresponsive, or extremely glitchy).

The Fix – An External Swap Partition

In the roughly five minutes during which the system remained operable, I used the key sequence “CTRL + ALT + F2” to get to a text tty console session. From there I noticed the system eventually consumes all available RAM; once the memory is maxed out, the system becomes unresponsive.

Since this is a Live CD installer, there is no swap space for the system to use once the RAM has filled up.

To work around the problem, I grabbed a second blank USB stick and used it as external swap space. This allowed me to run the installer to completion and successfully install Ubuntu.

Please make sure you are choosing the right device names in the instructions below. Choosing the wrong device name can cause you to write to the wrong USB stick, or worse, to the hard drive of your system. A condensed version of the commands follows the numbered steps.

Instructions:

  1. Attach the USB installer and boot the system.
  2. Once system has booted, press “CTRL + ALT + F2” to open a tty console session.
  3. Log in using user “Ubuntu” with a blank password.
  4. Type “sudo su” to get a root shell.
  5. Type “tail -f /var/log/kern.log” and connect the spare blank USB stick that you want to use for swap space. Note the device name; in my case it was “/dev/sdd”.
  6. Press “CTRL + C” to stop tailing the log file, then run “fdisk /dev/sdd” and replace “/dev/sdd” with whatever your device was. PLEASE MAKE SURE YOU ARE CHOOSING THE RIGHT USB DEVICE NAME.
  7. Use “n” to create a new partition and follow the prompts; when it asks for the size, I chose “+2G” for a 2GB swap partition. Use “w” to write the partition table and quit fdisk.
  8. Run “mkswap /dev/sdd1” and replace “sdd1” with the device and partition number of your USB Swap stick. This will format the partition and mark it as a SWAP filesystem.
  9. Run “swapon /dev/sdd1”, replacing “sdd1” with the swap partition you created. This will activate the external swap space on the USB stick.
  10. Press “CTRL + ALT + F1” to return to the Ubuntu installation guide. Continue the install as normal.
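Here’s the whole swap sequence condensed into one sketch, assuming (as above) that the spare stick appeared as “/dev/sdd”. Yours may differ, so double-check the device name before writing anything:

tail -f /var/log/kern.log   # Plug in the stick, note its device name, then press CTRL + C
fdisk /dev/sdd              # "n" for a new partition, "+2G" for the size, "w" to write and quit
mkswap /dev/sdd1            # Format the new partition as swap
swapon /dev/sdd1            # Activate it
swapon --show               # Optional: confirm the new swap space is active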

This should also work for other Linux distributions; I have used this approach in the past with Fedora (on a single-board computer with almost no RAM).

During the install process, when the Ubuntu installer formats your hard drive, it will also activate the swap partition on the hard drive (it’ll use both). Once the installer is complete, shut down the system and remove the USB swap stick.

May 02, 2019
 
Nvidia GRID Logo

I can’t tell you how excited I am that, after many years, I’ve finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn and demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I’ll outline the details, installation, configuration, and my thoughts. And of course I’ll have plenty of pictures below!

The focus will be to use this card both with vGPU and with 3D-accelerated vSGA, inside an HPE server running ESXi 6.5 and VMware Horizon View 7.8.

Please Note: As of late 2020, hardware H.264 offloading no longer functions with VMware Horizon and VMware BLAST with Nvidia GRID K1/K2 cards. More information can be found at https://www.stephenwagner.com/2020/10/10/nvidia-vgpu-grid-k1-k2-no-h264-session-encoding-offload/

Please Note: Some, most, or all of what I’m doing is not officially supported by Nvidia, HPE, and/or VMware. I am simply doing this to learn and demo, and there was a real possibility that it may not have worked since I’m not following the vendor HCL (Hardware Compatibility lists). If you attempt to do this, or something similar, you do so at your own risk.

Nvidia GRID K1 Image

For some time I’d been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons it took so long was that I didn’t want to spend too much on it, especially given the chance it might not even work.

Essentially, I have 3 Servers:

  1. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  2. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  3. HPE ML310e Gen8 v2 (Single Proc, 32GB RAM)

For the DL360p servers, while they are beefy enough and have plenty of power (dual redundant power supplies) and resources, unfortunately the PCIe slots are half-height. In order to use a dual-height card, I’d need to rig something up to run an eGPU (external GPU) outside of the server.

As for the ML310e, it’s an entry-level tower server. While it does support dual-height (dual-slot) PCIe cards, it only has a single 350W power supply, misses some fancy server technologies (I’ve had issues with VT-d, etc.), and has only a single processor. I should be able to install the card; however, I was worried about powering it (the server has no 6-pin PCIe power connector) and about whether ESXi would be able to use it.

Finally, I was worried about cooling. The GRID K1 and GRID K2 are typically passively cooled and meant to be installed in rack servers with fans running at jet-engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that the cooling would be sufficient. The ML310e does have plastic air baffles, but only one fan for the expansion card area, and of course not all of the air would pass through the GRID K1 card.

The Purchase

Because of a limited budget, and the possibility I might not even be able to get it working, I didn’t want to spend too much. I found an eBay user local to my city who had a couple of GRID K1 and GRID K2 cards, as well as a bunch of other cool stuff.

We spoke and he decided to give me a wicked deal on the GRID K1 card. This worked out well, as the K1’s power requirements are significantly lower (130 W max, versus 225 W max for the K2), making it more likely to work in the ML310e.

NVIDIA GRID K1 and K2 Specifications
NVIDIA GRID K1 and K2 Specification Table

The above chart is a capture from:
https://www.nvidia.com/content/cloud-computing/pdf/nvidia-grid-datasheet-k1-k2.pdf

We set a time and a place to meet. Preemptively, I ran out to a local supply store to purchase an LP4 power splitter, as well as an LP4 to 6-pin PCIe power adapter. There were no available power connectors inside the ML310e server, so these were needed. I still thought the chances of this working were slim…

These are the adapters I purchased:

Preparation and Software Installation

I also decided to go ahead and download the Nvidia GRID software package. This includes the release notes, the user guide, the ESXi vib driver (which provides vSGA and vGPU support), and the guest drivers for vGPU and pass-through. The package also includes the GRID vGPU Manager. The driver I used was from:
https://www.nvidia.com/Download/driverResults.aspx/144909/en-us

To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib” to a datastore, enabled SSH, and then ran the following command to install:

esxcli software vib install -v /path/to/file/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib
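Before rebooting, you can confirm the driver vib registered with a standard esxcli query (grep is available in the ESXi shell):

esxcli software vib list | grep -i nvidia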

The install completed successfully, and I shut down the host. Now I waited for our meeting.

We finally met, and the transaction went smoothly in a parking lot (people were staring at us as I handed him cash, and he handed me a big brick of something folded inside grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I’ll definitely be purchasing more hardware from him.

Hardware Installation

Installing the card in the ML310e was difficult and took some time and care. First I had to remove the plastic air baffle. Then I had trouble getting the card inside the case, as the back bracket was 1cm too long to drop the card straight in; I had to finesse it and slide it in on an angle, but finally got it installed. The bracket on the other side (the front of the case) slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold up long cards.

For power, I disconnected the DVD-ROM (who uses those anyways, right?), and connected the LP4 splitter and the LP4 to 6-pin power adapter. I then hooked it up to the card.

I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.

Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.

Host Configuration

Powering on the server was a tense moment for me. A few things could have happened:

  1. Server won’t power on
  2. Server would power on but hang & report health alert
  3. Nvidia GRID card could overheat
  4. Nvidia GRID card could overheat and become damaged
  5. Nvidia GRID card could overheat and catch fire
  6. Server would boot but not recognize the card
  7. Server would boot, recognize the card, but not work
  8. Server would boot, recognize the card, and work

With great suspense, the server powered on as per normal. No errors or health alerts were presented.

I logged in to iLO on the server and watched it perform a BIOS POST and start its boot to ESXi. Everything looked normal.

After ESXi booted and the server came online in vCenter, I went to the host’s graphics settings and confirmed the GRID K1 was detected. I went ahead and configured 2 GPUs for vGPU, and 2 GPUs for 3D vSGA.

ESXi Graphics Settings for Host Graphics and Graphics Devices
ESXi Host Graphics Devices Settings

VM Configuration

I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a “Shared PCI Device” and selecting “NVIDIA GRID vGPU”; I chose the highest profile available on the K1 card, called “grid_k180q”.
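For reference, if you have SSH open on the host, the X.org service can typically be restarted from the shell on ESXi 6.x as well (a sketch; you can also restart it from the host’s services in the vSphere UI):

/etc/init.d/xorg stop
/etc/init.d/xorg start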

Virtual Machine Edit Settings with NVIDIA GRID vGPU and grid_k180q profile selected
VM Settings to add NVIDIA GRID vGPU

After adding the device and selecting OK, you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click “OK” and continue.

Power On and Testing

I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.

After restarting, I logged in via Horizon and could instantly tell it was working. The next step was to disable the VMware vSGA display adapter in the guest’s “Device Manager” and restart the guest again.

Upon restarting again, to see if I had full 3D acceleration, I opened DirectX diagnostics by clicking on “Start” -> “Run” -> “dxdiag”.

DirectX Diagnostic Tool (dxdiag) showing Nvidia Grid K1 on VMware Horizon using vGPU k180q profile
dxdiag on GRID K1 using k180q profile

It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.

nvidia-smi command on ESXi host showing GRID K1 information, vGPU information, temperatures, and power usage
“nvidia-smi” command on ESXi Host

According to this, the different GPUs ranged from 33C to 50C, which was PERFECT! Under further stress testing, I haven’t seen a core go above 56C. The ML310e also has a BIOS option to increase fan speed, which I may test in the future if the temps climb.

With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU management tool for the card. There are some other flags you can use for more information (see the quick reference after the screenshots below).

nvidia-smi with vgpu flag for vgpu information
“nvidia-smi vgpu” for vGPU Information
nvidia-smi with vgpu -q flag
“nvidia-smi vgpu -q” to Query more vGPU Information
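For quick reference, here are the invocations shown above, plus a looping variant (the -l flag is a standard nvidia-smi option that refreshes the output at a set interval):

nvidia-smi              # Overview: GPUs, temperatures, power, memory, processes
nvidia-smi vgpu         # Summary of active vGPUs
nvidia-smi vgpu -q      # Detailed per-vGPU query
nvidia-smi -l 5         # Refresh the overview every 5 seconds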

Final Thoughts

Overall I’m very impressed, and it’s working great. While I haven’t tested any games, it works perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I’m using 2 displays, both running at a resolution of 1920×1080.

I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.

The two coolest parts of this project are:

  • 3D Acceleration and Hardware h.264 Encoding on VMware Horizon
  • Getting a GRID K1 working on an HPE ML310e Gen8 v2

Highly recommend getting a setup like this for your own homelab!

Uses and Projects

Well, I’m writing this “Uses and Projects” section after the original article (it’s now March 8th, 2020). I have to say, I couldn’t be more impressed with this setup, which I’m using as my daily driver.

Since I set this up, I’ve used it remotely while on airplanes, for working while travelling, and even for video editing.

Some of the projects (and posts) I’ve done, can be found here:

Leave a comment and let me know what you think! Or leave a question!