Here’s a fun quick VDI Gaming Demo with NVIDIA vGPU and Omnissa Horizon 8, using an NVIDIA L4 GPU and the L4-12Q Profile.
This video is just for fun, showing some of the capabilities of the technology, hardware, and software, in this case with cloud gaming.
The NVIDIA vGPU solution provides the ability to “slice” a physical GPU into multiple Virtual GPU (vGPU) devices for your Virtual Machines and virtual workloads.
In this video:
Quick Introduction to NVIDIA vGPU with Omnissa Horizon 8
While most of us frequently deploy new ESXi hosts, a question and task not often discussed is how to properly decommission a VMware ESXi host.
Some might be surprised to learn that you cannot simply power down and remove the host from the vCenter server; there are a number of steps that must be taken beforehand to ensure a successful decommission. Properly decommissioning the ESXi host avoids orphaned objects in the vCenter database, which can cause problems down the road.
Today we’ll go over how to properly decommission a VMware ESXi host in an environment with VMware vCenter Server.
The Process – How to decommission ESXi
We will detail the process and considerations to decommission an ESXi host. We will assume that you have already migrated all your VMs, templates, and files off the host, and that it contains no data requiring backup or migration.
Please read further for extended procedures and more information.
Enter Maintenance Mode
We enter maintenance mode to confirm that no VMs are left running on the host. You can simply right click the host and enter maintenance mode.
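If you manage your environment with PowerCLI, this step can be scripted as well. A minimal sketch, assuming the VMware.PowerCLI module is installed; the vCenter and host names are placeholders:

```powershell
# Connect to vCenter ("vcsa.lab.local" is a placeholder FQDN)
Connect-VIServer -Server vcsa.lab.local

# Put the host to be decommissioned into maintenance mode
Set-VMHost -VMHost (Get-VMHost "esxi01.lab.local") -State Maintenance
```

With DRS in fully automated mode, any remaining powered-on VMs will be evacuated automatically as the host enters maintenance mode.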
Remove Host from vDS Switches
You must gracefully remove the host from any vDS switches (VMware Distributed Switches) before removing the host from vCenter Server.
You can create a standard vSwitch and migrate vmk (VMware Kernel) adapters from the vDS switch to standard vSwitch, to maintain communication with the vCenter server and other networks.
Please Note: If you are using vDS switches for iSCSI connectivity, you must plan for this beforehand, either by unmounting/detaching the iSCSI LUNs before removing the vDS, or by gracefully migrating the vmk adapters to a standard vSwitch, relying on MPIO to avoid losing connectivity during the process.
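For reference, here is a hedged PowerCLI sketch of that vmk migration, assuming a spare physical uplink (vmnic1), the management vmkernel adapter (vmk0), and placeholder names throughout; your NIC and port group layout will differ:

```powershell
$vmhost = Get-VMHost "esxi01.lab.local"

# Create a temporary standard vSwitch and a matching port group
$vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitchTemp"
$pg  = New-VirtualPortGroup -VirtualSwitch $vss -Name "Management Network"

# Migrate one physical uplink and the management vmkernel adapter together
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
$vmk  = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss `
    -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg

# Once nothing on the host depends on the vDS, remove the host from it
Get-VDSwitch -VMHost $vmhost | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false
```

Migrating the uplink and the vmkernel adapter in a single Add-VirtualSwitchPhysicalNetworkAdapter call avoids a window where the management network has no uplink.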
Unmount and Detach iSCSI LUNs
You can now proceed to unmount and detach iSCSI LUNs from the selected system:
Unmount the iSCSI LUN(s) from the host
Detach the iSCSI LUN(s) from the host
You will unmount only on the selected host to be decommissioned, and then detach the LUNs (again, only on the host you are decommissioning).
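If you prefer the command line, the unmount and detach can be performed with esxcli through PowerCLI’s Get-EsxCli interface. A sketch with a placeholder datastore label and device ID (look up the naa identifier backing your datastore first):

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2

# Unmount the VMFS datastore by its volume label (on this host only)
$esxcli.storage.filesystem.unmount.Invoke(@{ volumelabel = "iSCSI-Datastore01" })

# Detach the backing device; list devices with $esxcli.storage.core.device.list.Invoke()
$esxcli.storage.core.device.set.Invoke(@{ device = "naa.600xxxxxxxxxxxxx"; state = "off" })
```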
Move Host from Cluster to Datacenter as standalone host
While this may not be required, I usually do it to let vSphere Cluster Services (HA/DRS) adjust for the host removal, and to handle reconfiguration of the HA agent on the ESXi host. You can simply move the host from the cluster to the parent datacenter level.
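In PowerCLI this is a one-liner; the datacenter name below is a placeholder:

```powershell
# Move the host out of its cluster to the datacenter root
Move-VMHost -VMHost (Get-VMHost "esxi01.lab.local") -Destination (Get-Datacenter "Datacenter")
```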
Remove Host from Inventory
Once the host has been moved and a moment or two have elapsed, you can now proceed to remove the host from inventory.
While the host is powered on and still connected to vCenter, right click on the host and choose “Remove from Inventory”. This will gracefully remove objects from vCenter, and also uninstall the HA agent from the ESXi host.
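The PowerCLI equivalent, again with a placeholder host name, performs the same graceful removal:

```powershell
# Removes the host object (and uninstalls its HA agent) from vCenter
Remove-VMHost -VMHost (Get-VMHost "esxi01.lab.local") -Confirm:$false
```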
Host Repurposing
At this point, you can log directly on to the ESXi host using the local root password, and shut down the host.
Today we’ll go over how to install the vSphere vCenter Root Certificate on your client system.
Certificates are designed to verify the identity of the systems, software, and/or resources we are accessing. If we aren’t able to verify and authenticate what we are accessing, how do we know that the resource we are sending information to is really what it claims to be?
Installing the vSphere vCenter Root Certificate on your client system allows you to verify the identity of your VMware vCenter server, VMware ESXi hosts, and other resources, all while getting rid of those pesky certificate errors.
I see too many VMware vSphere administrators simply dismiss the certificate warnings, when instead they (and you) should be installing the Root CA on their systems.
Why install the vCenter Server Root CA
Installing the vCenter Server’s Root CA allows your computer to trust, verify, and validate any certificates issued by the vSphere Root Certification Authority running on your vCenter appliance (vCSA). Essentially, this translates to the following:
Your system will trust the Root CA and all certificates issued by the Root CA
This includes: VMware vCenter, vCSA VAMI, and ESXi hosts
When connecting to your vCenter server or ESXi hosts, you will not be presented with certificate issues
This includes errors when uploading files directly to a datastore
In addition to all of the above, you will start to take advantage of certificate-based validation. Your system will verify that when you connect to your vCenter Server or ESXi hosts, you are indeed connecting to the intended system. When things are working, you won’t be prompted with certificate errors, whereas if something is wrong, you will be notified of a possible security event.
How to install the vCenter Root CA
To install the vCenter Root CA on your system, perform the following:
Navigate to your VMware vCenter “Getting Started” page.
This is the IP or FQDN of your vCenter server without the “ui” after the address. We only want to access the base address.
Do not click on “Launch vSphere Client”.
Right click on “Download trusted root CA certificates”, and choose “Save link as”.
Save this ZIP file to your computer, and extract the archive.
You must fully extract the ZIP file; do not just browse into it by double-clicking.
Open and navigate through the extracted folders (certs/win in my case) and locate the certificates.
For each file that has the type of “Security Certificate”, right click on it and choose “Install Certificate”.
Change “Store Location” to “Local Machine”
This makes your system trust the certificate, not just your user profile
Choose “Place all certificates in the following store”, click Browse, and select “Trusted Root Certification Authorities”.
Complete the wizard. If successful, you’ll see: “The import was successful.”
Repeat this for each file in that folder with the type of “Security Certificate” (or script it, as shown below).
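If you’d rather script the whole thing, here’s a hedged PowerShell sketch that downloads, extracts, and imports the certificates. It assumes PowerShell 7+ on Windows (for -SkipCertificateCheck, needed because the CA isn’t trusted yet), an elevated prompt, and that the “Download trusted root CA certificates” link on your vCenter points at /certs/download.zip:

```powershell
$vcenter = "vcsa.lab.local"   # placeholder FQDN; use your own vCenter address

# Download and extract the trusted root certificate bundle
Invoke-WebRequest -Uri "https://$vcenter/certs/download.zip" `
    -OutFile "$env:TEMP\vccerts.zip" -SkipCertificateCheck
Expand-Archive -Path "$env:TEMP\vccerts.zip" -DestinationPath "$env:TEMP\vccerts" -Force

# Import each certificate into the Local Machine trusted root store
# (the win folder contains .crt files on recent builds; adjust the filter if yours differ)
Get-ChildItem "$env:TEMP\vccerts\certs\win" -Filter *.crt | ForEach-Object {
    Import-Certificate -FilePath $_.FullName -CertStoreLocation Cert:\LocalMachine\Root
}
```

Once imported, browsing to your vCenter’s address should no longer produce a certificate warning.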
Alternatively, you can use a GPO with Active Directory or other workstation management techniques to deploy the Root CAs to multiple systems or all the systems in your domain.
When it comes to virtualized workloads, one thing I commonly see overlooked in the design of the solution, is the placement of workloads. In this post, I want to cover VMware vSphere VM placement rules using the “VM/Host Rules” feature.
This is a feature that I commonly see overlooked and left unconfigured, especially in smaller single-cluster environments; however, I’ve seen this happen in very large-scale environments as well.
Let’s cover the why, what, who, and how…
VM Workloads
While VMware vSphere does have a number of technologies built in for redundancy, load-balancing, and availability, as part of the larger solution we often find that our workloads, specifically 3rd party platforms, ship with their own solutions that accomplish the same thing.
We need to identify which HA (High Availability) or redundancy solution to use, based on the application, service, and how it works.
For example, with VMware vSphere HA (High Availability), if a host goes offline, the affected workloads can be restarted on the remaining online hosts. There is time associated with failure detection and boot, resulting in a loss of service during this period.
Third party solutions often have their own high availability or redundancy built into the solution, such as Microsoft Active Directory. In a standard configuration, any domain controller can respond to a client’s request for resources at any time. If one DC goes offline, other DCs can respond to the request, resulting in no downtime.
Obviously, in the case of Active Directory Domain Controllers, you’d much prefer to have multiple DCs in your environment, instead of relying on vSphere HA with a single DC.
Additionally, if you did have multiple domain controllers, you’d want to make sure they aren’t all placed on the same ESXi host. This is where we start to incorporate VM placement into our solution.
VM Placement
When it comes to 3rd party solutions like those mentioned above, we need to identify these workloads and factor them into the design of the solution we are implementing, maintaining, or improving.
Example of VM workloads used with VM Placement
A few examples of these workloads with their own load-balancing and availability technologies:
Microsoft Windows Active Directory Domain Controllers
Microsoft Windows Servers running DNS/DHCP Servers
Virtualized Active/Active or Active/Passive Firewall Appliances
VMware Horizon UAG (Unified Access Gateway) configured in HA mode
Other servers/services that have their own availability systems
As you can see, these applications all have their own solutions for availability, so we must ensure the different “nodes” or “instances” are running on different ESXi hosts, to avoid a single host failure bringing down the entire solution.
Unless otherwise specified by the 3rd party vendor, I would recommend using VM/Host Rules in combination with vSphere DRS and HA.
Configuring VM Placement with VM/Host Rules
To configure these rules, follow the instructions below:
Log on to your VMware vCenter Server
Select a Cluster
Click on the “Configure” tab, and then “VM/Host Rules”
Here you can Add/Edit/Delete VM Host Rules
Click on “Add”, and give the rule a new name (Example: Domain Controllers)
For “Type”, select “Separate Virtual Machines”
Click “Add” and select your Domain Controllers and add them to the rule.
After you click “OK”, the rule should now be saved, and DRS will make sure these VMs are now running on separate hosts.
Below you can see another example of a configured system, separating 2 Active/Passive Firewall appliances.
As you can see, VM placement with VM/Host Rules is very easy to configure and deploy.
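The same rule can also be created with PowerCLI. A minimal sketch, with placeholder cluster and VM names; passing -KeepTogether $false is what makes it a “Separate Virtual Machines” (anti-affinity) rule:

```powershell
# Create an anti-affinity rule keeping the domain controllers on separate hosts
New-DrsRule -Cluster (Get-Cluster "Production") -Name "Domain Controllers" `
    -KeepTogether $false -VM (Get-VM "DC01", "DC02")
```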
Additional Considerations
Note: if you implement these rules but do not have enough hosts to fulfill them, DRS may fail to evacuate the host when you place it in maintenance mode or remediate it with vLCM (Lifecycle Manager).
In this case, you’ll need to manually vMotion the VMs to other hosts (overriding the rule) or shut some down.
A few months ago, you may have seen my post detailing my experience with ESXi 7.0 on HP Proliant DL360p Gen8 servers. I now have an update as I have successfully loaded ESXi 8.0 on HPE Proliant DL360p Gen8 servers, and want to share my experience.
It wasn’t as eventful as one would have expected, but I wanted to share what’s required, what works, and my observations on stability.
Please note, this is NOT supported and NOT recommended for production environments. Use the information at your own risk.
A special thank you goes out to William Lam and his post on Homelab considerations for vSphere 8, which provided me with the boot parameter required to allow legacy CPUs.
ESXi on the DL360p Gen8
With the release of vSphere 8.0 Update 1, and all the new features and functionality that come with the vSphere 8 release as a whole, I decided it was time to attempt to update my homelab.
In my setup, I have the following:
2 x HPE Proliant DL360p Gen8 Servers
Dual Intel Xeon E5-2660v2 Processors in each server
USB and/or SD for booting ESXi
No other internal storage
NVIDIA A2 vGPU (for use with VMware Horizon)
External SAN iSCSI Storage
Since I have 2 servers, I decided to do a fresh install using the generic installer, and then use the HPE addon depot to install the HPE drivers, agents, and software. I would perform these steps on one server at a time, continuing to the next only if all went well.
I went ahead and documented the configuration of my servers beforehand, and had already upgraded my VMware vCenter vCSA appliance from 7U3 to 8U1. Note that you should always upgrade your vCenter Server first, and then your ESXi hosts.
To my surprise, the install went very smoothly (after enabling legacy CPUs in the installer) on the first host, and after a few days with no stability issues, I proceeded to upgrade the 2nd host.
I’ve been running with 100% uptime for 25+ days without any issues.
The process – Installing ESXi 8.0
I used the following steps to install VMware vSphere ESXi 8 on my HP Proliant Gen8 Server:
Download the Generic ESXi installer from VMware directly.
Boot server with Generic ESXi installer media (CD or ISO)
IMPORTANT: Press “Shift + o” (Shift key, and letter “o”) to interrupt the ESXi boot loader, and add “AllowLegacyCPU=true” to the kernel boot parameters.
Continue to install ESXi as normal.
You may see warnings about using a legacy CPU; you can ignore these.
Complete initial configuration of ESXi host
Mount NFS or iSCSI datastore.
Copy HPE Custom Addon for ESXi zip file to datastore.
Enable SSH on host (or use console).
Place host in to maintenance mode.
Run “esxcli software vib install -d /vmfs/volumes/datastore-name/folder-name/HPE-801.0.0.11.3.1.1-Jun2023-Addon-depot.zip” from the command line (a PowerCLI alternative is sketched after this list).
The install will run and complete successfully.
Restart your server as needed; you’ll notice that not only were HPE drivers installed, but also agents like the HPE Agentless Management Service and iLO integrations.
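If you’d rather not SSH to each host, the SSH/esxcli steps above can also be driven remotely through PowerCLI’s Get-EsxCli interface. A sketch, assuming the depot zip has already been copied to the datastore path shown above and the host name is a placeholder:

```powershell
$vmhost = Get-VMHost "esxi01.lab.local"   # placeholder host name
Set-VMHost -VMHost $vmhost -State Maintenance

# Install the HPE addon depot from the datastore (same command as the SSH version)
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.software.vib.install.Invoke(@{
    depot = @("/vmfs/volumes/datastore-name/folder-name/HPE-801.0.0.11.3.1.1-Jun2023-Addon-depot.zip")
})

# Reboot, then exit maintenance mode once the host is back:
# Set-VMHost -VMHost $vmhost -State Connected
Restart-VMHost -VMHost $vmhost -Confirm:$false
```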
After that, everything was good to go… Here you can see version information from one of the ESXi hosts:
What works, and what doesn’t
I was surprised to see that everything works, including the P420i embedded RAID controller. Please note that I am not using the RAID controller, so I have not performed extensive testing on it.
All Hardware health information is present, and ESXi is functioning as one would expect if running a supported version on the platform.
Additional Information
Note that with vSphere 8, VMware has deprecated vLCM baselines. Your focus should be updating your ESXi instances using vLCM cluster images. You can incorporate vendor add-ons and components to create a customized image for deployment.