Dec 05 2016
 

While prepping my test environment for an upgrade from vSphere 6.0 to 6.5, one of the prerequisites is to first upgrade the VDP appliances to version 6.1.3, as 6.1.3 is the only version of VDP that supports vSphere 6.5. In my environment I’ll be upgrading VDP from 6.1.2 to 6.1.3.

After downloading the ISO, changing my disks to dependent, creating a snapshot, and attaching the ISO to the VM, my VDP appliances would not recognize the ISO image, showing the dreaded: “To upgrade your VDP appliance, please connect a valid upgrade ISO image to the appliance.”

No ISO Detected

I tried a few things, including the old “patch” that was issued for 6.1 when it couldn’t detect the ISO; unfortunately, it didn’t help. I also tried to manually mount the virtual CD-ROM to the mountpoint, but had no luck, since the mountpoint /mnt/auto/cdrom is controlled by the autofs service. If you try to modify anything under it (delete, create, etc.), you’ll just run into errors (permission denied, file and/or directory doesn’t exist, etc.).

Essentially the autofs service was not auto-mounting the virtual CD drive to the mount point.

To fix this:

  1. SSH in to the VDP appliance
  2. Run command “sudo su” to run commands as root
  3. Use vi to edit the auto.mnt file using command: “vi /etc/auto.mnt”
  4. At the end of the first line in the file, you will see “/dev/cdrom” (without quotation marks); change this to “/dev/sr0” (again, without quotation marks)
  5. Save the file (after editing the text, press Esc, then type “:w” and Enter to write the file, then type “:q” and Enter to quit vi)
  6. Reload the autofs config using command: “/etc/init.d/autofs reload”
  7. At the shell, run “mount” to show the active mountpoints; after a few seconds you’ll notice the ISO is now mounted.
  8. You can now initiate the upgrade. Start it.
  9. At 71%, the upgrade updates autofs via an RPM, and the changes you made to the config are cleared. IMMEDIATELY edit the /etc/auto.mnt file again, change “/dev/cdrom” to “/dev/sr0”, save the file, and issue the command “/etc/init.d/autofs reload”. Do this as fast as possible (the full command sequence is summarized after these steps).
  10. You’re good to go; the install will continue and take some time. The web interface will fail and become unresponsive. Simply wait, and the VDP appliance will eventually shut down (in my case, even in a high performance environment, it took over 30 minutes after the web interface stopped responding for the VDP VM to shut down).
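
For reference, here’s a minimal command summary of the shell portion of the fix (steps 2 through 7, plus the repeat at the 71% mark). The /dev/sr0 device name is what worked on my appliance, so confirm yours first; the sed one-liner is simply a non-interactive alternative to the vi edit described above:

    sudo su                                          # run the remaining commands as root
    sed -i 's|/dev/cdrom|/dev/sr0|' /etc/auto.mnt    # or make the same change with vi
    /etc/init.d/autofs reload                        # reload the autofs config
    mount                                            # after a few seconds the upgrade ISO shows as mounted

    # At roughly 71%, an RPM update overwrites /etc/auto.mnt, so repeat immediately:
    sed -i 's|/dev/cdrom|/dev/sr0|' /etc/auto.mnt
    /etc/init.d/autofs reload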

And done! Leave a comment!

 

Nov 05 2016
 

Yesterday, I had a reader (Nicolas) leave a comment on one of my previous blog posts bringing my attention to the MTU for Jumbo Frames on the HPE MSA 2040 SAN.

MSA 2040 MTU Comment

Since I first started working with the MSA 2040, I’ve looked at numerous HPE documents outlining configuration and best practices, and they did confirm that the unit supports Jumbo Frames. However, the actual MTU was never clearly stated and can be confusing. I was under the assumption that the unit supported an MTU of 9000 while reserving 100 bytes for overhead. This is not necessarily the case.

Nicolas chimed in and provided details on his tests, which confirmed the HPE MSA 2040 actually has a working MTU of 8900. I ran the tests Nicolas outlined against my own configuration and confirmed that packets fragment if the MTU is greater than 8900.

ESXi vmkping usage: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728
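
As a rough sketch of the test from the ESXi shell (the vmkernel interface vmk1 and the SAN IP 10.0.0.10 below are placeholders for your own storage network): with an 8900-byte MTU, the largest ICMP payload that fits unfragmented is 8872 bytes (8900 minus 20 bytes of IP header and 8 bytes of ICMP header), while a payload sized for a 9000 MTU should fail with “don’t fragment” set.

    # -I = vmkernel interface, -d = do not fragment, -s = ICMP payload size
    vmkping -I vmk1 -d -s 8872 10.0.0.10    # fits within an 8900 MTU end to end
    vmkping -I vmk1 -d -s 8972 10.0.0.10    # sized for a 9000 MTU; fragments/fails against the MSA 2040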

This is a big discovery, because packet fragmentation not only degrades performance, it also floods the links with fragmented packets.

I went ahead and re-configured my ESXi hosts to use an MTU of 8900 on the network used with my SAN. This immediately created a MASSIVE performance increase (both speed and IOPS). I highly recommend that users of the MSA 2040 SAN confirm this on their own, and update their MTUs as they see fit.
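
For reference, on a standard vSwitch the change looks roughly like this from the ESXi shell (vSwitch1 and vmk1 are placeholders for whatever carries your iSCSI/storage traffic; distributed switches are configured through vCenter instead):

    # Raise the MTU on the vSwitch carrying storage traffic, then on the vmkernel port itself
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=8900
    esxcli network ip interface set --interface-name=vmk1 --mtu=8900

    # Verify the new MTU values
    esxcli network ip interface list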

Also, this brings up another consideration. Ideally, on a single network, you want all devices to be running the same MTU. If your MSA 2040 SAN is on a storage network with other SAN devices (or any other device), you may want to configure all of them to use an MTU of 8900 if possible (and of course, don’t forget your servers).

A big thank you to Nicolas for pointing this out!

Apr 10 2016
 

For those of you who use HP’s vibsdepot with VMware Update Manager, you may have noticed that lately you have not been able to synchronize patch definitions from the HP vibsdepot source.

I suspected this had something to do with the fact that the hp.com domain used to host these files, and with the company split, all enterprise-related hosting has now moved to hpe.com.

To fix this, log in to the vSphere client, jump to the “Admin View”, then “Download Settings” on the left. Right-click on the HP-related download sources and simply update the URLs from hp.com to hpe.com, and the problem is solved. After clicking “Test”, the connectivity status updates to “Connected”.

Old URLs:

http://vibsdepot.hp.com/index.xml

http://vibsdepot.hp.com/index-drv.xml

New URLs:

http://vibsdepot.hpe.com/index.xml

http://vibsdepot.hpe.com/index-drv.xml
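
If you want to sanity-check that the new URLs are reachable from your Update Manager server before re-testing in the client, a quick HEAD request with any HTTP client will do; for example, with curl:

    curl -I http://vibsdepot.hpe.com/index.xml        # an HTTP 200 response means the depot metadata is reachable
    curl -I http://vibsdepot.hpe.com/index-drv.xml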

VMware HPE vibsdepot

I later noticed this “notice” on HPE’s website (http://vibsdepot.hpe.com/):

HPE vibsdepot notice

Nov 21 2015
 
HP MSA2040 Dual Controller SAN with 10Gb DAC SFP+ cables

I’d say 50% of all the e-mails/comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.

I decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 x HPE ProLiant DL360p Gen8 servers via 4 x HPE 10Gb DAC (Direct Attach Cable) cables.

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers

See below for a video with host connectivity:

Nov 17 2015
 

I recently had a reader reach out to me for assistance with an issue they were having with a VMware implementation. They were experiencing issues with uploading files and performing I/O on Linux-based virtual machines.

Originally it was believed that this was due to networking issues, since the performance issues were only one way (when uploading/writing to storage) and weren’t experienced with all virtual machines. Another behaviour noticed was slow upload speeds to the vSphere client file browser, and slow Physical-to-Virtual migrations.

After troubleshooting and exploring the issue with them, it was noticed that cache was not enabled on the RAID array that was providing the storage for the vSphere implementation.

Please note that in virtual environments with storage backed by RAID arrays, RAID cache is a must (for performance reasons), and battery-backed RAID cache is a must (for protection and data integrity). Write cache allows write operations to be cached and committed to multiple disks at once, sometimes even re-ordering and optimizing the writes as they are processed. This dramatically increases observed performance, since the ESXi hosts and virtual machines aren’t waiting for each write operation to commit to disk before proceeding to the next.

You’ll notice that under Windows virtual machines this issue often won’t be observed on writes, since Windows VMs typically cache file transfers to RAM and then flush them to disk. This can give the impression that there are no storage issues while troubleshooting (making one believe it’s related to the Linux VMs, the ESXi hosts themselves, or some odd networking issue).
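
To take the guest’s own caching out of the equation while testing, a simple sketch on a Linux VM is a direct-I/O write test with dd (the file path and sizes below are just examples; oflag=direct bypasses the Linux page cache so the throughput reported reflects the underlying storage):

    # Write 1 GB straight through to the virtual disk, bypassing the guest page cache
    dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1024 oflag=direct
    # Remove the test file afterwards
    rm /tmp/ddtest.bin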

 

Again, I cannot stress enough that you should have a battery-backed cache module, or a capacitor-backed flash module, providing the cache functions.

If you do implement write cache without backing it with a battery, corruption can occur on the RAID array if there is a power failure or if the RAID controller freezes. Battery-backed cache allows the cached writes to be committed to disk on the next restart of the storage unit/controller, thus providing protection.