Nov 05 2016
 

Yesterday, I had a reader (Nicolas) leave a comment on one of my previous blog posts bringing my attention to the MTU for Jumbo Frames on the HPE MSA 2040 SAN.

MSA 2040 MTU Comment

Ever since I first started working with the MSA 2040, I've looked at numerous HPE documents outlining configuration and best practices. The documents did confirm that the unit supports Jumbo Frames; however, the actual MTU was never clearly stated and can be confusing. I was under the assumption that the unit supported an MTU of 9000, while reserving 100 bytes for overhead. This is not necessarily the case.

Nicolas chimed in and provided details on his tests, which confirmed the HPE MSA 2040 actually has a working MTU of 8900. I ran the tests that Nicolas outlined in my own configuration and confirmed that packets fragment if the MTU is set any higher than 8900.

ESXi vmkping usage: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728
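To illustrate the test, this is roughly what the vmkping check looks like from the ESXi shell (the vmkernel interface name and SAN IP below are placeholders for your own environment):

vmkping -d -s 8872 -I vmk1 10.0.1.50   # 8900 MTU minus 28 bytes of IP/ICMP headers, don't-fragment set; this should succeed
vmkping -d -s 8972 -I vmk1 10.0.1.50   # payload for a full 9000 MTU; this fails or fragments against the MSA 2040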

This is a big discovery, because packet fragmentation not only degrades performance, it also floods the links with fragmented packets.

I went ahead and re-configured my ESXi hosts to use an MTU of 8900 on the network used with my SAN. This immediately created a MASSIVE performance increase (in both throughput and IOPS). I highly recommend that users of the MSA 2040 SAN confirm this on their own and update their MTUs as they see fit.
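For reference, here's roughly how that change looks from the ESXi shell on a standard vSwitch setup (the vSwitch and vmkernel port names below are examples only; a distributed switch is changed through the vSphere client instead):

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=8900   # the vSwitch carrying the iSCSI traffic
esxcli network ip interface set --interface-name=vmk1 --mtu=8900         # the iSCSI vmkernel port
esxcli network ip interface list                                         # verify the new MTU took effect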

Also, this brings up another consideration. Ideally, on a single network, you want all devices to be running the same MTU. If your MSA 2040 SAN is on a storage network with other SAN devices (or any other device), you may want to configure all of them to use the MTU of 8900 if possible (and of course, don’t forget your servers).

A big thank you to Nicolas for pointing this out!

Apr 10 2016
 

For those of you that use HP’s vibsdepot with VMWare Update Manager, you may have noticed that as of late you have not been able to synchronize patch definitions from the HP vibsdepot source.

I suspected this had something to do with the fact that, in the past, the hp.com domain was used to host these files, and with the company split, all enterprise-related hosting has now moved to hpe.com.

To fix this, simply log in to the vSphere client, jump to the “Admin View”, then “Download Settings” on the left. Right-click the HP-related download sources and update the URLs from hp.com to hpe.com, and the problem is solved. After clicking “Test”, the connectivity status updates to “Connected”.

Old URLs:

http://vibsdepot.hp.com/index.xml

http://vibsdepot.hp.com/index-drv.xml

New URLs:

http://vibsdepot.hpe.com/index.xml

http://vibsdepot.hpe.com/index-drv.xml
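If you want to quickly confirm the new depot is reachable before re-running the synchronization, a simple check with curl from any machine works (these are the same URLs listed above):

curl -I http://vibsdepot.hpe.com/index.xml   # should return an HTTP 200 response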

VMWare HPE vibsdepot

I later noticed this “notice” on HPE’s website (http://vibsdepot.hpe.com/):

HPE vibsdepot notice

Nov 21 2015
 
HP MSA2040 Dual Controller SAN with 10Gb DAC SFP+ cables

I’d say 50% of all e-mails/comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.

Decided to finally publicly post the pics! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 X HPE Proliant DL360p Gen8 servers via 4 X HPE 10Gb DAC (Direct Attach Cable) Cables.

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers

See below for a video with host connectivity:

Nov 17 2015
 

I recently had a reader reach out to me for some assistance with an issue they were having with a VMWare implementation. They were experiencing issues with uploading files, and performing I/O on Linux based virtual machines.

Originally it was believed that this was a networking issue, since the performance problems occurred only one way (when uploading/writing to storage) and weren’t experienced with all virtual machines. Another behaviour noticed was slow upload speeds to the vSphere client file browser, and slow Physical to Virtual migrations.

After troubleshooting and exploring the issue with them, it was noticed that cache was not enabled on the RAID array that was providing the storage for the vSphere implementation.

Please note that in virtual environments with storage based on RAID arrays, RAID cache is a must (for performance reasons), and battery-backed RAID cache is a must (for protection and data integrity). Write caching allows write operations to be cached and committed to multiple disks at once, sometimes even optimizing the write procedures as they are processed. This dramatically increases observed performance, since the ESXi hosts and virtual machines aren’t waiting for each write operation to commit before proceeding to the next.

You’ll notice that this issue won’t be observed on writes from Windows virtual machines, since Windows VMs typically cache file transfers to RAM and then write to disk. This can give the impression that there are no storage issues at all while troubleshooting, making one believe the problem is related to the Linux VMs, the ESXi hosts themselves, or some odd networking issue.

 

Again, I cannot stress enough that you should have a battery backed cache module, or capacitor backed flash module providing cache functions.

If you do implement cache without backing it with a battery, corruption can occur on the RAID array if there is a power failure or if the RAID controller freezes. Battery-backed cache allows the cached write procedures to be committed to disk on the next restart of the storage unit/storage controller, thus providing protection.
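As an example of what to check: on an HPE Smart Array controller (just one possibility, and the slot and logical drive numbers below are placeholders), the cache and battery status can be reviewed and the array accelerator enabled from the CLI:

hpssacli ctrl all show status                                        # controller, cache, and battery/capacitor status
hpssacli ctrl slot=0 show config detail                              # per-logical-drive cache (array accelerator) settings
hpssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=enable   # enable caching on the logical drive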

Jun 07 2014
 

I’ve had the HPE MSA 2040 setup, configured, and running for about a week now. Thankfully this weekend I had some time to hit some benchmarks. Let’s take a look at the HPE MSA 2040 benchmarks on read, write, and IOPS.

First some info on the setup:

-2 X HPE Proliant DL360p Gen8 Servers (2 X 10 Core processors each, 128GB RAM each)

-HPE MSA 2040 Dual Controller – Configured for iSCSI

-HPE MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives

-Each host is directly attached via 2 X 10Gb DAC cables (Each server has 1 DAC cable going to controller A, and Each server has 1 DAC cable going to controller B)

-2 vDisks are configured, each owned by a separate controller

-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)

-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)

-While round robin is configured, only one optimized path exists (only one path is being used) for each host to the datastore I tested (see the example path policy commands after this list)

-Utilized “VMWare I/O Analyzer” (https://labs.vmware.com/flings/io-analyzer) which uses IOMeter for testing

-Running 2 “VMWare I/O Analyzer” VMs as worker processes. Both workers are testing at the same time, testing the same datastore.
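For anyone setting up multipathing on a similar configuration, the path selection policy can be checked and switched to round robin from the ESXi shell along these lines (the device ID below is a placeholder; substitute the naa ID of your MSA volume):

esxcli storage nmp device list                                                  # lists devices, the current PSP, and the active paths
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR    # switch the volume to round robin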

Sequential Read Speed:

Max Read: 1480.28 MB/sec

Sequential Write Speed:

Max Write: 1313.38 MB/sec

See below for IOPS (Max Throughput) testing:

Please note: The MaxIOPS and MaxWriteIOPS workloads were used. These workloads don’t have any randomness, so I’m assuming the cache module answered all the I/O requests (however, I could be wrong). Tests were run for 120 seconds. This means it’s more a test of what the controller itself is capable of handling over a single 10Gb link from the controller to the host.

IOPS Read Testing:

Max Read IOPS: 70679.91

IOPS Write Testing:

Max Write IOPS: 29452.35

PLEASE NOTE:

-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.

-I was running a VMWare vDP replication in the background (My bad, I know…).

-Sum is combined throughput of both hosts, Average is per host throughput.

Conclusion:

Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths setup to the SAN!

Cheers