I’d say 50% of all the e-mails/comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.
I decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 X HPE Proliant DL360p Gen8 servers via 4 X HPE 10Gb Direct Attach Cables (DACs).
I’ve had the HPE MSA 2040 setup, configured, and running for about a week now. Thankfully this weekend I had some time to hit some benchmarks. Let’s take a look at the HPE MSA 2040 benchmarks on read, write, and IOPS.
First some info on the setup:
-2 X HPE Proliant DL360p Gen8 Servers (2 X 10 Core processors each, 128GB RAM each)
-HPE MSA 2040 Dual Controller – Configured for iSCSI
-HPE MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives
-Each host is directly attached via 2 X 10Gb DAC cables (Each server has 1 DAC cable going to controller A, and Each server has 1 DAC cable going to controller B)
-2 vDisks are configured, each owned by a separate controller
-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)
-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)
-While round robin is configured, only one optimized path exists (only one path is being used) for each host to the datastore I tested
-Running 2 “VMware I/O Analyzer” VMs as worker processes. Both workers run their tests at the same time, against the same datastore.
Sequential Read Speed:
Max Read: 1480.28MB/sec
Sequential Write Speed:
Max Write: 1313.38MB/sec
See below for IOPS (Max Throughput) testing:
Please note: The MaxIOPS and MaxWriteIOPS workloads were used. These workloads don’t have any randomness, so I’m assuming the cache module answered all of the I/O requests; however, I could be wrong. Tests were run for 120 seconds. This means it’s more a test of what the controller itself can handle over a single 10Gb link from the controller to the host.
IOPS Read Testing:
Max Read IOPS: 70679.91IOPS
IOPS Write Testing:
Max Write IOPS: 29452.35IOPS
PLEASE NOTE:
-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.
-I was running a VMware vDP replication in the background (My bad, I know…).
-Sum is the combined throughput of both hosts; Average is the per-host throughput.
Conclusion:
Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths setup to the SAN!
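For anyone curious how to confirm how many paths are actually being used per datastore (and whether the 10Gb link really is the limit), the path details can be pulled from the ESXi shell. A quick sketch only; the naa identifier below is a placeholder for one of the MSA volumes:

# List each device with its path selection policy and working paths
esxcli storage nmp device list

# Show every path (active and standby) for a specific LUN
esxcli storage core path list --device=naa.600c0ff00012345678900000000000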
Over the years I’ve come across numerous posts, blogs, articles, and howto guides that provide information on when to use iSCSI port binding, and they’ve all been wrong! Here, I’ll explain when to use iSCSI Port Binding, and why!
This post and information applies to all versions of VMware vSphere including 5, 5.5, 6, 6.5, 6.7, and 7.0.
What does iSCSI port binding do?
iSCSI port binding binds the software iSCSI initiator on an ESXi host to a vmkernel adapter (vmknic) and its underlying physical NIC, and configures it to allow multipathing (MPIO) in a situation where multiple vmknics reside on the same subnet.
In normal circumstances without port binding, if you have multiple vmkernel adapters on the same subnet (multihomed), the ESXi host would simply choose one and not use both for the transmission of packets, traffic, and data. iSCSI port binding forces the iSCSI initiator to use each bound adapter for both transmitting and receiving iSCSI packets.
In most simple SAN environments, there are two different types of setups/configurations.
Multiple Subnet – Numerous paths to a storage device on a SAN, each path residing on separate subnets. These paths are isolated from each other and usually involve multiple switches.
Single Subnet – Numerous paths to a storage device on a SAN, each path is on the same subnet. These paths usually go through 1-2 switches, with all interfaces on the SAN and the hosts residing on the same subnet.
IT professionals should be aware of the issues that occur when a host is multi-homed with multiple NICs on the same subnet.
In a typical scenario with Windows or Linux, if you have multiple adapters residing on the same subnet you’ll have issues with broadcasts and the transmission of packets, and in most cases you have absolutely no control over which NIC communications are initiated on, due to the way the routing table is handled. In most cases all outbound connections will be initiated through the first NIC installed in the system, or whichever one holds the primary route in the routing table.
When to use iSCSI port binding
This is where iSCSI Port Binding comes into play. If you have an ESXi host that has multiple vmk adapters sitting on the same subnet, you can bind the software iSCSI initiator’s vmk adapters to the physical NICs (vmnics). This allows multiple iSCSI connections on multiple NICs residing on the same subnet to transmit and handle the traffic properly.
So the general rule of thumb is:
One subnet, iSCSI port binding is the way to go!
Two or more subnets (multiple subnets), do not use iSCSI Port Binding! It’s just not needed since all vmknics are residing on different subnets.
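For reference, if you are in the single-subnet scenario and do want to set up port binding, here’s a rough sketch of what it looks like from the ESXi command line (the same can be done in the vSphere client under the software iSCSI adapter’s network port binding settings). The adapter name vmhba64 and the vmk numbers are examples; yours may differ:

# Identify the software iSCSI adapter (often vmhba64 on recent ESXi builds)
esxcli iscsi adapter list

# Bind both vmkernel ports (which sit on the same subnet) to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2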
Additional Information
And a final troubleshooting note: If you configure iSCSI Port Binding and notice that one of your interfaces is showing as “Not Used” and the other as “Last Used”, this is most likely due to either a physical cabling/switching issue (where one of the bound interfaces can’t connect to the iSCSI target), or you haven’t configured permissions on your SAN to allow a connection from that IP address.
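To dig into that from the command line, the bound ports and their status can be listed per adapter; again, vmhba64 is just an example adapter name:

# List the bound vmkernel ports and check their compliance / path status
esxcli iscsi networkportal list --adapter=vmhba64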
When I first started really getting in to multipath iSCSI for vSphere, I had two major hurdles that really took a lot of time and research to figure out before deploying.
How to configure a vSphere Distributed Switch for iSCSI multipath MPIO, with each path on a different subnet (i.e. each interface on a separate subnet)
When to use iSCSI port binding, and why
So many articles on the internet, and most of them were wrong.
In this article I’ll be getting into the first point: how to configure a vSphere Distributed Switch for iSCSI multipath MPIO with multiple subnets. You can find information on the second point here, when to use iSCSI Port Binding, and why.
I’ll start off by saying that when using multiple subnets on multiple isolated networks for SAN connectivity, you DO NOT use iSCSI Port Binding.
Configure the vSphere Distributed Switch (vDS) for iSCSI MPIO
Configuring a standard (non-distributed) vSphere Standard Switch is easy, but we want to do this right, right? Configuring a vSphere Distributed Switch allows you to roll it out to multiple hosts, making configuration and provisioning easier. It also makes the configuration easier to manage and maintain. In my opinion, in a full vSphere rollout there’s no reason to use vSphere Standard Switches. Everything should be distributed!
The setup
My configuration consists of two hosts connecting to an iSCSI device over 3 different paths, each on its own subnet. Each host has multiple NICs, and the storage device has multiple NICs as well.
As always, I plan the deployment on paper before touching anything. When getting ready for deployment, you should write down:
Which subnets you will use
Choose IP addresses for your SAN and hosts
I always draw a map that explains what’s connecting to what. When you start rolling this out, it’s good to have that image in your mind and on paper. If you lose track it helps to get back on track and avoid mistakes.
For this example, let’s assume that we have 3 connections (I know it’s an odd number):
Subnets to be used:
10.0.1.X
10.0.2.X
10.0.3.X
SAN Device IP Assignment:
10.0.1.1 (NIC 1)
10.0.2.1 (NIC 2)
10.0.3.1 (NIC 3)
Host 1 IP Assignment:
10.0.1.2 (NIC 1)
10.0.2.2 (NIC 2)
10.0.3.2 (NIC 3)
Host 2 IP Assignment:
10.0.1.3 (NIC 1)
10.0.2.3 (NIC 2)
10.0.3.3 (NIC 3)
So now we know where everything is going to sit, and what its addresses are. It’s now time to configure a vSphere Distributed Switch and roll it out to the hosts.
Instructions
Let’s begin!
We’ll start off by going into the vSphere client and creating a new vSphere Distributed Switch. You can name this switch whatever you want; I’ll use “iSCSI-vDS” for this example. Going through the wizard you can assign the name. Stop when you get to “Add Hosts and Physical Adapter”; on this page we will choose “Add Later”. Also, when it asks us to create a default port group, we will un-check the box and NOT create one.
Now we need to create some “Port Groups”. Essentially we will be creating a Port Group for each subnet and NIC for the storage configuration. In this example case, we have 3 subnets, and 3 NICs per host, so we will be creating 3 port groups. Go ahead and right click on the new vSphere distributed switch we created (“iSCSI-vDS” in my example), and create a new port group. I’ll be naming my first one “iSCSI-01”, second will be called “iSCSI-02”, and so on. You can go ahead and create one for each subnet. After these are created, we’ll end up with this:
After we have this set up, we now need to do some VERY important configuration. By default, each port group will have all uplinks configured as Active, which we DO NOT want. Essentially what we will be doing is assigning only one Active uplink per port group. Each port group will be on its own subnet, so we need to make sure that only the applicable uplink is active, and the remainder are moved to the “Unused Uplinks” section. This can be achieved by right clicking on each port group and going to “Teaming and Failover” underneath “Policies”. You’ll need to select the applicable uplinks and, using the “Move Down” button, move them down to “Unused Uplinks”. Below you’ll see some screenshots from the iSCSI-02 and iSCSI-03 port groups we’ve created in this example:
You’ll notice that the iSCSI-02 port group only has the iSCSI-02 uplink marked as active. Likewise, the iSCSI-03 port group only has the iSCSI-03 uplink marked as active. The same applies to iSCSI-01, and to any other uplinks you have (more if you have more links). Please ignore the entry for “iSCSI-04”; I created it for something else, so pretend the entry isn’t there. If you do have 4 subnets and 4 NICs, then you would have a 4th port group.
Now we need to add the vSphere Distributed Switch to the hosts. Right click on the “iSCSI-vDS” distributed switch we created and select “Add Host”. Select ONLY the hosts, and DO NOT select any of the physical adapters. A box will appear mentioning that you haven’t selected any physical adapters; simply hit “Yes” to “Do you want to continue adding the hosts”. For the rest of the wizard just keep hitting “Next”; we don’t need to change anything. Example below:
So here we are now, we have a vSphere Distributed Switch created, we have the port groups created, we’ve configured the port groups, and the vDS is attached to our hosts… Now we need to create vmks (vmkernel interfaces) in each port group, and then attach physical adapters to the port groups.
Head over to the Configuration tab inside of your ESXi host, and go to “Networking”. You’ll notice the newly created vSphere Distributed Switch is now inside the window. Expand it. You’ll need to perform these steps on each of your ESXi hosts. Essentially what we are doing is creating a vmk on each port group, on each host. Click on “Manage Virtual Adapters” and click on “Add”. We’ll select “New Virtual Adapter”, then on the next screen our only option will be “VMKernel”; click Next. In the “Select port group” option, select the applicable port group. You’ll need to do this multiple times, as we need to create a vmkernel interface for each port group (a vmk on iSCSI-01, a vmk on iSCSI-02, etc…) on each host; click Next. Since this is the vmk we are creating on the first port group (iSCSI-01) of the first host, we’ll assign the IP address as 10.0.1.2, fill in the subnet box, and finish the wizard. Create another vmk for the second port group (iSCSI-02); since it’s the first host it’ll have an IP of 10.0.2.2, and then do it again for the 3rd port group with an IP of 10.0.3.2. After you do this for the first host, you’ll need to do it again for the second host, only the IPs will be different since it’s a different host (in this example the second host would have a vmk on each of the 3 port groups: iSCSI-01 – 10.0.1.3, iSCSI-02 – 10.0.2.3, iSCSI-03 – 10.0.3.3). Here’s an example of iSCSI-02 and iSCSI-03 on ESXi host 1. Of course there’s also an iSCSI-01, but I cut it from the screenshot.
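If you’d rather do this step from the ESXi shell, vmkernel interfaces can also be created on a distributed switch port with esxcli. This is only a sketch: the interface name, dvport ID and IP are examples, and unlike the GUI (which picks a free port for you), from the shell you need to supply a free dvport ID from the correct port group yourself.

# Show the distributed switch as this host sees it (name, uplinks, ports in use)
esxcli network vswitch dvs vmware list

# Create the vmkernel interface on a free dvport of the iSCSI-01 port group (dvport-id 10 is an example)
esxcli network ip interface add --interface-name=vmk2 --dvs-name=iSCSI-vDS --dvport-id=10

# Assign the static IP for host 1 on the first subnet
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.1.2 --netmask=255.255.255.0 --type=static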
Now we need to “Manage Physical Adapters” and attach the physical adapters to the individual port groups. Essentially this will map each physical NIC to the separate subnet/port group we’ve created for storage in the vDS. We’ll need to do this on both hosts. Inside of the “Manage Physical Adapters” box, you’ll see each port group on the left hand side; click on “<Click to Add NIC>”. Now, in everyone’s environment the vmnic you add will be different. You should know which physical adapter you want to map to each subnet/port group. I’ve removed the vmnic number from the below screenshot just in case… and to make sure you think about this one…
As mentioned above, you need to do this on both hosts for the applicable vmnics. You’ll want to assign all 3 (even though I’ve only assigned 2 in the above screenshot).
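Once the vmnics are attached on both hosts, a quick way to sanity-check the uplink-to-vmnic mapping is from each host’s shell:

# Show the distributed switch with the vmnics this host is using as uplinks
esxcli network vswitch dvs vmware list

# Confirm the physical NICs are up and linked at the expected speed
esxcli network nic list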
Voila! You’re done! Now all you need to do is go into your iSCSI initiator and add the IPs of the iSCSI target to the dynamic discovery tab on each host. Rescan the adapter, add the VMFS datastores, and you’re done.
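If you prefer the command line, the same discovery and rescan can be done with esxcli; a sketch only, where vmhba64 stands in for your software iSCSI adapter and the target IPs follow the example plan above:

# Add the SAN's interfaces to dynamic discovery (send targets)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.1.1:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.2.1:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.3.1:3260

# Rescan the software iSCSI adapter to pick up the targets and LUNs
esxcli storage core adapter rescan --adapter=vmhba64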
If you have any questions or comments, or feel this can be done in a better way, drop a comment on this article. Happy Virtualizing!
Jumbo Frames
There is one additional step if you are using jumbo frames. Please note that to use jumbo frames, all NICs, physical switches, and the storage device itself need to be configured to support them. On the VMware side of things, you need to apply the following settings:
Under “Inventory” and “Networking”, Right Click on the newly created Distributed Switch. Under the “Properties” tab, select “Advanced” on the left hand side. Change the MTU to the applicable frame size. In my scenario this is 9000.
Under “Inventory” and “Hosts and Clusters”, click on the “Configuration Tab”, then “vSphere Distributed Switch”. Expand the newly created “Distributed Switch”, select “Manage Virtual Adapters”. Select a vmk interface, and click “edit”. Change the MTU to the applicable size, in my case this is 9000. You’ll need to do this for each vmk interface on each physical host.
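The vmkernel MTU can also be set from each host’s shell if you prefer; a sketch, with vmk2 as an example interface name:

# Set the MTU on each iSCSI vmkernel interface (repeat for every iSCSI vmk on every host)
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Verify the MTU of all vmkernel interfaces
esxcli network ip interface list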
For the server, we purchased another HPE Proliant DL360p Gen8 (with 2 X 10 Core processors and 128GB of RAM, exactly the same as our existing server), however I won’t be getting into that in this blog post.
Now for storage, we decided to pull the trigger and purchase an HPE MSA 2040 Dual Controller SAN. We purchased it as a CTO (Configure To Order) and loaded it up with 4 X 1Gb iSCSI RJ45 SFP+ modules (there’s a minimum requirement of one 4-pack of SFPs), and 24 X HPE 900GB 2.5-inch 10K RPM SAS Dual Port Enterprise drives. Even though we have the 4 X 1Gb iSCSI modules, we aren’t using them to connect to the SAN. We also placed an order for 4 X 10Gb DAC cables.
To connect the SAN to the servers, we purchased 2 X HPE Dual Port 10Gb Server SFP+ NICs, one for each server. The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A, and one going to Controller B.
HPE MSA 2040 Configuration
I must say that configuration was an absolute breeze. As always, using Intelligent Provisioning on the DL360p, we had ESXi up and running in no time, installed to the on-board 8GB micro-SD card.
I’m completely new to the MSA 2040 SAN and have actually never played with, or configured one. After turning it on, I immediately went to HPE’s website and downloaded the latest firmware for both the drives, and the controllers themselves. It’s a well known fact that to enable iSCSI on the unit, you have to have the controllers running the latest firmware version.
Turning on the unit, I noticed the management NIC on the controllers quickly grabbed an IP from my DHCP server. Logging in, I found the web interface extremely easy to use. Right away I went to the firmware upgrade section, and uploaded the appropriate firmware file for the 24 X 900GB drives. The firmware took seconds to flash. I went ahead and restarted the entire storage unit to make sure that the drives were restarted with the flashed firmware (a proper shutdown of course).
While you can update the controller firmware with the web interface, I chose not to do this as HPE provides a Windows executable that will connect to the management interface and update both controllers. Even though I didn’t have the unit configured yet, it’s a very interesting process that occurs. You can do live controller firmware updates with a Dual Controller MSA 2040 (as in no downtime). The way this works is, the firmware update utility first updates Controller A. If you have a multipath (MPIO) configuration where your hosts are configured to use both controllers, all I/O is passed to the other controller while the firmware update takes place. When it is complete, I/O resumes on that controller and the firmware update then takes place on the other controller. This allows you to do online firmware updates that will result in absolutely ZERO downtime. Very neat! PLEASE REMEMBER, this does not apply to drive firmware updates. When you update the hard drive firmware, there can be ZERO I/O occurring. You’d want to make sure all your connected hosts are offline, and no software connection exists to the SAN.
Anyways, the firmware update completed successfully. Now it was time to configure the unit and start playing. I read through a couple quick documents on where to get started. If I did this right the first time, I wouldn’t have to bother doing it again.
I used the available wizards to first configure the actual storage, and then the provisioning and mapping to the hosts. When deploying a SAN, you should always write down and create a map of your Storage Area Network topology. It helps when it comes time to configure, and really helps with reducing mistakes in the configuration. I quickly jotted down the IP configuration for the various ports on each controller, the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how things would connect.
Since the MSA 2040 is a Dual Controller SAN, you want to make sure that each host can directly access both controllers. Therefore, in my configuration with a 2-port NIC, port 1 on the NIC connects to a port on controller A of the SAN, while port 2 connects to controller B. When you do this and configure all the software properly (VMware in my case), you can create a configuration that allows load balancing and fault tolerance. Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its configured vDisk. Most I/O will go only through the controller that owns that vDisk, but in the event that controller goes down, ownership will jump over to the other controller and I/O will proceed uninterrupted until you resolve the fault.
For the first part, I had to run the configuration wizard and set the various environment settings. This includes time, management port settings, unit names, friendly names, and most importantly host connection settings. I configured all the host ports for iSCSI and set the applicable IP addresses that I had laid out in my SAN topology document mentioned in the above paragraph. Although the host ports can sit on the same subnets, it is best practice to use multiple subnets.
Jumping into the storage provisioning wizard, I decided to create 2 separate RAID 5 arrays. The first array contains disks 1 to 12 (and while I have controller ownership set to auto, it will be assigned to controller A), and the second array contains disks 13 to 24 (again ownership is set to auto, but it will be assigned to controller B). After this, I assigned the LUN numbers and then mapped the LUNs to all ports on the MSA 2040, ultimately allowing access to both iSCSI targets (and RAID volumes) from any port.
I’m now sitting here thinking “This was too easy”. And it turns out it was just that easy! The RAID volumes started to initialize.
VMware vSphere Configuration
At this point, I jumped on to my vSphere demo environment and configured the vSphere Distributed Switches for iSCSI. I mapped the various uplinks to the various port groups and confirmed that there was hardware link connectivity. I jumped into the software iSCSI initiator, typed in the discovery IP, and BAM! The iSCSI initiator found all available paths and both RAID disks I had configured. Did this for the other host as well, connected to the iSCSI target, formatted the volumes as VMFS and I was done!
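With multiple paths per host in place, it’s worth confirming the multipathing policy on the MSA volumes. A quick sketch from the ESXi shell; the naa identifier below is a placeholder for your actual LUN:

# List each device with its path selection policy (PSP) and its working paths
esxcli storage nmp device list

# Set a volume to Round Robin if it isn't already (replace the naa identifier with your own)
esxcli storage nmp device set --device=naa.600c0ff00012345678900000000000 --psp=VMW_PSP_RR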
I’m still shocked that such a high performance and powerful unit was this easy to configure and get running. I’ve had it running for 24 hours now and have had no problems. This DESTROYS my old storage configuration in performance; thankfully I can keep my old setup for a vDP (VMware Data Protection) instance.
HPE MSA 2040 Pictures
I’ve attached some pics below. I have to apologize for how ghetto the images/setup is. Keep in mind this is a test demo environment for showcasing the technologies and their capabilities.
Update: HPE has updated the MSA product line and the 2040 has now been replaced by the HPE MSA 2050 Dual Controller SAN. There are now also SSD cache models such as the HPE MSA 2052 Dual Controller SAN.