Apr 11, 2014
 

Earlier today I was doing some work in my demonstration vSphere environment when I had to modify some settings on one of my VMs that is set up with the latest virtual hardware version (which means you can only edit its settings inside of the vSphere Web Client).

To my surprise, immediately upon logging in I received an error: “ManagedObjectReference: type = Datastore, value = datastore-XXXX, serverGuid = XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX refers to a managed object that no longer exists or has never existed“. After clicking OK, I also noticed that a lot of the information being presented inside of the vSphere Web Client was inaccurate. Some virtual machines were reported as sitting on different datastores (they had been on those datastores weeks earlier, but had since been moved), and some virtual machines were reported as powered off when in fact they were on and running.

Symptoms:

-Errors about missing datastores on log on to the vSphere Web Client.

-Virtual Machines reported as powered off even though they are running.

-Virtual Machines reported as being stored on a different datastore than they actually are.

-Disconnecting and reconnecting hosts has no effect on the issue.

 

This freaked me out; it was a true “uh oh” moment. Something was corrupt. Keep in mind that ALL information in the legacy vSphere Client was correct and accurate; it was only the vSphere Web Client that was having issues.

 

Anyways, I tried a bunch of things to fix it and spent hours working on the problem. FINALLY I came up with a fix. If you are running into this issue, PLEASE take a snapshot of your vCenter Server before attempting to fix it, so that you can roll back if you screw anything up (which I had to do multiple times, lol).

The Fix:

1) Stop the “VMware vCenter Inventory Service”.

2) Delete the “data” folder inside of “Program Files\VMware\Infrastructure\Inventory Service”.

3) Open a Command Prompt with elevated privileges. Change your working directory to “Program Files\VMware\Infrastructure\Inventory Service\scripts”.

4) Run “createDB.bat”. This will reset and create a new Inventory Service database.

5) Run “is-change-sso.bat https://computername.domain.com:7444/lookupservice/sdk "[email protected]" "SSO_PASSWORD"”. Change computername.domain.com to the FQDN of your vCenter Server, and change SSO_PASSWORD to your Single Sign-On administrator password.

6) Start the “VMware vCenter Inventory Service”. At this point, if you try to log on to the vSphere Web Client, it will error with: “Client is not authenticated to VMware Inventory Service”. We’ve already won half the battle.

7) We now need to register the vCenter Server with the newly reset Inventory Service. In the elevated Command Prompt we opened above, change the working directory to “Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool”.

8) Run “register-is.bat https://computername.domain.com:443/sdk https://computername.domain.com:10443 https://computername.domain.com:7444/lookupservice/sdk”. Change computername.domain.com to your FQDN for your vCenter server.

9) Restart the “VMware VirtualCenter Server” service. This will also restart the Management Web services.
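
For reference, here is a condensed sketch of the whole sequence as run from an elevated Command Prompt. The FQDN (vcenter.example.com), the SSO_ADMIN_USER/SSO_PASSWORD credentials, and the paths are placeholders assuming a default vCenter 5.x installation, so adjust them for your environment:

:: Reset the Inventory Service database and re-point it at SSO
net stop "VMware vCenter Inventory Service"
cd /d "C:\Program Files\VMware\Infrastructure\Inventory Service"
rmdir /s /q data
cd scripts
createDB.bat
is-change-sso.bat https://vcenter.example.com:7444/lookupservice/sdk "SSO_ADMIN_USER" "SSO_PASSWORD"
net start "VMware vCenter Inventory Service"

:: Re-register vCenter Server with the freshly reset Inventory Service
cd /d "C:\Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool"
register-is.bat https://vcenter.example.com:443/sdk https://vcenter.example.com:10443 https://vcenter.example.com:7444/lookupservice/sdk

:: Restart vCenter Server (allow dependent services such as the Management Webservices to stop and restart with it)
net stop "VMware VirtualCenter Server"
net start "VMware VirtualCenter Server"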

 

BAM, it’s fixed! I went ahead and restarted the entire server that the vCenter server was running on. After this, all was good, and everything looked great inside of the vSphere Web Client. I’m actually noticing it’s running WAY faster, and isn’t as glitchy as it was before.

Happy Virtualizing! 🙂

Jul 08, 2013
 

Recently I needed to upgrade and replace my storage system, which provides basic SMB file shares (dump space), iSCSI, and NFS to my internal network and vSphere cluster. As most of you know, in the past I have traditionally built and configured my own storage systems. For the most part this has worked fantastically, especially with the NFS and iSCSI target services being provided by and built into the Linux OS (iSCSI thanks to Lio-Target).

A few reasons for the upgrade…

  1. I need more storage.
  2. I need a pre-packaged product that comes with a warranty.

Taking care of the storage size was easy (buy more drives); however, I needed to find a pre-packaged product that fit my requirements for performance, capabilities, stability, support, and of course warranty. iSCSI and NFS support was an absolute must!

Some time ago, when I first started working with Lio-Target before it was incorporated and merged into the Linux kernel, I noticed that its parent company, Rising Tide Systems, mentioned they also provided the target for numerous NAS and SAN devices available on the market, Synology being one of them. I never thought anything of this, as back then I wasn’t interested in purchasing a pre-packaged product, until my search for a new storage system.

Upon researching, I found that Synology released their 2013 line of products. These products had a focus on vSphere compatibility, performance, and redundant network connections (either through Trunking/Link aggregation, or MPIO iSCSI connections).

The device that caught my attention for my purpose was the DS1813+.

Synology DS1813+

Synology DS1813+ Specifications:

  • CPU Frequency : Dual Core 2.13GHz
  • Floating Point
  • Memory : DDR3 2GB (Expandable, up to 4GB)
  • Internal HDD/SSD : 3.5″ or 2.5″ SATA(II) X 8 (Hard drive not included)
  • Max Internal Capacity : 32TB (8 X 4TB HDD) (Capacity may vary by RAID types)
  • Hot Swappable HDD
  • External HDD Interface : USB 3.0 Port X 2, USB 2.0 Port X 4, eSATA Port X 2
  • Size (HxWxD) : 157 X 340 X 233 mm
  • Weight : 5.21kg
  • LAN : Gigabit X 4
  • Link Aggregation
  • Wake on LAN/WAN
  • System Fan : 120x120mm X2
  • Easy Replacement System Fan
  • Wireless Support (dongle)
  • Noise Level : 24.1 dB(A)
  • Power Recovery
  • Power Supply : 250W
  • AC Input Power Voltage : 100V to 240V AC
  • Power Frequency : 50/60 Hz, Single Phase
  • Power Consumption : 75.19W (Access); 34.12W (HDD Hibernation);
  • Operating Temperature : 5°C to 35°C (40°F to 95°F)
  • Storage Temperature : -10°C to 70°C (15°F to 155°F)
  • Relative Humidity : 5% to 95% RH
  • Maximum Operating Altitude : 6,500 feet
  • Certification : FCC Class B, CE Class B, BSMI Class B
  • Warranty : 3 Years

This puppy has 4 gigabit LAN ports and 8 SATA bays. There are tons of reviews on the internet praising Synology and their DSM operating system (based on embedded Linux), so I decided to live dangerously and went ahead and placed an order for this device, along with 8 X Seagate 3TB Barracuda drives.

Unfortunately, it’s extremely difficult to get your hands on a DS1813+ in Canada (I’m not sure why). After numerous orders placed and cancelled with numerous companies, I finally found a distributor who was able to get me one. I’ll just say the wait was totally worth it. I also purchased the 2GB RAM add-on, so I had it available when the DS1813+ arrived.

I was hoping to take a bunch of pictures and do thorough testing with the unit before throwing it into production; however, right from the get-go it was extremely easy to configure and use, so I had it running in production right away. Sorry for the lack of pics! 🙂

I did, however, get a chance to set up the 8 drives in RAID 5 and configure a block-based iSCSI target. The performance was fantastic, with no problems whatsoever. Even while maxing out one gigabit connection, the resources of the unit were barely touched.
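
For anyone connecting an ESXi 5.x host to an iSCSI target like this, the software iSCSI setup from the ESXi shell looks roughly like the following. The adapter name (vmhba33) and the NAS address are placeholders and will differ on your hosts:

# Enable the software iSCSI initiator, point it at the NAS, and rescan
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=nas.example.local:3260
esxcli storage core adapter rescan --adapter=vmhba33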

I’m VERY impressed with the DSM operating system. Everything is clearly spelled out, and you have very detailed control of the device and all services. Configuration of SMB shares, iSCSI targets, and NFS exports is extremely simple, yet allows you to configure advanced features.

After testing out the iSCSI performance, I decided to get the unit ready for production. I created 2 shared folders, and exported these via NFS to my ESXi hosts. It was very simple, quick, and the ESXi hosts had absolutely no problems connecting to the exports.
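
As a rough example of the ESXi side of this (the NAS hostname, export path, and datastore label below are all placeholders), mounting an NFS export on an ESXi 5.x host from the shell looks like this:

# Mount the Synology NFS export as a datastore and confirm it shows up
esxcli storage nfs add --host=nas.example.local --share=/volume1/vmware-nfs1 --volume-name=DS1813-NFS1
esxcli storage nfs list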

One thing that really blew me away about this unit is the performance. Immediately after configuring the NFS exports, mounting them, and using Storage vMotion to migrate 14 live virtual machines to the DS1813+, I noticed MASSIVE performance gains. The performance gains were so large, it put my old custom storage system to shame. This is really interesting, considering my old storage system, while custom, is actually spec’d way higher than the Synology unit (CPU, RAM, and the SATA controller). I’m assuming the DS1813+ has numerous kernel optimizations for storage, and at the same time does not have the overhead of a full Linux distribution. This also means it’s more stable, since you don’t have tons of applications running in the background that you don’t need.

After migrating the VMs, I noticed that the virtual machines were running way faster and were way more responsive. I’m assuming this is due to increased IOPS.

Either way I’m extremely happy with the device and fully recommend it. I’ll be posting more blog articles later detailing configuration of services in detail such as iSCSI, NFS, and some other things. I’m already planning on picking up an additional DS1513+ (5 bay unit) to act as a storage server for VM backups which I perform using GhettoVCB.

Update – August 16, 2019: Please see these additional posts regarding performance and optimization:

Jun 09, 2012
 

So there I was… Had a custom built vSphere cluster, 3 hosts, iSCSI target (setup with Lio-Target), everything running fine, smoothly, perfect… I’m done right? NOPE!

Most of us who care are concerned about our disaster recovery or backup solution. With virtualization, things get a little interesting: either you have a CRAZY large setup and can use the VMware backup products, or you have a smaller environment and want something simple, easy to use, and with a low footprint. Configuring a backup and/or disaster recovery solution for a virtualized environment may be difficult and complicated; however, once it’s fully implemented, management, use, and administration are easy, and don’t forget about the abilities and features you get with virtualization.

Reasons why virtualization rocks when it comes to backup and disaster recovery:

-Unlike traditional backups, you do not have to install a bare OS and backup software just to restore over it.

-You can restore to hardware that is completely different from the original hardware.

-Backups are now simple files that can be easily moved, transported, copied, and saved on to normal or unusual media (you could save an entire system on to a USB key if it was big enough). On a 2TB external USB drive, you could have a backup of more than 16 virtual machines!

-Ease of recovery: copy the backed-up VM files to a host/datastore and simply hit play. Restore complete and you’re up and running!

 

So with all that in mind, here we go! (Scroll to bottom of post for a quick conclusion).

For my solution, here are some of the requirements I had:

1) Utilize snapshots to take restorable backups while the VMs are running (no downtime).

2) Move the backups to a different location while running (this could be a drive, NFS export, SMB share, etc…).

3) Have the backups stored somewhere easy to access where I can move them to a removable external USB drive to take off-site. This way, I have fast disk-to-disk access to restore backups in the event my storage system goes down or RAID array is lost (downtime would be minimal), or in the event of something more serious like a fire, I would have the USB drive off-site to restore from. Disk-to-disk backups could happen on a daily basis, and disk-to-USB could be done weekly and taken off-site.

4) In the event of a failure, be able to bring USB drive onsite, transfer VMs back and be up and running in no time.

 

So with this all in mind, I started designing a solution. My existing environment (without backup) consisted of:

2 X HP DL360 G5 Servers (running ESXi)

1 X HP ML350 G5 Server (running ESXi)

1 X Super Micro Intel Xeon Server (Running CentOS 6 & Lio-Target backports: providing iSCSI VMFS)

2 X HP MSA20 Storage Units

 

First, I need a way to create snapshots of my 16+ virtual machines. Then I need the snapshots moved to another location (such as a backup server). There is a free script available called ghettoVCB. ghettoVCB is a “Free alternative for backing up VM’s for ESX(i) 3.5, 4.x+ & 5.x” and is available (along with tons of documentation) at: http://communities.vmware.com/docs/DOC-8760. This script is generally run on the ESXi host; it generates a snapshot of each VM and clones it to a separate datastore configured on that host. It does this for all virtual machines named in the VM list you specify, or for all VMs on the host if a specific switch is passed to ghettovcb.sh.

So now that we have the software, we need a location to back up to. We could either create a new iSCSI target, or set up a new Linux server with a RAID array formatted with ext4 and exported via NFS. The latter allows us to add the NFS export as a datastore on ESXi (so we can back up to it), and afterwards access the backups natively in Linux to copy/move them to an external drive formatted with ext4.

We configure a new server running CentOS 6 with enough storage to back up all VMs. We create NFS exports and mount them on all the ESXi hosts. We copy the ghettoVCB script to a location on the NFS export so it’s easily accessible to all hosts (without having to update the script on each host individually), and we create a list for each physical host containing the names of the virtual machines it runs. We then edit the ghettovcb.sh file to specify the new destination datastore (the backup datastore) and how we want it to back up; a rough sketch of this configuration is below.
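
To make that concrete, here is roughly what the pieces looked like. The IP addresses, export path, and datastore label are placeholders, and the ghettovcb.sh variable names are from the version of the script I was using, so double-check them against the ghettoVCB documentation for your version:

# On the CentOS 6 backup server: export the backup area to the ESXi hosts, then reload exports
echo "/backups/vm  192.168.1.0/24(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra

# On each ESXi host: mount the export as a datastore named "backup-nfs"
esxcli storage nfs add --host=192.168.1.10 --share=/backups/vm --volume-name=backup-nfs

# Key ghettovcb.sh variables edited for this setup (names/defaults vary by script version)
VM_BACKUP_VOLUME=/vmfs/volumes/backup-nfs
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3
POWER_VM_DOWN_BEFORE_BACKUP=0
VM_SNAPSHOT_MEMORY=0
VM_SNAPSHOT_QUIESCE=0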

When executing:

./ghettovcb.sh -f esxserverlist01

It creates a snapshot of each VM in the list for that host, clones it to the destination datastore (which in my setup is the NFS export on the new backup server), deletes the snapshot when the backup is complete, and then moves on to the next VM, repeating the process until done. The script needs to be run on every host, and a VM list file has to be created for each host.

We now have a backup server and have done a disk-to-disk backup of our virtual machines. We can now plug in a large external USB drive to the backup server, and simply copy over the backups to it.

 

I do everything manually because I like to confirm everything is done and backed up properly; however, you can totally create scripts to automate the whole process (a sketch of one possible wrapper is below). After this we have our new backup solution!
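
If you did want to automate it, a minimal wrapper along these lines would do the job, assuming SSH is enabled on the ESXi hosts with key-based logins configured, and that the host names, script path, and USB mount point (all placeholders here) are adjusted for your environment:

#!/bin/sh
# Run from the backup server: kick off ghettoVCB on each ESXi host,
# then sync the finished disk-to-disk backups to the external USB drive.
HOSTS="esx01 esx02 esx03"
GVCB=/vmfs/volumes/backup-nfs/ghettovcb

for h in $HOSTS; do
    ssh root@$h "sh $GVCB/ghettovcb.sh -f $GVCB/vmlist-$h"
done

# Weekly: copy the backups to the USB drive (mounted at /mnt/usb) before taking it off-site
rsync -a --delete /backups/vm/ /mnt/usb/vm-backups/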

 

Quick recap:

1) Set up a new backup server with enough disk space to back up all VMs. Set up an NFS export and mount it on the ESXi hosts.

2) Download and configure the ghettoVCB script. Run the script on each ESXi host to disk-to-disk backup your VMs to your new backup server.

3) Copy the backup files from the backup server to an external USB drive that has enough space. Take it off-site.

 

I have had to restore a couple VMs in the past due to a damaged RAID array, and I did so using a backup from above. It worked great! I will create a post on the restore process sometime in the future (for now feel free to look at the ghettoVCB documentation)!

Jun 09, 2012
 

Recently, I’ve started to have some issues with the HP MSA20 units attached to my SAN server at my office. These MSA20 units stored all my virtual machines inside of a VMFS filesystem which was presented to my vSphere cluster hosts over iSCSI using Lio-Target. In the last while, these logical drives have just been randomly disappearing, causing my 16+ virtual machines to simply halt. This always requires me to shut off the physical hosts, shut off the SAN server, shut off the MSA20s, and bring everything all the way back up. This causes huge amounts of downtime, and is just a pain in the butt…

I decided it was time for me to re-do my storage system. Preferably, I would have purchased a couple HP MSA60s and P800 controllers to hook it up to my SAN server, but unfortunately right now it’s not in the budget.

A few years ago, I started using software RAID. In the past I was absolutely scared of it, thought it was complete crap, and would never have touched it, but my opinion drastically changed after playing with it and regularly using it. While I still recommend that businesses use hardware-based RAID systems, especially for mission-critical applications, I felt I could try out software RAID for the above situation since it’s more of a “hobby” setup.

I read that most storage enthusiasts use either the Super Micro AOC-SASLP-MV8 or the LSI SAS 9211-8i. The two are based on different chipsets (both of which are widely used in other well-known cards), and each has its own pros and cons.

During my research, I noticed a lot of people who run Windows Home Server were utilizing the Super Micro AOC card. While using WHS, most reported no issues whatsoever; however, it was a different story when reading posts/blog articles from people using Linux. I don’t know how accurate this was, but apparently a lot of people had issues with this card under heavy load, and some just couldn’t get it running inside of Linux.

Then there is the LSI 9211-8i (which is the same as the extremely popular IBM M1015). This bad boy supports basic RAID operations (0, 1, 10), but most people use it in JBOD mode and simply use Linux MD software RAID. While there were numerous complaints about users having issues with their systems even detecting the card, other users also reported issues caused by the BIOS of this card (consuming too much memory for the system to boot). When people did get this card working, though, I read of mostly NO issues under Linux. I spent a few days confirming what I had already read and finally decided to make the purchase.

Both cards support SAS/SATA, however the LSI card supports 6Gb/sec SAS/SATA. Both also have 2 internal SFF-8087 mini-SAS connectors to hook up a total of 8 drives directly, or more using a SAS expander. The LSI card uses a PCIe 2.0 x8 slot, vs. the AOC-SASLP which uses a PCIe 1.0 x4 slot.

I went to NCIX.com and ordered the LSI 9211-8i along with 2 breakout cables (Card Part#: LSI00194, Cable Part#: CBL-SFF8087OCF-06M). This would allow me to hook up a total of 8 drives (even though I only plan to use 5). I have an old computer, already in use with an eSATA connector to a Sans Digital SATA expander for NFS, etc…, that I plan on installing the card into. I also have an old StarTech SATABAY5BK enclosure which will hold the drives and connect to the controller. Finished case:

Server with disk enclosures (StarTech SATABAY5BK)

(At this point I have the enclosure installed along with 5 X 1TB Seagate 7200.12 Barracuda drives)

Finally the controller showed up from NCIX:

LSI SAS 9211-8i

I popped this card into the computer (which unfortunately only had PCIe 1.0) and connected the cables! This is when I ran into a few issues…

-If no drives were connected, the system would boot and I could successfully boot to CentOS 6.

-If I pressed CTRL+C to get into the card’s interface, the system would freeze during BIOS POST.

-If any drives were connected and detected by the card’s BIOS, the system would freeze during BIOS POST.

I went ahead and booted into CentOS 6, downloaded the updated firmware and BIOS, and flashed the card. The flashing manual was insane, but I had to read it all to make sure I didn’t break anything. First I updated both the firmware and BIOS (which went ok), however I couldn’t convert the card from IR firmware to IT firmware due to errors. I Googled this and came up with a bunch of articles, but this one: http://brycv.com/blog/2012/flashing-it-firmware-to-lsi-sas9211-8i/ was the only one that helped and pointed me in the right direction. Essentially it states that you have to use the DOS flasher, erase the card (MAKING SURE NOT TO REBOOT OR YOU’D BRICK IT), and then flash the IT firmware. This worked for me, check out his post! Thanks Bryan!
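
For reference, the crossflash sequence is roughly as follows when run from a DOS boot disk with LSI’s sas2flsh utility and the IT-mode firmware/BIOS images for your release on it. The file names below are typical examples and will vary by firmware release, so follow the linked post and LSI’s documentation rather than treating this as gospel:

rem Erase the existing (IR) flash -- do NOT reboot between the erase and the re-flash
sas2flsh -o -e 6
rem Flash the IT-mode firmware and (optionally) the boot BIOS
sas2flsh -o -f 2118it.bin -b mptsas2.rom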

Anyways, after updating the card and converting it to the IT firmware, I still had the BIOS issue. I tried the card in another system and still had a bunch of issues. I then removed 1 of the 2 video cards and put the controller in a video card slot, and I could finally get into the BIOS. First I enabled staggered spin-up (to make sure I don’t blow the PSU on the computer with a bunch of drives starting up at once), changed some other settings to optimize things, and finally disabled the boot BIOS by setting the adapter to be disabled for boot and only available to the OS. After removing the card and putting it in the target computer, this worked. I also noticed that the staggered spin-up kicked in during Linux kernel startup when the card was initialized. Here’s a copy of the kernel log:

mpt2sas version 08.101.00.00 loaded
mpt2sas 0000:06:00.0: PCI INT A -> Link[LNKB] -> GSI 18 (level, low) -> IRQ 18
mpt2sas 0000:06:00.0: setting latency timer to 64
mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (3925416 kB)
mpt2sas 0000:06:00.0: irq 24 for MSI/MSI-X
mpt2sas0: PCI-MSI-X enabled: IRQ 24
mpt2sas0: iomem(0x00000000dfffc000), mapped(0xffffc900110f0000), size(16384)
mpt2sas0: ioport(0x000000000000e000), size(256)
mpt2sas0: sending message unit reset !!
mpt2sas0: message unit reset: SUCCESS
mpt2sas0: Allocated physical memory: size(7441 kB)
mpt2sas0: Current Controller Queue Depth(3305), Max Controller Queue Depth(3432)
mpt2sas0: Scatter Gather Elements per IO(128)
mpt2sas0: LSISAS2008: FWVersion(13.00.57.00), ChipRevision(0x03), BiosVersion(07.25.00.00)
mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
mpt2sas0: sending port enable !!
mpt2sas0: host_add: handle(0x0001), sas_addr(0x5000000080000000), phys(8)
mpt2sas0: port enable: SUCCESS

SUCCESS! Lots of SUCCESS! Just the way I like it! Haha, card initialized, had access to drives, etc…

Configured the RAID 5 array using a 256KB chunk size. I also changed the “stripe_cache_size” to 2048 (the system has 4GB of RAM) to increase the RAID 5 performance.

cd /sys/block/md0/md/

echo 2048 > stripe_cache_size

At this point I simply formatted the array using ext4, configured some folders and NFS exports, and then used Storage vMotion to migrate the virtual machines from the iSCSI target to the new RAID 5 array (currently over NFS). The main priority right now was to get the VMs off the MSA20 so I could at least create a backup after they had been moved. As a next step, I’ll be re-doing the RAID 5 array, configuring the md0 device as an iSCSI target using Lio-Target, and formatting it with VMFS. The performance of this software RAID 5 array is already blowing the MSA20 out of the water!
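
For completeness, here is a rough sketch of the array build and export. It assumes the five 1TB drives showed up as /dev/sdb through /dev/sdf and that the ESXi hosts live on 192.168.1.0/24; device names, mount points, and subnets are placeholders for your own setup:

# Build the 5-disk RAID 5 array with a 256KB chunk size
mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=256 /dev/sd[bcdef]

# Bump the RAID 5 stripe cache (uses more RAM; not persistent across reboots)
echo 2048 > /sys/block/md0/md/stripe_cache_size

# Format, mount, and export to the ESXi hosts over NFS
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid5 && mount /dev/md0 /mnt/raid5
echo "/mnt/raid5 192.168.1.0/24(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra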

So there you have it! Feel free to post a comment if you have any questions or need any specifics. This setup is rocking away now under high I/O with absolutely no problems whatsoever. I think I may go purchase another 1-2 of these cards!

Nov 28, 2011
 

Just thought I’d do up a quick little post about an issue I’ve been having for some time, and just got it all fixed.

I’ve been running Astaro Security Gateway inside of a VMware environment for a few years. When version 8.x came out, I went ahead and simply attached the ISO to the VM and re-installed over the old v7 and restored the config. This worked great, and for the longest time I had no real issues.

I noticed from time to time, in packet sniffs, that there were quite a few retransmissions and lost TCP segments. This didn’t really pose any issues and didn’t cause any problems, however it was odd.

Recently, I had to configure a site-to-site IPsec VPN between my office and one of my employees to provide Exchange, VoIP, etc… With Astaro this is fairly easy: a few clicks and it should simply work. However, I started noticing huge issues with file transfers, whether they were transferred over SMB (Windows File Sharing) or SCP/SSH. Transfers would either halt completely when started, transfer a couple hundred kilobytes, or transfer half of the file until it would simply halt and become unresponsive.

After 3-4 days of troubleshooting, I went ahead and did a packet sniff and noticed there were numerous lost TCP segments, fragmentation, etc… Initially I believed that the MTU configuration might have had something to do with it; however, TCP/IP and the Astaro device should have taken care of properly setting the MTU on the IPsec tunnel automatically.

After trying fresh installs of ASG, etc., with no change in behaviour, I finally decided to take a few days away and give it a shot later. I had troubleshot this from every avenue, and for some reason the issue still existed. I finally figured that the only thing I hadn’t checked was my VMware vSphere environment. I checked the settings and all was good; however, I did notice that the NICs for the ASG VM (which had been created by the v7 appliance) were set to the “Flexible” type, and inside of the VM were detected as some type of AMD network adapter (the Flexible type presents itself to the guest as an emulated AMD PCnet NIC). I found this odd.

After shutting down the ASG VM, removing the NICs, and configuring new ones using the E1000 adapter type, all of a sudden the issue was fixed: the IPsec site-to-site VPN functioned properly, and all the network issues seen in the packet captures were resolved.
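
For reference, the resulting change is also visible in the VM’s .vmx file; after the swap, the relevant lines for the first adapter read roughly as below (the adapter index and network name will differ per VM, and with the old Flexible adapters the virtualDev line is typically absent or not set to e1000):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"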

I hope this helps some other people who may be frustrated dealing with the same issue.