Well, it was about time… I just purchased two Ubiquiti UniFi US-8 Gigabit Switches to replace a couple of aged Linksys Routers and Switches.
I’ll be outlining why I purchased these, how they are set up, my impressions, and my review.
Make sure you check out the video review below, and read the entire written review as well!
Now on to the written review…
The back story
Yes, you read the first paragraph correctly: I’m replacing wireless routers with the UniFi US-8 8-port switch.
While my core infrastructure in my server room is all Ubiquiti UniFi, I still have a few routers/switches deployed around the house acting as “VLAN breakout boxes”. These are Linksys wireless routers that I hacked and installed OpenWRT on so they act as switches for VLAN trunks and also provide native access to VLANs.
Originally these were working fine (minus the ability to manage them from the UniFi controller), but as time went on the hardware started to fail. I also wanted to fully migrate to an end-to-end UniFi Switching solution.
The goal
In the end, I want to replace all these 3rd party switches with UniFi switches that provide switching for the VLAN trunks and native access to VLANs. I also want to be able to manage them all from the UniFi Controller I’m running on a Linux virtual machine.
To meet this goal, I purchased two Ubiquiti UniFi US-8 8-port Gigabit managed switches.
Ubiquiti UniFi US-8 Switch
So I placed an order through distribution for 2 of these switches.
As with all UniFi products, I was very impressed with the packaging.
And here is the entire package unboxed.
Another good looking UniFi Switch!
Specs
The UniFi Switch 8 is available in two variants, the non-PoE and PoE version.
Part#: US-8
8Gbps of Non-Blocking Throughput
16Gbps Switching Capacity
12W Power Consumption
Powered by PoE (Port 1) or AC/DC Power Adapter
48V PoE Passthrough on Port 8 (Powered by PoE passthrough from Port 1, or DC Power Adapter)

Part#: US-8-60W
8Gbps of Non-Blocking Throughput
16Gbps Switching Capacity
12W Power Consumption
Powered by AC/DC Power Adapter
4 Auto-Sensing 802.3af PoE Ports (Ports 5-8)
UniFi Controller Adoption
After plugging in the two switches, they instantly appeared in the UniFi controller and required a firmware update to adopt.
Adoption was easy, and I was ready to configure the devices! Click on the images to view the screenshots.
Configuration and Setup
I went ahead and configured the management VLANs, along with the required VLAN and switch port profiles on the applicable ports.
One of these switches was going in my furnace room, which has a direct link (VLAN trunk) from my server room. The other switch is going on my office desk, which will connect back to the furnace room (VLAN trunk). The switch on my desk will provide native access to one of my main VLANs.
I also planned on powering a UniFi nanoHD on my main floor with the PoE passthrough port, so I also enabled that on the switch residing in my furnace room.
Configuration was easy and took minutes. I then installed the switches physically in their designated place.
One thing I want to note that I found really handy was the ability to restart and reset PoE devices via the UniFi Controller web interface. I’ve never had to reset any of my nanoHDs, but it’s handy to know I have the ability.
Everything worked perfectly once the switches were configured, setup, and implemented.
Overall Review
These are great little switches, however the price point can be a bit much when compared to the new UniFi USW-Flex-Mini switches. I’d still highly recommend this switch, especially if you have an end-to-end UniFi setup.
One thing I love doing is mixing technology with sport.
In my free time I’m often hiking, cycling, running, or working out. I regularly use technology to supplement and track my activities. It helps to record, remember, track, and compete with myself.
I use a combination of hardware and software to do so (watches, phones, apps, etc.), but today I wanted to put the emphasis on the Snapchat Spectacles.
The Snapchat Spectacles
I’ve had a pair of the 1st generation Snapchat Spectacles since they were released (I had to use my US shipping address to bring them over to Canada). Over the years I’ve used them to collect videos and haven’t really done much with them, with the exception of sending snaps to friends.
Thankfully I save everything I record, and over the past year, as I’ve picked up video editing as a new hobby, I’ve been able to use some of the old footage to generate some AMAZING videos!
See below for a video I put together of 3 beautiful mountain summits I hiked in one month, first person from the Snapchat Spectacles.
If you keep reading through to the end of the post there’s another video.
First person view
As you can see, even the first version of the Snapchat Spectacles generates some beautiful HD video, providing a first person view of the wearer’s field of vision.
You might say it’s similar to wearing a GoPro, but what I like about the Spectacles is that the camera is mounted beside your eyes, which makes the video capture that much more personal.
My wishlist
What I’d really like is the ability to continuously record HD video non-stop and even possibly record to my mobile device. Even if this couldn’t be accomplished wirelessly and required a wire to my mobile device, I would still be using it all the time.
Another thing that would be nice would be more size options, as the first generation are way too small for my head, LOL! 🙂
Conclusion
Tech is awesome, and I love using tech like this to share personal experiences!
Snapchat, if you’re listening, I’d love to help with the design of future versions of the Snapchat Spectacles…
We all love speed, whether it’s our internet connection or our home network. And as our internet speeds approach gigabits per second, it’s about time our networks hit 10Gb per second…
High speed networking, particularly 10Gig networking, is becoming more cost-effective by the day, and with vendors releasing affordable switches, there’s never been a better time to upgrade.
Today we’re going 10Gig with the Ubiquiti UniFi US-16-XG switch.
I’ll be discussing my configuration and setup, why you should use this switch for your homelab and/or business, as well as providing a review on the equipment.
Make sure you check out the video below and read the entire post!
Let’s get to it!
The back story
Just like the backstory with my original Ubiquiti UniFi Review, I wanted to optimize my network, increase speeds, and remove bottlenecks.
Most of my servers have 10Gig network adapters (via 10GbaseT or SFP+ ports), and I wanted to upgrade my other servers. I’ve always wanted the ability to add more uplinks to allow a single host/server to have redundant connections to my network.
Up until now, I had two hosts connected to my Ubiquiti UniFi US-48 switch via its SFP+ ports using SFP+ to 10GbaseT modules. Using both of the 10Gig ports meant no more 10Gig devices could be connected, and the converter modules add latency.
The goal
Ultimately I wanted to implement a solution that included a new 10Gb network switch acting as a backbone for the network, with connections to my servers, storage, 10Gig devices, and secondary 1Gb switches.
While not strictly needed, it would be nice to have access to both SFP+ and 10GbaseT connections, as I have devices that use both.
At the same time, I wanted something that would be easy to manage, affordable, and compatible with equipment from other vendors.
I chose the Ubiquiti UniFi US-16-XG Switch for the task, along with an assortment of cables.
Ubiquiti UniFi US-16-XG Switch
After already being extremely pleased with the Ubiquiti UniFi product line, I was happy to purchase a unit for internal use, as my company sells Ubiquiti products.
Receiving the product, I was very impressed with the packaging and shipping.
And here I present the Ubiquiti UniFi 16 XG Switch…
You’ll notice the trademark UniFi product design. On the front, the UniFi 16 XG switch has 12 x 10Gb SFP+ ports, along with 4 x 10GbaseT ports. All ports can be used at the same time as none are shared.
The back of the switch has a console port, along with two fans, a DC power input, and the AC power input.
Overall, it’s a good looking unit. It has even better looking specs…
Specs
The UniFi 16 XG switch specifications:
12 x 10Gb SFP+ Ports
4 x 10GbaseT Ports
160 Gbps Total Non-Blocking Line Rate
1U Form Factor
Layer 2 Switching
Fully Managed via UniFi Controller
The SFP+ ports allow you to use a DAC (Direct Attach Cable) for connectivity, or fiber modules. You can also populate them with converters, such as the Ubiquiti 10GBASE-T SFP+ CopperModule.
You can also attach 4 devices to the 10GbaseT ports.
UDC-3 “FiberCable” DAC
I also purchased 2 x Ubiquiti UDC-3 SFP+ DAC cables. These cables provide direct connectivity between two devices with SFP+ ports. They can be purchased in lengths of 1, 2, and 3 meters, with part numbers UDC-1, UDC-2, and UDC-3 respectively.
10Gtek Cable DAC
To test compatibility and have cables from other vendors (in case of any future issues), I also purchased an assortment of 10Gtek SFP+ DAC cables. I specifically chose these as I wanted to have a couple of half meter cables to connect the switches with an aggregated LAG.
UniFi Controller Adoption
To get up and running quickly, I set up the US-16-XG on my workbench, plugged a network cable into one of the 10GbaseT ports, and powered it on.
Boot-up was quick and it appeared in the UniFi Controller immediately. It required a firmware update before being able to adopt it to the controller.
After a quick firmware update, I was able to adopt and configure the switch.
The device had a “Test date” of March 2020 on the box, and the UniFi controller reported it as a hardware revision 13.
Configuration and Setup
Implementation, configuration, and setup will be an ongoing process over the next few weeks as I add more storage, servers, and devices to the switch.
The main priority was to test cable compatibility, connect the US-16-XG to my US-48, test throughput, and put my servers directly on the new switch.
I decided to just go ahead and start hooking it up, live, without shutting anything down. I performed the following:
Placed the US-16-XG on top of the US-48
Disconnected the servers from the SFP+ CopperModules on the US-48 switch
Plugged the servers into the 10GbaseT ports on the US-16-XG
Removed the SFP+ to 10GbaseT CopperModules from the US-48 SFP+ ports
Connected both switches with an SFP+ DAC cable
Performing these steps only took a few seconds and everything was up and running. One particular thing I’d like to note is that the port auto-negotiation time on the US-16-XG was extremely quick.
Taking a look at the UniFi Controller view of the US-16-XG, we see the following.
Everything is looking good! Ports auto-detected the correct speed, traffic was being passed, and all is good.
After running like this for a few days, I went ahead and tested the 10Gtek cables which worked perfectly.
To increase redundancy and throughput, I used 2 x 0.5-Meter 10Gtek SFP+ DAC cables and configured an aggregated LAG between the two switches which has also been working perfectly!
In the coming weeks I will be connecting more servers as well as my SAN, so keep checking back for updated posts.
Overall Review
This is a great switch at an amazing price-point to take your business network or homelab network to 10Gig speeds. I highly recommend it!
Use Cases:
Small network 10Gig switch
10Gig backbone for numerous other switches
SAN switch for small SAN network
What I liked the most:
10Gig speeds
Easy setup as always with all the UniFi equipment
Beautiful management interface via the UniFi Controller
Near silent running
Ability to use both SFP+ and 10GbaseT
Compatibility with SFP+ DAC Cables
What could be improved:
Redundant power supplies
Option for more ports
Bug with mobile app showing 10Mbps manual speed for 10Gig ports
This month on June 23rd, HPE is hosting their annual HPE Discover event. This year is a little bit different as COVID-19 has resulted in a change of the usual in-person event, and this year’s event is now being hosted as a virtual experience.
I expect it’ll be the same great content as they have every year; the only difference is that you’ll be able to experience it virtually from the comfort of your own home.
I’m especially excited to say that I’ve been invited to be a special VIP Influencer for the event, so I’ll be posting some content on Twitter, LinkedIn, and of course generating some posts on my blog.
Looking at using SSD and NVMe with your FreeNAS or TrueNAS setup and ZFS? There are considerations and optimizations that must be factored in to make sure you’re not wasting all that sweet performance. In this post I’ll be providing you with my own FreeNAS and TrueNAS ZFS optimizations for SSD and NVMe to create an NVMe Storage Server.
This post will contain observations and tweaks I’ve discovered during testing and production of a FreeNAS ZFS pool sitting on NVMe vdevs, which I have since upgraded to TrueNAS Core. I will update it with more information as I use and test the array more.
Considerations
It’s important to note that while your SSD and/or NVMe ZFS pool technically could reach insane speeds, you will probably always be limited by the network access speeds.
With this in mind, to optimize your ZFS SSD and/or NVMe pool, you may be trading off features and functionality to max out your drives. These optimizations may in fact be wasted if you reach the network speed bottleneck.
Some features you may be giving up, such as compression and deduplication, may actually help extend the life or endurance of your SSDs, as they reduce the number of writes performed on each of your vdevs (drives).
You may wish to skip these optimizations should your network be the limiting factor, which will allow you to utilize these features with no performance or minimal performance degradation to the final client. You should measure your network throughput to establish the baseline of your network bottleneck.
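For example, if iperf3 is available on your systems, a quick client/server test gives you that baseline (the IP address and options below are just examples):

```
# On the FreeNAS/TrueNAS box (or any server on the storage network):
iperf3 -s

# On a client, run a 30-second test with 4 parallel streams against the server:
iperf3 -c 10.0.0.10 -t 30 -P 4
```

If this tops out somewhere around your link speed (roughly 9.4Gbps on a 10Gb link, for example), that is the ceiling any pool tuning should be measured against.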
Deploying SSD and NVMe with FreeNAS or TrueNAS
For reference, the environment I deployed FreeNAS with NVMe SSD consists of:
1 x FreeNAS instance running as VM with PCI passthrough to NVMe
10Gb networking between DL360 Servers and network
1Gb network between ML310 and network
Update (May 1st, 2021): Since this blog post was created, I have since used what was learned in my new NVMe Storage Server Project. Make sure you check it out after reading this post!
As mentioned above, FreeNAS is virtualized on one of the HPE DL360 Proliant servers and has 8 CPUs and 32GB of RAM. The NVMe drives are provided by VMware ESXi as PCI passthrough devices. There have been no issues with stability in the months I’ve had this solution deployed, and it is still working amazingly well since upgrading FreeNAS to TrueNAS Core.
Important notes:
VMXNET3 NIC is used on VMs to achieve 10Gb networking
Using PCI passthrough, snapshots on FreeNAS VM are disabled (this is fine)
NFS VM datastore is used for testing, as the host running the FreeNAS VM has the NFS datastore mounted on itself.
There are a number of considerations that must be factored in when virtualizing FreeNAS and TrueNAS; however, those are beyond the scope of this blog post. I will be creating a separate post for this in the future.
Use Case (Fast and Risky or Slow and Secure)
The use case of your setup will dictate which optimizations you can use, as some of the optimizations in this post will increase the risk of data loss (such as disabling sync writes or forgoing RAIDz redundancy).
Fast and Risky
Since SSDs are more reliable and less likely to fail, if you’re using the SSD storage as temporary hot storage, you could simply use striping across multiple vdevs (devices). If a failure occurred, the data would be lost; however, if you were just using this for “staging” or hot data and the risk was acceptable, this is an option to drastically increase speeds.
Example use case for fast and risky
VDI Pool for clones
VMs that can be restored easily from snapshots
Video Editing
Temporary high speed data dump storage
The risk can be lowered by replicating the pool or dataset to slower storage on a frequent or regular basis.
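As a rough sketch of this layout (the pool, dataset, and device names below are hypothetical), a striped pool plus periodic replication to slower, redundant storage might look like this from the shell; in FreeNAS/TrueNAS you would normally configure the replication as a scheduled Replication Task in the web UI instead:

```
# Striped pool across two NVMe devices - fast, but NO redundancy
zpool create fastpool nvd0 nvd1
zfs create fastpool/scratch

# Snapshot and replicate the hot data to a slower, redundant pool
zfs snapshot fastpool/scratch@hourly-001
zfs send fastpool/scratch@hourly-001 | zfs recv -F slowpool/scratch-backup
```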
Slow and Secure
Using RAIDz-1 or higher will allow for vdev (drive) failures, but with each level increase, performance will be lost due to parity calculations.
Example use case for slow and secure
Regular storage for all VMs
Database (SQL)
Exchange
Main storage
Slow and Secure storage is the type of storage found in most applications used for SAN or NAS storage.
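For comparison, here is a minimal sketch of redundant layouts (again with hypothetical pool and device names): RAIDz-1 tolerates one drive failure at the cost of parity calculations, while striped mirrors trade capacity for better random I/O.

```
# RAIDz-1 across four SSDs: usable capacity of three drives, survives one failure
zpool create securepool raidz1 da0 da1 da2 da3

# Alternative: striped mirrors (RAID10-style) - better IOPS, 50% usable capacity
zpool create mirrorpool mirror da0 da1 mirror da2 da3
```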
SSD Endurance and Lifetime
Solid state drives have a lifetime that’s typically measured in lifetime writes. If you’re storing sensitive data, you should plan ahead to mitigate the risk of failure when the drive reaches its full lifetime.
Steps to mitigate failures
Before putting the stripe or RAIDz pool into production, perform some large bogus writes and stagger the amount of data written on each SSD individually. While this will reduce the life counter on the SSDs, it’ll help you offset and stagger the lifetime of each drive so they don’t all die at the same time.
If using RAIDz-1 or higher, preemptively replace each SSD before its lifetime is hit. Do this well in advance and stagger it to further create a difference between the lifetimes of the drives.
Decommissioning the drives preemptively and early doesn’t mean you have to throw them away; this is just to secure the data on the ZFS pool. You can continue to use these drives in other systems with non-critical data, and possibly use them well beyond their recommended lifetime.
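A minimal sketch of the staggering idea, assuming smartmontools is installed and using hypothetical device names; run this before the drives go into the pool, since writing to the raw devices destroys any data on them:

```
# Stagger wear: write a different amount of throwaway data to each drive
# (bs=1m is the FreeBSD dd suffix; use bs=1M on Linux)
dd if=/dev/urandom of=/dev/da0 bs=1m count=100000   # ~100GB to drive 1
dd if=/dev/urandom of=/dev/da1 bs=1m count=200000   # ~200GB to drive 2

# Check the wear/endurance indicators afterwards (attribute names vary by vendor)
smartctl -a /dev/da0 | grep -i -e wear -e written -e "percentage used"
```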
Compression and Deduplication
Using compression and deduplication with ZFS is CPU intensive (and RAM intensive for deduplication).
The CPU usage is negligible when using these features on traditional magnetic storage (spinning platter hard drives), because with traditional hard drives the drives themselves are the performance bottleneck.
SSDs are a totally different thing, specifically NVMe. With storage speeds in the gigabytes per second, the CPU cannot keep up with the deduplication and compression of data being written and becomes the bottleneck.
I performed a simple test comparing speeds with compression and dedupe with the same VM running CrystalDiskMark on an NFS VMware datastore running over 10Gb networking. The VM was configured with a single drive on a VMware NVME controller.
NVMe SSD with compression and deduplication
NVMe SSD with deduplication only
NVMe SSD with compression only
Now this is really interesting: we actually see a massive speed increase with compression only. This is because I have a server class CPU with multiple cores and a ton of RAM. With lower performing specs, you may notice a decrease in performance.
NVMe SSD without compression and deduplication
In my case, the 10Gb networking was the bottleneck on read operations as there was virtually no change. It was a different story for write operations as you can see there is a drastic change in write speeds. Write speeds are greatly increased when writes aren’t being compressed or deduped.
Note that on faster networks, read speeds could and will be affected.
If your network connection to the client application is the limiting factor and the system can keep up with that bottleneck then you will be able to get away with using these features.
Higher throughput with compression and deduplication can be reached with higher frequency CPUs (more GHz) and more cores (for more client connections). Remember that large amounts of RAM are required for deduplication.
Using compression and deduplication may also reduce the writes to your SSD vdevs, prolonging the lifetime and reducing the cost of maintaining the solution.
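Both properties are set per pool, dataset, or zvol; in FreeNAS/TrueNAS they are exposed in the dataset options, or you can set them from the shell like this (the pool and dataset names are examples):

```
# LZ4 compression is cheap on CPU and often a net win, even on fast pools
zfs set compression=lz4 fastpool/vmstore

# Deduplication is RAM and CPU hungry - leave it off unless you've sized for it
zfs set dedup=off fastpool/vmstore

# See how much compression is actually saving
zfs get compressratio fastpool/vmstore
```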
ZFS ZIL and SLOG
When it comes to writes on a filesystem, there are different kinds.
Synchronous – Writes that are made to a filesystem that are only marked as completed and successful once it has actually been written to the physical media.
Asynchronous – Writes that are made to a filesystem that are marked as completed or successful before the write has actually been completed and committed to the physical media.
The type of write performed can be requested by the application or service that’s performing the write, or it can be explicitly set on the file system itself. In FreeNAS (in our example) you can override this by setting the “sync” option on the zpool, dataset, or zvol.
Disabling sync will allow writes to be marked as completed before they actually are, essentially “caching” writes in a buffer in memory. See below for “Ram Caching and Sync Writes”. Setting this to “standard” will perform the type of write requested by the client, and setting to “always” will result in all writes being synchronous.
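For reference, these are the three values the sync property accepts (the dataset name below is an example); the same setting is exposed on datasets and zvols in the FreeNAS/TrueNAS UI:

```
zfs set sync=standard fastpool/vmstore   # honour whatever the client requests (default)
zfs set sync=always fastpool/vmstore     # force every write to be synchronous
zfs set sync=disabled fastpool/vmstore   # acknowledge writes before they hit disk (risky)
```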
We can speed up and assist writes by using a SLOG for ZIL.
ZIL stands for ZFS Intent Log, and SLOG stands for Separate Log, which is usually stored on a dedicated SLOG device.
By utilizing a SLOG for ZIL, you can have dedicated SSDs which will act as your intent log for writes to the zpool. On writes that request a synchronous write, they will be marked as completed when sent to the ZIL and written to the SLOG device.
Implementing a SLOG that is slower than the combined speed of your ZFS pool will result in a performance loss. Your SLOG should be faster than the pool it’s acting as a ZIL for.
Implementing a SLOG that is faster than the combined speed of your ZFS pool will result in a performance gain on writes, as it essentially acts as a “write cache” for synchronous writes and will possibly even allow more orderly writes when they are committed to the actual vdevs in the pool.
If using a SLOG for ZIL, it is highly recommended to use SSDs that have PLP (power loss protection), as well as a mirrored set, to avoid data loss and/or corruption in the event of a power loss, crash, or freeze.
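Adding a SLOG can be done from the pool options in the FreeNAS/TrueNAS UI (as a log vdev), or from the shell along these lines (the pool and device names are hypothetical); the mirror is what protects in-flight synchronous writes if one of the log SSDs dies:

```
# Add a mirrored pair of PLP-capable SSDs as the log (SLOG) vdev for the pool
zpool add fastpool log mirror nvd2 nvd3

# Verify the log vdev shows up and is healthy
zpool status fastpool
```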
RAM Caching and Sync Writes
In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically speed up write operations as they will be buffered in RAM.
Disabling sync on your zpool, dataset, or zvol tells the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.
In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.
You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).
Utilizing a SLOG for ZIL is much better (and safer) than this method; however, I still wanted to mention it for informational purposes, as it does apply to some use cases.
SSD Sector Size
Traditional drives typically used 512-byte physical sectors. Newer hard drives and SSDs use 4K (4096-byte) physical sectors, but often emulate 512-byte logical sectors (called 512e) for compatibility. SSDs specifically sometimes ship with 512e to increase compatibility with operating systems and to allow cloning your old drive to the new SSD during migrations.
When 512-byte logical sectors are emulated on an HD or SSD that natively uses 4K physical sectors, small or misaligned writes can turn into multiple read-modify-write operations instead of a single write. This increases overhead and can result in reduced IO and speed, as well as create more wear on the SSD when performing writes.
Some HDs and SSDs come with utilities or tools to change the logical sector size of the drive. I highly recommend changing it to the drive’s native sector size.
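A quick way to check what you’re dealing with, assuming smartmontools is available (device and pool names are examples): look at the drive’s reported logical/physical sector sizes, and confirm the pool was created with a matching ashift (ashift=12 corresponds to 4K sectors, ashift=9 to 512 bytes). Changing a drive’s LBA format is done with vendor or NVMe management tools and wipes the drive, so do it before building the pool.

```
# Logical vs physical sector size as reported by the drive
smartctl -a /dev/da0 | grep -i -e "sector size" -e "lba size"

# ashift used by the pool (on FreeNAS/TrueNAS, zdb may need to be pointed
# at the pool cache file as shown here; the path can differ by version)
zdb -U /data/zfs/zpool.cache -C fastpool | grep ashift
```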
iSCSI vs NFS
Technically, faster speeds should be possible using iSCSI instead of NFS; however, special care must be taken when using iSCSI.
If you’re using iSCSI and the host that is virtualizing the FreeNAS instance is also mounting the iSCSI VMFS target that it’s presenting, you must unmount this iSCSI volume every time you plan to shut down the FreeNAS instance, or the entire host that is hosting it. Unmounting the iSCSI datastore also means unregistering any VMs that reside on it.
If you simply shut down the FreeNAS instance that’s hosting the iSCSI datastore, this will result in an improper, unclean unmount of the VMFS volume and could lead to data loss, even if no VMs are running.
NFS provides a cleaner mechanism, as FreeNAS handles the unmount of the underlying filesystem cleanly on shutdown, and to the ESXi hosts it simply appears as an NFS disconnect. If no VMs are running (and no I/O is occurring) when the FreeNAS instance is shut down, data loss is not a concern.
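If you do go the iSCSI route on a virtualized instance, the clean unmount can at least be scripted on the ESXi host before shutting down the FreeNAS/TrueNAS VM; here is a rough sketch using esxcli, where the datastore label is a placeholder (any VMs on the datastore still need to be powered off and unregistered first):

```
# List mounted VMFS volumes and find the iSCSI-backed datastore
esxcli storage filesystem list

# Unmount the datastore cleanly by its label
esxcli storage filesystem unmount -l iSCSI-FreeNAS-DS
```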
iSCSI MPIO (Multipath I/O)
If your TrueNAS server isn’t virtualized, I’d recommend going with iSCSI because you can configure MPIO (Multipath I/O), which allows redundancy as well as round robin load balancing across multiple connections to the iSCSI target. For example, with 2 x 10GbE NICs, you should be able to achieve redundancy/failover as well as 20Gb of combined throughput. If you had 4 x 10GbE, you could achieve 40Gb combined.
Never use LAG or LACP when you want fully optimized NFS, pNFS, or iSCSI MPIO.
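On the VMware side, once both NICs are bound to the software iSCSI adapter and both paths are visible, the remaining step is switching the path selection policy to round robin; a hedged example where the naa identifier is a placeholder for your LUN:

```
# Show the current multipathing configuration for the iSCSI LUN
esxcli storage nmp device list -d naa.6589cfc000000xxxxxxxxxxxxxxxxx

# Switch the path selection policy to round robin across all active paths
esxcli storage nmp device set -d naa.6589cfc000000xxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```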
Jumbo Frames
Since you’re pushing more data, more I/O, and at a faster pace, we need to optimize all layers of the solution as much as possible. To reduce overhead on the networking side of things, if possible, you should implement jumbo frames.
Instead of sending many smaller packets which independently require acknowledgement, you can send fewer larger packets. This significantly reduces overhead and allows for faster speed.
In my case, my FreeNAS instance will be providing both NAS and SAN services to the network, thus has 2 virtual NICs. On my internal LAN where it’s acting as a NAS (NIC 1), it will be using the default MTU of 1500 byte frames to make sure it can communicate with workstations that are accessing the shares. On my SAN network (NIC 2) where it will be acting as a SAN, it will have a configured MTU of 9000 byte frames. All other devices (SANs, client NICs, and iSCSI initiators) on the SAN network have a matching MTU of 9000.
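As a sketch, the 9000-byte MTU has to match end to end on the SAN network: the FreeNAS/TrueNAS interface, the physical or virtual switches in between, and the ESXi vmkernel ports. The interface and vSwitch names below are examples; in FreeNAS/TrueNAS, set the MTU in the network interface settings so it persists across reboots.

```
# FreeNAS/TrueNAS (FreeBSD) - set the MTU on the SAN-facing NIC
ifconfig vmx1 mtu 9000

# ESXi - the vSwitch and the storage vmkernel port must match
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
```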
Additional Notes
Please note that consumer SSDs usually do not have PLP (Power Loss Protection). This means that in the event of a power failure, any data sitting in the write cache on the SSD may be lost. This could put your data at risk. Using enterprise solid state drives remedies this issue, as they often come with PLP.
Conclusion
SSDs are great for storage, whether it be file, block, NFS, or iSCSI! In my opinion, NVMe and all-flash arrays are where the future of storage is going.
I hope this information helps, and if you feel I left anything out, or if anything needs to be corrected, please don’t hesitate to leave a comment!