#StorageGlory Achieved: 30 Days on a Windows SAN

 Behold, these three remain. File. Block. Object. And the greatest of these is block.  – Sr. Systems Engineer St. Paul, in a letter to confused storage engineers in Thessalonika

Right. So a couple weeks back I teased the hardware specs of the new storage array I built for the Daisetta Lab at home.

Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. Storage utopia up in the Daisetta Lab

My idea was to combine all types of disks -rotational 3.5″ & 2.5″ drives, SSDs, mSATAs, hell, I considered USB- into one tight, well-built storage box for my lab and home data needs. A sort of Storage Ark, if you will; all media types were welcome, but only if they came in twos (for mirroring & parity's sake, of course) and only if they rotated at exactly 7200 RPM and/or leveled their wear evenly across the silicon.

And onto this unholy motley crue of hard disks I slapped a software architecture that promised to abstract all the typical storage driver, interface, and controller nonsense away, far, far away in fact, to a land where the storage can be mixed, the controllers diverse, and by virtue of the software-definition bits, network & hypervisor agnostic. In short, I wanted to build an agnostic #StorageGlory box in the Daisetta Lab.

Right. So what did I use to achieve this? ZFS and Zpools?

Hell no, that’s so January.

VSAN? Ha! I’m no Chris Wahl.

I used Windows, naturally.

That’s right. Windows. Server 2012 R2 to be specific, running Core + Infrastructure GUI with 8GB of RAM, and some 17TB of raw disk space available to it. And a little technique developed by the ace Microsoft server team called Tiered Storage Spaces.

Was a #StorageGlory Achievement Unlocked, or was it a dud?

Here’s my review after 30 days on my Windows SAN: san.daisettalabs.net.

The Good

It doesn’t make you pick a side in either storage or storage-networking: Do you like abstracted pools of storage, managed entirely by software? Put another way, do you hate your RAID controller and crush on your old-school NetApp filer, which seemingly could do everything but object storage?

When I say block, do you instinctively say file? Or vice-versa?

Well then my friend, have I got a storage system for your lab (and maybe production!) environment: Windows Storage Spaces (now with Tiering!) offers just about everything guys like you or me need in a storage system for lab & home media environments. I love it not just because it’s Microsoft, but also because it doesn’t make me choose between storage & storage-networking paradigms. It’s perhaps the ultimate agnostic storage technology, and I say that as someone who thinks about agnosticism and storage.

A lot.

You know what I'm talking about. Maybe today, you'll need some block storage for this VM or that particular job. Maybe you're in a *nix state of mind and want to fiddle with NFS. Or perhaps you're feeling bold & courageous and decide to try out VMware again, building some datastores on both iSCSI LUNs and NFS shares. Then again, maybe you want to see what SMB 3.0 is all about; the MS fanboys sure seem to be talking it up.

The point is this: I don’t care what your storage fancy is, but for lab-work (which makes for excellence in work-work) you need a storage platform that’s flexible and supportive of as many technologies as possible and is, hopefully, software-defined.

And that storage system is -hard to believe I’ll grant you- Windows Server 2012 R2.

I love storage and I can't think of one other storage system -save for maybe NetApp- that lets me do crazy things like store .vmdks inside of .vhdxs (oh the vIrony!), use SMB 3 multichannel over the same NICs I'm using for iSCSI traffic, create snapshots & clones just like the big filers, all while giving me the performance-multiplier benefits of SSDs and caching and a reasonable level of resiliency.

File this one under WackWackStorageGloryAchievedWindows boys and girls.

I can do it all with Storage Spaces in 2012 R2.

As I was thinking about how to write about Storage Spaces, I decided to make a chart, if only to help me keep it straight. It’s rough but maybe you’ll find it useful as you think about storage abstraction/virtualization tech:

Storage-Compared
And yes. Ex post facto dedupe is a made up term. By me. It’s latin for “After the fact, dedupe,” because I always scheduled my dedupes for Saturday night, when the IO load on the filer was low. Ex post facto dedupe is in contrast to some newer storage companies that offer inline compression & dedupe, but none of the ones above offer this, sadly.

It’s easy to build and supports your disks & controllers: This is a Microsoft product. Which means it’s easy to deploy & build for your average server guy. Mine’s running on a very skinny, re-re-purposed SanDisk Ready Cache SSD. With Windows 2012 R2 server running the Infrastructure Management GUI (no explorer.exe, just Server Manager + your favorite snap-ins), it’s using about 6GB of space on the boot drive.

And drivers for the Intel C226 SATA controller, the LSI 9218si SAS card, and the extra ASMedia 1061 controller were all installed automagically by Windows during the build.

The only other system that came close to being this easy to install -as a server product- was Oracle Solaris 11.2 Beta. It found, installed drivers for, and exposed all controllers & disks, so I was well on my way to going the ZFS route again, but figured I’d give Windows a chance this time around.

Nexenta 4, in contrast, never loaded past the Install Community Edition screen.

It's improved a lot over 2012: Storage Spaces debuted almost two years ago now, and I remember playing with it at work a bit. I found it to be a mind-f*** as it was a radically different approach to storage within the Windows server context.

I also found it to be slow, dreadfully slow even, and not very survivable. Though it did accept any disk I gave it, it didn't exactly like it when I removed a USB drive during an extended write test. And it didn't take the disk back at the conclusion of the test either.

Like everything else in Microsoft’s current generation, Storage Spaces in 2012 R2 is much better, more configurable, easier to monitor, and more tolerant of disk failures.

It also has something for the IOPS speedfreak inside all of us.

Storage Spaces, abstract this away

Tiered Storage Spaces & Adjustable write cache: Coming from ZFS & the Adaptive Replacement Cache, the ZFS Intent Log, the SLOG, and L2ARC, I was kind of hooked on the idea of using massive amounts of my ECC RAM to function as a sort of poor man's NVRAM.

Windows can’t do that, but with Tiered Storage Spaces, you can at least drop a few SSDs in your array (in my case three x 256GB 840 EVO & one 128GB Samsung 830), mix them into your disk pool, and voila! Fast read-cache, with a Microsoft-flavored MRU/LFU algorithm of some type keeping your hottest data on the fastest disks and your old data on the cheep ‘n deep rotationals.

What’s more, going with Tiered Storage Spaces gives you a modest 1GB write cache, but as I found out, you can increase that up to 10GB.

Which I naturally did while building this guy out. I mean, who wouldn't want more write-cache?

But there's a huge gotcha buried in the Technet and blogposts I found about this. I wanted to pool all my disks together into as large a single virtual disk as possible, then pack iSCSI-connected .vhdxs, SMB 3 shares, and more inside that single, durable & tiered virtual disk. What I didn't want was several virtual disks (it helped me to think of virtual disks as a sort of Aggregate) with SMB 3 shares and vhdx files stored haphazardly between them.

And several smaller virtual disks are exactly what you get when you adjust the write-cache size. Recall that I have a capacity of about 17TB raw among all my disks. Building a storage pool, then a virtual disk with a 10GB write cache, gave me a tiered virtual disk with a maximum size of about 965GB. More on that below.
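For reference, here's roughly what that build looks like in PowerShell. Treat it as a minimal sketch rather than my exact commands: the pool and tier names, the Get-PhysicalDisk filter, and the tier sizes are all illustrative.

# Pool every disk that's eligible, then carve SSD & HDD tiers out of it
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DaisettaPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
$ssdTier = New-StorageTier -StoragePoolFriendlyName "DaisettaPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "DaisettaPool" -FriendlyName "HDDTier" -MediaType HDD

# Tiered virtual disk with the write-back cache bumped from the 1GB default to 10GB.
# Gotcha from above: going down this road, the biggest tiered vdisk I could build topped out around 965GB.
New-VirtualDisk -StoragePoolFriendlyName "DaisettaPool" -FriendlyName "Tiered-VD01" -ResiliencySettingName Mirror -StorageTiers $ssdTier,$hddTier -StorageTierSizes 200GB,700GB -WriteCacheSize 10GB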

It can be wicked fast, but so is RAID 0: Check out my standard SQLIO benchmark routine, which I run against all storage technologies that come my way. The 1.5 hour test is by no means comprehensive -and I’m not saying the IOPS counter is accurate at all (showing max values across all tests by the way)- but I like this test because it lets me kick the tires on my array, take her out for a spin, and see how she handles.

And with a “Simple” layout (no redundancy, probably equivalent to RAID 0), she handles pretty damn well, but even I’m not crazy enough to run tiered storage spaces in a simple layout config:

storage spaces
These three tests (1.5 hours each, identical setup against multiple configs) were done locally on the array, not over my home network

What’s odd is how poorly the array performed with 10GB of “Write Cache.” Not sure what happened here, but as you can see, latency spiked higher during the 10GB write cache write phase of the test than just about every other test segment.

Something to do with parity no doubt.

For my lab & home storage needs, I settled on a two-way mirror setup that gives me moderate performance with durability in mind, though not as much durability as you'd hope, as you'll see below.

Making the most of my lab/home network and my NICs: Recall that I have six GbE NICs on this box. Two are built into the Supermicro board itself (Intel), and the other four come by way of a quad-port Intel I350-T4 server NIC.

Anytime you’re planning to do a Microsoft cluster in the 1GbE world, you need lots of NICs. It’s a bit of a crutch in some respects, especially in iSCSI. Typically you VLAN off each iSCSI NIC for your Hyper-V hosts and those NICs do one thing and one thing only: iSCSI, or Live Migration, or CSV etc. Feels wasteful.

But on my new storage box at home, I can use them for double-duty: iSCSI (or LM/CSV) as well as SMB 3. Yes!

Usually I turn off Client for Microsoft Networks (the SMB file sharing toggle in NIC properties) on each dedicated NIC (or vEthernet), but since I want my file cake & my block cake at the same time, I decided to turn SMB on for all iSCSI vEthernet adapters (from the physical & virtual hosts) and leave SMB enabled on the iSCSI NICs on san.daisettalabs.net as well.
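If you'd rather flip those bindings from PowerShell than click through NIC properties, the component IDs you want are ms_msclient (Client for Microsoft Networks) and ms_server (File & Printer Sharing). A quick sketch, with the adapter names as examples:

# Leave the SMB-related bindings enabled on the iSCSI-facing vEthernets
Set-NetAdapterBinding -Name "vEthernet (iSCSI-1)" -ComponentID ms_msclient -Enabled $true
Set-NetAdapterBinding -Name "vEthernet (iSCSI-1)" -ComponentID ms_server -Enabled $true

# The old habit: kill Client for Microsoft Networks on a dedicated iSCSI NIC
Set-NetAdapterBinding -Name "iSCSI-10" -ComponentID ms_msclient -Enabled $false

# Verify what's bound where
Get-NetAdapterBinding | Where-Object DisplayName -like "*Microsoft Networks*" | Format-Table -AutoSize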

The end result? This:

[table caption=”Storage Networking-All of the Above Approach” width=”500″ colwidth=”20|100|50″ colalign=”left|left|center|left|right”]
nic,Name,VLAN,IP,Function
1,MGMT,100,192.168.100.15,MGMT & SMB3
2,CLNT,102,192.168.102.15,Home net & SMB3
3,iSCSI-10,10,172.16.10.x,iSCSI & SMB3
4,iSCSI-11,11,172.16.11.x,iSCSI & SMB3
5,iSCSI-12,12,172.16.12.x,iSCSI & SMB3
[/table]

That’s five, count ’em five NICs (or discrete channels, more specifically) I can use to fully soak in the goodness that is SMB 3 multichannel, with the cost of only a slightly unsettling epistemological question about whether iSCSI NICs are truly iSCSI if they’re doing file storage protocols.

Now SMB 3 is so transparent (on by default) you almost forget that you can configure it, but there’s quite a few ways to adjust file share performance. Aidan Finn argues for constraining SMB 3 to certain NICs, while Jose Barreto details how multichannel works on standalone physical NICs, a pair in a team, and multiple teams of NICs.

I haven’t decided which model to follow (though on san.daisettalabs.net, I’m not going to change anything or use Converged switching…it’s just storage), but SMB 3 is really exciting and it’s great that with Storage Spaces, you can have high performance file & block storage. I’ve hit 420MB/sec on synchronous file copies from san to host and back again. Outstanding!
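If you want to watch multichannel actually fan out across those five channels, the SMB cmdlets will show you; here's a quick sketch to run from a Hyper-V host mid-copy (the server name is mine from above):

# Which local NICs does the SMB client consider usable for multichannel?
Get-SmbClientNetworkInterface | Format-Table -AutoSize

# Live view of the connections spread across interfaces during a copy to the SAN
Get-SmbMultichannelConnection -ServerName "san.daisettalabs.net"

# Aidan Finn's approach, if you'd rather pin SMB to specific NICs instead of letting it ride everywhere:
# New-SmbMultichannelConstraint -ServerName "san.daisettalabs.net" -InterfaceAlias "MGMT","CLNT"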

I Finally got iSNS to work and it’s…meh: One nice thing about san.daisettalabs.net is that that’s all you need to know…the FQDN is now the resident iSCSI Name Server, meaning it’s all I need to set on an MS iSCSI Initiator. It’s a nice feature to have, but probably wasn’t worth the 30 minutes I spent getting it to work (hint: run set-wmiinstance before you run iSNS cmdlets in powershell!) as iSNS isn’t so great when you have…

SMI-S, which is awesome for Virtual Machine Manager fans: SMI-S, you're thinking, what the hell is that? Well, it's a standardized framework for communicating block storage information between your storage array and whatever interface you use to manage & deploy resources on your array. Developed by no less an august body than the Storage Networking Industry Association (SNIA), it's one of those "standards" that seem like a good idea, but you can't find it much in the wild, as it were. I've used SMI-S against a NetApp Filer (in the Classic DoT days, not sure if it works against cDoT) but your Nimbles, your Pures, and other new players in the market get the same funny look on their face when you ask them if they support SMI-S.

“Is that a vCenter thing?” they ask.

Sigh.

Microsoft, to its credit, does. Right on Windows Server. It's a simple feature you install and two or three powershell commands later, you can point Virtual Machine Manager at it and voila! Provision, delete, resize, and classify iSCSI LUNS on your Windows SAN, just like the big boys do (probably) in Azure, only here, we're totally enjoying the use of our corpulent .vhdx drives, whereas in Azure, for some reason, they're still stuck on .vhds like rookies. Haha!

Single Pane o’ glass in VMM with SMI-S, GUIDs galore and more for the Hyper-V set

It’s a very stable storage platform for Microsoft Clustering: I’ve built a lot of Microsoft Hyper-V clusters. A lot. More than half a dozen in production, and probably three times that in dev or lab environments, so it’s like second nature to me. Stable storage & networking are not just important factors in Microsoft clusters, they are the only factors.

So how is it building out a Hyper-V cluster atop a Windows SAN? It’s the same, and different at the same time, but, unlike so many other cluster builds, I passed the validation test on the first attempt with green check marks everywhere. And weeks have gone by without a single error in the Failover Clustering snap-in; it’s great.
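For anyone following along at home, the validate & build steps boil down to a couple of FailoverClustering cmdlets; a sketch, with the cluster name and IP below being examples rather than my real ones:

# Validate the Hyper-V nodes against the Windows SAN's iSCSI LUNs before building anything
Test-Cluster -Node agnostic_node_1, agnostic_node_2

# If the report comes back green, build the cluster, then add the iSCSI disks and promote one to a CSV
New-Cluster -Name DaisettaCluster -Node agnostic_node_1, agnostic_node_2 -StaticAddress 192.168.100.20
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"    # whatever name the previous step handed out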

The Bad

It’s expensive and seemingly not as redundant as other storage tech: When you build your storage pool out of offlined disks, your first choice is going to involve (just like other storage abstraction platforms) disk redundancy. Microsoft makes it simple, but doesn’t really tell you the cost of that redundancy until later in the process.

Recall that I have 17TB of raw storage on san.daisettalabs.net, organized as follows:

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Cheep ‘n deep

Samsung 840 EVO SSD, 3, 256GB, 512byte, 250MB/read, Tiers not fears

Samsung 830 SSD, 1, 128GB, 512byte, 250MB/read, Tiers not fears

HGST 3.5″, 6, 2TB, 512byte, 105MB/r/w, Cheep ‘n deep

[/table]

Now, according to my trusty IOPS Excel calculator, if I were to use traditional RAID 5 or RAID 6 on that set of spinners, I'd get about 16.5TB usable in the former and 15TB usable in the latter (assuming the capacity penalty of one & two disks, respectively).

For much of the last year, I’ve been using ZFS & RAIDZ2 on the set of six WD Red 2.5″ drives. Those have a raw capacity of 6TB. In RAIDZ2 (roughly analogous to RAID 6), I recall getting about 4.2TB usable.

All in all, traditional RAID & ZFS' RAIDZ cost me between 12% and 35% of my capacity, respectively.

So how much does the Windows Storage Spaces resiliency model I chose (two-way mirror) cost me? A lot. We're in RAID-DP territory here people:

 

storagespaces5

Ack! With 17TB of raw storage, I get about 5.7TB usable, a cost of about 66%!

And for that, what kind of resiliency do I get?

I sure as hell can't pull two disks simultaneously, as I did live in production on my ZFS box. I can suffer the loss of only a single disk. And even then, other Windows bloggers point to some pain as the array tries to adjust.

Now, I’m not the brightest on RAID & parity and such, so perhaps there’s a more resilient, less costly way to use Storage Spaces with Tiering, but wow…this strikes me as a lot of wasted disk.

Not as easy to de-abstract the storage: When a disk array is under load, one of my favorite things to do is watch how the IO hits the physical elements in the array. Modern disk arrays make what your disks are doing abstract, almost invisible, but to truly understand how these things work, sometimes you just want the modern equivalent of lun stats.

In ZFS, I loved just letting gstat run, which showed me the load my IO was placing on the ARC, the L2ARC and finally, the disks. Awesome stuff:

In this Gifcam, watch ada0-6 as they struggle under load with the “Always Sync” option enabled.

As best as I can tell, there’s no live powershell equivalent to gstat for Storage Spaces. There are teases though; you can query your disks, get their SMART vitals, and more, but peeling away the onion layers and actually watching how Windows handles your IO would make Storage Spaces the total package.
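Short of that, the closest approximation is polling the raw perfmon counters from PowerShell while a test runs, which at least shows what each physical disk is doing in something close to real time; a sketch:

# Per-disk throughput and queue depth once a second, gstat-style (Ctrl+C to stop)
Get-Counter -Counter "\PhysicalDisk(*)\Disk Read Bytes/sec",
    "\PhysicalDisk(*)\Disk Write Bytes/sec",
    "\PhysicalDisk(*)\Current Disk Queue Length" -SampleInterval 1 -Continuous

# And the SMART-ish vitals mentioned above
Get-PhysicalDisk | Get-StorageReliabilityCounter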

Bottom line

So that’s about it: this is the best storage box I’ve built in the Daisetta Lab. No regrets going with Windows. The platform is mature, stable, offers very good performance, and decent resiliency, if at a high disk cost.

I’m so impressed I’ve checked my Windows SAN skepticism at the door and would run this in a production environment at a small/medium business (clustered, in the Scaled Out File Server role). Cost-wise, it’s a bargain. Check out this array: it’s the same exact Hardware a certain upstart Storage vendor I like (that rhymes with Gymbal Porridge) sells, but for a lot less!

#StorageGlory achieved. At home. In my garage.

Labworks 1:4-7 – The Last Word in ZFS Labworks

Greetings to you Labworks readers, consumers, and conversationalists. Welcome to the last  verse of Labworks Chapter 1, which has been all about building a durable and performance-oriented ZFS storage array for Hyper-V and/or VMware.

Let’s review where we’ve been:

[table]

Labworks Chapter, Verse, Subject, Title & URL

Labworks 1:, 1, Storage, Building a Durable and Performance-Oriented ZFS Box for Hyper-V & VMware

,2-3, Storage, I Heart the ARC & Let’s Pull Some Drives!

[/table]

Today we’re going to circle back to the very end of Labworks 1:1, where I assigned myself some homework: find out why my writes suck so bad. We’re going to talk about a man named ZIL and his sidekick the SLOG and then we’re going to check out some Excel charts and finish by considering ZFS’ sync models.

But first, some housekeeping: SAN2, the ZFS box, has undergone minor modification. You can find the current array setup below. Also, I have a new switch in the Daisetta Lab, and as switching is intimately tied to storage networking & performance, it’s important I detail a little bit about it.

Labworks 1:4 – Small Business SG300 vs Catalyst 2960S

Cisco's SG-300 & SG-500 series switches are getting some pretty good reviews, especially in a home lab context. I've got an SG-300 and really like it as it offers a solid spectrum of switching options at Layer 2 as well as a nice Layer 3-lite mode, all for a tick under $200. It even has a real web-interface if you're CLI-shy, which

Small Business Cisco != Linksys

I'm not, but some folks are.

Sadly for me & the Daisetta Lab, I need more ports than my little SG-300 has to offer. So I’ve removed it from my rack and swapped it for a 2960S-48TS-L from the office, but not just any 2960S.

No, I have spiritual & emotional ties to this 2960s, this exact one. It's the same 2960s I used in my January storage bakeoff of a Nimble array, the same 2960s on which I broke my Hyper-V & VMware cherry in those painful early days of virtualization. Yes, this five-year-old switch is now in my lab:

The pride of Cisco's 2009 Desktop Switching series, the 2960s

Sure it’s not a storage switch, in fact it’s meant for IDFs and end-users and if the guys on that great storage networking podcast from a few weeks back knew I was using this as a storage switch, I’d be finished in this industry for good.

But I love this switch and I'm glad it's at the top of my rack. I saved 1U, the energy costs of this switch vs two smaller ones are probably a wash, and though I lost Layer 3 Lite, I gained so much more: 48 x 1GbE ports and full LAN-licensed Cisco IOS v 15.2, which, agnostic computing goals aside for a moment, just feels so right and so good.

And with the increased amount of full-featured switch ports available to me, I’ve now got LACP teams of three on agnostic_node_1 & 2, jumbo frames from end to end, and the same VLAN layout.

Here’s the updated Labworks schematic and the disk layout for SAN2:

Lab 1-4-5 - Daisetta Labs

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Zpool Members

Samsung 840 EVO SSD, 1, 128GB, 512byte, SATA 3, L2ARC Read Cache

Samsung 830 SSD, 1, 128GB, 512byte, SATA 3, L2ARC Read Cache

Seagate 2.5″ Momentus, 1, 500GB, 512byte, 80MB/r/w, Boot/swap/system

[/table]

Labworks 1:5 – A Man named ZIL and his sidekick, the SLOG

Labworks 1:1 was all about building durable & performance-oriented storage for Hyper-V & VMware. And one of the unresolved questions I aimed to solve out of that post was my poor write performance.

Review the hardware table and you’ll feel like I felt. I got me some SSD and some RAM, I provisioned a ZIL so write-cache that inbound IO already ZFS, amiright? Show me the IOPSMoney Jerry!

Well, about that. I mischaracterized the ZIL and I apologize to readers for the error. Let’s just get this out of the way: The ZFS Intent Log (ZIL) is not a write-cache device as I implied in Labworks 1:1.

ZFS storage layout in excellent Good/Better/Best format courtesy of Nexenta, which has some outstanding documentation & guides

The ZIL, whether spread out among your rotational disks by ZFS design, or applied to a Separate Log Device (a SLOG), is simply a synchronous-writes mechanism, a log designed to ensure data integrity and report (IO ACK) back to the application layer that writes are safe somewhere on your rotational media. The ZIL & SLOG are also disaster-recovery mechanisms/devices; in the event of power loss, the ZIL, or the ZIL functioning on a SLOG device, will ensure that the writes it logged prior to the event are written to your spinners when your disks are back online.

Now there seem to be some differences in how the various implementations of ZFS look at the ZIL/SLOG mechanism.

Nexenta Community Edition, based on Illumos (the open-source descendant of Sun's Solaris), says your SLOG should just be a write-optimized SSD, but even that's more best practice than hard & fast requirement. Nexenta touts the ZIL/SLOG as a performance multiplier, and their excellent documentation has helpful charts and graphics reinforcing that.

In contrast, the documentation for the most popular FreeBSD ZFS implementation paints the ZIL as likely more trouble than it's worth. FreeNAS actively discourages you from provisioning a SLOG unless it's enterprise-grade, accurately pointing out that the ZIL & a SLOG device aren't write-cache and probably won't make your writes faster anyway, unless you're NFS-focused (which I'm proudly, defiantly even, not) or operating a large database at scale.

ZIL me

What’s to account for the difference in documentation & best practice guides? I’m not sure; some of it’s probably related to *BSD vs Illumos implementations of ZFS, some of it’s probably related to different audiences & users of the free tier of these storage systems.

The question for us here is this: Will you benefit from provisioning a SLOG device if you build a ZFS box for Hyper-V and VMWare storage for iSCSI?

I hate sounding like a waffling storage VAR here, but I will: it depends. I’ve run both Nexenta and NAS4Free; when I ran Nexenta, I saw my SLOG being used during random & synchronous write operations. In NAS4Free, the SSD I had dedicated as a SLOG never showed any activity in zfs-stats, gstat or any other IO disk tool I could find.

One could spend weeks of valuable lab time verifying under which conditions a dedicated SLOG device adds performance to your storage array, but I decided to cut bait. Check out some of the links at the bottom for more color on this, but in the meantime, let me leave you with this advice: if you have $80 to spend on your FreeBSD-based ZFS storage, buy an extra 8GB of RAM rather than a tiny, used SLC or MLC device to function as your SLOG. You will almost certainly get more performance out of a larger ARC than by dedicating a disk as your SLOG.

Labworks 1:6 – Great…so, again, why do my writes suck? 

Recall this SQLIO test from Labworks 1:1:

sqlio lab 1 short test

As you can see, read or write, I was hitting a wall at around 235-240 megabytes per second during much of “Short Test”, which is pretty close to the theoretical limit of an LACP team with two GigE NICs.

But as I said above, we don’t have that limit anymore. Whereas there were once 2x1GbE Teams, there are now 3x1GbE. Let’s see what the same test on the same 4KB block/4KB NTFS volume yields now.

SQLIO short test, take two, sort by Random vs Sequential writes & reads:

labworks147

By jove, what’s going on here? This graph was built off the same SQLIO recipe, but looks completely different than Labworks 1. For one, the writes look much better, and reads look much worse. Yet step back and the patterns are largely the same.

It’s data like this that makes benchmarking, validating & ultimately purchasing storage so tricky. Some would argue with my reliance on SQLIO and those arguments have merit, but I feel SQLIO, which is easy to script/run and automate, can give you some valuable hints into the characteristics of an array you’re considering.
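For what it's worth, the "standard routine" is nothing fancier than looping sqlio.exe over a few block sizes and IO patterns from PowerShell; a sketch with illustrative paths, durations and flag values, not my exact recipe:

# Hypothetical locations: sqlio.exe plus a pre-created test file on the volume under test
$sqlio = "C:\Tools\SQLIO\sqlio.exe"
$testFile = "T:\testfile.dat"

foreach ($block in 8, 32, 64) {
    foreach ($pattern in "random", "sequential") {
        # -kW = writes, -t8 = 8 threads, -o8 = 8 outstanding IOs, -s120 = 120 seconds, -LS = latency stats, -BN = no buffering
        & $sqlio -kW -t8 -o8 -s120 "-f$pattern" "-b$block" -LS -BN $testFile | Out-File -Append "C:\Tools\SQLIO\results.txt"
    }
}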

Let’s look at the writes question specifically.

Am I really writing 350MB/s to SAN2?

storagenetworkingforthewin

On the one hand, everything I'm looking at says YES: I am a Storage God and I have achieved #StorageGlory inside the humble Daisetta Lab HQ on consumer-level hardware:

  • SAN2 is showing about 115MB/s to each Broadcom interface during the 32KB & 64KB samples
  • Agnostic_Node_1 perfmon shows about the same amount of traffic egressing the three vEthernet adapters
  • The 2960S is reflecting all that traffic; I’m definitely pushing about 350 megabytes per second to SAN2; interface port channel 3 shows TX load at 219 out of 255 and maxing out my LACP team

On the other hand, I am just an IT Mortal and something bothers:

  • CPU is very high on SAN2 during the 32KB & 64KB runs…so busy it seems like the little AMD CPU is responsible for some of the good performance marks
  • While I'm a fan of the itsy-bitsy 2.5″ Western Digital RED 1TB drives in SAN2, under no theoretical IOPS model is it likely that six of them, in RAIDZ-2 (RAID 6 equivalent), can achieve 5,000-10,000 IOPS under traditional storage principles. Each drive by itself is capable of only 75-90 IOPS
  • If something is too good to be true, it probably is

49286241

Sr. Storage Engineer Neo feels really frustrated at this point; he can't figure out why his writes suck, or even if they suck, and so he wanders up to the Oracle to get her take on the situation and comes across this strange Buddha Storage kid.

Labworks 1:7 – The Essence of ZFS & New Storage model

In effect, what we see here is just a sample of the technology & techniques that have been disrupting the storage market for several years now: compression & caching multiply the performance of storage systems beyond what they should be capable of, in certain scenarios.

As the chart above shows, the test2 volume is compressed by SAN2 using lzjb. On top of that, we’ve got the ZFS ARC, L2ARC, and the ZIL in the mix. And then, to make things even more complicated, we have some sync policies ZFS allows us to toggle. They look like this:

sync policy

The sync toggle documentation is out there and you should understand it; it's crucial to understanding ZFS. But I want to demonstrate the choices as well.

I’ve got three choices + the compression options. Which one of these combinations is going to give me the best performance & durability for my Hyper-V VMs?

SQLIO Short Test Runs 3-6, all PivotTabled up for your enjoyment and ease of digestion:

compressionsync

As is usually the case in storage, IT, and hell, life in general, there are no free lunches here people. This graph tells you what you already know in your heart: the safest storage policy in ZFS-land (Always Sync, that is to say, commit writes to the rotationals post haste as if it was the last day on earth) is also the slowest. Nearly 20 seconds of latency as I force ZFS to commit everything I send it immediately (vs flush it later), which it struggles to do at a measly average speed of 4.4 megabytes/second.

Compression-wise, I thought I'd see a big difference between the various compression schemes, but I don't. Lzjb, lz4, and the ultra-space-saving/high-cpu-cost gzip-9 all turn in about equal results from an IOPS & performance perspective. It's almost a wash, really, and that's likely because of the predictable nature of the IO SQLIO is generating.

Labworks 1:Epilogue

Last point: ZFS, as Chris Wahl pointed out, is a sort of virtualization layer atop your storage. Now if you’re a virtualization guy like me or Wahl, that’s easy to grasp; Windows 2012 R2’s Storage Spaces concept is similar in function.

But sometimes in virtualization, it’s good to peel away the abstraction onion and watch what that looks like in practice. ZFS has a number of tools and monitors that look at your Zpool IO, but to really see how ZFS works, I advise you to run gstat. GStat shows what your disks are doing and if you’re carefully setting up your environment, you ought to be able to see the effects of your settings on each individual spindle.

In this Gifcam, watch ada0-5 (the Western Digitals) as they struggle under load with the “Always Sync” option enabled. Notice that the zvol/Alpha-Pool/Test2 volume (the logical volume construct) is at 100% busy and the ops/s are not very stellar.


Now look at this gstat sample. Under SQLIO-load, the zvol is showing 10,000 IOPS, 300+MB/s. But ada0-5, the physical drives, aren’t doing squat for several seconds at a time as SAN2 absorbs & processes all the IO coming at it.

That, friends, is the essence of ZFS.

 Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary

Nexenta’s awesome whitepapers and guides, Nexenta, Find ’em and collect ’em good stuff on MPIO config and ZFS performance

Comparing SSD vs NoSSD in Nexenta w/NFS, Larry Smith, A fellow ZFS fan with more focus on NFS & VMware

Get the Most out of ZFS SSD, Sebastian “vBagpipes” Laubscher, Sebastian finds a different way to provision the ZIL/SLOG

Nexenta & Scale, Hans DeLeenHeer, Fellow #TFD delegate looks at ZFS tiers in superhero context

SLOG/ZIL Insight, FreeNAS forum, Great forum-focused post on SLOG/ZIL in BSD ZFS

SLOG Blog, Oracle, 2007 post about the ZIL & SLOG heralding storage di

 Zpool and ZIL management, Magnus Strahlert, Excellent how-to guide for ZIL/L2ARC provisioning

[/table]

 

Labworks 2:5-8 – Get-Me -ConvergedSwitching -For “Hyper-V” | Now-Please

Hello Labworks fans, detractors and partisans alike, hope you had a nice Easter / Resurrection / Agnostic Spring Celebration weekend.

Last time on Labworks 2:1-4, we looked at some of the awesome teaming options Microsoft gave us with Server 2012 via its multiplexor driver. We also made the required configuration adjustments on our switch for jumbo frames & VLAN trunking, then we built ourselves some port channel interfaces flavored with LACP.

I think the multiplexor driver/protocol is one of the great (unsung?) enhancements of Server 2012/R2 because it’s a sort of pre-virtualization abstraction layer (That is to say, your NICs are abstracted & standardized via this driver before we build our important virtual switches) and because it’s a value & performance multiplier you can use on just about any modern NIC, from the humble RealTek to the Mighty Intel Server 10GbE.

But I’m getting too excited here; let’s get back to the curriculum and get started shall we?

Goals

5.  Understand what Microsoft’s multiplexor driver/LBFO has done to our NICs

6. Build our Virtual Machine Switch for maximum flexibility & performance

7. The vEthernets are Coming

8. Next Steps: Jumbo frames from End-to-end and performance tuning

Schematic:

Lab 2 - Daisetta Labs overview

2:5 Understand what Microsoft’s Multiplexor driver/LBFO has done to our NICs

So as I said above, the best way to think about the multiplexor driver & Microsoft’s Load Balancing/Failover tech is by viewing it as a pre-virtualization abstraction layer for your NICs. Let’s take a look.

Our Network Connections screen doesn’t look much different yet, save for one new decked-out icon labeled “Daisetta-Team:”

daisettateam

Meanwhile, this screen is still showing the four NICs we joined into a team in Labworks 2:3, so what gives?

A click on the properties of any of those NICs (save for the RealTek) reveals what’s happened:

Egads! My Intel NIC has been neutered by LBFO

The LBFO process unbinds many (though not all) settings, configurations, protocols and certain driver elements from your physical NICs, then binds the fabulous Multiplexor driver/protocol to the NIC as you see in the screenshot above.

In the dark days of 2008 R2 & Windows Core, when we had to walk uphill to school both ways in the snow, I had to download and run a cmd tool called nvspbind to get this kind of information.

Fortunately for us in 2012 & R2, we have some simple cmdlets:

daisettateam3

So notice Microsoft has essentially stripped “Ethernet 4” of all that would have made it special & unique amongst my 4x1GbE NICs; where I might have thought to tag a VLAN onto that Intel GbE, the multiplexor has stripped that option out. If I had statically assigned an IP address to this interface, TCP/IP v4 & v6 are now no longer bound to the NIC itself and thus are incapable of having an IP address.
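If you want to poke at this on your own box, the cmdlets behind that screenshot are roughly these; "Ethernet 4" is simply whatever Get-NetAdapter happens to call one of the teamed Intel NICs:

# What did LBFO leave bound to the physical NIC? Mostly just the multiplexor protocol.
Get-NetAdapterBinding -Name "Ethernet 4" | Sort-Object Enabled -Descending | Format-Table -AutoSize

# The team itself, its members, and the teaming & load-balancing modes in play
Get-NetLbfoTeam -Name "Daisetta-Team"
Get-NetLbfoTeamMember -Team "Daisetta-Team"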

And the awesome thing is you can do this across NICs, even NICs made by separate vendors. I could, for example, mix the sacred NICs (Intel) with the profane NICs (RealTek)…it don’t matter, all NICs are invited to the LBFO party.

No extra licensing costs here either; if you own a Server 2012 or 2012 R2 license, you get this for free, which is all kinds of kick ass as this bit of tech has allowed me in many situations to delay hardware spend. Why go for 10GbE NICs & Switches when I can combine some old Broadcom NICs, leverage LACP on the switch, and build 6×1 or 8x1GbE Converged LACP teams?

LBFO even adds up all the NICs you’ve given it and teases you with a calculated LinkSpeed figure, which we’re going to hold it to in the next step:

A 4Gb/s LACP team sounds great, but is it really 4Gb/s?

2:6 Build our Virtual Machine Switch for maximum flexibility & performance

If we just had the multiplexor protocol & LBFO available to us, it’d be great for physical server performance & durability. But if you’re deploying Hyper-V, you get to have your LBFO cake and eat it too, by putting a virtual switch atop the team.

This is all very easy to do in Hyper-V manager. Simply right click your server, select Virtual Switch Manager, make sure the Multiplexor driver is selected as the NIC, and press OK.

Bob’s your Uncle:

daisettaconverged1

But let’s go a bit deeper and do this via powershell, where we get some extra options & control:

PS C:\users\jeff.DAISETTALABS> new-vmswitch -NetAdapterInterfaceDescription "Microsoft Network Adapter Multiplexor Driver" -AllowManagementOS 1 -MinimumBandwidthMode Weight -name "Daisetta-Converged"

Let’s go through each of these:

  • New-vmswitch : the cmdlet we're invoking to build the switch. Run get-help new-vmswitch for a rundown of the cmdlet's structure & options
  • -NetAdapterInterfaceDescription : here we're telling Windows which NIC to build the VM Switch on top of. Get the precise name from Get-NetAdapter and enclose it in quotes
  • -AllowManagementOS 1 : Recall the diagram above. This boolean switch (1 yes, 0 no) tells Windows to create the VM Switch & plug the Host/Management Operating System into said Switch. You may or may not want this; in the lab I say yes; at work I've used No.
  • -MinimumBandwidthMode Weight : We lay out the rules for how the switch will apportion some of the 4Gb/s bandwidth available to it. By using "Weight," we're telling the switch we'll assign some values later
  • -Name : Name your switch

A few seconds later, and congrats Mr. Hyper-V admin, you have built a converged virtual switch!
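And because we picked -MinimumBandwidthMode Weight, the switch is now waiting on us to hand out those weights. A sketch of what that looks like (the values are examples, and the per-vEthernet weights can only be applied once the adapters exist in 2:7):

# Reserve a share of the 4Gb/s for any traffic that doesn't carry an explicit weight
Set-VMSwitch -Name "Daisetta-Converged" -DefaultFlowMinimumBandwidthWeight 30

# Later, once the vEthernets from 2:7 exist, weight the ones you care about, e.g.:
# Set-VMNetworkAdapter -ManagementOS -Name "LM" -MinimumBandwidthWeight 20
# Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 10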

2:7 The vEthernets are Coming

Now that we’ve built our converged virtual switch, we need to plug some things into it. And that starts on the physical host.

If you're building a Hyper-V cluster or stand-alone Hyper-V host with VMs on networked storage, you'll approach vEthernet adapters differently than if you're building Hyper-V for VMs on attached/internal storage or on SMB 3.0 share storage. In the former, you're going to need storage vEthernet adapters; in the latter you won't need as many vEthernets unless you're going multi-channel SMB 3.0, which we'll cover in another labworks session.

I’m going to show you the iSCSI + Failover Clustering model.

In traditional Microsoft Failover Clustering for Virtual Machines, we need a minimum of five discrete networks. Here’s how that shakes out in the Daisetta Lab:

[table]

Network Name, VLAN ID, Purpose, Notes

Management, 1, Host & VM management network, You can separate the two if you like

CSV, 14, Host cluster communication & coordination, Important for clustering Hyper-V hosts

LM, 15, Live Migration network, When you must send VMs from the broke host to the host with the most; LM is there for you

iSCSI 1-3, 11-13, Storage, Somewhat controversial but supported

[/table]

Now you should be connecting the dots: remember in Labworks 2:1, we built a trunked port-channel on our Cisco 2960S for the sole purpose of these vEthernet adapters & our converged switch.

So, we’re going to attach tagged vethernet adapters to our host via powershell. Pay attention here to the “-managementOS” tag; though our Converged switch is for virtual machines, we’re using it for our physical host as well.

You can script this out of course (and VMM does that for you), but if you just want to copy & paste, do it in this order:

  • Add the vEthernets
add-vmnetworkadapter -managementos -name CSV -switchname Daisetta-converged
add-vmnetworkadapter -managementos -name iSCSI-1 -switchname Daisetta-converged
add-vmnetworkadapter -managementos -name iSCSI-2 -switchname Daisetta-converged
add-vmnetworkadapter -managementos -name iSCSI-3 -switchname Daisetta-converged
add-vmnetworkadapter -managementos -name LM -switchname Daisetta-converged
  • Tag those vEthernets!
Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 15 -VMNetworkAdapterName LM
Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 14 -VMNetworkAdapterName CSV
Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 13 -VMNetworkAdapterName iSCSI-3
Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 12 -VMNetworkAdapterName iSCSI-2
Set-VMNetworkAdapterVlan -ManagementOS -Access -VlanId 11 -VMNetworkAdapterName iSCSI-1
  • Now set IPs
New-NetIPAddress -IPAddress 172.16.14.12 -InterfaceAlias "vEthernet (CSV)" -AddressFamily IPv4 -PrefixLength 24
New-NetIPAddress -IPAddress 172.16.15.12 -InterfaceAlias "vEthernet (LM)" -AddressFamily IPv4 -PrefixLength 24
New-NetIPAddress -IPAddress 172.16.13.12 -InterfaceAlias "vEthernet (iSCSI-3)" -AddressFamily IPv4 -PrefixLength 24
New-NetIPAddress -IPAddress 172.16.12.12 -InterfaceAlias "vEthernet (iSCSI-2)" -AddressFamily IPv4 -PrefixLength 24
New-NetIPAddress -IPAddress 172.16.11.12 -InterfaceAlias "vEthernet (iSCSI-1)" -AddressFamily IPv4 -PrefixLength 24

Notice we didn't include a gateway in the New-NetIPAddress cmdlet; that's because when we built our Virtual Switch with "-AllowManagementOS 1" attached, Windows automatically provisioned a vEthernet adapter for us, which either got an IP via DHCP or took an APIPA address.
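Before trusting it, it's worth a quick sanity check; something like this shows every vEthernet the management OS owns, its VLAN, and the IPs we just assigned:

Get-VMNetworkAdapter -ManagementOS
Get-VMNetworkAdapterVlan -ManagementOS
Get-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-1)","vEthernet (iSCSI-2)","vEthernet (iSCSI-3)","vEthernet (CSV)","vEthernet (LM)" -AddressFamily IPv4 | Format-Table InterfaceAlias, IPAddress -AutoSize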

So now we have our vEthernets and their appropriate VLAN tags:

daisettaconverged2
Ignore the DMZ vEthernet for now. Notice Daisetta-Converged, our VM Switch, is seen as a VMNetworkAdapter and is untagged. In my lab, this interface functions as my Host Management interface. In a production scenario, you’ll probably use separate vEthernet adapters for Host Management and not expose the switch itself to the management OS
2:8 Next Steps: Jumbo Frames from End-to-End & Performance Tuning

So if you’ve made it this far, congrats. If you do nothing else, you now have a converged Hyper-V virtual switch, tagged vEthernets on your host, and a virtualized infrastructure that’s ready for VMs.

But there’s more you can do; stay tuned for the next labworks post where we’ll get into jumbo frames & performance tuning this baby so she can run with all the bandwidth we’ve given her.

Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary
New-VMSwitch Technet, Microsoft, Always good to have Technet reference
Building a Converged Fabric with Server 2012, Hans “The Hyper-Dutchman” Vredevoort, A 2012 post which helped me when I was struggling through 2008 R2 to 2012 Hyper-V migration

Hyper-V 3.0 Converged Networks with Force 10 and DCB, Dell, Neat Wiki & diagram with iSCSI as separate virtual switch but with DCB

[/table]

Labworks 2:1-4 : Converged Hyper-V Switching like a boss

Greetings Labworks fans, today we’re going to learn how to build converged Hyper-V switches, switches so cool they’re nearly identical to the ones available to enterprise users with their fancy System Center licenses.

If you’re coming from a VMware mindset, a Hyper-V converged switch is probably most similar to Distributed vSwitches, though admittedly I’m a total n00b on VMware, so take that statement with a grain of salt. The idea here is to build an advanced switching fabric on your Hyper-V hosts that is fault-tolerant & performance-oriented, and like a Distributed vSwitch, common among your physical hosts and your guests. 

This is one of my favorite topics because I have a serious & problematic love affair with LACP and a Tourette's-like urge to team things up & jumbo, but you don't need an LACP-capable switch or jumbo frames to enjoy Converged Switching goodness.

Let’s dive in, shall we?

Goals

  1. Prepare the physical switch for Jumbo Frames
  2. Understand LBFO: Microsoft’s Load Balancing/Fail Over teaming technology introduced in Server 2012
  3. Enable LACP on the Switch and on the Server
  4. Build the Switch on the Team & Next Steps

Required Tools ‘n Tech:

  • Server 2012 or 2012 R2…sorry Windows 8.1 Professional/Enterprise fans…LBFO is not available for 8.1. I know, I feel your pain. But the naked Hyper-V 3.0 Hypervisor (Core only) is free, so what are you waiting for?
  • A switch, preferably gigabit. LACP not required but a huge performance multiplier
  • NICs: As in plural. You need at least two. Yes, you can use your Keepin’ it RealTek NICs..Hyper-V doesn’t care that your NICs aren’t server-grade, but I advise against consumer-NICs for production!!

Schematic

State of the Lab as of today. Ag_node_1 is new, with a core i7 Haswell (Yay!), ag_node_2 is the same, still running CSVs off my ZFS box, and check it out, bottom right: a new host, SMB1:

Lab 2 - Daisetta Labs overview

SMB1 Detail:

 

labworks 2

2:1 Prepare the Physical Switch for Jumbo Frames

You can skip this section if all you have at your disposal is a dumb switch.

Commands below are off of a Cisco 2960s. Commands are similar on the new SG300 & 500 series Cisco switches. PowerConnect 5548 switches from Dell aren't terribly different either, though I seem to recall you have to enable jumbo mtu on each port as well as the switch.

First we’re going to want to turn on Jumbo Frames, system-wide, which usually requires a reload of your switch, so schedule for a maintenance window!

daisettalabs.net(config)#system mtu jumbo 9198

You can run a show system mtu after the reload to be sure the switch is ready for the corpulent frames you will soon send its way:

daisettalabs.net#show system mtu

System MTU size is 1514 bytes
System Jumbo MTU size is 9198 bytes
System Alternate MTU size is 1514 bytes
Routing MTU size is 1514 bytes

2:2 Load Balancing & Failover

Load Balancing & Failover, or LBFO as it’s known, was the #1 feature I was looking forward to in Server 2012.

And boy did Microsoft deliver.

LBFO is a driver/framework that takes whatever NICs you have, “teams” them, applies a mature & resilient multiplexor driver to them, and gives you redundancy & performance in just a few clicks or powershell cmdlets. Let’s do GUI for the team, and later on, we’ll use Powershell to build a switch on that team.

Sidenote: Don't bother applying IP addresses or VLANs to your LBFO-destined physical NICs at this point. Do bother installing your manufacturer's latest driver, or hacking one on as I've had to do with my new ag_node_1 Intel NIC. (SideSideNote: as this blogger states, Intel can eat a bag of d**** for dropping so many NICs from Server 2012 support. Broadcom, for all the hassles I've had with them, still updates drivers on four year old cards!)

On SMB1 from the above schematic, I’ve got five gigabit NICs. One is a RealTek on the motherboard, and the other four are Intel; 1-4 on a PCIe Quad Gigabit network card, i350 x4 I believe.

nics1

The RealTek NIC has a static IP and is my management interface for the purposes of this labworks. We’ll only be teaming the four Intel NICs here. Be sure to leave at least one of your NICs out of the LBFO team unless you are sitting in front of your server console; you can always add it in later.

Launch Server Manager in the GUI and click on “All Servers,” then right click on SMB1 and select Configure NIC Teaming:

nics2

A new window will emerge, titled NIC Teaming.

In the NIC Teaming window, notice on the right the five GbE adapters you have and their status (Green Arrow). Click on “Tasks” and select “New Team” (Red Arrow):

nics3

The New Team window is where all the magic happens. Let’s pause for a moment and go to our switch.

On my old 2960s, we're building LACP-flavored port channels by using the "channel-group _ mode active" command, which tells the switch to use the genuine-article LACP/802.1AX protocol (formerly 802.3ad) rather than the older Cisco-proprietary Port Aggregation Protocol (PAgP) system, which is activated by running "channel-group _ mode auto."

However, if you have a newer switch, perhaps a nice little SG300 or something similar, PAgP is dead and not available to you, but the process for LACP is like the old PAgP command: "channel-group _ mode auto" will turn on LACP.

Here’s the 2960s process. Note that my Intel NICs are plugged into Gig 1/0/20-23, with spanning-tree portfast enabled (which we’ll change once our Converged virtual switch is built):

daisettalabs.net#show run int gig 1/0/20
Building configuration...

Current configuration : 63 bytes
!
interface GigabitEthernet1/0/20
spanning-tree portfast

daisettalabs.net#conf t
Enter configuration commands, one per line. End with CNTL/Z.
daisettalabs.net(config)#int range gig 1/0/20-23
daisettalabs.net(config-if-range)#description SMB1 TEAM
daisettalabs.net(config-if-range)#speed 1000
daisettalabs.net(config-if-range)#duplex full
daisettalabs.net(config-if-range)#channel-group 3 mode active
daisettalabs.net(config-if-range)#switchport mode trunk
daisettalabs.net(config-if-range)#
daisettalabs.net(config-if-range)#do wr
Building configuration...
[OK]

Presto! That wasn’t so hard was it?

Note that I’ve trunked all four interfaces; that’s important in Hyper-V Converged switching. We’ll need to trunk po3 as well. 

Let’s take a look at our new port channel:

daisettalabs.net(config-if-range)#do show run int po3
Building configuration…

Current configuration : 54 bytes
!
interface Port-channel3
switchport mode trunk
end

daisettalabs.net(config-if-range)#

Now let’s check the state of the port channel:

daisettalabs.net#show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 3
Number of aggregators:           3

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)       LACP        Gi1/0/1(P)  Gi1/0/2(P)  Gi1/0/3(P)
2      Po2(SU)       LACP        Gi1/0/11(D) Gi1/0/13(P) Gi1/0/14(P) Gi1/0/15(P) Gi1/0/16(P)
3      Po3(SD)       LACP        Gi1/0/19(s) Gi1/0/20(D) Gi1/0/21(s) Gi1/0/22(s) Gi1/0/23(D)

po3 is in total disarray, but not for long. Back on SMB1, it’s time to team those NICs:

nic5

I'm a fan of naming conventions even if this screenshot doesn't show it; all teams on all hosts have the same "Daisetta-Team" name, and I usually rename NICs as well, but honestly, you could go mad trying to understand why Windows names NICs the way it does (Seriously. It's a Thing). There's no /dev/eth0 for us in Microsoft-land, it's always something obscure and strange and out-of-sequence, which is part of the reason why Converged Switching & LBFO kick ass; who cares what your interfaces are named so long as they are identically configured?

If you don’t have an LACP-capable switch, you’ll select “Switch Independent” here.

As for Load Balancing modes: in server 2012, you get Address Hash (Source/Dest MAC or IP in Layer 3 LACP), or Hyper-V Port, which is sort of a round-robin approach (VM1 goes to one port in the team, VM2 to the other).

I prefer the new (with 2012 R2) Dynamic mode which negotiates with the physical switch. More color on those choices & what they mean for you in the References section at the bottom.

Press ok, sit back, and watch my gifcam shot:

Mmmm, taste the convergence.
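And if you'd rather skip the GUI entirely, the whole team is one cmdlet; a sketch that assumes the four Intel NICs show up as Ethernet 2 through Ethernet 5 (yours will be named something equally arbitrary):

# LACP team across the four Intel ports, using 2012 R2's Dynamic load-balancing mode
New-NetLbfoTeam -Name "Daisetta-Team" -TeamMembers "Ethernet 2","Ethernet 3","Ethernet 4","Ethernet 5" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Swap in -TeamingMode SwitchIndependent if your switch can't do LACP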

2:4 Build a Switch on top of that team & Next Steps

If you’ve ever built a switch for Hyper-V, you’ll find building the converged switch immediately familiar, save for one technicality: you’re going to build a switch on top of that multiplexor driver you just created!

Sounds scary? Perhaps. I’ll go into some of the intricacies and gotchas and show some cool powershell bits ‘n bobs on the next episode of Labworks.

Eventually we’re going to dangle all sorts of things off this virtual switch-atop-a-multiplexor-driver!

nic6

 

Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary
Windows Server 2012 LBFO Whitepaper, Microsoft, Must-have though a bit dated at this point
Etherchannel Considerations, Jeremy Stretch at Packetlife.net, Great overview on Cisco aggregation tech including LACP and PAgp

VLAN Tricks with NICS – Teaming & Hyper-V, Keith Mayer, LBFO + VLANs – Hyper-V = still a win

[/table]

Fresh ZFS on Hyper-V: Nexenta 4.01 CE, or I have my evening planned

nexenta4

There’s been some changes in the Daisetta Lab.

Details soon, but sometimes, the fun just can’t wait on my ability to blog it.

Here’s a hint:

  • Nexenta 4.01 Community Edition LIVE
  • 2x Samsung 256GB SSD in RAID 0
  • 3x HGST 2TB in RAID 0
  • Core i5-4670k
  • 16GB RAM on the VM
  • Hyper-V 3.0 & a Legacy virtual NIC because, sadly, Nexenta doesn’t see the Hyper-V Synthetic NIC
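Swapping the synthetic NIC for the legacy one is a two-liner; a sketch, with the VM name as an example and assuming a Generation 1 VM that's powered off:

# Nexenta's installer doesn't see the synthetic adapter, so remove it and add the legacy (emulated) one
Remove-VMNetworkAdapter -VMName "Nexenta4CE" -Name "Network Adapter"
Add-VMNetworkAdapter -VMName "Nexenta4CE" -IsLegacy $true -SwitchName "Daisetta-Converged"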

The Sammy SSDs put in this outstanding effort last night as I was breaking down into fits of maniacal laughter:

2ssdraid0

1GB/second writes is impressive, but what has ZFS taught us through the course of Labworks 1?

The ARC is still the king.

Labworks #1: Building a durable, performance-oriented ZFS box for Hyper-V, VMware

Welcome to my first Labworks post in which I test, build & validate a ZFS storage solution for my home Hyper-V & VMware lab.

Be sure to check out the followup lab posts on this same topic in the table below!

[table]

Labworks Chapter, Section, Subject, Title & URL

Labworks 1:, 1, Storage, Building a Durable and Performance-Oriented ZFS Box for Hyper-V & VMware

,2-3, Storage, I Heart the ARC & Let’s Pull Some Drives!

[/table]

Labworks  #1: Building a durable, performance-oriented ZFS box for Hyper-V, VMware

Primary Goal: To build a durable and performance-oriented storage array using Sun's fantastic, 128-bit, high-integrity Zettabyte File System for use with Lab Hyper-V CSVs & Windows clusters, VMware ESXi 5.5, and other hypervisors.

 

The ARC: My RAM makes your SSD look like a couple of old, wheezing 15k drives

Secondary Goal: Leverage consumer-grade SSDs to increase/multiply performance by using them as ZFS Intent Log (ZIL) write-cache and L2ARC read cache

Bonus: The Windows 7 PC in the living room that’s running Windows Media Center with CableCARD & HD Home Run was running out of DVR disk space and can’t record to SMB shares but can record to iSCSI LUNs.

Technologies used: iSCSI, MPIO, LACP, Jumbo Frames, IOMETER, SQLIO, ATTO, Robocopy, CrystalDiskMark, FreeBSD, NAS4Free, Windows Server 2012 R2, Hyper-V 3.0, Converged switch, VMware, standard switch, Cisco SG300

Schematic: 

Click for larger.

Hardware Notes:
[table]
System, Motherboard, Class, CPU, RAM, NIC, Hypervisor
Node-1, Asus Z87-K, Consumer, Haswell i-5, 24GB, 2x1GbE Intel I305, Hyper-V
Node-2, Biostar HZZMU3, Consumer, Ivy Bridge i-7, 24GB, 2x1GbE Broadcom BC5709C, Hyper-V
Node-3, MSI 760GM-P23, Consumer, AMD FX-6300, 16GB, 2x1GbE Intel i305, ESXi 5.5
san2, Gigabyte GA-F2A88XM-D3H, Consumer, AMD A8-5500, 24GB, 4x1GbE Broadcom BC5709C, NAS4Free
sw01, Cisco SG300-10 Port, Small Business, n/a, n/a, 10x1GbE, n/a
[/table]

Array Setup:

I picked the Gigabyte board above because it’s got an outstanding eight SATA 6Gbit ports, all running on the native AMD A88x Bolton-D4 chipset, which, it turns out, isn’t supported well in Illumos (see Lab Notes below).

I added to that a cheap $20 Marvell 9128se two-port SATA 6gbit PCIe card, which hosts the boot volume & the SanDisk SSD.

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Zpool Members

Samsung 840 EVO SSD, 1, 128GB, 512byte, 250MB/read, L2ARC Read Cache

SanDisk Ultra Plus II SSD, 1, 128GB, 512byte, 250MB/read & 250MB/write?, ZIL

Seagate 2.5″ Momentus, 1, 500GB, 512byte, 80MB/r/w, Boot/swap/system

[/table]

Performance Tests:

I’m not finished with all the benchmarking, which is notoriously difficult to get right, but here’s a taste. Expect a followup soon.

All shots below involved lzp2 compression on SAN2

SQLIO Short Test: 

sqlio lab 1 short test
Obviously seeing the benefit of ZFS compression & ARC at the front end. IOPS become more realistic toward the middle and right as read cache is exhausted. Consistently around 150-240MB/s though, the limit of two 1GbE cables.

 

ATTO standard run:

atto
I've got a big write problem somewhere. Is it the ZIL, which doesn't seem to be performing under BSD as it did under Nexenta? Something else? Could also be related to the Test Volume being formatted NTFS 64kb. Still trying to figure it out.

 

NFS Tests:

None so far. From a VMware perspective, I want to rebuild the Standard switch as a distributed switch now that I’ve got a VCenter appliance running. But that’s not my priority at the moment.

Durability Tests:

Pulled two drives -the limit on RAIDZ2- under normal conditions. Put them back in, saw some alerts about the “administrator pulling drives” and the Zpool being in a degraded state. My CSVs remained online, however. Following a short zpool online command, both drives rejoined the pool and the degraded error went away.

Fun shots:

Because it’s not all about repeatable lab experiments. Here’s a Gifcam shot from Node-1 as it completely saturates both 2x1GbE Intel NICs:

test

and some pretty blinking lights from the six 2.5″ drives:

0303141929-MOTION

Lab notes & Lessons Learned:

First off, I’d like to buy a beer for the unknown technology enthusiast/lab guy who uttered these sage words of wisdom, which I failed to heed:

You buy cheap, you buy twice

Listen to that man, would you? Because going consumer, while tempting, is not smart. Learn from my mistakes: if you have to buy, buy server boards.

Secondly, I prefer NexentaStor to NAS4Free with ZFS, but like others, I worry about and have been stung by Open Solaris/Illumos hardware support. Most of that is my own fault, cf the note above, but still: does Illumos have a future? I'm hopeful; NexentaStor is going to appear at next month's Storage Field Day 5, so that's a good sign, and version 4.0 is due out anytime.

The Illumos/Nexenta command structure is much more intuitive to me than FreeBSD. In place of your favorite *nix commands, Nexenta employs some great, verb-noun show commands, and dtrace, the excellent diagnostic/performance tool included in Solaris is baked right into Nexenta. In NAS4Free/FreeBSD 9.1, you’ve got to add a few packages to get the equivalent stats for the ARC, L2ARC and ZFS, and adding dtrace involves a make & kernel modification, something I haven’t been brave enough to try yet.

Next: Jumbo Frames for the win. From Node-1, the desktop in my office, my Core i5-4670k CPU would regularly hit 35-50% utilization during my standard SQLIO benchmark before I configured jumbo frames from end-to-end. Now, after enabling Jumbo frames on the Intel NICs, the Hyper-V converged switch, the SG-300 and the ZFS box, utilization peaks at 15-20% during the same SQLIO test, and the benchmarks have shown an increase as well. Unfortunately in FreeBSD world, adding jumbo frames is something you have to do on the interface & routing table, and it doesn't persist across reboots for me, though that may be due to a driver issue on the Broadcom card.
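On the Windows side of that end-to-end path, the jumbo settings can all be handled from PowerShell rather than regedit (that's what the Van der Peijl link below is about); a sketch, with adapter names and the test IP as examples, since the *JumboPacket value you need varies a bit by driver:

# Physical Intel NICs and Hyper-V vEthernets both take the advanced-property route
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "vEthernet (iSCSI-1)" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify, then test the path to the ZFS box with a do-not-fragment ping
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" | Format-Table Name, RegistryValue -AutoSize
ping 172.16.10.1 -f -l 8972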

The Western Digital 2.5″ drives aren’t stellar performers and they aren’t cheap, but boy are they quiet, well-built, and run cool, asking politely for only 1 watt under load. I’ve returned the hot, loud & failure prone HGST 3.5″ 2 TB drives I borrowed from work; it’s too hard to put them in a chassis that’s short-depth.

Lastly, ZFS' adaptive replacement cache, which I've enthused over a lot in recent weeks, is quite the value & performance-multiplier. I've tested Windows Server 2012 R2 Storage Spaces' tiered storage model, and while I was impressed with its responsiveness, ReFS, and ability to pool storage in interesting ways, nothing can compete with ZFS' ARC model. It's simply awesome; deceptively-simple, but awesome.

Lesson is that if you’re going to lose an entire box to storage in your lab, your chosen storage system better use every last ounce of that box, including its RAM, to serve storage up to you. 2012 R2 doesn’t, but I’m hopeful soon that it may (Update 1 perhaps?)

Here's a cool screenshot from Nexenta, my last build before I re-did everything, showing ARC hits following a cold boot of the array (top), and a few days later, when things are really cooking for my stored Hyper-V VMs, which are getting tagged with ZFS' "Most Frequently Used" category and thus getting the benefit of fast RAM & L2ARC:

cache

Next Steps:

  • Find out why my writes suck so bad.
  • Test Nas4Free’s NFS performance
  • Test SMB 3.0 from a virtual machine inside the ZFS box
  • Sell some stuff so I can buy a proper SLC SSD drive for the ZIL
  • Re-build the rookie Standard Switch into a true Distributed Switch in ESXi

Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary
Three Example Home Lab Storage Designs using SSDs and Spinning Disk, Chris Wahl, Good piece on different lab storage models
ZFS, Wikipedia, Great overview of ZFS history and features
Activity of the ZFS Arc, Brendan Gregg, Excellent overview of ZFS’ RAM-as-cache
Hybrid Storage Pool Performance, Brendan Gregg, Details ZFS performance
FreeBSD Jumbo Frames, NixCraft, Applying MTU correctly
Hyper-V vEthernet Jumbo Frames, Darryl Van der Peijl, Great little powershell script to keep you out of regedit
Nexenta Community Edition 3.1.5, NexentaStor, My personal preference for a Solaris-derived ZFS box
Nas4Free, Nas4Free.org, FreeBSD-based ZFS; works with more hardware
[/table]