Microsoft is the original & ultimate hyperconverged play

The In Tech We Trust Podcast has quickly become my favorite enterprise technology podcast since it debuted late last year. If you haven’t tuned into it yet, I advise you to add the RSS feed to your podcast player of choice ASAP.

The five gents ((Nigel Poulton, Linux trainer at Pluralsight; Hans De Leenheer, datacenter/storage guy and one of my secret crushes; Gabe Chapman; Marc Farley; and Rick Vanover)) putting on the podcast are among the sharpest guys in infrastructure technology, have great on-air chemistry with each other, and consistently deliver an organized & smart format that hits my player on time every week. Oh, and they’ve equalized the Skype audio feeds too!

And yet….I can’t let the analysis in the two most recent shows slip by without comment. Indeed, it’s time for some tough love for my favorite podcast.

Guys, you totally missed the mark discussing hyperconvergence & Microsoft over the last two shows!

For my readers who haven’t listened, here’s the compressed & deduped rundown of 50+ minutes of good stimulating conversation on hyperconvergence:

  • There’s little doubt in 2015 that hyperconverged infrastructure (HCI) is a durable, real thing in enterprise technology, that it’s changing the industry, and that customers are adopting it. HCI is not a fad.
  • But if HCI is real, it’s also different things to different people: for Hans, it’s about scale-out, node-based architecture; for others on the show, it’s more or less the industry definition of unified compute & storage with automation & management APIs and a GUI framework over the top.
  • But that loose definition is also evolving; as Rick Vanover sharply pointed out, EMC’s new offering, vSpex Blue, offers something more than what we’d traditionally (like two weeks ago) think of as hyperconvergence.

Good stuff and good discussion.

And then the conversation turned to Microsoft. And it all went downhill. A summary of the guys’ views:

  • Microsoft doesn’t have a hyperconverged pony in the race, except perhaps Storage Spaces, which few like/adopt/bet on/understand
  • MS has ceded this battlefield to VMware
  • None of the cool & popular hyperconverged kids, save for Nutanix and Gridstore, want to play with Microsoft
  • Microsoft has totally blown this opportunity to remain relevant, and Hyper-V is too hard. Marc Farley in particular emphasized how badly Microsoft has blown hyperconvergence

I was, you might say, frustrated as I listened to this sentiment on the drive into my office today. My two cents below:

The appeal of Hyperconvergence is a two-sided coin. On the one side are all the familiar technical & operational benefits that are making it a successful and interesting part of the market.

  • It’s an appliance: Technical complexity and (hopefully) dysfunction are ironed out by the vendor so that storage/compute/network just work
  • It’s Easy: Simple to deploy, maintain, manage
  • It’s software-based and it’s evolving to offer more: As the guys on the show noted, newer HCI systems are offering more than ones released 6 months or a year ago.

The other side of that coin is less talked about, but no less powerful. HCI systems are rational cost centers, and the success of HCI marks a subtle but important shift in IT & in the market.

  • It’s a predictable check cut to fewer vendors: Hyperconvergence is also about vendor consolidation in IT shops that are under pressure to make costs predictable and smoother (not just lower).
  • It’s something other than best-of-breed: The success of HCI systems also suggests that IT shops may be shying away from best-of-breed purchasing habits and warming up to a more strategic one-throat-to-choke approach ((EMC & VMware, for instance, are titans in the industry, with best-in-class products in storage & virtualization, yet I can’t help but feel there’s more going on than the chattering classes realize. Step back and think of all the new stuff in vSphere 6, and couple it with all the old stuff that’s been rebranded as new in the last year or so by VMware. Of all that ‘stuff’, how much is best of breed, and how much of it is decent enough that a VMware customer can plausibly buy it and offset spend elsewhere?))
  • It’s some hybrid of all of the above: HCI in this scenario allows IT to have its cake and eat it too, maybe through vendor consolidation, or cost-offsets. Hard to gauge but the effect is real I think.

((As Vanover noted, EMC’s value-adds on the vSpex Blue architecture are potentially huge: if you buy vSpex Blue, you get backup & replication, which means you don’t have to talk to or cut yearly checks to Commvault, Symantec or Veeam. I’ve scored touchdowns using that exact same play, embracing less-than-best Microsoft products that do the same thing as best-in-class SAN licenses))

And that’s where Microsoft enters the picture as the original -and ultimate- Hyperconverged play.

Like any solid HCI offering, Microsoft makes your hardware less important by abstracting it, but where Microsoft differs is that it scopes supported solutions to x86, full stop. VMware, in contrast, only hands out EVO:RAIL stickers to hardware vendors who dress x86 up and call it an appliance, which is more or less the Barracuda Networks model. ((I’m sorry. I know that was a cheap shot, but I couldn’t resist))

With your vanilla, Plain Jane whitebox x86 hardware, you can then use Microsoft’s hyperconverged software system (or what I think of as Windows Server) to virtualize & abstract all the things, from network (solid NFV & an evolving overlay/SDN controller) to compute to storage, which features tiering, fault-tolerance, scale-out, and other features usually found in traditional SAN systems.
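
To make the storage piece concrete, here’s a minimal sketch of what tiered, mirrored Storage Spaces looks like on a stock 2012 R2 box. The pool, tier, and volume names below are hypothetical placeholders, and the tier sizes are stand-ins you’d adjust to your disks; this is a sketch, not a production recipe.

```powershell
# Minimal sketch: pool the local disks, define SSD/HDD tiers, and carve out a
# mirrored, tiered virtual disk on Windows Server 2012 R2.
# "LabPool", "CSV01", and the tier sizes are hypothetical placeholders.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "LabPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

$ssdTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror
```

That’s the whole “SAN”: commodity disks, one pool, with tiering and resiliency handled in software.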

But it doesn’t stop there. That same software powers services in an enormous IaaS/PaaS cloud, which works hand-in-hand with a federated productivity cloud that handles identity, messaging, data-mining, mail, and more. The IaaS cloud, by the way, offers DR capabilities today, and you can connect to it via routing & IPsec, or you can extend your datacenter’s layer 2 broadcast domain into it if you like.

On the management/automation side, I understand and sympathize with the ignorance of non-‘softies. Microsoft fans enthuse about PowerShell so much because it is -today- a unified management system across a big chunk of the MS stack, whether masked by GUI systems like System Center & Azure Pack or exposed as naked cmdlets. PowerShell alone isn’t the cool part, though; PowerShell & Windows Server aligned with truly open management frameworks like CIM, SMI-S, and WBEM is very cool, especially in contrast to feature-packed but closed APIs.
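
If you’ve never touched it, here’s a quick taste of that verb-noun consistency; these are stock cmdlets from modules that ship in the box on 2012 R2, nothing exotic, shown purely as illustration:

```powershell
# The same Verb-Noun grammar spans compute, storage, and network.
Get-VM              # Hyper-V guests
Get-StoragePool     # Storage Spaces pools
Get-NetAdapter      # physical & virtual NICs

# Discoverability is the point: ask the shell what it knows how to do.
Get-Command -Verb Get -Module Hyper-V, Storage, NetAdapter | Measure-Object
```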

On the cost side, there’s even more to the MS hyperconverged story: customers can buy what is in effect a single SKU (the Enterprise Agreement) and get access to most, if not all, of the MS stack.

Usually, organizations pay for the EA in small, easier-to-digest bites over a three-year span, which the CFO likes because it’s predictable & smooth. ((Now, of course, I’m drastically simplifying Microsoft’s licensing regime and the process of buying an EA; you can’t add an EA to your cart & check out, it’s a friggin’ negotiation. And yes, I know everyone hates the true-up. And I grant that an EA just answers the software piece; organizations will still need the hardware, but I’d argue that de-coupling software from hardware makes purchasing the latter much, much easier, and how much hardware do you really need if you have Azure IaaS to fill in the gaps?))

Are all these Microsoft things you’ve bought best of breed? No, of course not. But you knew that ahead of time, if you did your homework.

Are they good enough in a lot of circumstances?

I’ll let you judge that for yourself, but, speaking from experience here, IT shops that go down the MS/EA route strategically do end up in the same magical, end-of-the-rainbow fairy-tale place that buyers of HCI systems are seeking.

That place is pretty great, let me tell you. It’s a place where spend & costs are more predictable and bigger checks are cut to fewer vendors. It’s a place where there are fewer debutante hardware systems fighting each other and demanding special attention & annual maintenance/support renewals in the datacenter. It’s also a place where you can manage things by learning verb-noun pairs in PowerShell.

If that’s not the ultimate form of hyperconvergence, what is?

Thoughts on EVO:RAIL

So if you work in IT, and even better, if you’re in the virtualization space of IT as I am, you have to know that VMworld is happening this week.

VMworld is just about the biggest vCelebration of vTechnologies there is. Part trade-show, part pilgrimage, part vLollapalooza, VMworld is where all the sexy new vProducts are announced by VMware, makers of ESXi, vSphere, vCenter, and so many other vThings.

It’s an awesome show…think MacWorld at the height of Steve Jobs but with fewer hipsters and way more virtualization engineers. Awesome.

And I’ve never been :sadface:

And 2014’s VMworld was a doozy. You see, the vGiant announced a new 2U, four-node vSphere & vSAN cluster-in-a-box hardware device called EVO:RAIL. I’ve been reading all about EVO:RAIL for the last two days, and here’s what I think as your loyal Hyper-V blogger:

  • What’s in a name? Right off the bat, I was struck by the name for this appliance. EVO:RAIL…say what? What’s VMware trying to get across here? Am I to associate EVO with the fast Mitsubishi Lancers of my youth, or is this EVO in the more Manga/Anime sense of the word? Taken together, EVO:RAIL also calls to mind sci-fi, does it not? You could picture Lt. Cmdr. Data talking about an EVO:RAIL to Cmdr. Riker, as in “The Romulan bird of prey is outfitted with four EVO:RAIL phase cannons, against which the Enterprise’s shields stand no chance.” Speaking of guns: I also thought of the US Navy’s railguns, long-range kinetic weapons designed to destroy the enemy (Nutanix and SimpliVity, in this metaphor).
  • If you’re selling an appliance, do you need vExperts? One thing that struck me about VMware’s introduction of EVO:RAIL was their emphasis on how simple it is to rack, stack, install, deploy, and virtualize. They claim the “hyper-converged” 2U box can be up and running in about 15 minutes; a full rack of these babies could be computing for you in less than 2 hours. They’ve built a sexy HTML5 GUI to manage the thing, no vSphere console or PowerCLI in sight. It’s all pre-baked, pre-configured, and pre-built for you, the small-to-medium enterprise. It’s so simple a help desk guy could set it up. So with all that said, do I still need to hire vExperts and VCDX pros to build out my virtualization infrastructure? It would appear not. Is that the message VMware is trying to convey here?
  • One SKU for the Win: I can’t be the only one that thinks buying the VMware stack is a complicated & time-consuming affair. Chris Wahl points out that EVO:RAIL is one SKU, one invoice, one price to pay, and VMware’s product page confirms that, saying you can buy a Dell EVO:RAIL or a Fujitsu EVO:RAIL, but whatever you buy, it’ll be one SKU. This is really nice. But why? VMware is famous for licensing its best-in-class features…why mess with something that’s worked so well for them?
    Shades of Azure simplicity here

    One could argue that EVO:RAIL is a reaction to simplified pricing structures on rival systems…let’s be honest with ourselves. What’s more complicated: buying a full vSphere and/or vHorizon suite for a new four node cluster, or purchasing the equivalent amount of computing units in Azure/AWS/Google Compute? What model is faster to deploy, from sales call to purchasing to receiving to service? What model probably requires consulting help?

    Don’t get me wrong, I think it’s great. I like simple menus, and whereas buying VMware stuff before was like choosing from a complicated, multi-page, multi-entree menu, now it’s like buying burgers at In ‘n Out. That’s very cool, but it means something has changed in vLand.

  • I love the density: As someone who’s putting the finishing touches on my own new virtualization infrastructure, I love the density in EVO:RAIL. Two rack units with E5-26xx class Xeons packing 6 cores each means you can pack about 48 cores into 2U (4 nodes x 2 sockets x 6 cores)! Not bad, not bad at all. The product page also says you can have up to 16TB of storage in those same 2U (courtesy of VSAN), and while you still need a ToR switch to jack into, each node has 2x10GbE SFP+ or copper. Which is excellent. RAM is the only thing that’s a bit constrained; each node in an EVO:RAIL can only hold 192GB of RAM, a total of 768GB per EVO:RAIL. In comparison, my beloved 2U pizza boxes offer more density in some places, but less overall, given that 1 pizza box = one node. In the Supermicros I’m racking up later this week, I can match the core count (4×12-core E5-46xx), improve upon the RAM (up to 1TB per node) and easily surpass the 16TB of storage. That’s all in 2U and all for about $15-18k. Where the EVO:RAIL appears to really shine is in VM/VDI density: VMware claims a single EVO:RAIL is built to support 100 general-purpose VMs or up to 250 VDI sessions, which is f*(*U#$ outstanding.
  • I wonder if I can run Hyper-V on that: Of course I thought that. Because that would really kick ass if I could.

Overall, a mighty impressive showing from VMware this week. Like my VMware colleagues, I pine for an EVO:RAIL in my lab.

I think EVO:RAIL points to something bigger, though. This product marks a shift in VMware’s thinking, a strategic reaction to changes in the marketplace. This is not just a play against Nutanix and other hyper-converged vendors, but against the simplicity and non-specialist nature of cloud Infrastructure as a Service. This is a play against complexity, in other words; this is VMware telling the marketplace that you can have best-in-class virtualization without worst-in-class licensing pain and without hiring vExperts to help you deploy it.

Labworks #1: Building a durable, performance-oriented ZFS box for Hyper-V, VMware

Welcome to my first Labworks post in which I test, build & validate a ZFS storage solution for my home Hyper-V & VMware lab.

Be sure to check out the followup lab posts on this same topic in the table below!

[table]

Labworks Chapter, Section, Subject, Title & URL

Labworks 1:, 1, Storage, Building a Durable and Performance-Oriented ZFS Box for Hyper-V & VMware

, 2-3, Storage, I Heart the ARC & Let’s Pull Some Drives!

[/table]


Primary Goal: To build a durable and performance-oriented storage array using Sun’s fantastic, 128-bit, high-integrity Zettabyte File System, for use with lab Hyper-V CSVs & Windows clusters, VMware ESXi 5.5, and other hypervisors.

 

The ARC: My RAM makes your SSD look like a couple of old, wheezing 15k drives

Secondary Goal: Leverage consumer-grade SSDs to increase/multiply performance by using them as ZFS Intent Log (ZIL) write-cache and L2ARC read cache

Bonus: The Windows 7 PC in the living room, which runs Windows Media Center with CableCARD & HDHomeRun, is running out of DVR disk space; it can’t record to SMB shares, but it can record to iSCSI LUNs.

Technologies used: iSCSI, MPIO, LACP, jumbo frames, IOMETER, SQLIO, ATTO, Robocopy, CrystalDiskMark, FreeBSD, NAS4Free, Windows Server 2012 R2, Hyper-V 3.0 with a converged switch, VMware with a standard switch, Cisco SG300
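
For flavor, here’s roughly what the Windows side of the iSCSI + MPIO plumbing looks like on a 2012 R2 Hyper-V node. The portal addresses are hypothetical stand-ins for san2’s interfaces, and this is a sketch rather than my exact runbook:

```powershell
# Sketch: connect a 2012 R2 Hyper-V node to the ZFS box over iSCSI with MPIO.
# The two portal addresses are hypothetical placeholders for san2's NICs.
Install-WindowsFeature Multipath-IO          # a reboot may be required
Enable-MSDSMAutomaticClaim -BusType iSCSI    # let MPIO claim iSCSI disks

# Register one portal per 1GbE path so each path gets its own session
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.50
New-IscsiTargetPortal -TargetPortalAddress 10.0.20.50

# Connect every discovered target persistently, with multipath enabled
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```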

Schematic: 


Hardware Notes:
[table]
System, Motherboard, Class, CPU, RAM, NIC, Hypervisor
Node-1, Asus Z87-K, Consumer, Haswell i-5, 24GB, 2x1GbE Intel I305, Hyper-V
Node-2, Biostar HZZMU3, Consumer, Ivy Bridge i-7, 24GB, 2x1GbE Broadcom BC5709C, Hyper-V
Node-3, MSI 760GM-P23, Consumer, AMD FX-6300, 16GB, 2x1GbE Intel i305, ESXi 5.5
san2, Gigabyte GA-F2A88XM-D3H, Consumer, AMD A8-5500, 24GB, 4x1GbE Broadcom BC5709C, NAS4Free
sw01, Cisco SG300-10 Port, Small Business, n/a, n/a, 10x1GbE, n/a
[/table]

Array Setup:

I picked the Gigabyte board above because it’s got eight SATA 6Gbit ports, an outstanding number for a consumer board, all running on the native AMD A88X Bolton-D4 chipset, which, it turns out, isn’t supported well in Illumos (see Lab Notes below).

I added to that a cheap $20 Marvell 9128-based two-port SATA 6Gbit PCIe card, which hosts the boot volume & the SanDisk SSD.

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Zpool Members

Samsung 840 EVO SSD, 1, 128GB, 512byte, 250MB/read, L2ARC Read Cache

SanDisk Ultra Plus II SSD, 1, 128GB, 512byte, 250MB/read & 250MB/write?, ZIL

Seagate 2.5″ Momentus, 1, 500GB, 512byte, 80MB/r/w, Boot/swap/system

[/table]

Performance Tests:

I’m not finished with all the benchmarking, which is notoriously difficult to get right, but here’s a taste. Expect a followup soon.

All shots below involved lzp2 compression on SAN2

SQLIO Short Test: 

sqlio lab 1 short test
Obviously seeing the benefit of ZFS compression & ARC at the front end. IOPS become more realistic toward the middle and right as the read cache is exhausted. Consistently around 150-240MB/s, though, which is the limit of two 1GbE cables.
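
For context, a “short test” here is an SQLIO run along these lines; the exact flags and target file below are illustrative guesses, not a transcript of my command:

```powershell
# Hypothetical SQLIO invocation: 8KB random reads, 4 threads, 8 outstanding
# I/Os, 120 seconds, latency stats, buffering disabled. Assumes T:\testfile.dat
# already exists on the iSCSI-backed test volume.
.\sqlio.exe -kR -frandom -b8 -o8 -t4 -s120 -LS -BN T:\testfile.dat
```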

 

ATTO standard run:

atto
I’ve got a big write problem somewhere. Is it the ZIL, which doesn’t seem to be performing under BSD as it did under Nexenta? Something else? It could also be related to the test volume being formatted NTFS with a 64KB allocation unit size. Still trying to figure it out.

 

NFS Tests:

None so far. From a VMware perspective, I want to rebuild the standard switch as a distributed switch now that I’ve got a vCenter appliance running. But that’s not my priority at the moment.

Durability Tests:

Pulled two drives -the limit on RAIDZ2- under normal conditions. Put them back in, saw some alerts about the “administrator pulling drives” and the Zpool being in a degraded state. My CSVs remained online, however. Following a short zpool online command, both drives rejoined the pool and the degraded error went away.

Fun shots:

Because it’s not all about repeatable lab experiments. Here’s a Gifcam shot from Node-1 as it completely saturates both 2x1GbE Intel NICs:

test

and some pretty blinking lights from the six 2.5″ drives:

0303141929-MOTION

Lab notes & Lessons Learned:

First off, I’d like to buy a beer for the unknown technology enthusiast/lab guy who uttered these sage words of wisdom, which I failed to heed:

You buy cheap, you buy twice

Listen to that man, would you? Because going consumer, while tempting, is not smart. Learn from my mistakes: if you have to buy, buy server boards.

Secondly, I prefer NexentaStor to NAS4Free with ZFS, but like others, I worry about, and have been stung by, OpenSolaris/Illumos hardware support. Most of that is my own fault (cf. the note above), but still: does Illumos have a future? I’m hopeful: NexentaStor is going to appear at next month’s Storage Field Day 5, which is a good sign, and version 4.0 is due out anytime.

The Illumos/Nexenta command structure is much more intuitive to me than FreeBSD’s. In place of your favorite *nix commands, Nexenta employs some great verb-noun show commands, and DTrace, the excellent diagnostic/performance tool included in Solaris, is baked right into Nexenta. In NAS4Free/FreeBSD 9.1, you’ve got to add a few packages to get the equivalent stats for the ARC, L2ARC, and ZFS, and adding DTrace involves a make & kernel modification, something I haven’t been brave enough to try yet.

Next: jumbo frames for the win. From Node-1, the desktop in my office, my Core i5-4670K CPU would regularly hit 35-50% utilization during my standard SQLIO benchmark before I configured jumbo frames end-to-end. Now, after enabling jumbo frames on the Intel NICs, the Hyper-V converged switch, the SG300, and the ZFS box, utilization peaks at 15-20% during the same SQLIO test, and the benchmarks have shown an increase as well. Unfortunately, in FreeBSD world, adding jumbo frames is something you have to do on the interface & the routing table, and it doesn’t persist across reboots for me, though that may be due to a driver issue on the Broadcom card.
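
On the Windows/Hyper-V side, by contrast, the jumbo frame change persists and amounts to a couple of cmdlets. A minimal sketch follows; the adapter and vSwitch names (“Ethernet”, “ConvergedSwitch”) are hypothetical placeholders, so adjust to your own lab:

```powershell
# Sketch: enable jumbo frames on the physical NIC and the converged-switch
# vEthernet adapter, then verify. Adapter names are hypothetical.
Set-NetAdapterAdvancedProperty -Name "Ethernet" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

Set-NetAdapterAdvancedProperty -Name "vEthernet (ConvergedSwitch)" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Confirm the setting, then send an unfragmented jumbo ping to the ZFS box
Get-NetAdapterAdvancedProperty | Where-Object RegistryKeyword -eq '*JumboPacket' |
    Format-Table Name, RegistryValue
ping san2 -f -l 8500
```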

The Western Digital 2.5″ drives aren’t stellar performers and they aren’t cheap, but boy are they quiet and well-built, and they run cool, asking politely for only 1 watt under load. I’ve returned the hot, loud & failure-prone HGST 3.5″ 2TB drives I borrowed from work; it’s too hard to fit them in a short-depth chassis.

Lastly, ZFS’ adaptive replacement cache, which I’ve enthused over a lot in recent weeks, is quite the value & performance multiplier. I’ve tested Windows Server 2012 R2 Storage Spaces’ tiered storage model, and while I was impressed with its responsiveness, ReFS, and ability to pool storage in interesting ways, nothing can compete with ZFS’ ARC model. It’s simply awesome; deceptively simple, but awesome.

The lesson is that if you’re going to lose an entire box to storage in your lab, your chosen storage system had better use every last ounce of that box, including its RAM, to serve storage up to you. 2012 R2 doesn’t, but I’m hopeful that it soon may (Update 1, perhaps?).

Here’s a cool screenshot from Nexenta, my last build before I re-did everything, showing ARC hits following a cold boot of the array (top), and a few days later, when things are really cooking for my stored Hyper-V VMs, which are getting tagged with ZFS’ “Most Frequently Used” category and thus getting the benefit of fast RAM & L2ARC:

cache

Next Steps:

  • Find out why my writes suck so bad.
  • Test Nas4Free’s NFS performance
  • Test SMB 3.0 from a virtual machine inside the ZFS box
  • Sell some stuff so I can buy a proper SLC SSD drive for the ZIL
  • Re-build the rookie Standard Switch into a true Distributed Switch in ESXi

Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary
Three Example Home Lab Storage Designs using SSDs and Spinning Disk, Chris Wahl, Good piece on different lab storage models
ZFS, Wikipedia, Great overview of ZFS history and features
Activity of the ZFS Arc, Brendan Gregg, Excellent overview of ZFS’ RAM-as-cache
Hybrid Storage Pool Performance, Brendan Gregg, Details ZFS performance
FreeBSD Jumbo Frames, NixCraft, Applying MTU correctly
Hyper-V vEthernet Jumbo Frames, Darryl Van der Peijl, Great little powershell script to keep you out of regedit
Nexenta Community Edition 3.1.5, NexentaStor, My personal preference for a Solaris-derived ZFS box
Nas4Free, Nas4Free.org, FreeBSD-based ZFS; works with more hardware
[/table]