It’s been a while since I posted about my home lab, Daisettalabs.net, but rest assured: though I’ve been largely radio silent on it, I’ve been busy.

If 2013 saw the birth of Daisetta Labs.net, 2014 was akin to the terrible twos, with some joy & victories mixed together with teething pains and bruising.

So what’s 2015 shaping up to be?

Well, if I had to characterize it, I’d say it’s #LabGlory, through and through. Honestly. Why?

I’ve assembled a home lab that’s capable of simulating just about anything I run into in the ‘wild’ as a professional. And that’s always been the goal with my lab: practicing technology at home so that I can excel at work.

Let’s have a look at the state of the lab, shall we?

Hardware & Software

Daisetta Labs.net 2015 comprises the following:

  • Five (5) physical servers
  • 136 GB RAM
  • Sixteen (16) non-HT Cores
  • One (1) wireless access point
  • One (1) zone-based Firewall
  • Two (2) multilayer gigabit switches
  • One (1) Cable modem in bridge mode
  • Two (2) Public IPs (DHCP)
  • One (1) Silicon Dust HD
  • Ten (10) VLANs
  • Thirteen (13) VMs
  • Five (5) Port-Channels
  • One (1) Windows Media Center PC

That’s quite a bit of kit, as a former British colleague used to say. What’s it all do? Let’s dive in:

Physical Layout

The bulk of my lab gear is in my garage on a wooden workbench.

Nodes 2-4, the core switch, my Zywall edge device, modem, TV tuner, Silicon Dust device and Ooma phone all reside in a secured 12U, two post rack I picked up on ebay about two years ago for $40. One other server, core.daisettalabs.net, sits inside a mid-tower case stuffed with nine 2TB Hitachi HDDs and five 256GB SSDs below the rack.

Placing my lab in the garage has a few benefits, chief among them: I don’t hear (as many) complaints from the family cluster about noise. Also, because it’s largely in the garage, it’s isolated & out of reach of the Child Partition’s curious fingers, which, as every parent knows, are attracted to buttons of all types.

Power & Thermal

Of course you can’t build a lab at home without reliable power, so I’ve got one rack-mounted APC UPS, and one consumer-grade Cyberpower UPS for core.daisettalabs.net and all the internet gear.

On average, the lab gear in the garage consumes about 346 watts, or about 3 amps. That’s significant, no doubt, costing me about $38/month to power, or about 2/3rds the cost of a subscription to IT Pro TV or Pluralsight. 🙂
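
If you want to sanity-check that math, it’s a quick bit of PowerShell; the electricity rate below is an assumption (roughly what I pay), so plug in your own:

```powershell
# Back-of-the-envelope power math; the $/kWh rate is an assumption, swap in your own
$watts       = 346
$amps        = $watts / 120                 # ~2.9A on a 120V circuit
$kwhPerMonth = $watts * 24 * 30 / 1000      # ~249 kWh/month
$ratePerKwh  = 0.15                         # assumed utility rate
$kwhPerMonth * $ratePerKwh                  # ~$37/month
```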

Thermals are a big challenge. My house was built in 1967, has decent insulation and holds temperature fairly well in the habitable parts of the space. But none of that is true of the garage, where my USB lab thermometer has recorded temps as low as 3C last winter and as high as 39C in Summer 2014. That’s air temperature at the top of the rack, mind you, not at the CPU.

One of my goals for this year is to automate the shutdown/powerup of all node servers in the garage based on the temperature reading of the USB thermometer. The $25 thermometer is something I picked up on Amazon a while ago; it outputs to .csv, but I haven’t figured out how to automate its software interface with PowerShell… yet.
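
When I do crack it, the script will probably look something like this rough sketch; the log path, column name and threshold are all assumptions about how the thermometer’s software writes its .csv:

```powershell
# Rough sketch: read the latest reading from the thermometer's .csv output and shut the
# lab nodes down if the garage gets too hot. Path, column name & threshold are assumptions.
$log       = 'C:\TEMPerLog\templog.csv'   # hypothetical path to the thermometer's output
$threshold = 35                           # degrees C; pick your own pain point
$nodes     = 'node2','node3','node4'

$latest = Import-Csv $log | Select-Object -Last 1
if ([double]$latest.TempC -gt $threshold) {
    # Stop-Computer cleanly shuts down the remote hosts (and their VMs, via the
    # Hyper-V shutdown integration service)
    Stop-Computer -ComputerName $nodes -Force
}
```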

Anyway, here’s my stack, all stickered up and ready for review:

IMG_20150329_214535914

Beyond the garage, the Daisetta Lab extends to my home’s main hallway, the living room, and of course, my home office.

Here’s the layout:

homelab2015

Compute

On the compute side of things, it’s almost all Haswell with the exception of core and node3:

[table]

Server, Architecture, CPU, Cores, RAM, Function, OS, Motherboard

Core, AMD A-series, A8-5500, 2, 8GB, Tiered Storage Spaces & DC/DHCP/DNS, Server 2012 R2, Gigabyte D4

Node1, Haswell, i7-4770K, 4, 32GB, Main PC/Office/VM host/storage, Server 2012 R2, Supermicro X10SAT

Node2, Haswell, Xeon E3-1241, 4, 32GB, Cluster node, Server 2012 R2 Core, Supermicro X10SAF

Node3, Ivy Bridge, i7-2600, 4, 32GB, Cluster node, Server 2012 R2 Core, Biostar

Node4, Haswell, i5-4670, 4, 32GB, Cluster node/storage, Server 2012 R2 Core, Asus

[/table]

I love Haswell for its speed, thermal properties and affordability, but damn! That’s a lot of boxes, isn’t it? Unfortunately, you just can’t get very VM dense when 32GB is the max amount of RAM Haswell E3/i7 chipsets support. I love dynamic RAM on a VM as much as the next guy, but even with Windows core, it’s been hard to squeeze more than 8-10 VMs on a single host. With Hyper-V Containers coming, who knows, maybe that will change?
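
For the record, the dynamic RAM knobs I’m talking about are a single cmdlet per VM; a minimal sketch (the VM name and sizes here are made up, not my actual config):

```powershell
# Minimal sketch: enable Dynamic Memory on a guest so Hyper-V can balloon RAM up & down.
# VM name and sizing values are illustrative.
Set-VMMemory -VMName 'web01' `
    -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB `
    -StartupBytes 1GB `
    -MaximumBytes 4GB
```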

Node1, the pride of the fleet and my main productivity machine, boasting 2×850 Pro SSDs in RAID 0, an AMD FirePro, and Tiered Storage Spaces

While I included it in the diagram, TVPC3 is not really a lab machine. It’s a cheap Ivy Bridge Pentium with 8GB of RAM and 3TB of local storage. Its sole function in life is to decrypt the HD stream it receives from the Silicon Dust tuner and display HGTV for my mother-in-law with as little friction as possible. Running Windows 8.1 with Media Center, it’s the only PC in the house without battery backup.

Physical Network
About 18 months ago, I poured gallons of sweat equity into cabling my house. I ran at least a dozen CAT-5e cables from the garage to my home office, bedrooms, living room and to some external parts of the house for video surveillance.
I don’t regret it in the least; nothing like having a reliable, physical backbone to connect up your home network/lab environment!

Meet my underlay

At the core of the physical network lies my venerable Cisco 2960S-48TS-L switch. Switch1 may be a humble access-layer switch, but in my lab, the 2960S bundles 17 ports into five port-channels, serves as my default gateway, routes with some rudimentary Layer 3 functions ((Up to 16 static routes; no dynamic routing features are available)) and segments 9 VLANs and one port-security VLAN, a feature that’s akin to PVLAN.

Switch2 is a 10-port Cisco Small Business SG-300 running at Layer 3 and connected to Switch1 via a two-port port-channel. I use a few ports on Switch2 for the TV and an IP cam.

On the edge is redzed.daisettalabs.net, the Zyxel USG-50, which I wrote about last month.

Connecting this kit up to the internet is my Motorola Surfboard router/modem/switch/AP, which I run in bridge mode. The great thing about this device and my cable service is that, for some reason, up to two LAN ports can be active at any given time. This means that CableCo gives me two public DHCP addresses simultaneously. One of these goes into a WAN port on the Zyxel, and the other goes into a downed switchport.

Love Meraki’s RF Spectrum chart!

Lastly, there’s my Meraki MR-16, an access point a friend and Ubiquiti Networks fan gave me. Though it’s a bit underpowered for my tastes, I love this device. The MR-16 is trunked to Switch1 and connects via an 802.3af power injector. I announce two SSIDs off the Meraki, both secured with WPA2 Personal ((WPA2 Enterprise is on the agenda this year)). Depending on which SSID you connect to, you’ll end up on the Device or VM VLANs.

Virtual Network

The virtual network was built entirely in System Center VMM 2012 R2. Nothing too fancy here, with multiple Gigabit adapters per physical host, one converged logical vSwitch and a separate NIC on each host fronting for the DMZ network:

Nodes 1, 2 & 4 are all Haswell, and are clustered. Node3 is standalone.

Thanks to VMM, building this out is largely a breeze, once you’ve settled on an architecture. I like to run the cmdlets to build the virtual & logical networks myself, but there’s also a great script available that will build a converged network for you.
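
To give you a flavor of “running the cmdlets myself,” here’s a pared-down sketch of one logical network and its VM network; the names, subnet and VLAN are examples, and it skips the uplink port profiles & logical switch a full converged build needs:

```powershell
# Sketch of one logical network + VM network in VMM 2012 R2 (names/subnet/VLAN are examples;
# uplink port profiles & the logical switch itself are omitted for brevity)
$hostGroup = Get-SCVMHostGroup -Name 'All Hosts'

$ln         = New-SCLogicalNetwork -Name 'Daisetta-VM'
$subnetVlan = New-SCSubnetVLan -Subnet '192.168.102.0/24' -VLanID 102
New-SCLogicalNetworkDefinition -Name 'Daisetta-VM-Site' `
    -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# Front the logical network with a VM network the hosts & VMs actually connect to
New-SCVMNetwork -Name 'VM-Net-102' -LogicalNetwork $ln -IsolationType 'NoIsolation'
```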

A physical host typically looks like this (I say typically because I don’t have an equal number of adapters in all hosts):

I trust VLANs and VMM’s segmentation abilities, but chose to build what is in effect an air-gapped vSwitch for the DMZ/DIA networks

We’re already several levels deep in my personal abstraction cave, why stop here? Here’s the layout of VM Networks, which are distinguished from but related to logical networks in VMM:

labnet13

I get a lot of questions on this blog about jumbo frames and Hyper-V switching, and I just want to reiterate that it’s not that hard to do, and look, here’s proof:

jumbopacket

Good stuff!
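
For the record, “not that hard” amounts to a couple of commands per host; the adapter name and IP below are examples, and your physical switchports need their MTU raised too:

```powershell
# Enable jumbo frames on a Hyper-V host NIC (adapter name is an example)
Set-NetAdapterAdvancedProperty -Name 'iSCSI-10' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014

# Verify the path end-to-end: -f = don't fragment, -l = payload size (9014 minus headers)
ping 172.16.10.15 -f -l 8972
```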

Storage

And last, but certainly most interestingly, we arrive at the Daisetta Lab’s storage resources.

My lab journey began with storage testing, in particular ZFS via NexentaCore (Illumos), NAS4Free and Solaris 11. But that’s ancient history; since last summer, I’ve been all Windows, all the time in my lab, starting with SAN.Daisettalabs.net ((cf #StorageGlory : 30 Days on a Windows SAN)).

Now?

Well, I had so much fun -and importantly so few failures/pains- with Microsoft’s Tiered Storage Spaces that I’ve decided to deploy not one, or even two, but three Tiered Storage Spaces. Here’s the layout:

[table]Server, #HDD, #SSD, StoragePool Capacity, StoragePool Free, #vDisks, Function

Core, 9, 6, 16.7TB, 12.7TB, 6 (so far), SMB3/iSCSI target for entire lab

Node1, 2, 2, 2.05TB, 1.15TB, 2, SMB3 target for Hyper-V replication

Node4, 3, 1, 2.86TB, 1.97TB, 2, SMB3 target for Hyper-V replication

[/table]

I have to say, I continue to be very impressed with Tiered Storage Spaces. It’s super-flexible, the cmdlets are well-documented, and Microsoft is iterating on it rapidly. More on the performance of Tiered Storage Spaces in a subsequent post.
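
And when I say the cmdlets are well-documented, I mean the whole build is only a handful of them. Here’s a condensed sketch of how one of these tiered spaces comes together; friendly names and tier sizes are examples, not my exact layout:

```powershell
# Condensed sketch of a tiered space: pool the disks, define SSD & HDD tiers,
# then carve a mirrored, tiered virtual disk out of them. Names & sizes are examples.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'LabPool' `
    -StorageSubSystemFriendlyName 'Storage Spaces*' -PhysicalDisks $disks

$ssd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'HDDTier' -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'LabPool' -FriendlyName 'Tiered-vDisk01' `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB `
    -ResiliencySettingName Mirror
```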

Thanks for reading!

Snover refactoring Windows Server & System Center

My last two posts on Microsoft were filled with angst and despair at Microsoft’s announcement that the next gen versions of Server & System Center would be delayed until sometime in 2016. Why, I cried out, why the delay on Server, and what’s to become of my System Center, I wondered?

I went a bit off-the-rails, imagining that Satya Nadella had shaken things up for the System Center team. Then I wrote a letter to him asking him what was up.

Snover & Microsoft love Linux

Well, I was wrong on all that, or perhaps I was only a little bit right.

There was a shakeup, but it wasn’t Nadella who had angrily overturned a gigantic redwood table at System Center HQ, spilling Visio shapes & System Center management packs as he did so, rather it was Mr Windows himself, the Most Distinguished of Distinguished Technical Fellows, Dr. Jeffrey Snover who had shaken things up.

Yes. The Padre of PowerShell himself filled in the gaps for me on why System Center & Windows Server were delayed, during a TechDays Online session one day after my last post.

During that talk, he announced that the Windows Server Team has been meshed with the System Center Team and, even better, the Azure team. Hot dog.

Redmond mag:

[Snover] explained that the System Center team and the Windows Server team are now “a single organization,” with common planning and scheduling. He said that the integration of the two formerly separate organizations isn’t 100 percent, but it’s better than it’s been in the past. The team also takes advantage of joint development efforts with the Microsoft Azure team, he added.

That’s outstanding news in my view.

Microsoft’s private|hybrid|public cloud story is second to none as far as I’m concerned. No one else offers such deep integration of a cutting-edge public cloud (Azure) with your on-prem legacy infrastructure stack.

Yet that deep integration (not speaking of AAD Sync & ADFS 3 here) was becoming confused and muddled with overlap between the older tools (System Center) and the newer tools like Desired State Configuration, mixed in with AzurePack, an on-prem/cloud management engine.

It sounds to me like Snover’s going to put together a coherent strategy using all the tools, and I can’t think of a better guy to do the job.

But what of Windows server?

It’s getting Snovered too, but in a way that’s not as clear to me. Again, Redmond mag:

The next Windows Server product will be deeply refactored for cloud scenarios. It will have just the components for that and nothing else, Snover explained. Next, on top of that, Microsoft plans to build a server that will be the same as the Windows Servers that organizations currently use. This server it will have two application profiles. One of the application profiles will target the existing APIs for Windows Server, while the other will target the subsets of the APIs that are cloud optimized, Snover explained. On top of the server, it will be possible to install a client, he added. This redesign is happening to better support automation, he explained.

I watched most of Snover’s talk, took a few days to think about it, and still have no idea what to make of the high-level architecture slide below that flashed on screen briefly:

vnext

Some thoughts that ran through my head: is the cloud-optimized server akin to CoreOS, with active/passive boot partitions, something that will finally make Patch Tuesday obsolete? One could hope that with further abstraction, we’ll get something like that in Windows Server vNext.

In some sense, we already have parts of this: if you enable the Hyper-V feature on a bare-metal computer, you emerge, after a few reboots, running a Windows virtual machine atop a Type-1 Hypervisor.
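
For reference, enabling the Hyper-V feature on Server really is a one-liner plus the reboots:

```powershell
# One-liner on Server; after the reboots, the OS you were running is now the
# management/parent partition riding on top of the Type-1 hypervisor
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```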

Big deal, right? Well, Snover’s slide seems to indicate this will be the default state for the next generation of Windows Server, and more than that, that what we think of as the Type-1 hypervisor is getting a bunch of new features, like container support.

We knew Docker support was coming, but at this level, and almost indistinguishable from the hypervisor itself?

That’s potentially all kinds of awesome.

Interestingly, Server Roles & Features look like they’re being recast into a “Client” level that operates above a Windows Server.

Which, if we continue down the rabbit hole, means we have to ask the question: if my AD Domain Controller or my RemoteApp session host farm servers are now clients, what are they running on? It certainly doesn’t seem to be a Windows server anymore, but rather a kind of agnostic compute fabric, made up of virtual “Servers” and/or “Containers” operating atop a cloud-optimized server running on bare metal…an agnostic computing ((Damn straight, had to work that in there)) fabric that stretches across my old on-prem Dells all the way up to the Azure cloud…right?!?

I’m like four levels deep into Jeffrey Snover’s subconscious so I’ll stop, but suffice it to say, the delay of Windows Server & System Center appears to be justified and I can’t wait to start testing it in 2016.

Hyper-V + VXLAN and more from Tech Ed Europe

If you thought -as I admittedly did- that on-prem Windows Server was being left for dead on the side of the Azure road, then boy were we wrong.

Not sure where to start here, but some incredible announcements came out of Microsoft in Barcelona, most of which I got from Windows Server MVP reporter Aidan Finn.

Among them:

  • VXLAN, NVGRE & Network Controller, courtesy of Azure: This is something I’ve hoped for in the next version of Windows Server: a more compelling SDN story, something more than Network Function Virtualization & NVGRE encapsulation. If bringing some of the best -and widely supported- bits of the VMware ecosystem to on-prem Hyper-V & System Center isn’t a virtualization engineer’s wet dream, I don’t know what is.
  • VMware meet Azure Site Recovery: Coming soon to a datacenter near you, failover your VMware infrastructure via Azure Site Recovery, the same way Hyper-V shops can

    Not sure what to do with this yet, but gimme!
  • In-place/rolling upgrades for Hyper-V Clusters: This feature was announced with the release of Windows Server Technical Preview (of course, I only read about it after I wiped out my lab 2012 R2 cluster), but there’s a lot more detail on it from TechEd via Finn: rebuild physical nodes without evicting them first. You keep the same Cluster Name Object, simply live migrating your VMs off your targeted hosts. Killer.
  • Single cluster node failure: In the old days, I used to lose sleep over clusres.dll, or clussvc.exe, two important pieces in Microsoft Clustering technology. Sure, your VMs will failover & restart on a new host, but that’s no fun. Ben Armstrong demonstrated how vNext handles node failure by killing the cluster service live during his presentation. Finn says the VMs didn’t fail over, but the host was isolated by the other nodes and the cluster simply paused and waited for the node to recover (up to 4 minutes). Awesome!
  • Azure Witness: Also for clustering fans who are torn (as I am) between selecting file or disk witness for clusters: you will soon be able to add mighty Azure as a witness to your on-prem cluster. Split brain fears no more!
  • More enhancements for Storage QoS: Ensure that your tenant doesn’t rob IOPS from everyone else.
  • The Windows SAN, for real: Yes, we can soon do offsite block-level replication from our on-prem Tiered Storage Spaces servers.
  • New System Center coming next year: So much to unpack here, but I’ll keep it brief. You may love System Center, you may hate it, but it’s not dead. I’m a fan of the big two: VMM and ConfigMan. OpsMan I’ve had a love/hate relationship with. Well, the news out of TechEd Europe is that System Center is still alive, but more integration with Azure plus a substantial new release will debut next summer. So the VMM Technical Preview I’m running in the Daisetta Lab (which installs to C:\Program Files\VMM 2012 R2, btw) is not the VMM I was looking for.

Other incredible announcements:

  • Docker, CoreOS & Azure: Integration of the market-leading container technology with Azure is apparently further along than I believed. A demo was shown that hurts my brain to think about: Azure + Docker + CoreOS, the Linux OS that has two OS partitions and is fault-tolerant. Wow.
  • Enhancements to Rights Management Service: Stop users from CTRL-Cing/CTRL-Ving your company’s data to Twitter
  • Audiocodes announces an on-prem device that appears to bring us one step closer to the dream: Lync for voice, O365 for the PBX, all switched out to the PSTN. I said one step closer!
  • Azure Operational Insights: I’m a fan of the Splunk model (point your firehose of data/logs/events at a server, and let it make sense of it) and it appears Azure Operational Insights is a product that will jump into that space. Screen cap from Finn

This is really exciting stuff.

Commentary

Looking back on the last few years in Microsoft’s history, one thing stands out: the painful change from the old Server 2008R2 model to the new 2012 model was worth it. All of the things I’ve raved about on this blog in Hyper-V (converged network, storage spaces etc) were just teasers -but also important architectural elements- that made the things we see announced today possible.

The overhaul* of Windows Server is paying huge dividends for Microsoft and for IT pros who can adapt & master it. Exciting times.

* unlike the Windows Mobile > Windows Phone transition, which was not worth it

Containers! For Windows! Courtesy of Docker

DockerWithWindowsSrvAndLinux-1024x505 (1)

Big news yesterday for fans of agnostic cloud/on-prem computing.

Docker -the application virtualization stack that’s caught on like wildfire among the *nix set- is coming to Windows.

Yeah baby.

Mary Jo with the details:

Under the terms of the agreement announced today, the Docker Engine open source runtime for building, running and orchestrating containers will work with the next version of Windows Server. The Docker Engine for Windows Server will be developed as a Docker open source project, with Microsoft participating as an active community member. Docker Engine images for Windows Server will be available in the Docker Hub. The Docker Hub will also be integrated directly into Azure so that it is accessible through the Azure Management Portal and Azure Gallery. Microsoft also will be contributing to Docker’s open orchestration application programming interfaces (APIs).

When I first heard the news, my emotions were mixed.

On the one hand, I love it. Virtualization of all flavors -OS, storage, network, and application- is where I want to be, as a blogger, at home in my lab, and professionally.

Yet, for a Windows guy like me (I dabble, of course), Docker was just a bit out of reach, even with my lab, which is 100% Windows.

On the other hand, I also remembered how dreadful it used to be to run Linux applications on Windows. Installing GTK+ Libraries on Windows isn’t fun, and the end-result often isn’t very attractive. In my world, keeping the two separate on the application & OS side/uniting them via Kerberos and/or https/rest has always been my preference.

But that’s old world thinking, ladies and gentlemen.

Because you see, this announcement from Microsoft & Docker Inc sounds deep, rich, functional. Microsoft’s going to contribute some of its Server code to the Docker folks, and the Docker crew will help build container tech into Windows Server and Azure. I’m hopeful Docker will just be another Role in Server, and that Jeffrey Snover’s PowerShell cmdlets will hook deep into the Docker stuff.

This probably marks the death of App-V, which I wrote about in comparison to Docker just last month, but that’s fine with me.

Docker on Windows marks a giant step forward for Agnostic Computing…do we dare imagine a future in which our application stacks are portable? Today I’m running an application in a Docker Container on Azure, and tomorrow I move it to AWS?

Microsoft says that’s exactly the vision:

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Full announcement.

Microsoft releases new V2V and P2V tool

Do you smell what I smell?

Inhale it boys and girls because what you smell is the sweet aroma of VMware VMs being removed from the vSphere collective and placed into System Center & Hyper-V’s warm embrace.

Microsoft has released version three of its V2V and P2V assimilator tool:

Today we are releasing the Microsoft Virtual Machine Converter (MVMC) 3.0, a supported, freely available solution for converting VMware-based virtual machines and virtual disks to Hyper-V-based virtual machines and virtual hard disks (VHDs).

With the latest release, MVMC 3.0 adds the ability to convert a physical computer running Windows Server 2008 or above, or Windows Vista or above to a virtual machine running on a Hyper-V host (P2V).

This new functionality adds to existing features available including:

• Native Windows PowerShell capability that enables scripting and integration into IT automation workflows.
• Conversion and provisioning of Linux-based guest operating systems from VMware hosts to Hyper-V hosts.
• Conversion of offline virtual machines.
• Conversion of virtual machines from VMware vSphere 5.5, VMware vSphere 5.1, and VMware vSphere 4.1 hosts to Hyper-V virtual machines.

Download available here.
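
That “Native Windows PowerShell capability” bullet is the part I care most about. From what I’ve read, a single-disk conversion looks roughly like this; the module path and parameter names are as I understand them from the MVMC docs, so verify before pointing it at production VMDKs:

```powershell
# Rough sketch of a VMDK-to-VHDX conversion with the MVMC cmdlets
# (module path & parameter names per the MVMC docs; verify on your install)
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

ConvertTo-MvmcVirtualHardDisk `
    -SourceLiteralPath 'D:\exports\app01\app01.vmdk' `
    -DestinationLiteralPath 'D:\converted\app01.vhdx' `
    -VhdType DynamicHardDisk `
    -VhdFormat Vhdx
```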

This couldn’t have come at a better time for me. At work -which is keeping me so busy I’ve been neglecting these august pages- my new Hyper-V cluster went Production in mid-September and has been running very well indeed.

But building a durable & performance-oriented virtualization platform for a small to medium enterprise is only 1/10th of the battle.

If I were a consultant, I’d have finished my job weeks ago, saying to the customer:

Right. Here you go lads: your cluster is built, your VMM & SCCM are happy, and the various automation bits ‘n bobs that make life in Modern IT Departments not only bearable, but fun, are complete

But I’m an employee, so much more remains to be done. So among many other things, I now transition from building the base of the stack to moving important workloads to it, namely:

  • Migrating and/or replacing important physical servers to the new stack
  • Shepherding dozens of important production VMs out of some legacy ESXi 5 & 4 hosts and into Hyper-V & System Center and thence onto greatness

So it’s really great to see Microsoft release a new version of its tool.

Thoughts on EVO:RAIL

So if you work in IT, and even better, if you’re in the virtualization space of IT as I am, you have to know that VMworld is happening this week.

VMworld is just about the biggest vCelebration of vTechnologies there is. Part trade-show, part pilgrimage, part vLollapalooza, VMworld is where all the sexy new vProducts are announced by VMware, makers of ESXi, vSphere, vCenter, and so many other vThings.

It’s an awesome show…think MacWorld at the height of Steve Jobs but with fewer hipsters and way more virtualization engineers. Awesome.

And I’ve never been :sadface:

And 2014’s VMworld was a doozy. You see, the vGiant announced a new 2U, four node vSphere & vSAN cluster-in-a-box hardware device called EVO:RAIL. I’ve been reading all about EVO:RAIL for the last two days and here’s what I think as your loyal Hyper-V blogger:

  • What’s in a name? Right off the bat, I was struck by the name for this appliance. EVO:RAIL…say what? What’s VMware trying to get across here? Am I to associate EVO with the fast Mitsubishi Lancers of my youth, or is this EVO in the more Manga/Anime sense of the word? Taken together, EVO:RAIL also calls to mind sci-fi, does it not? You could picture Lt. Cmdr Data talking about an EVO:RAIL to Cmdr Riker, as in “The Romulan bird of prey is outfitted with four EVO:RAIL phase cannons, against which the Enterprise’s shields stand no chance.” Speaking of guns: I also thought of the US Navy’s railguns, long-range kinetic weapons designed to destroy the Nutanix/Simplivity, er, the enemy.
  • If you’re selling an appliance, do you need vExperts? One thing that struck me about VMware’s introduction of EVO:RAIL was their emphasis on how simple it is to rack, stack, install, deploy and virtualize. They claim the “hyper-converged” 2U box can be up and running in about 15 minutes; a full rack of these babies could be computing for you in less than 2 hours. They’ve built a sexy HTML 5 GUI to manage the thing, no vSphere console or PowerCLI in sight. It’s all pre-baked, pre-configured, and pre-built for you, the small-to-medium enterprise. It’s so simple a help desk guy could set it up. So with all that said, do I still need to hire vExperts and VCDX pros to build out my virtualization infrastructure? It would appear not. Is that the message VMware is trying to convey here?
  • One SKU for the Win: I can’t be the only one that thinks buying the VMware stack is a complicated & time-consuming affair. Chris Wahl points out that EVO:RAIL is one SKU, one invoice, one price to pay, and VMware’s product page confirms that, saying you can buy a Dell EVO:RAIL or a Fujitsu EVO:RAIL, but whatever you buy, it’ll be one SKU. This is really nice. But why? VMware is famous for licensing its best-in-class features…why mess with something that’s worked so well for them?
    Shades of Azure simplicity here

    One could argue that EVO:RAIL is a reaction to simplified pricing structures on rival systems…let’s be honest with ourselves. What’s more complicated: buying a full vSphere and/or vHorizon suite for a new four node cluster, or purchasing the equivalent amount of computing units in Azure/AWS/Google Compute? What model is faster to deploy, from sales call to purchasing to receiving to service? What model probably requires consulting help?

    Don’t get me wrong, I think it’s great. I like simple menus, and whereas buying VMware stuff before was like choosing from a complicated, multi-page, multi-entree menu, now it’s like buying burgers at In ‘n Out. That’s very cool, but it means something has changed in vLand.

  • I love the density: As someone who’s putting the finishing touches on my own new virtualization infrastructure, I love the density in EVO:RAIL. 2 Rack Units with E5-26xx class Xeons packing 6 cores each means you can pack about 48 cores into 2U! Not bad, not bad at all. The product page also says you can have up to 16TB of storage in those same 2U (courtesy of VSAN) and while you still need a ToR switch to jack into, each node has 2x10GbE SFP+ or Copper. Which is excellent. RAM is the only thing that’s a bit constrained; each node in an EVO:RAIL can only hold 192GB of RAM, a total of 768GB per EVO:RAIL. In comparison, my beloved 2U pizza boxes offer more density in some places, but less overall, given that 1 pizza box = one node. In the Supermicros I’m racking up later this week, I can match the core count (4×12 Core E5-46xx), improve upon the RAM (up to 1TB per node) and easily surpass the 16TB of storage. That’s all in 2U and all for about $15-18k. Where the EVO:RAIL appears to really shine is in VM/VDI density. VMware claims a single EVO:RAIL is built to support 100 General Purpose VMs or to support up to 250 VDI sessions, which is f*(*U#$ outstanding.
  • I wonder if I can run Hyper-V on that: Of course I thought that. Because that would really kick ass if I could.

Overall, a mighty impressive showing from VMware this week. Like my VMware colleagues, I pine for an EVO:RAIL in my lab.

I think EVO:RAIL points to something bigger though…this product marks a shift in VMware’s thinking, a strategic reaction to the changes in the marketplace. This is not just a play against Nutanix and other hyper-converged vendors, but against the simplicity and non-specialist nature of cloud Infrastructure as a Service. This is a play against complexity, in other words…this is VMware telling the marketplace that you can have best-in-class virtualization without worst-in-class licensing pain and without hiring vExperts to help you deploy it.

Tales from the Hot Lane

A few brief updates & random thoughts from the last few days on all the stuff I’ve been working on.

Refreshing the Core at work: Summer’s ending, but at work, a new season is advancing, one rack unit at a time. I am gradually racking up & configuring new compute, storage, and network as it arrives; It Is Not About the Hardware™, but since you were wondering: 64 Ivy Bridge cores and about 512GB RAM, 30TB of storage, and Nexus 3k switching.

Ahh, the Nexus line. Never had the privilege to work on such fine switching infrastructure. Long-time admirer, first-time NX-OS user. I have a pair of them plus a Layer 3 license, so the long-term thinking involves not just connecting my compute to my storage, but connecting this dense stack northbound & out via OSPF or static routes over a fault-tolerant HSRP or VRRP config.

To do that, I need to get familiar with some Nexus-flavored acronyms that aren’t familiar to me: virtual port channels (vPC), Control Plane Policing (CoPP), VRF, and oh-so-many more. I’ll also be attempting to answer the question once and for all: what spanning tree mode does one use to connect a Nexus switch to a virtualization host running Hyper-V’s converged switching architecture? I’ve used portfast in the lab on my Catalyst, but the lab switch is five years old, whereas this Nexus is brand new. And portfast never struck me as the right answer, just the easy one.

To answer those questions and more, I have TAC and this excellent tome provided gratis by the awesome VAR who sold us much of the equipment.

Into the vCPU Blender goes Lync: Last Friday, I got a call from my former boss & friend who now heads up a fast-growing IT department on the coast. He’s been busy refreshing & rationalizing much of his infrastructure as well, but as is typical for him, he wants more. He wants total IT transformation, so as he’s built out his infrastructure, he laid the groundwork to go 100% Microsoft Lync 2013 for voice.

Yeah baby. Lync 2013 as your PBX, delivering dial tone to your endpoints, whether they are Bluetooth-connected PC headsets, desk phones, or apps on a mobile.

Forget software-defined networking. This is software-defined voice & video, with no special server hardware, cloud services, or any of the other typical expensive nonsense you’d see in a VoIP implementation.

If Lync 2013 as PBX is not on your IT Bucket List, it should be. It was something my former boss & I never managed to accomplish at our previous employer on Hyper-V.

Now he was doing it alone. On a fast VMware/Nexus/NetApp stack with distributed vSwitches. And he wanted to run something by me.

So you can imagine how pleased I was to have a chat with him about it.

He was facing one problem which threatened his Go Live date: Mean Opinion Score, or MOS, a simple 0-5 score Lync provides to its administrators that summarizes call quality. MOS is a subset of a hugely detailed Media Quality Summary Report, detailed here at TechNet.

My friend was scoring a .6 on his MOS. He wanted it to be at 4 or above prior to go-live.

So at first we suspected QoS tags were being stripped somewhere between his endpoint device and the Lync Mediation VM. Sure enough, Wireshark proved that out; a Distributed vSwitch (or was it a Nexus?) wasn’t respecting the tag, resulting in a sort of half-duplex QoS if you will.

He fixed that, ran the test again, and still: .6. Yikes! Two days to go live. He called again.

That’s when I remembered the last time we tried to tackle this together. You see, the Lync Mediation Server is sort of the real PBX component in Lync Enterprise Voice architecture. It handles signalling to your endpoints, interfaces with the PSTN or a SIP trunk, and is the one server workload that, even in 2014, I’d hesitate to virtualize.

My boss had three of them. All VMs on three different VMware hosts across two sites.

I dug up a Microsoft whitepaper on virtualizing Lync, something we didn’t have the last time we tried this. While Redmond says Lync Enterprise Voice on top of VMs can work, it’s damned expensive from a virtualization host perspective. MS advises:

  • You should disable hyperthreading on all hosts.
  • Do not use processor oversubscription; maintain a 1:1 ratio of virtual CPU to physical CPU.
  • Make sure your host servers support nested page tables (NPT) and extended page tables (EPT).
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.

Talk about Harshing your vBuzz. Essentially, building Lync out virtually with Enterprise Voice forces you to go sparse on your hosts, which is akin to buying physical servers for Lync. If you don’t, into the vCPU blender goes Lync, and out comes poor voice quality, angry users, bitterness, regret and self-punishment.
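
His stack is VMware, but if you were doing this on Hyper-V, the equivalent knobs look roughly like this; a hedged sketch with a made-up VM name, not Microsoft’s official guidance (hyperthreading itself is a BIOS-level setting):

```powershell
# Hyper-V analogue of the advice above (a sketch; disabling hyperthreading is done in BIOS)
# Keep guests from being NUMA-spanned by the host:
Set-VMHost -NumaSpanningEnabled $false

# Approximate the 1:1 vCPU:pCPU rule by reserving the full weight of the
# Mediation server's vCPUs on the host (VM name is hypothetical):
Set-VMProcessor -VMName 'lync-med01' -Count 4 -Reserve 100
```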

Anyway, he did as advised, put some additional vCPU & memory reservations in place on his hosts, and yesterday, whilst I was toiling in the Hot Lane, he called me from Lync via his mobile.

He’s a married man just like me, but I must say his voice sounded damn sexy as it was sliced up into packets, sent over the wire, and converted back to analog on my mobile’s speaker. A virtual chest bump over the phone was next, then we said goodbye.

Another Go Live Victory (by proxy). Sweet.

Azure Outage: Yesterday’s bruising, hours-long global Azure outage affected Virtual Machines, storage blobs, web services, database services and HDInsight, Microsoft’s service for big data crunching. As it unfolded, I navel-gazed when what I really wanted was to help. There was literally nothing I could do. Had I some crucial IaaS or PaaS in the Azure stack, I’d be shit out of luck, just like the rest. I felt quite helpless; refreshing Mary Jo’s page and the Azure dashboard didn’t help. I wondered what the problem was; it’s been a difficult week for Microsofties, whether on-prem or in Azure. Had to be related to the update cycle, I thought.

On the plus side, Azure Active Directory services never went down, nor did several other services. Office 365 stayed up as well, though it is built atop separate-but-related infrastructure in my understanding.

Lastly, I pondered two thoughts: if you’re thinking of reducing your OpEx by replacing your DR strategy with an Azure Site Recovery strategy, does this change your mind? And if you’re building out Azure as your primary IaaS or PaaS, do you just accept such outages or do you plan a failback strategy?

Labworks : Towards a 100% Windows-defined Daisetta Lab: What’s next for the Daisetta Lab? Well, I have me an AMD Duron CPU, a suitable motherboard, a 1U enclosure with PSU, and three Keepin’ it RealTek NICs. Oh, I also have a case of the envies, envies for the VMware crowd and their VXLAN and NSX and of course VMworld next week. So I’m thinking of building a Network Virtualization Gateway appliance. For those keeping score at home, that would mean from Storage to Compute to Network Edge, I’d have a 100% Windows lab environment, infused with NVGRE which has more use cases than just multi-tenancy as I had thought.

Stack Builders ‘R Us

This is a really lame but (IMHO) effective drawing of what I think of as a modern small/medium business enterprise ‘stack’:

stack

As you can see, just about every element of modern IT is portrayed.

Down at the base of the pyramid, you got your storage. IOPS, RAID, rotational & ssd, snapshots, dedupes, inline compression, site to site storage replication, clones and oh me oh my…all the things we really really love are right here. It’s the Luntastic layer and always will be.

Above that, your compute & memory. The denser the better; 2U pizza boxes don’t grow on trees and the business isn’t going to shell out more $$$ if you get it wrong.

Above that, we have what my networking friends would call the “Underlay network.” Right. Some cat 6, twinax, fiber, whatever. This is where we push some packets, whether to our storage from our compute, northbound out to the world, southbound & down the stack, or east/west across it. Leafs, spines, encapsulation, control & data planes, it’s all here.

And going higher -still in Infrastructure Land mind you- we have the virtualization layer. Yeah baby. This is what it’s all about, this is the layer that saved my career in IT and made things interesting again. This layer is designed to abstract all that is beneath it with two goals in mind: cost savings via efficiency gains & ease of provisioning/use.

And boy, has this layer changed the game or what?

So if you’re a virtualization engineer like I am, maybe this is all you care about. I wouldn’t blame you. The infrastructure layer is, after all, the best part of the stack, the only part of the stack that can claim to be #Glorious.

But in my career, I always get roped in (willingly or not) into the upper layers of the stack. And so that is where I shall take you, if you let me.

Next up, the Platform layer. This is the layer where that special DBA in your life likes to live. He optimizes his query plans atop your Infrastructure layer, and though he is old-school in the ways of storage, he’s learned to trust you and your fancy QoS .vhdxs, or your incredibly awesome DRS fault-tolerant vCPUs.

Or maybe you don’t have a DBA in your Valentine’s card rotation. Maybe this is the layer at which the devs in your life, whether they are running Eclipse or Visual Studio, make your life hell. They’re always asking for more x (x= memory, storage, compute, IP), and though they’re highly-technical folks, their eyes kind of glaze over when you bring up NVGRE or VXLAN or Converged/Distributed Switching or whatever tech you heart at the layer below.

Then again, maybe you work in this layer. Maybe you’re responsible for building & maintaining session virtualization tech like RDS or XenApp, or maybe you maintain file shares, web farms, or something else.

Point is, the people at this layer are platform builders. To borrow from the automotive industry, platform guys build the car that travels on the road infrastructure guys build. It does no good for either of us if the road is bumpy or the car isn’t reliable, does it? The user doesn’t distinguish between ‘road’ and ‘car’, do they? They just blame IT.

Next up: software & service layer. Our users exist here, and so do we. Maybe for you this layer is about supporting & deploying Android & iPhone handsets and thinking about MDM. Or maybe you spend your day supporting old-school fat client applications, or pushing them out.

And finally, now we arrive to the top of the pyramid. User-space. The business.

This is where (and the metaphor really fits, doesn’t it?) the rubber meets the road, ladies and gentlemen. It’s where the business user drives the car (platform) on the road (infrastructure). This is where we sink or swim, where wins are tallied and heroes are made, or careers are shattered and the cycle of failure>begets>blame>begets>fear>begets failure begins in earnest.

That’s the stack. And if you’re in IT, you’re in some part of that stack, whether you know it or not.

But the stack is changing. I made a silly graphic for that too. Maybe tomorrow.

#StorageGlory Achieved : 30 Days on a Windows SAN

 Behold, these three remain. File. Block. Object. And the greatest of these is block.  – Sr. Systems Engineer St. Paul, in a letter to confused storage engineers in Thessalonika

Right. So a couple weeks back I teased the hardware specs of the new storage array I built for the Daisetta Lab at home.

Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. Storage utopia up in the Daisetta Lab

My idea was to combine all types of disks -rotational 3.5″ & 2.5″ drives, SSDs, mSATAs, hell, I considered USB- into one tight, well-built storage box for my lab and home data needs. A sort of Storage Ark, if you will; all media types were welcome, but only if they came in twos (for mirroring & parity’s sake, of course) and only if they rotated at exactly 7200 RPM and/or leveled their wear evenly across the silica.

And onto this unholy motley crue of hard disks I slapped a software architecture that promised to abstract all the typical storage driver, interface, and controller nonsense away, far, far away in fact, to a land where the storage can be mixed, the controllers diverse, and by virtue of the software-definition bits, network & hypervisor agnostic. In short, I wanted to build an agnostic #StorageGlory box in the Daisetta Lab.

Right. So what did I use to achieve this? ZFS and Zpools?

Hell no, that’s so January.

VSAN? Ha! I’m no Chris Wahl.

I used Windows, naturally.

That’s right. Windows. Server 2012 R2 to be specific, running Core + Infrastructure GUI with 8GB of RAM, and some 17TB of raw disk space available to it. And a little technique developed by the ace Microsoft server team called Tiered Storage Spaces.

Was a #StorageGlory Achievement Unlocked, or was it a dud?

Here’s my review after 30 days on my Windows SAN: san.daisettalabs.net.

The Good

It doesn’t make you pick a side in either storage or storage-networking: Do you like abstracted pools of storage, managed entirely by software? Put another way, do you hate your RAID controller and crush on your old-school NetApp filer, which seemingly could do everything but object storage?

When I say block, do you instinctively say file? Or vice-versa?

Well then my friend, have I got a storage system for your lab (and maybe production!) environment: Windows Storage Spaces (now with Tiering!) offers just about everything guys like you or me need in a storage system for lab & home media environments. I love it not just because it’s Microsoft, but also because it doesn’t make me choose between storage & storage-networking paradigms. It’s perhaps the ultimate agnostic storage technology, and I say that as someone who thinks about agnosticism and storage.

A lot.

You know what I’m talking about. Maybe today, you’ll need some block storage for this VM or that particular job. Maybe you’re in a *nix state of mind and want to fiddle with NFS. Or perhaps you’re feeling bold & courageous and decide to try out VMware again, building some datastores on both iSCSI LUNs and NFS shares. Then again, maybe you want to see what SMB 3.0 is all about; the MS fanboys sure seem to be talking it up.

The point is this: I don’t care what your storage fancy is, but for lab-work (which makes for excellence in work-work) you need a storage platform that’s flexible and supportive of as many technologies as possible and is, hopefully, software-defined.

And that storage system is -hard to believe I’ll grant you- Windows Server 2012 R2.

I love storage, and I can’t think of one other storage system -save for maybe NetApp- that lets me do crazy things like store .vmdks inside of .vhdxs (oh the vIrony!), use SMB 3 multichannel over the same NICs I’m using for iSCSI traffic, and create snapshots & clones just like the big filers do, all while giving me the performance-multiplier benefits of SSDs and caching and a reasonable level of resiliency.

File this one under WackWackStorageGloryAchievedWindows boys and girls.

I can do it all with Storage Spaces in 2012 R2.

As I was thinking about how to write about Storage Spaces, I decided to make a chart, if only to help me keep it straight. It’s rough but maybe you’ll find it useful as you think about storage abstraction/virtualization tech:

Storage-Compared
And yes, ex post facto dedupe is a made-up term. By me. It’s Latin for “After the fact, dedupe,” because I always scheduled my dedupes for Saturday night, when the IO load on the filer was low. Ex post facto dedupe is in contrast to some newer storage companies that offer inline compression & dedupe; none of the ones above offer that, sadly.

It’s easy to build and supports your disks & controllers: This is a Microsoft product. Which means it’s easy to deploy & build for your average server guy. Mine’s running on a very skinny, re-re-purposed SanDisk Ready Cache SSD. With Windows 2012 R2 server running the Infrastructure Management GUI (no explorer.exe, just Server Manager + your favorite snap-ins), it’s using about 6GB of space on the boot drive.

And drivers for the Intel C226 SATA controller, the LSI 9218si SAS card, and the extra ASMedia 1061 controller were all installed automagically by Windows during the build.

The only other system that came close to being this easy to install -as a server product- was Oracle Solaris 11.2 Beta. It found, installed drivers for, and exposed all controllers & disks, so I was well on my way to going the ZFS route again, but figured I’d give Windows a chance this time around.

Nexenta 4, in contrast, never loaded past the Install Community Edition screen.

It’s improved a lot over 2012: Storage Spaces debuted almost two years ago now, and I remember playing with it at work a bit. I found it to be a mind-f*** as it was a radically different approach to storage within the Windows server context.

I also found it to be slow, dreadfully slow even, and not very survivable. Though it did accept any disk I gave it, it didn’t exactly like it when I removed a USB drive during an extended write test. And it didn’t take the disk back at the conclusion of the test either.

Like everything else in Microsoft’s current generation, Storage Spaces in 2012 R2 is much better, more configurable, easier to monitor, and more tolerant of disk failures.

It also has something for the IOPS speedfreak inside all of us.

Storage Spaces, abstract this away

Tiered Storage Spaces & Adjustable write cache: Coming from ZFS & the Adaptive Replacement Cache, the ZFS Intent Log, the SLOG, and L2ARC, I was kind of hooked on the idea of using massive amounts of my ECC RAM to function as a sort of poor man’s NVRAM.

Windows can’t do that, but with Tiered Storage Spaces, you can at least drop a few SSDs in your array (in my case three x 256GB 840 EVO & one 128GB Samsung 830), mix them into your disk pool, and voila! Fast read-cache, with a Microsoft-flavored MRU/LFU algorithm of some type keeping your hottest data on the fastest disks and your old data on the cheep ‘n deep rotationals.

What’s more, going with Tiered Storage Spaces gives you a modest 1GB write cache, but as I found out, you can increase that up to 10GB.

Which I naturally did while building this guy out. I mean, who wouldn’t want more write cache?

But there’s a huge gotcha buried in the TechNet articles and blog posts I found about this. I wanted to pool all my disks together into as large a single virtual disk as possible, then pack iSCSI-connected .vhdxs, SMB 3 shares, and more inside that single, durable & tiered virtual disk. What I didn’t want was several virtual disks (it helped me to think of virtual disks as a sort of Aggregate) with SMB 3 shares and .vhdx files stored haphazardly between them.

Which is what you get when you adjust the write-cache size. Recall that I have a capacity of about 17TB raw among all my disks. Building a storage pool, then a virtual disk with a 10GB write cache, gave me a tiered virtual disk with a maximum size of about 965GB. More on that below.
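
For reference, that bigger write cache isn’t something you click into existence; as I understand it, it’s a parameter you set when you create the virtual disk (pool/tier names and sizes below are examples):

```powershell
# The 10GB write cache comes from the -WriteCacheSize parameter at virtual-disk creation
# time (pool/tier names & sizes are examples)
$ssdTier = Get-StorageTier -FriendlyName 'SSDTier'
$hddTier = Get-StorageTier -FriendlyName 'HDDTier'

New-VirtualDisk -StoragePoolFriendlyName 'LabPool' -FriendlyName 'Tiered-vDisk01' `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 700GB `
    -ResiliencySettingName Mirror -WriteCacheSize 10GB
```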

It can be wicked fast, but so is RAID 0: Check out my standard SQLIO benchmark routine, which I run against all storage technologies that come my way. The 1.5 hour test is by no means comprehensive -and I’m not saying the IOPS counter is accurate at all (showing max values across all tests by the way)- but I like this test because it lets me kick the tires on my array, take her out for a spin, and see how she handles.

And with a “Simple” layout (no redundancy, probably equivalent to RAID 0), she handles pretty damn well, but even I’m not crazy enough to run tiered storage spaces in a simple layout config:

storage spaces
These three tests (1.5 hours each, identical setup against multiple configs) were done locally on the array, not over my home network

What’s odd is how poorly the array performed with 10GB of “Write Cache.” Not sure what happened here, but as you can see, latency spiked higher during the 10GB write cache write phase of the test than just about every other test segment.

Something to do with parity no doubt.

For my lab & home storage needs, I settled on a Mirror 2-way parity setup that gives me moderate performance with durability in mind, though not as much durability as you might think, as you’ll see below.

Making the most of my lab/home network and my NICs: Recall that I have six GbE NICs on this box. Two are built into the Supermicro board itself (Intel), and the other four come by way of a quad-port Intel I350-T4 server NIC.

Anytime you’re planning to do a Microsoft cluster in the 1GbE world, you need lots of NICs. It’s a bit of a crutch in some respects, especially in iSCSI. Typically you VLAN off each iSCSI NIC for your Hyper-V hosts and those NICs do one thing and one thing only: iSCSI, or Live Migration, or CSV etc. Feels wasteful.

But on my new storage box at home, I can use them for double-duty: iSCSI (or LM/CSV) as well as SMB 3. Yes!

Usually I turn off Client for Microsoft Networks (the SMB file sharing toggle in NIC properties) on each dedicated NIC (or vEthernet), but since I want my file cake & my block cake at the same time, I decided to turn SMB on for all iSCSI vEthernet adapters (from the physical & virtual hosts) and leave SMB enabled on the iSCSI NICs on san.daisettalabs.net as well.
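
The toggle itself is scriptable, which beats clicking through NIC properties on every host and guest; the adapter name below is an example:

```powershell
# 'Client for Microsoft Networks' is the ms_msclient binding; flip it per adapter
# instead of clicking through NIC properties (adapter name is an example)
Get-NetAdapterBinding -ComponentID ms_msclient        # see where SMB is currently bound
Set-NetAdapterBinding -Name 'iSCSI-10' -ComponentID ms_msclient -Enabled $true
```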

The end result? This:

[table caption=”Storage Networking-All of the Above Approach” width=”500″ colwidth=”20|100|50″ colalign=”left|left|center|left|right”]
nic,Name,VLAN,IP,Function
1,MGMT,100,192.168.100.15,MGMT & SMB3
2,CLNT,102,192.168.102.15,Home net & SMB3
3,iSCSI-10,10,172.16.10.x,iSCSI & SMB3
4,iSCSI-11,10,172.16.11.x,iSCSI & SMB3
5,iSCSI-12,10,172.16.12.x,iSCSI & SMB3
[/table]

That’s five, count ’em five NICs (or discrete channels, more specifically) I can use to fully soak in the goodness that is SMB 3 multichannel, with the cost of only a slightly unsettling epistemological question about whether iSCSI NICs are truly iSCSI if they’re doing file storage protocols.

Now SMB 3 is so transparent (on by default) you almost forget that you can configure it, but there’s quite a few ways to adjust file share performance. Aidan Finn argues for constraining SMB 3 to certain NICs, while Jose Barreto details how multichannel works on standalone physical NICs, a pair in a team, and multiple teams of NICs.

I haven’t decided which model to follow (though on san.daisettalabs.net, I’m not going to change anything or use Converged switching…it’s just storage), but SMB 3 is really exciting and it’s great that with Storage Spaces, you can have high performance file & block storage. I’ve hit 420MB/sec on synchronous file copies from san to host and back again. Outstanding!
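
If you do want to go the Aidan Finn route and constrain SMB to certain NICs, the cmdlets below are where I’d start; the first is also the quickest way to confirm multichannel is actually spreading a copy across the NICs (server and interface names are examples):

```powershell
# Confirm multichannel is actually using multiple NICs during a copy
Get-SmbMultichannelConnection

# Optionally pin SMB traffic to specific local interfaces when talking to a given server
# (run on the client; server & interface names are examples)
New-SmbMultichannelConstraint -ServerName 'san' -InterfaceAlias 'iSCSI-10','iSCSI-11'
```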

I Finally got iSNS to work and it’s…meh: One nice thing about san.daisettalabs.net is that that’s all you need to know…the FQDN is now the resident iSCSI Name Server, meaning it’s all I need to set on an MS iSCSI Initiator. It’s a nice feature to have, but probably wasn’t worth the 30 minutes I spent getting it to work (hint: run set-wmiinstance before you run iSNS cmdlets in powershell!) as iSNS isn’t so great when you have…

SMI-S, which is awesome for Virtual Machine Manager fans: SMI-S, you’re thinking, what the hell is that? Well, it’s a standardized framework for communicating block storage information between your storage array and whatever interface you use to manage & deploy resources on your array. Developed by no less an august body than the Storage Networking Industry Association (SNIA), it’s one of those “standards” that seem like a good idea, but you can’t find it much in the wild, as it were. I’ve used SMI-S against a NetApp filer (in the Classic DoT days; not sure if it works against cDoT), but your Nimbles, your Pures, and other new players in the market get the same funny look on their face when you ask them if they support SMI-S.

“Is that a vCenter thing?” they ask.

Sigh.

Microsoft, to its credit, does. Right on Windows Server. It’s a simple feature you install, and two or three PowerShell commands later, you can point Virtual Machine Manager at it and voila! Provision, delete, resize, and classify iSCSI LUNs on your Windows SAN, just like the big boys do (probably) in Azure, only here, we’re totally enjoying the use of our corpulent .vhdx drives, whereas in Azure, for some reason, they’re still stuck on .vhds like rookies. Haha!

Single Pane o’ glass in VMM with SMI-S, GUIDs galore and more for the Hyper-V set

It’s a very stable storage platform for Microsoft Clustering: I’ve built a lot of Microsoft Hyper-V clusters. A lot. More than half a dozen in production, and probably three times that in dev or lab environments, so it’s like second nature to me. Stable storage & networking are not just important factors in Microsoft clusters, they are the only factors.

So how is it building out a Hyper-V cluster atop a Windows SAN? It’s the same, and different at the same time, but, unlike so many other cluster builds, I passed the validation test on the first attempt with green check marks everywhere. And weeks have gone by without a single error in the Failover Clustering snap-in; it’s great.
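
For the curious, once the LUNs and shares are presented, validating and building the cluster is just a couple of cmdlets (node, cluster name and IP are examples):

```powershell
# Validate, then build, the cluster once shared storage is presented
# (node & cluster names and the IP are examples)
Test-Cluster -Node 'node1','node2','node4'
New-Cluster -Name 'hvcluster1' -Node 'node1','node2','node4' -StaticAddress '192.168.100.20'
```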

The Bad

It’s expensive and seemingly not as redundant as other storage tech: When you build your storage pool out of offlined disks, your first choice is going to involve (just like other storage abstraction platforms) disk redundancy. Microsoft makes it simple, but doesn’t really tell you the cost of that redundancy until later in the process.

Recall that I have 17TB of raw storage on san.daisettalabs.net, organized as follows:

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Cheep ‘n deep

Samsung 840 EVO SSD, 3, 256GB, 512byte, 250MB/read, Tiers not fears

Samsung 830 SSD, 1, 128GB, 512byte, 250MB/read, Tiers not fears

HGST 3.5″ Momentus, 6, 2TB, 512byte, 105MB/r/w, Cheep ‘n deep

[/table]

Now, according to my trusty IOPS Excel calculator, if I were to use traditional RAID 5 or RAID 6 on that set of spinners, I’d get about 16.5TB usable in the former, 15TB usable in the latter (assuming RAID penalty of 5 & 6, respectively)

For much of the last year, I’ve been using ZFS & RAIDZ2 on the set of six WD Red 2.5″ drives. Those have a raw capacity of 6TB. In RAIDZ2 (roughly analogous to RAID 6), I recall getting about 4.2TB usable.

All in all, traditional RAID & ZFS’ RAIDZ2 would cost me between about 12% and 35% of my raw capacity, respectively.

So how much does the Windows Storage Spaces resiliency model (two-way mirror) cost me? A lot. We’re in RAID-DP territory here, people:

storagespaces5

Ack! With 17TB of raw storage, I get about 5.7TB usable, a cost of about 66%!
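
If you want to see the damage on your own pool, the virtual disk objects expose both the usable size and the raw footprint the resiliency setting eats; a quick sketch:

```powershell
# Usable capacity vs. what resiliency actually consumes out of the raw pool
Get-VirtualDisk |
    Format-Table FriendlyName, ResiliencySettingName,
        @{ n = "Usable(TB)";  e = { [math]::Round($_.Size / 1TB, 2) } },
        @{ n = "RawCost(TB)"; e = { [math]::Round($_.FootprintOnPool / 1TB, 2) } } -AutoSize
```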

And for that, what kind of resiliency do I get?

I sure as hell can’t pull two disks simultaneously, as I once did live, in production, on my ZFS box. I can suffer the loss of only a single disk. And even then, other Windows bloggers point to some pain as the array tries to adjust.

Now, I’m not the brightest on RAID & parity and such, so perhaps there’s a more resilient, less costly way to use Storage Spaces with Tiering, but wow…this strikes me as a lot of wasted disk.
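
For reference, here’s roughly how a tiered, two-way mirror space like mine gets built in powershell; the friendly names and tier sizes below are illustrative, not a recommendation.

```powershell
# Rough sketch of a tiered, two-way mirror space on Server 2012 R2.
# Friendly names and sizes are illustrative only.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# One SSD tier and one HDD tier, carved out of the same pool
$ssd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

# A two-way mirror that spans both tiers: every TB you ask for here
# costs roughly two TB of raw pool capacity
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV01" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```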

Not as easy to de-abstract the storage: When a disk array is under load, one of my favorite things to do is watch how the IO hits the physical elements in the array. Modern disk arrays make what your disks are doing abstract, almost invisible, but to truly understand how these things work, sometimes you just want the modern equivalent of lun stats.

In ZFS, I loved just letting gstat run, which showed me the load my IO was placing on the ARC, the L2ARC and finally, the disks. Awesome stuff:

In this Gifcam, watch ada0-6 as they struggle under load with the “Always Sync” option enabled.

As best as I can tell, there’s no live powershell equivalent to gstat for Storage Spaces. There are teases though; you can query your disks, get their SMART vitals, and more, but peeling away the onion layers and actually watching how Windows handles your IO would make Storage Spaces the total package.
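
The closest I’ve gotten is a crude loop over the perf counters and the storage reliability counters; a rough sketch, nowhere near gstat, but better than nothing:

```powershell
# Poor man's gstat: refresh per-disk throughput/latency and SMART-ish vitals
# every couple of seconds. Crude, but it's something.
while ($true) {
    Clear-Host

    Get-Counter -Counter "\PhysicalDisk(*)\Disk Transfers/sec",
                         "\PhysicalDisk(*)\Avg. Disk sec/Transfer" |
        Select-Object -ExpandProperty CounterSamples |
        Sort-Object CookedValue -Descending |
        Format-Table InstanceName, Path, CookedValue -AutoSize

    # Temperature, wear and read errors per physical disk (2012 R2 and up)
    Get-PhysicalDisk | Get-StorageReliabilityCounter |
        Format-Table DeviceId, Temperature, Wear, ReadErrorsTotal -AutoSize

    Start-Sleep -Seconds 2
}
```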

Bottom line

So that’s about it: this is the best storage box I’ve built in the Daisetta Lab. No regrets going with Windows. The platform is mature, stable, offers very good performance, and decent resiliency, if at a high disk cost.

I’m so impressed I’ve checked my Windows SAN skepticism at the door and would run this in a production environment at a small/medium business (clustered, in the Scale-Out File Server role). Cost-wise, it’s a bargain. Check out this array: it’s the exact same hardware a certain upstart storage vendor I like (one that rhymes with Gymbal Porridge) sells, but for a lot less!

#StorageGlory achieved. At home. In my garage.

30 Days hands-on with VMTurbo’s OpsMan #VFD3

Stick figure man wants his application to run faster. #WhiteboardGlory courtesy of VM Turbo’s Yuri Rabover

So you may recall that back in March, yours truly, Parent Partition, was invited as a delegate to a Tech Field Day event, specifically Virtualization Field Day #3, put on by the excellent team at Gestalt IT especially for the guys who like V.

And you may recall further that, as I diligently blogged the news and views to you, by day 3 I was getting tired and grumpy. Wear-leveling algorithms intended to prevent failure could no longer cope with all this random Tech Field Day IO, hot spots were beginning to show in the parent partition, and the resource exhaustion section of the Windows event viewer, well, she was blinking red.

And so, into this pity-party I was throwing for myself walked a Russian named Yuri, a Dr. named Schmuel, a product called a “VMTurbo,” and a MacBook that, like all Mac products, wouldn’t play nice with the projector.

You can and should read all about what happened next because 1) VMTurbo is an interesting product and I worked hard on the piece, and 2) it’s one of the most popular posts on my little blog.

Now the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation wasn’t just that it played into my fevered fantasies of being a virtualization economics czar (though it did), or that it promised to bridge the divide via reporting between Infrastructure guys like me and the CFO & corner office finance people (though it can), or that it had lots of cool graphs, sliders, knobs and other GUI candy (though it does).

No, the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation was that they said it would work with that other great Type 1 hypervisor, a Type 1 hypervisor I’m rather fond of: Microsoft’s Hyper-V.

I didn’t even make screenshots for this review, so suffer through the annotated .pngs from VMTurbo’s website and imagine it’s my stack

And so in the last four or five weeks of my employment with Previous Employer (PE), I had the opportunity to test these claims, not in a lab environment, but against the stack I had built, cared for, upgraded, and worried about for four years.

That’s right baby. I put VMTurbo’s economics engine up against my six node Hyper-V cluster in PE’s primary datacenter, a rationalized but aging cluster with two iSCSI storage arrays, a 6509E, and 70+ virtual machines.

Who’s the better engineer? Me, or the Boston appliance designed by a Russian named Yuri and a Dr. named Schmuel? 

Here’s what I found.

The Good

  • Thinking economically isn’t just part of the pitch: VMTurbo’s sales reps, sales engineers and product managers, several of whom I spoke with during the implementation, really believe this stuff. Just about everyone I worked with stood up to my barrage of excited-but-serious questioning and could speak fluently to VMTurbo’s producer/consumer model, this resource-buys-from-that-resource idea, the virtualized datacenter as a market analogy. The company even sends out Adam Smith-themed emails (famous economist…wrote The Wealth of Nations, if you’re not aware). If your infrastructure and budget are similar to what mine were at PE, if you stress over managing virtualization infrastructure, if you fold paper again and again like I did, VMTurbo gets you.
  • Installation of the appliance was easy: The install process was simple: download a zipped .vhd (not .vhdx), either deploy it via VMM template or put the VHD into a CSV and import it, connect it to your VM network, and start it up. The appliance was hassle-free as a VM; it’s running SUSE Linux and quite a bit of Java code from what I could tell, but for you, it’s packaged up into a nice http:// site, and all you have to do is pop in the 30-day license XML key.
  • It was insightful, peering into the stack from top to nearly the bottom and delivering solid APM: After I got the product working, I immediately made the VMTurbo guys help me designate a total of about 10 virtual machines, two executables, the SQL instances supporting those .exes, and more resources as Mission Critical. The applications & the terminal services VMs they run on are pounded 24 hours a day, six days a week by 200-300 users. Telling VMTurbo to adjust its recommendations in light of this application infrastructure wasn’t simple, but it wasn’t very difficult either. That I finally got something to view the stack in this way put a bounce in my step and a feather in my cap in the closing days of my time with PE. With VMTurbo, my former colleagues on the help desk could answer “Why is it slow?!?!” and I think that’s great.
  • Like mom, it points out flaws, records your mistakes and even puts a $$ figure on them, which was embarrassing yet illuminating: I was measured by this appliance and found wanting. VMTurbo, after watching the stack for a good two weeks, surprisingly told me I had overprovisioned a secondary SQL server by two virtual CPUs. It recommended I turn off that SQL box (yes, yes, we in Hyper-V land can’t hot-unplug vCPUs yet; save it, VMware fans!) and subtract the two virtual CPUs. It even said my over-provisioning cost about $1200, though I didn’t have time to figure out how it calculated that. Yikes.
  • It’s agent-less: And the Windows guys reading this just breathed a sigh of relief. But hold your golf clap…there’s color around this from a Hyper-V perspective I’ll get into below. For now, know this: VMTurbo knocked my socks off with its superb grasp & use of WMI. I love Windows Management Instrumentation, but VMTurbo takes WMI to a level I hadn’t thought of, querying the stack frequently, aggregating and massaging the results, and spitting out its models. This thing takes WMI and does real math against the results, math and pivots even an Excel jockey could appreciate. One of the VMTurbo product managers I worked with told me they’d like to use powershell, but powershell queries were still too slow, whereas WMI could be queried rapidly. (There’s a rough sketch of what that kind of query looks like just after this list.)
  • It produces great reports I could never quite build in SCOM: By the end of day two, I had PDFs on CPU, storage & network bandwidth consumption, top consumers, projections, and a good sense of current state vs. desired state. Of course you can automate report creation and deliver via email, etc. In the old days it was hard to get simple reports on CSV space free/space used; VMTurbo needed no special configuration to see how much space was left in a CSV.
    vFeng Shui for your virtual datacenter
  • Integrates with AD: Expected. No surprises.
  • It’s low impact: I gave the VM 3 CPU and 16GB of RAM. The .vhd was about 30 gigabytes. Unlike SCOM, no worries here about the Observer Effect (always loved it when SCOM & its disk-intensive SQL back-end would report high load on a LUN that, you guessed it, was attached to the SCOM VM).
  • A Eureka! style moment: A software developer I showed the product to immediately got the concept. Viewing infrastructure as a supply chain, the heat map showing current state and desired state, these were things immediately familiar to him, and as he builds software products for PE, I considered that good insight. VMTurbo may not be your traditional operations manager, but it can assist you in translating your infrastructure into terms & concepts the business understands intuitively.
  • I was comfortable with its recommendations: During #VFD3, there was some animated discussion around flipping the VMTurbo switch from a “Hey! Virtualization engineer, you should do this,” to a “VMTurbo Optimize Automagically!” mode. But after watching it for a few weeks, after putting the APM together, I watched its recommendations closely. Didn’t flip the switch but it’s there. And that’s cool.
  • You can set it against your employer’s month end schedule: Didn’t catch a lot of how to do this, but you can give VMTurbo context. If it’s the end of the month, maybe you’ll see increased utilization of your finance systems. You can model peaks and troughs in the business cycle and (I think) it will adjust recommendations accordingly ahead of time.
  • Cost: Getting sensitive here but I will say this: it wasn’t outrageous. It hit the budget we had. Cost is by socket. It was a doable figure. Purchase is up to my PE, but I think VMTurbo worked well for PE’s particular infrastructure and circumstances.
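
To give you a feel for the agent-less bullet above (the sketch I promised), this is the flavor of what plain WMI/CIM can pull off a Hyper-V host with nothing installed on it. The queries below are my illustrations, not VMTurbo’s actual ones, and the host name is a placeholder.

```powershell
# Illustrative agent-less queries against a Hyper-V host (not VMTurbo's own).
# "node2" is a placeholder host name.
$hv = "node2"

# Guest inventory straight out of the virtualization namespace
Get-CimInstance -ComputerName $hv -Namespace root\virtualization\v2 `
    -ClassName Msvm_ComputerSystem |
    Select-Object ElementName, EnabledState

# Host CPU pressure from the classic perf classes
Get-CimInstance -ComputerName $hv -ClassName Win32_PerfFormattedData_PerfOS_Processor `
    -Filter "Name='_Total'" |
    Select-Object PercentProcessorTime
```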

The Bad

  • No sugar coating it here, this thing’s built for VMware: All vendors, please take note. If VMware, the nomenclature is “vCPU, vMem, vNIC, Datastore, vMotion.” If Hyper-V, the nomenclature is “VM CPU, VM Mem, VMNic, Cluster Shared Volume (or CSV), Live Migration.” Should be simple enough to change, or to give us 29%ers a toggle. Still works, but it’s annoying to see Datastore everywhere.
  • Interface is all Flash: It’s like Adobe barfed all over the user interface. Mostly hassle-free, but occasionally a change you expected to register on screen took a manual refresh to become visible. Minor complaint.
  • Doesn’t speak SMB 3.0 yet: A conversation with one product engineer more or less took the route it usually takes. “SMB 3? You mean CIFS?” Sigh. But not enough to scuttle the product for Hyper-V shops…yet. If they still don’t know what SMB 3 is in two years…well I do declare I’d be highly offended. For now, if they want to take Hyper-V seriously as their website says they do, VMTurbo should focus some dev efforts on SMB 3 as it’s a transformative file storage tech, a few steps beyond what NFS can do. EMC called it the future of storage!
  • Didn’t talk to my storage: There is visibility down to the platter from an APM perspective, but this wasn’t in scope for the trial we engaged in. Our filer had direct support; our Nimble, as a newer storage platform, did not. So IOPS weren’t part of the APM calculations, though free/used space was.

The Ugly

  • Trusted Installer & taking ownership of reg keys is required: So remember how I said VMTurbo was agent-less, using WMI in an ingenious way to gather its data from VMs and hosts alike? Well, yeah, about that. For Hyper-V and Windows shops who are at all current (2012 or R2, as well as 2008 R2), this means provisioning a service account with sufficient permissions, taking ownership of two reg keys away from Trusted Installer (a very important ‘user’), one under HKLM in CLSID and one further down in WOW64, and assigning full control permissions to the service account on those keys. This was painful for me, no doubt, and I hesitated for a good week. In the end, Trusted Installer still keeps full control, so it’s a benign change, and I think the payoff is worth it. A senior VMTurbo product engineer told me VMTurbo is working with Microsoft to query WMI without making the customer modify the registry, but as of now, this is required. And the Group Policy I built to do this for me didn’t work entirely. On 2008 R2 VMs, you only have to modify the one CLSID key. (A rough sketch of the permissions piece follows.)
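
And since I know someone will ask: once you’ve taken ownership of the key (I did that part in regedit), granting the service account Full Control can at least be scripted. A minimal sketch only; the GUID and account name below are placeholders, not the actual keys VMTurbo needs.

```powershell
# Minimal sketch: grant a service account Full Control on a registry key.
# Assumes ownership has already been taken from Trusted Installer (e.g., in regedit).
# The GUID and the account are placeholders, not VMTurbo's actual requirements.
$keyPath = "HKLM:\SOFTWARE\Classes\CLSID\{00000000-0000-0000-0000-000000000000}"
$account = New-Object System.Security.Principal.NTAccount("DAISETTA\svc-vmturbo")

$acl  = Get-Acl -Path $keyPath
$rule = New-Object System.Security.AccessControl.RegistryAccessRule($account, "FullControl", "ContainerInherit", "None", "Allow")
$acl.AddAccessRule($rule)

# Write the updated ACL back to the key
Set-Acl -Path $keyPath -AclObject $acl
```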

Soup to nuts, I left PE pretty impressed with VMTurbo. I’m not joking when I say it probably could optimize my former virtualized environment better than I could. And it can do it around the clock, unlike me, even when I’m jacked up on 5 Hour Energy or a triple-shot espresso with house music on in the background.

Stepping back and thinking of the concept here, and divesting myself from the pain of install in a Hyper-V context: products like this are the future of IT. VMTurbo is awesome and unique in an on-prem context as it bridges the gap between cost & operations, but it’s also kind of a window into our future as IT pros.

That’s because if your employer is cloud-focused at all, the infrastructure-as-market-economy model is going to be in your future, like it or not. Cloud compute/storage/network, to a large extent, is all about supply, demand, consumption, production and bursting of resources against your OpEx budget.

What’s neat about VMTurbo is not just that it’s going to help you get the most out of the CapEx you spent on your gear, but also that it helps you shift your thinking a bit, away from up/down, latency, and login times to a rationalized economic model you’ll need in the years ahead.