The Game Just Changed, Son

Meet the new focused, nimble & determined Microsoft, now free of Windows shackles, eyes set fiercely on the rainbow colored G in Mountain View:

Microsoft CEO Satya Nadella on Thursday unveiled Office for iPad, a highly anticipated and long overdue version of its bread-and-butter productivity software for Apple’s popular tablet.

The move reinforces Microsoft’s recognition that it must deliver services to both businesses and consumers wherever they are, especially on mobile devices.

The app will be live for free in Apple’s App Store today. You’ll be able to read and present your content that way, but for creating and editing content, you will need an Office 365 subscription.

Today marks the “beginning of exploration for us,” Nadella said in his opening remarks.

Lean and mean Nadella then stared menacingly into the camera, narrowed his gaze and stared right through the glass to Sergey & Larry’s soul, then puffed his chest slightly and in what can only be described as a shouted whisper said, “Nadella out.”

Then he dropped the mic, stormed off the stage and some reporters fainted.

The game just changed, son.

My first thoughts, some probably influenced by or borrowed from the great Mary Jo Foley and/or Paul Thurrott, are:

  • Is this the end of Windows? Microsoft just decoupled its most profitable product, Office, from the product that built Microsoft, Windows. That’s huge and it’s good. Redmond has recovered from Ballmer Fever, an acute condition marked by paralysis and fear, excess nostalgia for the way things were, and grandiose thinking out of sync with reality. Spring has come, rebirth is in the air, and the Ballmer Fever has broken, hallelujah. Office 365 online for cloud + high-quality Office is going to the places where the people are: iPads and iPhones. It’s been a soul-wrenching fight to get here, but Microsoft did it. They gut-checked this out of committee and into the real world. Good on them and good on Nadella for making “cloud & mobile” something more than “devices & services.”
  • Drive is old ‘n tired, O365 is the fresh look: Google Drive hasn’t aged well. Funny to think of it in those terms, but Docs debuted in what, 2007 or 2008? Largely it looks the same. Office 2013 has its UI fans, me among them, and its detractors, but you can’t say it’s remained static and boring. It’s rich. It’s still pretty fresh and, what’s more, the Hotmail/Messenger veneers have finally been wiped away from Microsoft’s O365 and Azure interfaces for good. It used to be you could scratch an Azure or O365 interface like a lottery ticket and find some Hotmail underneath, but no longer! Somehow, someway, Microsoft went and learned great UI & web design skills, and this is coming from an HTML5 fanboi & zealot and one-time disciple of the ChromeOS Religion.
  • This is a really compelling deal: The O365 personal subscription is $70 a year, so affordable you almost want to buy a domain and move it there since you get Office + Outlook just like at your work. Now that deal is even sweeter because you can get Office on an iPad your wife can use. No more Office interpretative art apps for the Household Technologist to troubleshoot just to view the .xlsx file! Hurray!
  • This means less pain at work, perhaps the beginning of the end of Tech Cognitive Dissonance & Shadow IT: That C-level executive in your life who cut his teeth on Lotus 1-2-3 and is now the premier Excel ninja & Power Pivot/SharePoint resource for the company can use his preferred touch device, an iPad, the same device he brute-forced into your stack a few years ago. No more fighting; here’s your iPad Office, sir. Yes sir, I promise, I will never ask you to try a Surface tablet again. Thank you, good day. One less battle!
  • This is Agnostic Computing in theory and practice: We the people want the software and tools and apps and content we like to be available on whatever device we happen to have with us at the moment, whether it’s got a half-eaten fruit on it, a Windows logo, or a green robot. Microsoft just validated that vision.
  • This is confusing for Microsoft shops: My British colleague and friend sitting across from me was excited by this too. He’s a Windows guy like me professionally, but an Apple fan in his private life. Hey, no one is perfect, and I hesitate to share his private afflictions like that, but he approved it. Anyway, upon hearing the Good News, my colleague began pondering an iPad-as-workstation strategy and rushed to the App Store to get Office for his iPhone. Now he’s got The Real Thing installed alongside his OneDrive app, and he opened an Excel doc from our SharePoint site, which hooked somehow into our modest O365 implementation but prompted for his on-prem domain credentials and holy shit, can I get a whiteboard up in here? I’ve lost the thread/Kerberos ticket. Point is, he got Excel on his iPhone.

I’m feeling pretty bullish on Microsoft today even if I fear for my precious Windows & Hyper-V. This is exciting news, a step forward for technology and an acknowledgement that while Microsoft can’t compete everywhere, in some places, it’s still to be feared and it’s still on its feet. 

The fever has broken and the game is back on. Your move Google.

Labworks 1:1-2 : I Heart the ARC & Let’s Pull Some Drives!

Last week on Labworks’ debut, Labworks #1 : Building a Durable & Performance Oriented ZFS Box for Hyper-V & VMware, I discussed & shared a few tips, observations & excellent resources for building out a storage layer for your home IT lab using Sun’s Oracle’s the open source community’s Illumos’ the awesome Zettabyte File System via the excellent NAS4Free crew and FreeBSD.

The post has gotten quite a bit of traffic and I hope it’s been helpful to folks. I intended to do the followup posts soon after that, but boy, have I had a tough week in technology. 

Let’s hop to it, shall we?


When we left Labworks 1, I assigned myself some homework. Here’s an update on each of those tasks, the grade I’d give myself (I went to Catholic school so I’m characteristically harsh) and some notes on the status:

| Next Step | Completed? | Grade | Notes |
|---|---|---|---|
| Find out why my writes suck | Kind of | B- | Replaced switch & deep-dived the ZIL |
| Test NAS4Free’s NFS performance | No | F | One Pink Screen of Death too many |
| Test SMB 3.0 from a VM inside the ZFS box | No | F | Block vs file bakeoff planned |
| Sell some stuff | No | C | Other priorities |
| Rebuild rookie standard switch into distributed | No | F | Can’t build a vSwitch without a VMware host |

I have updates on all of these items, so if you’re curious stick around as they’ll be posted in subsequent Labworks. Suffice it to say, there’s been some infrastructure changes in the Daisetta Lab, and so here’s an updated physical layout with Skull & Crossbones over my VMware host, which I put out of its misery last week.

Lab 1a - Daisetta Labs

In the meantime, I wanted to share some of the benefits of ZFS for your Hyper-V or VMware lab.

1:1 – I Heart the ARC

So I covered some of the benchmark results in Labworks 1, but I wanted to get practical this week. Graphs & benchmarks are great, but how does ZFS storage architecture benefit your virtualization lab?


At least in the case of Hyper-V: through Cluster Shared Volumes and dynamic .vhdxs on iSCSI.

To really show how it works, I had to zero out my ARC, empty the L2ARC, and wipe the read/write counters on each physical volume. And to do that, I had to reboot SAN2. My three virtual machines -a Windows 7 VM, a SQL 2014 VM, and a Virtual Machine Management server- had to be shut down, and, just to do it right and by the book, I placed both the CSV & LUN mapped to Node-1 into maintenance mode.

And then I started the whole thing back up. Here are the results followed by two animated gifs. Remember, the ARC is your system RAM, so watch how it grows as ZFS starts putting VMs into RAM, and how the L2ARC (my SSD drives) fills behind it:

| ARC Size (Cold Boot) | ARC Size after VM Boot | ARC Size +5h | L2ARC Size (Cold Boot) | L2ARC Size after VM Boot | L2ARC Size +5h |
|---|---|---|---|---|---|
| 7MB | 10GB | 14GB | 900KB | 4.59GB | 6.8GB |


So for you in your lab, or if you’re pondering similar tech at work, what’s this mean?

Boot speed of your VM fleet is the easiest to quantify and the greatest to behold, but that’s just for starters.

ZFS’ ARC & L2ARC shaved over 80% off my VM’s boot times and massively reduced load on rotational disks on the second boot.

Awesome stuff:

| Win7 Cold Boot to Login | Highest ZVol %busy | SSD Read/Write Ops | Win7 2nd Boot to Login | Highest ZVol %busy | SSD Read/Write Ops |
|---|---|---|---|---|---|
| 121s | 103% | 8/44 | 19.9s | 13% | 4/100k |


The gains here are enormous and hint at the reasons why SSD & caching are so attractive. Done right, the effect is multiplicative in nature; you’re not just adding IOPS when you add an SSD, you’re multiplying storage performance by several orders of magnitude in certain scenarios. And VM boot times are such a scenario where the effect is very dramatic:


| % Improvement in Boot Time | ZVol %Busy Decrease | ARC Growth | L2ARC Growth |
|---|---|---|---|
| 84% | -87% | 43% | 410% |
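Those headline percentages fall straight out of the raw measurements above; here’s the arithmetic as a quick sanity check:

```python
# Raw numbers from the boot-time table above
cold_boot_s, warm_boot_s = 121, 19.9   # Win7 cold vs 2nd boot, seconds
busy_cold, busy_warm = 103, 13         # highest ZVol %busy, cold vs warm

# Percentage improvement once the ARC/L2ARC are populated
boot_improvement = (cold_boot_s - warm_boot_s) / cold_boot_s * 100  # ~84%
busy_decrease = (busy_cold - busy_warm) / busy_cold * 100           # ~87%
```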


This is great news if you’re building lab storage because, as I said in Labworks 1, if you’re going to have to use an entire physical box for storage, best to use every last bit of that box for storage, including RAM. ZFS is the only non-commercial system I know of to give you that ability, and though the investment is steep, the payoff is impressive. 

Now at work, imagine you have a fleet of 50 virtual machines, or 100 or more, and you have to boot or reboot them on a Saturday during your maintenance window. Without some sort of caching mechanism, be it a ZFS ARC & its MRU/MFU algorithms, or some of the newer stuff we saw at #VFD3 including Coho’s system & Atlantis’ ILIO USX, you’re screwed.

Kiss your Saturday goodbye because on old rotational arrays, you’re going to have to stagger your boots, spread it over two Saturdays, or suffer the logarithmic curve of filer entropy & death as more IO begets more IO delay in a vicious cycle of decay that will result in you banging your fists bloody on the filer, begging the storage gods for mercy and relief.

Oh man that was a painful Saturday four years ago.
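The ARC’s MRU/MFU behavior is easier to grok with a toy model. This is emphatically not the real ZFS ARC algorithm (which also keeps ghost lists and adaptively resizes the two halves), just a sketch of the recency/frequency split:

```python
from collections import OrderedDict

class ToyARC:
    """Toy sketch of the ARC's MRU/MFU split. Not the real algorithm."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.mru = OrderedDict()  # blocks seen once, recently
        self.mfu = OrderedDict()  # blocks seen more than once

    def get(self, block):
        if block in self.mfu:
            self.mfu.move_to_end(block)   # frequent hit: refresh its spot
            return True
        if block in self.mru:
            del self.mru[block]           # second hit: promote to MFU
            self.mfu[block] = True
            return True
        self._insert(block)               # miss: read from disk, cache it
        return False

    def _insert(self, block):
        if len(self.mru) + len(self.mfu) >= self.capacity:
            victim = self.mru if self.mru else self.mfu
            victim.popitem(last=False)    # evict the coldest entry
        self.mru[block] = True
```

A cold boot is all misses (disk reads); the second boot hits the MFU list, which is why the warm-boot numbers above look so good.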

I wish I could break down these results even further: what percentage of that 19s boot time is due to my .vhdx being stored in SAN2’s ARC, and what percentage is due, if any, to ZFS compression on the volume, or by the CPU on the IO ‘stream’ itself, since I’ve got that particular box ticked on CSV1 as well?

That’s important to understand for lab work or your real job because SSD & caching are only half of the reason why the stodgy storage sector has been turned on its head. Step back and survey the new players vs the old, and I think you’ll find that many of the new players are reading & writing data to/from their arrays in more intelligent (or risky, depending on your perspective) ways, by leveraging the CPU to compress inbound IO, or de-duping on the front-end rather than on the back-end or, in the case of a Coho, just handing over the switch & Layer 2 to the array itself in wild yet amazing & extensible ways.

My humble NAS4Free box isn’t near those levels of sophistication, yet I don’t think it’s improper to draw an academic family-tree-style dotted line between my ZFS lab storage & some of the great new storage products on the market that are using sophisticated caching algorithms & compression/processing to deliver high-performance storage downmarket, so downmarket that I’ve got fast storage in my garage!

Perhaps a future Labworks will explore compression vs caching, but for now, let’s take a look at what ZFS is doing during the cold & warm boots of my VMs.

Single Pane O’GifGlass animated shot of the cold boot (truncated):

In the putty window, ada0-5 are HDD, ada6&7 are SSD, and ada8 is boot. GStat de-abstracts ZFS & shows you what your disks are doing. Check out how ZFS alternates writes to the two SSDs. Neat stuff.

And the near #StorageGlory Gifcam shot of the entire 19s 2nd boot cycle after my ARC & L2ARC are sufficiently populated:

80% decrease in boot times thanks to the ARC & L2ARC. Now ZFS has some idea of what my most frequently used & most recently used data is, and that algorithm will populate the ARC & L2ARC.

Of course, how often are we rebooting VMs anyway? Fair point.

One could argue the results above, while interesting, have limited applicability in a lab, a small enterprise or even a large one, but consider this: if you deliver applications via session virtualization technologies -XenApp or RDS come to mind- on top of a hypervisor (virtualization within virtualization for the win!), then ZFS and other caching systems will likely ease your pain and get your users to their application faster than you ever could achieve with rotational storage alone. So in my book, it’s something you should master and understand.

Durability Testing

So all this is great. ZFS performs very well for Hyper-V, the ARC/L2ARC paradigm works, and it’s all rather convincing isn’t it? I’ll save some thoughts on writes for a subsequent Labworks, but so far, things are looking up.

Of course you can’t be in IT and not worry about the durability & integrity of your data. As storage guys say, all else is plumbing; when it comes to data and storage, an array has to guarantee integrity.

This is probably the most enjoyable test of all IT testing regimes, if only because it’s so physical, so dramatic, so violent, and so rare. I’m talking about drive pulls & storage failure simulations, the kind of test you only get to do when you’re engaging in a PoC at work, and then, perhaps for SMB guys like me, only once every few years.

As I put it back in January when I was testing a Nimble array at work, “Wreck that array.”

At home of course I can’t afford true n+1 everywhere, let alone waste disks on something approaching the reliability of RAID-DP, but I can at least test RAIDZ2, ZFS’ equivalent to RAID 6.
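RAIDZ2’s math is simple enough to sketch: like RAID 6, roughly two disks’ worth of space goes to parity, and any two drives can die without data loss. A back-of-envelope helper (this ignores ZFS padding and metadata overhead, so real usable space runs a bit lower):

```python
def raidz2_usable_tb(disks, size_tb):
    """Approximate usable capacity of a RAIDZ2 vdev.

    Two disks' worth of space is consumed by parity, which is what
    buys you tolerance of any two simultaneous drive failures.
    """
    assert disks >= 4, "RAIDZ2 wants at least 4 drives"
    return (disks - 2) * size_tb

# e.g. six 1TB drives: ~4TB usable, 2TB burned on parity
six_drive_pool = raidz2_usable_tb(6, 1)
```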

Drive Pull test below. Will my CSVs stay online? Click play.

More Labworks results tomorrow!

Storage Networking excellence in an easy-to-digest .mp3

Can’t recommend the latest Packet Pushers Podcast enough. I mean normally, Packet Pushers (Where too Much Networking would Never be Enough) is great, but their Storage Networking episode this week was excellent.

Whether you’re a small-to-medium enterprise with a limited budget whose only dream is getting your jumbo frames to work end-to-end on your 1GigE (“a 10 year old design,” one of the panelists snarked), or you’re a die-hard Fibre Channel guy and will be until you die, the episode has something for you.

Rock star line-up too: Chris Wahl, Greg Ferro and J. Metz, a Cisco PhD.

The only guy missing is Andrew Warfield of Coho Data, who blew my mind and achieved Philosopher King of Storage status during his awesome whiteboarding session at #VFD3.

Check it out.

Branch office surveillance in a box: Ubiquiti AirCam, Ubuntu Linux & Hyper-V

I pause today from migrating .vhdxs to this:



and stressing this:


to deploy six of these to a new small branch office:


In my industry, we’re constantly standing up new branch offices, tearing down old ones, and so sometimes I have to take off the virtualization/storage guy hat and put on the project management, facilities & security hat, something I admit to enjoying.

And one of my focuses in this area is on rapid deployment of branch offices. I want to be able to deploy a branch office from an IT, security & infrastructure perspective as quickly as overnight and at or below budgeted cost. Tearing down branch offices is more leisurely, but building new ones? I aim for excellence; I want to be the Amazon Prime of branch office rollouts.

Lack of 802.3af PoE standard makes standards guy cry, but for the price, I’ll tolerate and use a dongle

So I’ve tried to templatize the branch office stack as much as possible. Ideally, I’d have a hardened, secure rolling 19″ 12 or 16u rack, complete with an 8- or 16-port switch (SG300 maybe?), patch panel, a Dell R210 II server with 16GB RAM and 1 terabyte in RAID 1 as a Hyper-V host, a short-depth but sufficient-capacity UPS, and a router of some type: it should have 4G LTE & 1000Base-T as WAN-connectivity options, VPN ability (to connect to our MPLS) and, IPv6 dreams aside for now, NAT ability, and, of course, the one thing that will never become virtualized or software-defined: a workgroup printer.

Give me that in a rolling rack, and I can drop-ship it anywhere in CONUS overnight. Boom, Instant Branch Office in a Box (structured cabling comes later).

But one of the things that’s gotten in the way of this dream (besides getting the $ spend ahead of time, which is also a big issue) has been provisioning camera security. We need to watch our valuables, but how?

Weather resistant I suppose though I’ve read the little door that covers this hatch can let moisture in

Usually that means contracting with some slow-moving local security company, going through a lengthy scoping process, choosing between cheap CCTV & DVR vs IP cameras & DVR, then going through a separate structured cabling process, and finally, validating. Major pain, and it can get pricey very quickly: the last office I built required six 720p non-IR cameras + IP DVR + mobile access to camera feeds. Price: $10k, 1.5x the cost of all the equipment I purchased for the 12u rolling rack!

Meanwhile, you’ve got the business’ stakeholders wondering why it’s all so complicated. At home, they can connect a $100 720p IP camera up to their wifi, and stream video of their son/dog/whatever to their iPhone while they’re out and about, all without hiring anyone. I know it’s not as hardened or reliable as a real security camera system, but in IT, if you’re explaining, you’re losing.

And they do have a point.

This is a space begging for some good old-fashioned disruption, especially if you don’t want the monthly OpEx for the security company & your goal is only to get adequate surveillance (two big Ifs, I recognize).

Enter Ubiquiti Networks, an unusual but interesting wireless company that targets enterprise, carrier and prosumers with some neat solutions (60GHz point-to-point wifi for the win!). After selling the boss on the vision & showing him the security company quote, I was able to get approval for six Ubiquiti Networks AirVision cameras and a dome camera, all for about $850 off Amazon, via the magical procurement powers of the corporate credit card.

The potential for my pet Branch Office in a Box project is huge and the cost was low. Here’s the vision:

  • Structured cabling contractor can now hang my cameras and run Cat 5e to them, especially since I’m familiar with the aperture & focal length characteristics of the cameras and can estimate locations without being on site.
  • DVR unit is an Ubuntu virtual machine in Hyper-V 3, recording to local storage which is backed up off-site via normal processes (it’s just a *.vhdx, after all). That alone is huge; it’s been very painful to off-site footage from proprietary DVR systems.
  • Reserve IPs for cameras prior to deployment via MAC address and normal process
  • Simple affair to secure the Linux appliance via HTTPS/SSH, NAT it out to the internet, then send out a URL for the App Store & Play Store Ubiquiti-compatible camera apps, of which there seem to be several.

Fantastic. I mean all that’s missing from making BiB into something stupid-proof and ready today is fire & alarm systems (yes, I’ve looked at NEST but regulations made me run for traditional vendors).

Demerits on this package so far:

  • Feels a bit cheap but not complaining too much. However it won’t survive an attack

    The cameras feel a little cheap. They offer minimal weather-resistance but the plastic casing feels like it was recycled from a 1995 CRT monitor: this thing’s going to turn yellow & brittle

  • No vandal-resistance. Maybe I missed the SKU for that add-on. May need to improvise here; these won’t survive a single lucky strike from a hoodlum and his Louisville Slugger
  • Passive PoE: So much for standards, right? These cameras, sadly, require passive PoE dongle-injectors. And no, 802.3af active PoE, the kind in your switch, won’t work. You need a dongle-injector.

Other than that, color me impressed.

Out of the box, the cameras are set for DHCP, but if you reserve the MAC on your DHCP server, you can neatly provision them in your chosen range without going through the usual pain.
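For the curious, here’s a hypothetical sketch of that provisioning step. The camera names, MACs and the 10.0.50.0/24 range are all invented, and the output uses dnsmasq’s static-reservation syntax; a Windows DHCP server accomplishes the same thing through its own console:

```python
# Invented camera inventory: (hostname, MAC address)
cameras = [
    ("cam-lobby", "04:18:d6:00:00:01"),
    ("cam-dock",  "04:18:d6:00:00:02"),
]

def reservations(cams, subnet_prefix="10.0.50.", first_host=10):
    """Emit dnsmasq-style static DHCP reservations for each camera,
    handing out sequential addresses in a chosen (here, invented) range."""
    lines = []
    for i, (name, mac) in enumerate(cams):
        ip = subnet_prefix + str(first_host + i)
        lines.append(f"dhcp-host={mac},{ip},{name}")
    return lines
```

Reserve before you unbox, and each camera lands on its planned IP the moment it powers up.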

Building the Ubuntu virtual machine -our DIY IP cam DVR system- on the Hyper-V host couldn’t be simpler. I followed Willie Howe’s steps here and recorded a few Gifcam shots to show you how easy it was.

As far as the management interface and DVR system: well I’ll say it feels much more integrated, thoughtful and enterprise-level than any of the small IP DVR systems I’ve struggled with at branch offices to date.


The big question is on performance, reliability, and sensitivity of recording when there’s movement in the zones I need monitored. And whether the stakeholder at the remote office will like the app.

But so far, I have to say: I’m impressed. I just did in 90 minutes what would have taken a local security company contractor about 2 weeks to do at a cost about 90% less than they wanted from me.

That’s good value even if these cheap $99 cameras don’t last for more than a year or two.
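A quick check on that savings claim using the figures from earlier in the post ($850 in cameras vs the roughly $10k security-company quote):

```python
# Figures from earlier in the post
camera_spend = 850      # six cameras + dome, off Amazon
vendor_quote = 10_000   # the last security-company install

savings_pct = (vendor_quote - camera_spend) / vendor_quote * 100  # ~91.5%
```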

AirVision HTTPS interface allows you to post a floorplan, schedule and manage cameras, and set recording settings.


#VFD3 Day 3 : VMTurbo wants to vStimulate your vEconomy

Going into the last day of #VFD3, I was a bit cranky, missing my Child Partition and, strict though her protocols be, the Supervisor Module spouse back home.

But #TFD delegates don’t get to feel sorry for themselves so I put on my big boy pants, and turned my thoughts to VMTurbo.

“What the hell is a VM Turbo,” I asked myself as I walked into the meeting room and took a good hard look at the VMTurbo guys.

Two of them were older gentlemen, one was about my age. The two older guys had Euro-something accents; one was obviously Russian. I made with the introductions and got a business card from one of them. It listed an address in New Jersey or New York or something. Somewhere gritty, industrial and old no doubt, the inverse of Silicon Valley.

As I moved to my seat, there was some drama between the VMTurbo presenters and Gestalt’s Tom Hollingsworth who, even though he’s a CCIE, seems to really enjoy playing SVGA Resolution Cop. “The online viewers aren’t going to be able to see your demo at 1152×800,” Tom said. “You need to go 1024×768 to the projector” he nagged.

“But but….” the VMTurbo guys responded, fiddling with some setting which caused the projected OS X desktop to disappear and be replaced by a projector test pattern. At that point everyone’s attention shifted, once again, to the stupid presentation MacBook and its malfunctioning DisplayPort-VGA converter dongle*

Sitting now, I sighed: So this was how #VFD3 was going to end: a couple of European sales guys had flown out from the east coast to California, probably hoping to catch some sunshine after pitching a vmturbo to us, but now Deputy SVGA and the MacBook’s stupid dongle were getting in their way.

Great! I thought as I sat down.

And then I was blown away for the next 2.5 hours.

VMTurbo isn’t a bolt-on forced-induction airflow device for your 2u host; rather, it’s a company co-founded four years ago by some brainy former EMC scientists, both of whom were now standing before me. The Russian guy, Yuri Rabover, has a PhD in operating systems, and the CEO, Shmuel Kliger, has a PhD in something sufficiently impressive.

The product they were pitching is called Operations Manager (yes, yet another OpsMan), which is unfortunate because the generic name doesn’t help this interesting product stand out from the pack. This is operations management, true, but with an economics engine, real-life reporting on costs/benefits and an opportunity-cost framework that seems pretty ingenious to me.

Yeah, I said economics. As in animal spirits and John Maynard Keynes, The Road to Serfdom & Friedrich Hayek, Karl Marx & Das Kapital vs Milton Friedman, ‘Merica and childhood lemonade stands… that kind of economics.

Command your vEconomy with OpsMan. Obama wishes he had these sliders & dials for the economy

And I’m not exaggerating here; they opened the meeting talking about economics! They told us to step back and imagine our vertical stack at every stage -from LUN to switch to hypervisor to CPU to user-facing app- as a marketplace in which resources are bought, sold, and consumed, in which there is limited supply & unpredictable demand, in which certain resources are scarce & therefore valuable while others are plentiful and therefore cheap. There’s even a consumers/producers slider; your “maker” filer produces 30,000 IOPS, but your “taker” users are consuming 25,000. Abuse & overuse of the modern Type 1 hypervisor is akin to over-grazing the commons with your sheep, a Tragedy of the vCommons, if you will.
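The maker/taker slider is just a utilization calculation at heart; a toy sketch using the filer numbers above:

```python
def headroom(produced_iops, consumed_iops):
    """Toy take on the maker/taker slider: how much of the market's
    supply is already spoken for, and what's left to sell?"""
    utilization = consumed_iops / produced_iops
    spare = produced_iops - consumed_iops
    return utilization, spare

# The filer "makes" 30,000 IOPS; the users "take" 25,000
util, spare = headroom(30_000, 25_000)  # ~83% utilized, 5,000 IOPS spare
```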

Someone stop me, I’m having too much fun.

You get the idea: the whole virtualization construct is a market economy.

I can’t speak for the other delegates, but I was enraptured partly because it seemed to validate my post on Pareto efficiency & Datacenter spending as right-tracked & thoughtful rather than the ravings of a crazy man and partly because it’s a framework I’ve gotten used to operating under my entire IT career.

But that’s just me nerding out and getting some feel-good confirmation bias. Is this even a useful & practical way to think of your IT resources? Is VMTurbo’s premise solid or crazy?

I’d argue it’s a pretty solid premise & a good way to frame your virtualization infrastructure. We already do it in IT: I bet you ‘budget’ IOPS, bandwidth, CPU & memory to meet herky-jerky demand, amid expectations for the availability and performance of those resources. That’s kind of our job, especially in the SMB space, where companies buy the server, storage, and network guy for the price of one systems guy, right?

So what does VMTurbo’s OpsMan actually do? Some pretty cool things.


VMTurbo’s Yuri Rabover whiteboards my pain : between the user & my spindles, wherein lies the problem?

OpsMan allows you to put a high value/mission critical designation on a user-facing application. And with that information, the economic engine takes over and with OpsMan’s visibility all the way from your user-facing VM to your old 7200 rpm spinning platters, it’s going to central-plan the animal spirits out of your stack and spit out some recommendations to you, which you can then, in the words of one of the VMTurbo guys, “Hit the recommendations button and this red thing goes to green.”**

Ahh, who doesn’t like green icons indicating health and balance? Of course, achieving that isn’t very hard; just give every VM as many resources as you can and you’ll get green. The nag emails will go away, and all will be well.

For awhile anyway.

Any rookie button pusher can give VMs what they want and make things green (answer: always more), but it takes wisdom and discernment to give VMs what they actually need to accomplish their task yet avoid the cardinal sin of over-provisioning which leads inevitably to a Ponzi scheme-style collapse of your entire infrastructure. 

Therein lies the rub and VMTurbo says it can put some realistic $$ and statistics around that decision, not only dissuading you from over-provisioning a VM, but giving you an opportunity cost for those extra resources you’re about to assign.
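Here’s a toy sketch of that opportunity-cost idea; every number, name and threshold below is invented, purely to illustrate the framing:

```python
def provision_verdict(request_gb, pool_free_gb, critical_reserve_gb):
    """Toy version of the opportunity-cost pitch (all numbers invented):
    granting RAM to one VM spends headroom another workload may need.
    Returns a verdict and the headroom left after the grant."""
    remaining = pool_free_gb - request_gb
    if remaining < critical_reserve_gb:
        return "deny", remaining
    return "grant", remaining

# A 32GB ask against 40GB free, with 16GB reserved for critical workloads:
# granting it would leave only 8GB, under the reserve, so the ask is denied.
verdict, left = provision_verdict(32, 40, 16)
```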

Is that a Windows Hyper-V logo alongside VMware, KVM & Xen? I’m ever-so pleased. Go onnnnn with your presentation: “Recently, Microsoft’s Hyper-V (and its surrounding ecosystem) has reached a level of technical maturity that has more enterprises considering the increased diversity deployment,” VMTurbo says in a blog.

Sure it’s tempting to add 10vCPU & 32GB of RAM to that critical SQL cluster, especially if it gets the accounting department off your back. But VMTurbo’s OpsMan sees the stack from top to bottom and it can caution you that adding those resources to SQL will degrade the performance of your XenApp PoS farm, for instance, or it might suggest you add some disk to another stack.

Neat stuff.

VMTurbo, essentially, says it can do your job (especially in the SMB space) better than you can, freeing you up from firefighting & panicked SSH sessions checking the filer’s load for more important things, like DevOps or script-writing or whatever you fancy, I suppose.

And when it’s done having its way with your stack, when the output from your vEconomy is vastly larger than the input, when you’ve arrived at #VirtualizationGlory and there’s no more left to give, the VMTurbo guys say OpsMan can give you a report the CFO can read and understand, a report that says, “Here thar be performance, but no farther without CapEx, yarr.”

Am I using what I’ve got in the most efficient way possible? Welcome to the black arts of virtualization.

For me in Hyper-V & SMB land, where SCOM is a cruel, unpredictable & costly mistress and the consultant spend is meager, VMTurbo feels like a solid & well-thought-out product. For VMware and OpenStack shops? I’m unclear if vCOPS does something similar to OpsMan, but man, did VMTurbo’s presentation get my VMware colleagues talking. Highly recommend you set aside an hour and watch the discussion on Tech Field Day’s YouTube channel.

All in all not a bad way to end #VFD3: getting challenged to think of virtualized systems as economies by some very sharp engineers from Europe, one of whom learned operating system design in Soviet Russia, where the virtualization sequence is decidedly v2p, but the thinking, understanding, and perhaps execution are first rate. I’ll know more on the execution side when I put this in my lab.

I hereby dub the VMTurbo guys Philosopher Kings of Virtualization for taking a unique and thoughtful approach.

Ping them if: You operate in environments where IT spend is limited, or you want to get the most out of your hardware

Set Outlook reminder for when: No need to wait. With solid VMware support, some flattering & nice words but also real products for Hyper-V, KVM/Xen support and OpenStack soon, VMTurbo pleases

Send to /dev/null if: You have virtually unlimited hardware capex to spend and play with. If so, congrats. 

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Eric Shanks

Andrea Mauro

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 


* Sidenote: Still floored that one can’t escape dongle technology dysfunction even in Silicon Valley

**You can even automate VMTurbo’s recommendations, which the company says a great many of its customers do, a remark that caused a bit of discomfort among the #VFD3 crew. 

Wargaming a mass Storage Live Migration with a 6509e, part 1

Storage Live Migration is something we Hyper-V guys only got in Server 2012, and it was one of the features I wanted most after watching jealously as the VMware guys storage vMotioned .vmdks around since George Bush was in office (or was it Clinton?).

I use Live Migration all the time during maintenance cycles and such, but pushing .vhdx hard drives around is more of a rare event for me.

Until now. See, I’ve got a new, moderately-performing array, a Nimble CS260 + an EL-125 add-on SAS shelf. It’s the same Nimble I abused in my popular January bakeoff blog post, and I’m thrilled to finally have some decent hybrid storage up in my datacenter.

Big Iron Switching baby

However before I can push the button or press enter at the end of a cmdlet and begin the .vhdx parade from the land of slow to the promised land of speed, I’ve gotta worry about my switch.

You see, I’ve got another dinosaur in the rack just below the Nimble: a Cisco 6509e with three awful desktop-class blades, two Sup-720 modules running layer 3 with HSRP, and then the crown jewels of the fleet: two WS-X6748-GE-TX blades, where all my Hyper-V hosts & two SAN arrays are plugged in, each blade offering two port-groups with 20Gb/s of fabric capacity apiece.

Ahhh, the 6509: love it or hate it, you can’t help but respect it. I love it because it’s everything the fancy-pants switches of today are not: huge, heavy, with shitty cable management, extremely expensive to maintain (TAC…why do you hurt me even as I love you?), and hungry for your datacenter’s amperage.

I mean look at this thing, it gobbles amps like my filer gobbles spindles for RAID-DP:

show power cisco

325 watts per 6748, or just about 12 watts less than my entire home lab consumes with four PCs, two switches, and a pfSense box. The X6748s are like a big block V8 belching out smoke & dripping oil in an age of Teslas & Priuses: just getting these blades into the chassis forced me to buy a 220v circuit, and achieving PSU redundancy required heavy & loud 3,000 watt supplies.

The efficiency nerd in me despises this switch for its cost & its Rush Limbaugh attitude toward the environment, yet I love it because even though it’s seven or eight years old, it’s only just now (perhaps) hitting the downward slope on my cost/performance bell curve. Even with those spendy power supplies and with increasing TAC costs, it still gives me enough performance thanks to this design & Hyper-V’s converged switching model:

Errr sorry for the colors. The Visio looks much better but I had to create a diagram that even networking people could understand

Now Mike Laverick, all-star VMware blogger & employee, has had a great series of posts lately on switching and virtualization. I suggest you download them to your brain stat if you’re a VMware shop; especially the ones on enabling NetFlow on your vSwitch & installing the Scrutinizer vApp, the new distributed switch features offered in ESXi 5.5, and migrating from standard to distributed switches. Great stuff there.

But if you’re at all interested in Hyper-V and/or haven’t gone to 10/40Gig yet and want to wring some more out of your old 5e patch cables, Hyper-V’s converged switching model is a damned fine option. Essentially a Hyper-V converged switch is a L2/L3 virtual switch fabricated on top of a Microsoft multiplexor driver that does GigE NIC teaming on the parent partition of your Hyper-V host.

This is something of a cross between a physical and logical diagram and it’s a bit silly and cartoonish, but a fair representation of the setup:

converged fabric
The red highlight is where the magic happens

So this is the setup I’ve adopted in all my Hyper-V instances. It’s the setup that changed the way we do things in Hyper-V 3.0, the setup that allows you to add/subtract physical NIC adapters or shut down Cisco interfaces on the fly, without any effect on the vNICs on the host or in the guests. It’s one of the chief drivers of my continuing love affair with LACP, but you don’t need an LACP-capable switch to use this model; that’s what’s great about the multiplexor driver.

It’s awesome and durable and scalable and, oh yeah, you can make it run like a Supercharged V-6. This setup is tunable!

Distributed Switching & Enhanced LACP got nothing on converged Hyper-V switching, and that is all the smack I shall talk.

Now sharp readers will notice two things: #1, I’ve oversubscribed the 6748 blades (the white spaces on the switch diagram are standard virtual switches and iSCSI HBAs for hosts/guests; these switches function just like the unsexy Standard switch in ESXi), and #2, just because you team them doesn’t mean you can magically turn eight 1GbE ports into a single 8Gb/s interface.

Correct on both counts, which is why I have to at least give the beastly old 6509 some consideration. It’s only got 20Gb/s of fabric bandwidth per 24 port port-group. Best to test before I move my .vhdxs.
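To put a number on that worry, the oversubscription math is simple. A quick sketch (port counts and the 20Gb/s-per-port-group figure are from my own config above):

```python
# Rough oversubscription math for a WS-X6748-GE-TX port-group.
# Each 6748 exposes two port-groups of 24 x 1GbE ports, with each
# port-group sharing 20Gb/s of fabric capacity.

ports_per_group = 24          # 1GbE ports in one port-group
port_speed_gbps = 1           # line rate per port
fabric_gbps = 20              # shared fabric per port-group

line_rate_total = ports_per_group * port_speed_gbps   # 24 Gb/s offered load
oversub_ratio = line_rate_total / fabric_gbps         # 1.2:1

print(f"Offered load at line rate: {line_rate_total} Gb/s")
print(f"Fabric capacity:           {fabric_gbps} Gb/s")
print(f"Oversubscription ratio:    {oversub_ratio:.1f}:1")
```

So a fully lit port-group is only 1.2:1 oversubscribed; the fabric itself isn’t the bottleneck unless most of those 24 ports talk at once, which is exactly what a mass Storage Live Migration threatens to do.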

In part 2, I’ll show you some of those tests in detail. In the meantime, here are some of my NetFlow results from tests I’m running ahead of the moves this weekend.

Hitting nearly 2gb/s on each of the Nimble iSCSI vEthernets



What? These 6748s have been holding out on me; they still have 80% of their fabric to give. So give it. I will not settle for 20%; I want at least 50% utilization to make the moves fast and smooth. How do I get there?


My NetFlow’s not as sexy as Scrutinizer, but the spikey thing on the right shows one of my move/stress tests went way past the 95th percentile. More fun this weekend!



Labworks #1: Building a durable, performance-oriented ZFS box for Hyper-V, VMware

Welcome to my first Labworks post in which I test, build & validate a ZFS storage solution for my home Hyper-V & VMware lab.

Be sure to check out the followup lab posts on this same topic in the table below!


Labworks Chapter | Section | Subject | Title & URL
Labworks 1 | 1 | Storage | Building a Durable and Performance-Oriented ZFS Box for Hyper-V & VMware
Labworks 1 | 2-3 | Storage | I Heart the ARC & Let’s Pull Some Drives!



Primary Goal: To build a durable and performance-oriented storage array using Sun’s fantastic, 128-bit, high-integrity Zettabyte File System, for use with lab Hyper-V CSVs & Windows clusters, VMware ESXi 5.5, and other hypervisors.


The ARC: My RAM makes your SSD look like a couple of old, wheezing 15k drives

Secondary Goal: Leverage consumer-grade SSDs to increase/multiply performance by using them as ZFS Intent Log (ZIL) write-cache and L2ARC read cache

Bonus: The Windows 7 PC in the living room running Windows Media Center with CableCARD & HD HomeRun was running out of DVR disk space; it can’t record to SMB shares, but it can record to iSCSI LUNs.

Technologies used: iSCSI, MPIO, LACP, Jumbo Frames, IOMETER, SQLIO, ATTO, Robocopy, CrystalDiskMark, FreeBSD, NAS4Free, Windows Server 2012 R2, Hyper-V 3.0, Converged switch, VMware, standard switch, Cisco SG300


Click for larger.

Hardware Notes:
System | Motherboard | Class | CPU | RAM | NIC | Hypervisor
Node-1 | Asus Z87-K | Consumer | Haswell i5 | 24GB | 2x1GbE Intel I350 | Hyper-V
Node-2 | Biostar HZZMU3 | Consumer | Ivy Bridge i7 | 24GB | 2x1GbE Broadcom BCM5709C | Hyper-V
Node-3 | MSI 760GM-P23 | Consumer | AMD FX-6300 | 16GB | 2x1GbE Intel I350 | ESXi 5.5
san2 | Gigabyte GA-F2A88XM-D3H | Consumer | AMD A8-5500 | 24GB | 4x1GbE Broadcom BCM5709C | NAS4Free
sw01 | Cisco SG300-10 | Small Business | n/a | n/a | 10x1GbE | n/a

Array Setup:

I picked the Gigabyte board above because it’s got an outstanding eight SATA 6Gb/s ports, all running on the native AMD A88X Bolton-D4 chipset, which, it turns out, isn’t supported well in Illumos (see Lab Notes below).

I added to that a cheap $20 Marvell 88SE9128 two-port SATA 6Gb/s PCIe card, which hosts the boot volume & the SanDisk SSD.


Disk Type | Quantity | Size | Format | Speed | Function
WD Red 2.5″ with NASWARE | 6 | 1TB | 4KB AF | SATA 3, 5400RPM | Zpool members
Samsung 840 EVO SSD | 1 | 128GB | 512 byte | 250MB/s read | L2ARC read cache
SanDisk Ultra Plus II SSD | 1 | 128GB | 512 byte | 250MB/s read & write? | ZIL
Seagate 2.5″ Momentus | 1 | 500GB | 512 byte | 80MB/s r/w | Boot/swap/system


Performance Tests:

I’m not finished with all the benchmarking, which is notoriously difficult to get right, but here’s a taste. Expect a followup soon.

All shots below involved lzp2 compression on SAN2

SQLIO Short Test: 

sqlio lab 1 short test
Obviously seeing the benefit of ZFS compression & the ARC at the front end. IOPS become more realistic toward the middle and right as the read cache is exhausted. Consistently around 150-240MB/s, though: the limit of two 1GbE cables.
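That ceiling is easy to sanity-check: two 1GbE links give you about 250MB/s combined before protocol overhead, which is right where the chart flattens out. A quick sketch (the ~10% overhead figure is a rough assumption on my part):

```python
# Theoretical throughput ceiling for 2 x 1GbE iSCSI paths.
links = 2
link_gbps = 1.0                             # gigabits per second per link
raw_mbytes = links * link_gbps * 1000 / 8   # 250 MB/s raw

# Assume roughly 10% lost to Ethernet/IP/TCP/iSCSI overhead (a guess)
overhead = 0.10
usable = raw_mbytes * (1 - overhead)

print(f"Raw ceiling:        {raw_mbytes:.0f} MB/s")
print(f"Usable (estimated): {usable:.0f} MB/s")
```

So 150-240MB/s in the benchmark means the wires, not the pool, are the limiting factor.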


ATTO standard run:

I’ve got a big write problem somewhere. Is it the ZIL, which doesn’t seem to be performing under BSD as it did under Nexenta? Something else? It could also be related to the Test Volume being formatted NTFS with a 64KB allocation unit. Still trying to figure it out.


NFS Tests:

None so far. From a VMware perspective, I want to rebuild the Standard switch as a Distributed switch now that I’ve got a vCenter appliance running. But that’s not my priority at the moment.

Durability Tests:

Pulled two drives (the limit on RAIDZ2) under normal conditions. Put them back in and saw some alerts about the “administrator pulling drives” and the Zpool being in a degraded state. My CSVs remained online, however. Following a short zpool online command, both drives rejoined the pool and the degraded error went away.

Fun shots:

Because it’s not all about repeatable lab experiments. Here’s a Gifcam shot from Node-1 as it completely saturates both 2x1GbE Intel NICs:


and some pretty blinking lights from the six 2.5″ drives:


Lab notes & Lessons Learned:

First off, I’d like to buy a beer for the unknown technology enthusiast/lab guy who uttered these sage words of wisdom, which I failed to heed:

You buy cheap, you buy twice

Listen to that man, would you? Because going consumer, while tempting, is not smart. Learn from my mistakes: if you have to buy, buy server boards.

Secondly, I prefer NexentaStor to NAS4Free with ZFS, but like others, I worry about and have been stung by OpenSolaris/Illumos hardware support. Most of that is my own fault (cf. the note above), but still: does Illumos have a future? I’m hopeful: NexentaStor is going to appear at next month’s Storage Field Day 5, which is a good sign, and version 4.0 is due out any time.

The Illumos/Nexenta command structure is much more intuitive to me than FreeBSD’s. In place of your favorite *nix commands, Nexenta employs some great verb-noun show commands, and dtrace, the excellent diagnostic/performance tool included in Solaris, is baked right into Nexenta. In NAS4Free/FreeBSD 9.1, you’ve got to add a few packages to get the equivalent stats for the ARC, L2ARC and ZFS, and adding dtrace involves a make & a kernel modification, something I haven’t been brave enough to try yet.

Next: Jumbo Frames for the win. From Node-1, the desktop in my office, my Core i5-4670K CPU would regularly hit 35-50% utilization during my standard SQLIO benchmark before I configured jumbo frames end-to-end. Now, after enabling jumbo frames on the Intel NICs, the Hyper-V converged switch, the SG300, and the ZFS box, utilization peaks at 15-20% during the same SQLIO test, and the benchmarks have shown an increase as well. Unfortunately in the FreeBSD world, adding jumbo frames is something you have to do on the interface & in the routing table, and it doesn’t persist across reboots for me, though that may be due to a driver issue on the Broadcom card.
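The CPU relief makes sense if you count frames: at a 9000-byte MTU, the host moves the same data in roughly one-sixth the packets, and thus a sixth of the per-packet interrupt work. A rough sketch (the flat 40-byte IP+TCP header figure is a simplifying assumption):

```python
# Frames required to move 1 GB of payload at standard vs jumbo MTU.
payload_bytes = 1 * 1024**3          # 1 GB of data to ship

def frames_needed(mtu, ip_tcp_headers=40):
    """Payload per frame is the MTU minus IP+TCP headers (rough model)."""
    per_frame = mtu - ip_tcp_headers
    return -(-payload_bytes // per_frame)   # ceiling division

std = frames_needed(1500)    # ~735k frames
jumbo = frames_needed(9000)  # ~120k frames

print(f"MTU 1500: {std:,} frames")
print(f"MTU 9000: {jumbo:,} frames")
print(f"Reduction: {std / jumbo:.1f}x fewer frames")
```

About 6x fewer frames per gigabyte moved, which lines up with the utilization drop I saw.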

The Western Digital 2.5″ drives aren’t stellar performers and they aren’t cheap, but boy are they quiet and well-built, and they run cool, asking politely for only 1 watt under load. I’ve returned the hot, loud & failure-prone HGST 3.5″ 2TB drives I borrowed from work; it’s too hard to fit them in a short-depth chassis.

Lastly, ZFS’ adaptive replacement cache, which I’ve enthused over a lot in recent weeks, is quite the value & performance multiplier. I’ve tested Windows Server 2012 R2 Storage Spaces’ tiered storage model, and while I was impressed with its responsiveness, ReFS, and ability to pool storage in interesting ways, nothing can compete with ZFS’ ARC model. It’s simply awesome; deceptively simple, but awesome.

The lesson is that if you’re going to lose an entire box to storage in your lab, your chosen storage system had better use every last ounce of that box, including its RAM, to serve storage up to you. 2012 R2 doesn’t, but I’m hopeful that it soon may (Update 1, perhaps?).

Here’s a cool screenshot from Nexenta, my last build before I re-did everything, showing ARC hits following a cold boot of the array (top) and, a few days later, when things are really cooking for my stored Hyper-V VMs, which are getting tagged with ZFS’ “Most Frequently Used” category and thus getting the benefit of fast RAM & L2ARC:


Next Steps:

  • Find out why my writes suck so bad.
  • Test Nas4Free’s NFS performance
  • Test SMB 3.0 from a virtual machine inside the ZFS box
  • Sell some stuff so I can buy a proper SLC SSD drive for the ZIL
  • Re-build the rookie Standard Switch into a true Distributed Switch in ESXi

Links/Knowledge/Required Reading Used in this Post:

Resource | Author | Summary
Three Example Home Lab Storage Designs using SSDs and Spinning Disk | Chris Wahl | Good piece on different lab storage models
ZFS | Wikipedia | Great overview of ZFS history and features
Activity of the ZFS ARC | Brendan Gregg | Excellent overview of ZFS’ RAM-as-cache
Hybrid Storage Pool Performance | Brendan Gregg | Details ZFS performance
FreeBSD Jumbo Frames | NixCraft | Applying MTU correctly
Hyper-V vEthernet Jumbo Frames | Darryl van der Peijl | Great little PowerShell script to keep you out of regedit
Nexenta Community Edition 3.1.5 | NexentaStor | My personal preference for a Solaris-derived ZFS box
Nas4Free | n/a | FreeBSD-based ZFS; works with more hardware

#VFD3 Day One – Pure Storage has 99 problems but a disk ain’t one

Sponsor #3 : Pure Storage


I love the smell of storage disruption in the morning.

And this morning smells like a potpourri of storage disruption, wafting over to the NetApp & EMC buildings I saw off the freeway.

I was really looking forward to my time with Pure, and I wasn’t disappointed in the least. Pure, you see, offers an all-flash array, a startlingly simple product lineup, and an AJAX-licious UI, and makes such bold IOPS claims that their jet black/orange arrays are considered illegal and immoral south of the Mason-Dixon line.

This doesn’t work anymore, or so newer storage vendors would have us believe

Pure also takes a democratic approach to flash. It’s not just for the rich guys anymore; in fact, Pure says, they’re making the biggest splash in SMB markets like the one I play in. Whoa, really. Flash for me? For everyone?

When did I die and wake up in Willy Wonka’s Storage Factory?

It’s an attractive vision for storage nerds like me. Maybe Pure has the right formula and their growth and success portends an end to the tyranny of the spindle, to rack U upon rack U of spinning 3.5″ drives and the heat and electrical spend that kind of storage requires.

So are they right, is it time for an all-flash storage array in your datacenter?

I went through this at work recently, and it came down to this: there’s an element of suspending your disbelief when it comes to all-flash arrays and even newer hybrid arrays. There’s some magic to this thing, in other words, that you have to accept, or at least get past, before you’d consider a Pure.

I say that because even if you were to use the cheapest MLC flash drives you could find, and you were to buy them in bulk and get a volume discount, I can’t see a way you’d approach the $ per GB cost of spinning drives in a given amount of rack U, nor could you match GB per U of 2.5″ 1 terabyte spinning disks (though you can come close on the latter). At least not in 2014 or perhaps even 2015.

So here, in one image, is the magic: Pure’s elevator pitch for the crazy idea that you can get an affordable all-flash array that beats any spinning-disk system on performance and meets or exceeds the storage capacity of other arrays:


Pure’s arrays leverage CPU and RAM to maximize capacity & performance. Your typical storage workload on a Pure will be compressed where it can be compressed and deduped inline; blocks of zeros (or other similar patterns) won’t be written to the array at all (rather, metadata will be recorded as appropriate); and thin provisioning goes from being a debatable storage strategy to a way of life in the Pure array.

Pure says all this inline processing helps them avoid 70-90% of the writes the array would otherwise have to perform, writes it would be committing to consumer-grade MLC SSDs, which aren’t built for write endurance like enterprise-grade SLC SSDs.

Array tech specs. The entry-level array has only 2.7TB of raw SSD, but at a >4:1 compression/dedupe ratio, Pure says 11TB is possible. Click for larger.
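The capacity claim in that caption is just arithmetic on the data-reduction ratio, using the entry-level array’s numbers from the slide:

```python
# Effective capacity from Pure's claimed data-reduction ratio.
raw_tb = 2.7          # raw SSD in the entry-level array
reduction = 4.0       # Pure's claimed >4:1 compression + dedupe ratio

effective_tb = raw_tb * reduction
print(f"{raw_tb} TB raw x {reduction:.0f}:1 reduction = {effective_tb:.1f} TB effective")
```

That’s 10.8TB, which rounds up to Pure’s “11TB is possible” claim; the whole pitch lives or dies on whether your data actually reduces at 4:1.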

What’s more, Pure includes huge amounts of RAM in even their entry-level array (96GB), which they use as ZFS-like hot cache to accelerate IO.

Dual Westmere-class 6-core Intel CPUs outfit the entry array, and Pure’s philosophy on their use is simple: if the CPU isn’t being fully utilized at all times, something’s wrong and performance is being left on the table.

These clever bits of tech (inline compression, dedupe, and more) add up to a pretty compelling array that draws only 400-450 watts, takes up only 2U of your rack, and, I’m told, starts at a bit under six figures.

Pure really took some time with us, indulging all our needs. I requested and was allowed to see the CLI interface to the “PurityOS,” and I liked what I saw. Pure also had a Hyper-V guy on deck to talk about integration with Microsoft & System Center, which made me feel less lonely in a room full of VMware folks.

Overall, Pure is the real deal and after really asking them some tough questions, hearing from their senior and very sharp science/data guys, I think I do believe in magic.

Ping them if: You suffer from spinning disks and want a low cost entry to all flash

Set Outlook reminder for when: No need. Feels pretty complete to me. Plugins for vCenter & System Center, to boot.

Send to /dev/null if: You believe there is no replacement for displacement (spindles)

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Eric Shanks

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 

#VFD3 Day One – Atlantis Computing’s 1 Million IOPS


Sponsor #2 : Atlantis Computing*

Agnostic doesn’t sugarcoat things, and neither do my fellow delegates. We all agreed that the sharp guys at Atlantis Computing had a plan for us; all sponsors of #VFD3 have an agenda, but Atlantis really wanted to hold us to theirs. They didn’t dodge our probing and shouted questions, but at times they did ask us to wait a sec for the answer on the next slide.

And if you know Tech Field Day, then you know #VFD3 delegates aren’t virtuous….we don’t even understand what the word “patience” means, and we make sport out of violating the seven deadly sins. So when the Atlantis folks asked us again and again to wait, in effect to add some latency to our brains in the HQ of a company designed to defeat latency once and for all, I felt like the meeting was about to go horribly off the rails.

But they didn’t dodge our questions and I think, overall, the session with Atlantis Computing’s data guys was quite enlightening even if it did get a tad combative at times. On reflection and after talking to my fellow delegates, I think we measured Atlantis with our collective virtual brains and found them….splendid.

So what’s Atlantis pitching?

Oh, just a little VM for VMware, Hyper-V and Xen that sits between your stupid hypervisor and your storage. (Quote of the day: “What hypervisors end up doing is doling out IO resources in a socialist and egalitarian way,” further proof of my thesis on the utility of applying economics to thinking about your datacenter.)

Wait, what? Why would I want a VM between my other VMs and my storage?

Because, Atlantis argues, the traditional Compute<–>Switch<–>Storage model sucks. Fibre Channel, iSCSI, Infiniband… what is this, 2007? We can’t wait 1 millisecond for our storage anymore; we need microsecond latency.
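For a sense of scale, here’s a rough latency ladder; these are order-of-magnitude ballpark figures of my own, not Atlantis’s numbers:

```python
# Rough storage-latency ladder (order-of-magnitude ballpark figures).
latency_us = {
    "7.2k spinning disk seek": 10_000,  # ~10 ms
    "SAS/iSCSI SSD read":      500,     # fraction of a ms, network included
    "local PCIe flash":        100,     # ~100 microseconds
    "host RAM":                0.1,     # ~100 nanoseconds
}

baseline = latency_us["7.2k spinning disk seek"]
for tier, us in latency_us.items():
    print(f"{tier:26s} {us:>10,.1f} us  ({baseline / us:,.0f}x vs disk)")
```

Three to five orders of magnitude separate host RAM from a spindle, and that gap is exactly where Atlantis is playing.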

Atlantis says they have a better way: let their virtual machine pool all the storage resources on your physical hosts together: SSD, HDD, all the GBs are invited. And then, and here’s the mindfuck, Atlantis’ VM is going to take some of your hosts’ RAM and allow you to park your premium VMs in a datastore (or CSV) inside that pool of RAM, across your HA VMware stack or Hyper-V cluster, Nutanix-style but without the hardware capex.

Then this ballsy Atlantis VM is going to apply some compression to the inbound IO stream, ask you politely for only one vCPU (plus one in reserve), and when it’s all said and done, you can hit the deploy button on your Windows 7 VDI images and bam: scalable, ultra-fast VDI, so fast that you’ll never hear complaints from the nagging Excel jockey in accounting.

Kind of far-fetched, if you ask the #VFD3 crew: there’s technology, and then there’s science fiction. But Atlantis was prepared. They brought out a bright engineer who sat down at the table, spun up an IOMETER demo, clicked the speedo button, and looked up at us as we watched the popular benchmark utility hit 1 million IOPS.

Yes. One.Million.IOPS

I think the engineer even put his pinky in his mouth, just before he dropped the mic.

It was a true #StorageGlory moment.

Ha! Just kidding. I don’t think we even smiled.

“What’s in that IOMETER file, a bunch of zeros?” I trolled.

“What if one of my hosts just falls over while the datastore is in the RAM?” asked another.

“Yeah, when do the writes get committed to the spinners?” chimed in another delegate.

Then the Other Scott Lowe, a savvy IT guy who can speak to the concerns of the corner office, spoke: “You want to talk about CIOs? I’ve been a CIO. Here’s the thing you’re not considering.”

You don’t get invited to a Tech Field Day event unless you’re skeptical, willing to speak up, and have your bullshit filter set on Maximum, but I have to say, the Atlantis guys not only directly answered our questions about the demo but pushed back at times. It was some great stuff.

I’ll let my colleague @Discoposse deep-dive this awesome tech for you, and I’ll sum Atlantis up this way: they say they were doing software-defined storage before it was a thing, and, stepping back, they’re convinced this model of in-memory computing, for VDI and soon for server workloads, is the way forward.

And, more than that, ILIO USX is built on the same stuff they’ve already deployed en masse in huge 50,000+ seat VDI environments for giant banks, the US military and a whole bunch of other enterprises. This thing they’ve built scales, and at only $200-300 per desktop, with no hardware purchases required.

If you asked me before #VFD3 whether I’d put virtual machines inside of a host’s RAM outside of a ZFS adaptive replacement cache context, I’d have said that’s crazy.

I still think it’s crazy, but crazy like a fox.

Ping them if: There’s even a hint of VDI in your future or you suffer through login storms in RDS/XenApp but can’t deploy more hardware to address it

Set Outlook reminder for when: Seems pretty mature. This works in Hyper-V and even Xen, the one Hypervisor I can actually pick on with confidence

Send to /dev/null if: You enjoy hearing the cries of your VDI users as they suffer with 25 IOP virtual machine instances

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Marco Broeken

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 

#VFD3 Day One, Modeling your IO Blender with Cloud Physics


Sponsor #1 : Cloud Physics

In reviewing the sponsors and their products ahead of #VFD3, I admit Cloud Physics didn’t get me very excited. They offered something about operations and monitoring. In the cloud.

Insert screenshot of dashboards, tachometers, and single pane of glass here. Yawn.

But I was totally wrong. The Cloud Physics CEO put his firm into some great context for me. Cloud Physics, he told us, is about building for our own datacenters what our industry has built for every other industry in the world but never achieved for itself: aggregate data portals that help businesses make efficiency gains by looking at inputs, measuring outputs, and comparing it all at huge scale.

Not clear yet? OK, think of Nimble Storage’s InfoSight, something I heart in my own stack. Nimble takes anonymized performance data from every one of their arrays in the field, smashes it all together, applies some logic, heuristics, and intelligence to it, and produces a pretty compelling & interesting picture of how their customers are using their arrays. With that data, Nimble can proactively recommend configurations for your array, alert customers before a particular bug strikes their production systems, and produce a storage picture so interesting that some argue it should be open sourced for the good of storage jockeys everywhere.

Cloud Physics is like that, but for your VMware stack. And soon, Hyper-V.

Only CloudPhysics is highly customizable, RESTful, and easily queried. What’s more, the guys who built CloudPhysics were bigshots at VMware, giving CloudPhysics important bona fides with my virtualization colleagues who run their VMs inside datastores & NFS shares.

For the lone Hyper-V guy in the room (me), it was a pretty cool vision: like System Center Operations Manager, only better, actually usable, and operating on a huge, macro scale.

And CloudPhysics isn’t just for your on-prem stuff, either. They can apply their tech to AWS workloads (to some extent), and I think they have Azure in their sights. They get the problem (it’s tough to pull meaning and actionable intel out of the syslogs of a hundred different hosts), and I think they have an interesting product.

CloudPhysics Summary:

Ping them if: You know the pain of trying to sort out why your datastore is so slow and which VM is to blame and you think it’s always the storage’s fault

Set Outlook reminder for when: They can apply the same stuff to Hyper-V, Azure, or your OpenStack, KVM and Xen stacks

Send to /dev/null if: You enjoy ignorance