30 Days hands-on with VMTurbo’s OpsMan #VFD3

Stick figure man wants his application to run faster. #WhiteboardGlory courtesy of VM Turbo’s Yuri Rabover

So you may recall that back in March, yours truly, Parent Partition, was invited as a delegate to a Tech Field Day event, specifically Virtualization Field Day #3, put on by the excellent team at Gestalt IT especially for the guys who like V.

And you may recall further that as I diligently blogged the news and views to you, that by day 3, I was getting tired and grumpy. Wear leveling algorithms intended to prevent failure could no longer cope with all this random tech field day IO, hot spots were beginning to show in the parent partition and the resource exhaustion section of the Windows event viewer, well, she was blinking red.

And so, into this pity-party I was throwing for myself walked a Russian named Yuri, a Dr. named Shmuel and a product called a “VMTurbo” as well as a MacBook that, like all Mac products, wouldn’t play nice with the projector.

You can and should read all about what happened next because 1) VMTurbo is an interesting product and I worked hard on the piece, and 2) it’s one of the most popular posts on my little blog.

Now the great thing about VMTurbo OpsMan & Yuri & Dr. Shmuel’s presentation wasn’t just that it played into my fevered fantasies of being a virtualization economics czar (though it did), or that it promised to bridge the divide via reporting between Infrastructure guys like me and the CFO & corner office finance people (though it can), or that it had lots of cool graphs, sliders, knobs and other GUI candy (though it does).

No, the great thing about VMTurbo OpsMan & Yuri & Dr. Shmuel’s presentation was that they said it would work with that other great Type 1 Hypervisor, a Type 1 Hypervisor I’m rather fond of: Microsoft’s Hyper-V.

I didn’t even make screenshots for this review, so suffer through the annotated .pngs from VMTurbo’s website and imagine it’s my stack

And so in the last four or five weeks of my employment with Previous Employer (PE), I had the opportunity to test these claims, not in a lab environment, but against the stack I had built, cared for, upgraded, and worried about for four years.

That’s right baby. I put VMTurbo’s economics engine up against my six node Hyper-V cluster in PE’s primary datacenter, a rationalized but aging cluster with two iSCSI storage arrays, a 6509E, and 70+ virtual machines.

Who’s the better engineer? Me, or the Boston appliance designed by a Russian named Yuri and a Dr. named Shmuel?

Here’s what I found.

The Good:

  • Thinking economically isn’t just part of the pitch: VMTurbo’s sales reps, sales engineers and product managers, several of whom I spoke with during the implementation, really believe this stuff. Just about everyone I worked with stood up to my barrage of excited-but-serious questioning and could speak literately to VMTurbo’s producer/consumer model, this resource-buys-from-that-resource idea, the virtualized datacenter as a market analogy. The company even sends out Adam Smith-themed emails (Famous economist…wrote the Wealth of Nations if you’re not aware). If your infrastructure and budget are similar to what mine were at PE, if you stress over managing virtualization infrastructure, if you fold paper again and again like I did, VMTurbo gets you.
  • Installation of the appliance was easy: Install process was simple: download a zipped .vhd (not .vhdx), either deploy it via VMM template or put the VHD into a CSV and import it, connect it to your VM network, and start it up. The appliance was hassle-free as a VM; it’s running SUSE Linux and quite a bit of Java code from what I could tell, but for you, it’s packaged up into a nice http:// site, and all you have to do is pop in the 30-day license XML key.
  • It was insightful, peering into the stack from top to nearly the bottom and delivering solid APM: After I got the product working, I immediately made the VMTurbo guys help me designate a total of about 10 virtual machines, two executables, the SQL instances supporting those .exes and more resources as Mission Critical. The applications & the terminal services VMs they run on are pounded 24 hours a day, six days a week by 200-300 users. Telling VMTurbo to adjust its recommendations in light of this application infrastructure wasn’t simple, but it wasn’t very difficult either. That I finally got something to view the stack in this way put a bounce in my step and a feather in my cap in the closing days of my time with PE. With VMTurbo, my former colleagues on the help desk could answer “Why is it slow?!?!” and I think that’s great.
  • Like mom, it points out flaws, records your mistakes and even puts a $$ on them, which was embarrassing yet illuminating: I was measured by this appliance and found wanting. VMTurbo, after watching the stack for a good two weeks, surprisingly told me I had overprovisioned -by two- virtual CPUs on a secondary SQL server. It recommended I turn off that SQL box (yes, yes, we in Hyper-V land can’t hot-unplug vCPU yet, Save it VMware fans!) and subtract two virtual CPUs. It even (and I didn’t have time to figure out how it calculated this) said my over-provisioning cost about $1200. Yikes.
  • It’s agent-less: And the Windows guys reading this just breathed a sigh of relief. But hold your golf clap…there’s color around this from a Hyper-V perspective I’ll get into below. For now, know this: VMTurbo knocked my socks off with its superb grasp & use of WMI. I love Windows Management Instrumentation, but VMTurbo takes WMI to a level I hadn’t thought of, querying the stack frequently, aggregating and massaging the results, and spitting out its models. This thing takes WMI and does real math against the results, math and pivots even an Excel jockey could appreciate. One of the VMTurbo product managers I worked with told me that they’d like to use PowerShell, but PowerShell queries were still too slow, whereas WMI could be queried rapidly.
  • It produces great reports I could never quite build in SCOM: By the end of day two, I had PDFs on CPU, storage & network bandwidth consumption, top consumers, projections, and a good sense of current state vs desired state. Of course you can automate report creation and deliver via email etc. In the old days it was hard to get simple reports on CSV space free/space used; VMTurbo needed no special configuration to see how much space was left in a CSV.
  • vFeng Shui for your virtual datacenter
  • Integrates with AD: Expected. No surprises.
  • It’s low impact: I gave the VM 3 CPU and 16GB of RAM. The .vhd was about 30 gigabytes. Unlike SCOM, no worries here about the Observer Effect (always loved it when SCOM & its disk-intensive SQL back-end would report high load on a LUN that, you guessed it, was attached to the SCOM VM).
  • A Eureka! style moment: A software developer I showed the product to immediately got the concept. Viewing infrastructure as a supply chain, the heat map showing current state and desired state, these were things immediately familiar to him, and as he builds software products for PE, I considered that good insight. VMTurbo may not be your traditional operations manager, but it can assist you in translating your infrastructure into terms & concepts the business understands intuitively.
  • I was comfortable with its recommendations: During #VFD3, there was some animated discussion around flipping the VMTurbo switch from a “Hey! Virtualization engineer, you should do this,” to a “VMTurbo Optimize Automagically!” mode. But after watching it for a few weeks, after putting the APM together, I watched its recommendations closely. Didn’t flip the switch but it’s there. And that’s cool.
  • You can set it against your employer’s month end schedule: Didn’t catch a lot of how to do this, but you can give VMTurbo context. If it’s the end of the month, maybe you’ll see increased utilization of your finance systems. You can model peaks and troughs in the business cycle and (I think) it will adjust recommendations accordingly ahead of time.
  • Cost: Getting sensitive here but I will say this: it wasn’t outrageous. It hit the budget we had. Cost is by socket. It was a doable figure. Purchase is up to my PE, but I think VMTurbo worked well for PE’s particular infrastructure and circumstances.
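That agent-less WMI story, stripped of the magic, boils down to polling counters frequently and doing real math over the samples. Here’s a rough sketch of the pattern in Python (my own illustration of the general poll-and-aggregate technique, not VMTurbo’s actual collector; the CPU figures are made up):

```python
from collections import deque
from statistics import mean

class CounterSampler:
    """Rolling window over one polled metric, WMI-collector style."""
    def __init__(self, window=10):
        self.samples = deque(maxlen=window)  # old samples age out automatically

    def record(self, value):
        self.samples.append(value)

    def summary(self):
        vals = sorted(self.samples)
        return {
            "avg": mean(vals),
            "peak": vals[-1],
            "p95": vals[max(0, int(len(vals) * 0.95) - 1)],
        }

# A real collector would poll a WMI performance class here;
# these are fake CPU-percent samples
cpu = CounterSampler()
for v in [20, 25, 90, 30, 22, 28, 95, 27, 24, 26]:
    cpu.record(v)

print(cpu.summary())  # {'avg': 38.7, 'peak': 95, 'p95': 90}
```

The interesting part isn’t any single sample; it’s the aggregates (averages, peaks, percentiles) that let an engine say “this resource is scarce” with a straight face.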

The Bad:

  • No sugar coating it here, this thing’s built for VMware: All vendors please take note. If VMware, nomenclature is “vCPU, vMem, vNIC, Datastore, vMotion” If Hyper-V, nomenclature is “VM CPU, VM Mem, VMNic, Cluster Shared Volume (or CSV), Live Migration.” Should be simple enough to change or give us 29%ers a toggle. Still works, but annoying to see Datastore everywhere.
  • Interface is all Flash: It’s like Adobe barfed all over the user interface. Mostly hassle-free, but occasionally a change you expected to register on screen took a manual refresh to become visible. Minor complaint.
  • Doesn’t speak SMB 3.0 yet: A conversation with one product engineer more or less took the route it usually takes. “SMB 3? You mean CIFS?” Sigh. But not enough to scuttle the product for Hyper-V shops…yet. If they still don’t know what SMB 3 is in two years…well I do declare I’d be highly offended. For now, if they want to take Hyper-V seriously as their website says they do, VMTurbo should focus some dev efforts on SMB 3 as it’s a transformative file storage tech, a few steps beyond what NFS can do. EMC called it the future of storage!
  • Didn’t talk to my storage: There is visibility down to the platter from an APM perspective, but this wasn’t in scope for the trial we engaged in. Our filer had direct support; our Nimble, as a newer storage platform, did not. So IOPS weren’t part of the APM calculations, though free/used space was.

The Ugly:

  • Trusted Installer & taking ownership of reg keys is required: So remember how I said VMTurbo was agent-less, using WMI in an ingenious way to gather its data from VMs and hosts alike? Well, yeah, about that. For Hyper-V and Windows shops who are at all current (2012 or R2, as well as 2008 R2), this means provisioning a service account with sufficient permissions, taking ownership of two reg keys away from Trusted Installer (a very important ‘user’) in HKLM\CLSID and one further down in WOW64, and assigning full control permissions to the service account on the reg key. This was painful for me, no doubt, and I hesitated for a good week. In the end, Trusted Installer still keeps full control, so it’s a benign change, and I think the payoff is worth it. A senior VMTurbo product engineer told me VMTurbo is working with Microsoft to query WMI without making the customer modify the registry, but as of now, this is required. And the Group Policy I built to do this for me didn’t work entirely. On 2008 R2 VMs, you only have to modify the one CLSID key.

Soup to nuts, I left PE pretty impressed with VMTurbo. I’m not joking when I say it probably could optimize my former virtualized environment better than I could. And it can do it around the clock, unlike me, even when I’m jacked up on 5 Hour Energy or a triple-shot espresso with house music on in the background.

Stepping back and thinking of the concept here and divesting myself from the pain of install in a Hyper-V context: products like this are the future of IT. VMTurbo is awesome and unique in an on-prem context as it bridges the gap between cost & operations, but it’s also kind of a window into our future as IT pros.

That’s because if your employer is cloud-focused at all, the infrastructure-as-market-economy model is going to be in your future, like it or not. Cloud compute/storage/network, to a large extent, is all about supply, demand, consumption, production and bursting of resources against your OpEx budget.

What’s neat about VMTurbo is not just that it’s going to help you get the most out of the CapEx you spent on your gear, but also that it helps you shift your thinking a bit, away from up/down, latency, and login times to a rationalized economic model you’ll need in the years ahead.

#VFD3 Day 3 : VMTurbo wants to vStimulate your vEconomy

Going into the last day of #VFD3, I was a bit cranky, missing my Child Partition, and strict though her protocols be, the Supervisor Module spouse back home.

But #TFD delegates don’t get to feel sorry for themselves so I put on my big boy pants, and turned my thoughts to VMTurbo.

“What the hell is a VM Turbo,” I asked myself as I walked in the meeting room and took a good hard look at the VMTurbo guys.

Two of them were older gentlemen, one was about my age. The two older guys had Euro-something accents; one was obviously Russian. I made with the introductions and got a business card from one of them. It listed an address in New Jersey or New York or something. Somewhere gritty, industrial and old no doubt, the inverse of Silicon Valley.

As I moved to my seat, there was some drama between the VMTurbo presenters and Gestalt’s Tom Hollingsworth who, even though he’s a CCIE, seems to really enjoy playing SVGA Resolution Cop. “The online viewers aren’t going to be able to see your demo at 1152×800,” Tom said. “You need to go 1024×768 to the projector” he nagged.

“But but….” the VMTurbo guys responded, fiddling with some setting which caused the projected OS X desktop to disappear and be replaced by a projector test pattern. At that point everyone’s attention shifted, once again, to the stupid presentation MacBook and its malfunctioning DisplayPort-VGA converter dongle*

Sitting now, I sighed: So this was how #VFD3 was going to end: A couple of European sales guys had flown out from the east coast to California, probably hoping to catch some sunshine after pitching a vmturbo to us, but now Deputy SVGA and the MacBook’s stupid dongle were getting in their way.

Great! I thought as I sat down.

And then I was blown away for the next 2.5 hours.

VMTurbo isn’t a bolt-on forced induction airflow device for your 2U host; rather, it’s a company co-founded four years ago by some brainy former EMC scientists, both of whom were now standing before me. The Russian guy, Yuri Rabover, has a PhD in operating systems, and the CEO, Shmuel Kliger, has a PhD in something sufficiently impressive.

The product they were pitching is called Operations Manager (yes, yet another OpsMan), which is unfortunate because the generic name doesn’t help this interesting product stand out from the pack. This is operations management, true, but with an economics engine, real-life reporting on costs/benefits and an opportunity-cost framework that seems pretty ingenious to me.

Yeah I said economics. As in animal spirits and John Maynard Keynes, The Road to Serfdom & Friedrich Hayek, Karl Marx & Das Kapital vs Milton Friedman, ‘Merica and childhood lemonade stands…that kind of economics.

Command your vEconomy with OpsMan. Obama wishes he had these sliders & dials for the economy

And I’m not exaggerating here; they opened the meeting talking about economics! They told us to step back and imagine our vertical stack at every stage -from LUN to switch to hypervisor to CPU to user-facing app- as a marketplace in which resources are bought, sold, and consumed, in which there is limited supply & unpredictable demand, in which certain resources are scarce & therefore valuable while others are plentiful and therefore cheap. There’s even a consumers/producers slider; your “maker” filer produces 30,000 IOPS, but your “taker” users are consuming 25,000. Abuse & overuse of the modern Type 1 Hypervisor is akin to over-grazing the commons with your sheep, a Tragedy of the vCommons, if you will.

Someone stop me, I’m having too much fun.

You get the idea: the whole virtualization construct is a market economy.
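The producer/consumer framing is easy to make concrete. Here’s a toy pricing function in Python (entirely my own illustration, not VMTurbo’s engine; the 1/(1-utilization) curve is just a common queueing-flavored way to make scarcity expensive):

```python
def commodity_price(supply, demand):
    """Toy market price: cheap while plentiful, exploding as demand nears supply."""
    utilization = demand / supply
    if utilization >= 1:
        return float("inf")                # over-committed: infinitely 'expensive'
    return round(utilization / (1 - utilization), 2)

# The filer 'produces' 30,000 IOPS; the VMs 'consume' 25,000 of them: scarce, pricey
iops = commodity_price(supply=30_000, demand=25_000)
# RAM is plentiful on this imaginary host, so it stays cheap
mem = commodity_price(supply=512, demand=128)

print(iops, mem)  # 5.0 0.33
```

A real engine would price every commodity at every layer of the stack and let VMs “shop” for the cheapest host, but the intuition is the same: scarce resources cost more, and the price signal drives placement.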

I can’t speak for the other delegates, but I was enraptured partly because it seemed to validate my post on Pareto efficiency & Datacenter spending as right-tracked & thoughtful rather than the ravings of a crazy man and partly because it’s a framework I’ve gotten used to operating under my entire IT career.

But that’s just me nerding out and getting some feel-good confirmation bias. Is this even a useful & practical way to think of your IT resources? Is VMTurbo’s premise solid or crazy?

I’d argue it’s a pretty solid premise & a good way to frame your virtualization infrastructure. We already do it in IT: I bet you ‘budget’ IOPS, bandwidth, CPU & memory to meet herky-jerky demand, amid expectations for the availability and performance of those resources. That’s kind of our job, especially in the SMB space, where companies buy the server, storage, and network guy for the price of one systems guy, right?

So what does VMTurbo’s OpsMan actually do? Some pretty cool things.


VMTurbo’s Yuri Rabover whiteboards my pain : between the user & my spindles, wherein lies the problem?

OpsMan allows you to put a high value/mission critical designation on a user-facing application. And with that information, the economic engine takes over and with OpsMan’s visibility all the way from your user-facing VM to your old 7200 rpm spinning platters, it’s going to central-plan the animal spirits out of your stack and spit out some recommendations to you, which you can then, in the words of one of the VMTurbo guys, “Hit the recommendations button and this red thing goes to green.”**

Ahh, who doesn’t like green icons indicating health and balance? Of course, achieving that isn’t very hard; just give every VM as many resources as you can and you’ll get green. The nag emails will go away, and all will be well.

For awhile anyway.

Any rookie button pusher can give VMs what they want and make things green (answer: always more), but it takes wisdom and discernment to give VMs what they actually need to accomplish their task yet avoid the cardinal sin of over-provisioning which leads inevitably to a Ponzi scheme-style collapse of your entire infrastructure. 

Therein lies the rub and VMTurbo says it can put some realistic $$ and statistics around that decision, not only dissuading you from over-provisioning a VM, but giving you an opportunity cost for those extra resources you’re about to assign.
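The arithmetic behind that framing is simple to sketch in Python (every dollar figure here is invented for illustration; VMTurbo’s real model is surely more sophisticated):

```python
def overprovision_cost(extra_vcpus, dollars_per_vcpu_month, months=12):
    """Direct annual cost of vCPUs a VM holds but doesn't use."""
    return extra_vcpus * dollars_per_vcpu_month * months

def opportunity_cost(extra_vcpus, value_per_vcpu_elsewhere):
    """What those same cores could be worth to the starved workload next door."""
    return extra_vcpus * value_per_vcpu_elsewhere

# The secondary SQL box from earlier: 2 idle vCPUs at a made-up $50/vCPU/month
direct = overprovision_cost(2, 50)    # lands near the $1200 figure VMTurbo showed me
foregone = opportunity_cost(2, 75)    # hypothetical value to another VM
print(direct, foregone)               # 1200 150
```

The point isn’t the exact numbers; it’s that over-provisioning stops being a free way to silence nag emails once someone puts a price tag on it.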

Is that a Windows Hyper-V logo alongside VMware, KVM & Xen? I’m ever-so pleased. Go onnnnn with your presentation: “Recently, Microsoft’s Hyper-V (and its surrounding ecosystem) has reached a level of technical maturity that has more enterprises considering the increased diversity deployment,” VMTurbo says in a blog.

Sure it’s tempting to add 10vCPU & 32GB of RAM to that critical SQL cluster, especially if it gets the accounting department off your back. But VMTurbo’s OpsMan sees the stack from top to bottom and it can caution you that adding those resources to SQL will degrade the performance of your XenApp PoS farm, for instance, or it might suggest you add some disk to another stack.

Neat stuff.

VMTurbo, essentially, says it can do your job (especially in the SMB space) better than you can, freeing you up from fire fighting & panicked ssh sessions to check the filer’s load to more important things, like DevOps or script-writing or whatever you fancy I suppose.

And when it’s done having its way with your stack, when the output from your vEconomy is far greater than the input, when you’ve arrived at #VirtualizationGlory and there’s no more left to give, the VMTurbo guys say OpsMan can give you a report the CFO can read and understand, a report that says, “Here thar be performance, but no farther without CapEx, yarr.”

Am I using what I’ve got in the most efficient way possible? Welcome to the black arts of virtualization.

For me in Hyper-V & SMB land, where SCOM is a cruel & unpredictable, costly mistress and the consultant spend is meager, VMTurbo feels like a solid & well thought out product. For VMware and OpenStack shops? I’m unclear if vCOps does something similar to OpsMan, but man did VMTurbo’s presentation get my VMware colleagues talking. Highly recommend you set aside an hour and watch the discussion on Tech Field Day’s YouTube channel.

All in all not a bad way to end #VFD3: getting challenged to think of virtualized systems as economies by some very sharp engineers from Europe, one of whom learned operating system design in Soviet Russia, where the virtualization sequence is decidedly v2p, but the thinking, understanding, and perhaps execution are first rate. I’ll know more on the execution side when I put this in my lab.

I hereby dub the VMTurbo guys Philosopher Kings of Virtualization for taking a unique and thoughtful approach.

Ping them if: You operate in environments where IT spend is limited, or you want to get the most out of your hardware

Set Outlook reminder for when: No need to wait. With solid VMware support, some flattering & nice things but also real products for Hyper-V, KVM/Xen support, and OpenStack soon, VMTurbo pleases.

Send to /dev/null if: You have virtually unlimited hardware capex to spend and play with. If so, congrats. 

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Eric Shanks

Andrea Mauro

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 


* Sidenote: Still floored that one can’t escape dongle technology dysfunction even in Silicon Valley

**You can even automate VMTurbo’s recommendations, which the company says a great many of its customers do, a remark that caused a bit of discomfort among the #VFD3 crew. 

#VFD3 Day One – Pure Storage has 99 problems but a disk ain’t one

Sponsor #3 : Pure Storage


I love the smell of storage disruption in the morning.

And this morning smells like a potpourri of storage disruption. And it’s wafting over to the NetApp & EMC buildings I saw off the freeway.

I was really looking forward to my time with Pure, and I wasn’t disappointed in the least. Pure, you see, offers an all-flash array, a startlingly-simple product lineup, an AJAX-licious UI, and makes such bold IOPS claims, that their jet black/orange arrays are considered illegal and immoral south of the Mason Dixon line.

This doesn’t work anymore, or so newer storage vendors would have us believe

Pure also takes a democratic approach to flash. It’s not just for the rich guys anymore; in fact, Pure says, they’re making the biggest splash in SMB markets like the one I play in. Whoa, really. Flash for me? For everyone?

When did I die and wake up in Willy Wonka’s Storage Factory?

It’s an attractive vision for storage nerds like me. Maybe Pure has the right formula and their growth and success portends an end to the tyranny of the spindle, to rack U upon rack U of spinning 3.5″ drives and the heat and electrical spend that kind of storage requires.

So are they right, is it time for an all-flash storage array in your datacenter?

I went through this at work recently and it came down to this: there is an element of suspending your disbelief when it comes to all-flash arrays and even newer hybrid arrays. There’s some magic to this thing in other words that you have to accept, or at least get past, before you’d consider a Pure.

I say that because even if you were to use the cheapest MLC flash drives you could find, and you were to buy them in bulk and get a volume discount, I can’t see a way you’d approach the $ per GB cost of spinning drives in a given amount of rack U, nor could you match GB per U of 2.5″ 1 terabyte spinning disks (though you can come close on the latter). At least not in 2014 or perhaps even 2015.
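The back-of-napkin math behind that skepticism is easy to run in Python (prices are invented, vaguely 2014-flavored figures, not quotes from anyone):

```python
def dollars_per_gb(street_price, capacity_gb):
    return round(street_price / capacity_gb, 3)

spinner = dollars_per_gb(150, 1000)   # hypothetical 1 TB 2.5" spinning disk
mlc_ssd = dollars_per_gb(450, 500)    # hypothetical consumer-grade MLC SSD

# Raw flash loses badly on $/GB...
print(spinner, mlc_ssd)               # 0.15 0.9

# ...which is why the 'magic' matters: at Pure's claimed ~4:1 data
# reduction, each raw GB effectively stores 4 GB and the gap nearly closes
print(round(mlc_ssd / 4, 3))          # 0.225
```

Swap in whatever street prices you like; the shape of the argument stays the same: raw flash can’t win on $/GB, so the array has to multiply its effective capacity to compete.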

So here, in one image, is the magic, Pure’s elevator pitch for the crazy idea that you can get an affordable, all-flash array that beats any spinning disk system on performance and meets/exceeds the storage capacity of other arrays:


Pure’s arrays leverage the CPU and RAM to maximize capacity & performance. Your typical storage workload on a Pure will get compressed where it can be compressed, deduped in-line, blocks of zeros (or other similar patterns) won’t be written to the array at all (rather, metadata will be recorded as appropriate) and thin provisioning goes from being a debatable storage strategy to a way of life in the Pure array.

Pure says all this inline processing helps them avoid 70-90% of writes that it would otherwise have to perform, writes it would be committing to consumer-grade MLC SSD drives, which aren’t built for write-endurance like enterprise-level SLC SSDs.
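A toy version of that inline write-avoidance pipeline in Python (my sketch of the general technique, zero-detection plus content-hash dedupe plus compression, not Purity’s actual implementation):

```python
import hashlib
import zlib

def ingest(blocks, store):
    """Return how many raw SSD writes were avoided for a stream of blocks."""
    avoided = 0
    for block in blocks:
        if not any(block):                    # all zeros: record metadata only
            avoided += 1
            continue
        digest = hashlib.sha256(block).hexdigest()
        if digest in store:                   # seen before: just bump a reference
            avoided += 1
            continue
        store[digest] = zlib.compress(block)  # unique data: compress, then write
    return avoided

store = {}
stream = [b"\x00" * 512,            # zero block
          b"ab" * 256,              # new data
          b"ab" * 256,              # exact duplicate
          b"cd" * 256]              # new data
print(ingest(stream, store), "of", len(stream), "writes avoided")  # 2 of 4
```

Every write the pipeline avoids is a program/erase cycle the consumer-grade MLC never has to endure, which is how cheaper flash survives enterprise duty.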

Array tech specs. Entry level array has only 2.7TB raw SSD, but at a 4:1 compression/dedupe ratio, Pure says that 11TB is possible. Click for larger.

What’s more, Pure includes huge amounts of RAM in even their entry-level array (96GB), which they use as ZFS-like hot cache to accelerate IO.

Dual Westmere-class 6-core Intel CPUs outfit the entry array and Pure’s philosophy on their use is simple: if the CPU isn’t being fully utilized at all times, something’s wrong and performance is being left on the table.

These clever bits of tech -inline compression, dedupe, and more- add up to a pretty compelling array that draws only 400-450 watts, takes up only 2U of your rack, and, I’m told, starts at a bit under six figures.

Pure really took some time with us, indulging all our needs. I requested and was allowed to see the CLI interface to the “PurityOS,” and I liked what I saw. Pure also had a Hyper-V guy on deck to talk about integration with Microsoft & System Center, which made me feel less lonely in a room full of VMware folks.

Overall, Pure is the real deal and after really asking them some tough questions, hearing from their senior and very sharp science/data guys, I think I do believe in magic.

Ping them if: You suffer from spinning disks and want a low cost entry to all flash

Set Outlook reminder for when: No need. Feels pretty complete to me. Plugins for vCenter & System Center to boot.

Send to /dev/null if: You believe there is no replacement for displacement (spindles)

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Eric Shanks


#VFD3 Day One – Atlantis Computing’s 1 Million IOPS


Sponsor #2 : Atlantis Computing*

Agnostic Computing.com doesn’t sugarcoat things and neither do my fellow delegates. We all agreed that the sharp guys at Atlantis Computing had a plan for us; all sponsors of #VFD3 have an agenda, but Atlantis really wanted to hold us to theirs. They didn’t dodge our probing and shouted questions, but at times they did ask us to wait a sec for the answer on the next slide.

And if you know Tech Field Day, then you know #VFD3 delegates aren’t virtuous….we don’t even understand what the word “patience” means, and we make sport out of violating the seven deadly sins. So when the Atlantis folks asked us again and again to wait, in effect to add some latency to our brains in the HQ of a company designed to defeat latency once and for all, I felt like the meeting was about to go horribly off the rails.

But they didn’t dodge our questions and I think, overall, the session with Atlantis Computing’s data guys was quite enlightening even if it did get a tad combative at times. On reflection and after talking to my fellow delegates, I think we measured Atlantis with our collective virtual brains and found them….splendid.

So what’s Atlantis pitching?

Oh just a little VM for VMware, Hyper-V and Xen that sits between your stupid Hypervisor (Quote of the Day : “What hypervisors end up doing is doling out IO resources in a socialist and egalitarian way,” further proof of my own thesis of the utility of applying economics in thinking about your datacenter) and your storage.

Wait, what? Why would I want a VM between my other VMs and my storage?

Because, Atlantis argues, the traditional Compute<–>Switch<–>Storage model sucks. Fiber Channel, iSCSI, InfiniBand…what is this, 2007? We can’t wait 1 millisecond for our storage anymore; we need microsecond latency.

Atlantis says they have a better way: let their virtual machine pool all the storage resources on your physical hosts together: SSD, HDD…all the GBs are invited. And then, and here’s the mindfuck, Atlantis’ VM is going to take some of your hosts’ RAM and it’s going to allow you to park your premium VMs in a datastore (or CSV) inside that pool of RAM, across your HA VMware stack or Hyper-V cluster, Nutanix-style but without the hardware capex.

Then this ballsy Atlantis VM is going to apply some compression on the inbound IO stream, ask you politely for only one vCPU (and one in reserve) and when it’s all said and done, you can hit the deploy button on your Windows 7 VDI images, and bam : scalable, ultra-fast VDI, so fast that you’ll never hear complaints from the nagging Excel jockey in accounting.

Kind of far-fetched if you ask the #VFD3 crew: there’s technology and there is science fiction. But Atlantis was prepared. They brought out a bright engineer who sat down at the table and spun up an IOMETER demo, clicked the speedo button, and looked up at us as we watched the popular benchmark utility hit 1 million IOPS.

Yes. One.Million.IOPS

I think the engineer even put his pinky in his mouth, just before he dropped the mic.

It was a true #StorageGlory moment.

Ha! Just kidding. I don’t think we even smiled.

“What’s in that IOMETER file, a bunch of zeros?” I trolled.

“What if one of my hosts just falls over while the datastore is in the RAM?” asked another.

“Yeah, when do the writes get committed to the spinners,” chimed another delegate.

Then the Other Scott Lowe, savvy IT guy who can speak to the concerns of the corner office spoke: “You want to talk about CIOs? I’ve been a CIO. Here’s the thing you’re not considering.”

You don’t get invited to a Tech Field Day event unless you’re skeptical, willing to speak up, and have your bullshit filter set on Maximum, but I have to say, the Atlantis guys not only directly answered our questions about the demo but pushed back at times. It was some great stuff.

I’ll let my colleague @Discoposse deepdive this awesome tech for you and sum Atlantis up this way: they say they were doing software defined storage before it was a thing, and that, stepping back, they’re convinced this model of in-memory computing for VDI and soon server workloads, is the way forward.

And, more than that, ILIO USX is built on the same stuff they’ve already deployed en masse to huge 50,000 + VDI desktops for giant banks, the US military and a whole bunch of other enterprises. This thing they’ve built scales and at only $200-300 per desktop, with no hardware purchases required.

If you asked me before #VFD3 whether I’d put virtual machines inside of a host’s RAM outside of a ZFS adaptive replacement cache context, I’d have said that’s crazy.

I still think it’s crazy, but crazy like a fox.

Ping them if: There’s even a hint of VDI in your future or you suffer through login storms in RDS/XenApp but can’t deploy more hardware to address it

Set Outlook reminder for when: Seems pretty mature. This works in Hyper-V and even Xen, the one Hypervisor I can actually pick on with confidence

Send to /dev/null if: You enjoy hearing the cries of your VDI users as they suffer with 25 IOP virtual machine instances

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Marco Broeken

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 

#VFD3 Day One, Modeling your IO Blender with Cloud Physics


Sponsor #1 : Cloud Physics

In reviewing the sponsors and their products ahead of #VFD3, I admit Cloud Physics didn’t get me very excited. They offered something about operations and monitoring. In the cloud.

Insert screenshot of dashboards, tachometers, and single pane of glass here. Yawn.

But I was totally wrong. The Cloud Physics CEO put his firm into some great context for me. Cloud Physics, he told us, is about building what our industry has built for every other industry in the world but hasn’t achieved for our own datacenters: aggregate data portals that help businesses make efficiency gains by looking at inputs, measuring outputs, and comparing it all on a huge scale.

Not clear yet? Ok, think of Nimble Storage’s Infosight, something I heart in my own stack. Nimble takes anonymous performance data from every one of their arrays in the field, they smash all that data together, apply some logic, heuristics, and intelligence to it, and produce a pretty compelling & interesting picture of how their customers are using their arrays. With that data, Nimble can proactively recommend configurations for your array, alert customers ahead of time before a particular bug strikes their production systems, and produce a storage picture so interesting, some argue it should be open sourced for the good of humanity’s storage jockeys.

Cloud Physics is like that, but for your VMware stack. And soon, Hyper-V.

Only CloudPhysics is highly customizable, RESTful, and easily queried. What’s more, the guys who built CloudPhysics were bigshots at VMware, giving CloudPhysics important bona fides with my virtualization colleagues who run their VMs inside datastores & NFS shares.

For the lone Hyper-V guy in the room (me), it was a pretty cool vision: like System Center Operations Manager, only better and actually usable, and on a huge, macro scale.

And CloudPhysics isn’t just for your on-prem stuff either. They can apply their tech to AWS workloads (to some extent), and I think they have Azure in their sights. They get the problem (it’s tough to pull meaning and actionable intel out of the syslogs of a hundred different hosts) and I think they have an interesting product.

CloudPhysics Summary:

Ping them if: You know the pain of trying to sort out why your datastore is so slow and which VM is to blame and you think it’s always the storage’s fault

Set Outlook reminder for when: They can apply the same stuff to Hyper-V, Azure, or your OpenStacks and KVM and Xens

Send to /dev/null if: You enjoy ignorance

#VFD3 Day 2 : Mea Culpa, Me Paenitet CommVault Simpana


I’d like to start this post off with an apology.

To every backup software vendor I’ve worked with in my 15-year IT career: I’m sorry. I’m sorry for hating you all these years, for looking at you as an “Oh God….must I?” part of my stack. For resenting you and your product just a little less, and only a little, than anti-virus products. I’m sorry for abusing your support personnel, in mind and thought and via voodoo dolls & dart boards festooned with prints of your logo. I take back everything mean I said about you and your product and your software engineers. My bad. You’re people too, after all.

Somewhere, a tape library robot is requesting quietly, insistently for that old backup tape you found in your garage

Backups. I don’t want to deal with them. I don’t want to ever have to restore them. I have confidence in them….mostly…but in my view, if backups are anything but the last bullet in the chamber then I haven’t planned my stack correctly in the virtual age.

When you’re 100% virtual -when you’ve already taken that step to free yourself from hardware and change your thinking- why be dependent on the tape machine, its so-called robot, and a failure-oriented practice from the 90s? I want to WIN, and winning means redundancy, portability, & speed, qualities that will empower the business.

So pitch your product as a “backup” solution and right out of the gate, you’re making me think of failure & pain. Sorry, it’s just my programming. You and your software product make me think of failure…how do you like them apples? How can you look at yourself in the mirror? Damnit, sorry, there I go again.

So CommVault was on deck for #VFD3 Day 2, and this issue/flaw in my psyche was bound to collide with their pitch unless, by some miracle, CommVault could show me a better way, a way out of the fail swamp and into the IT Hall of Fame.

Show me the way home CommVault, or get thee in my tech dustbin.

Mission accomplished? We’ll see. But I’m intrigued.

What CommVault’s pitching is backup software, but it isn’t just backup software, it’s much more than that. Do yourself a favor and don’t pigeonhole them like I did.

Let CommVault archive that old Windows 2003 VM for you

Simpana, the product we spoke with them about at length yesterday, is better thought of as a sort of auxiliary storage or resource manager, parallel to, but complementary to, your file system, your virtual machine manager, your storage system, and yes, your old stank-ass backup software.

What’s interesting about Simpana is that it’s so complementary to those things that it’s almost -almost- a stand-in for them. These folks are really thinking outside the box here. Simpana can give you this:

  • A Self Service Portal for your dev team giving them the ability to provision new VMs in vCenter or Hyper-V and backup or restore them
  • DASH engine, which functions almost as a WAN accelerator, easing your pain during backups and file operations at remote sites
  • The ability to restore a backed-up VM’s data to a fresh, naked, just-provisioned VM
  • Backup on-prem VMs and restore them to public clouds like Azure
  • De-duplication of your backup stores
  • Auto-power off or archival of old & unused virtual machines

Each of those points above is available for VMware of course, but the cool thing about CommVault is that they’re highly aligned with the Microsoft stack & vision. In fact, they seem to offer a few more nuanced features for Hyper-V than they do for VMware, and if you are a Microsoft shop, CommVault is heavily plugged into Azure, which makes them interesting in a number of ways.

Self Service Portal for the dev in your life who breaks his VM all too often

So if it’s something more than backup software, it’s got to be more expensive than backup software too, right? Yes, and let’s give a *golf clap* to the CommVault folks because they talked honestly about pricing, even if I think it’s kind of a crazy model: CommVault charges per terabyte. They even put a graphic up on the screen: if you’re backing up 11TB now for $25,000/year, CommVault’s model will cost you $49,500.
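For what it’s worth, the back-of-envelope math on that slide (my own arithmetic from their example figures, not an official CommVault price list):

```python
# Figures from CommVault's on-screen example
quoted_tb = 11          # terabytes protected
quoted_cost = 49_500    # CommVault's annual figure for that footprint
incumbent_cost = 25_000 # the "what you pay today" comparison figure

per_tb = quoted_cost / quoted_tb
premium = quoted_cost / incumbent_cost

print(f"effective rate: ${per_tb:,.0f} per TB per year")
print(f"premium over the incumbent model: {premium:.2f}x")
```

That works out to $4,500/TB/year, roughly double the incumbent, which is why the ROI conversation below matters.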

Twice as much? That’s crazy, isn’t it? What if you already have Microsoft’s Data Protection Manager, which does synthetic, space-saving backups like CommVault; System Center’s VMM, which has the self-provisioning VM system; and Windows 2012, which offers native de-duplication of Windows drives?

I challenged the CommVault folks on each of those points and it was highly comforting that they could answer each of them, even if I think some of their answers are disputable. It means they’re really familiar with the Windows environment, with Hyper-V, with how Simpana might slot into System Center & a hybrid on-prem/cloud enterprise.

In short: I’m not sold, but I’m interested and curious, and when the sad and tedious topic of backups comes my way again, CommVault’s going to be at the top of my list.

Ping them if: You suffer from an old backup model that you can’t seem to break free from and want to enable new possibilities with something more than just backup software

Set Outlook Reminder for when: You can justify an increased cost or can make the ROI numbers work

Send to /dev/null if: You’re spiritually yoked to the grandfather, father, son model




#VFD3 Day 2 – Tap your Hypervisor with Spirent


One of the recent developments in virtualization I’m thinking of exploring in the home lab is NVGRE or Network Virtualization using Generic Routing Encapsulation. NVGRE is Microsoft’s offering in the nebulous, ill-defined software-defined networking space, and it’s just a few powershell cmdlets away from being turned on in my lab.

But should I bother?

No less an august networking authority than Ivan Pepelnjak has called the network virtualization model & NVGRE of Windows Server 2012 R2 “simply amazing,” but he’s also remarked on how complex and confusing it is, how it’s truly a Layer 3 NV product between Hypervisor hosts, but a muddled L2/L3 product within the hypervisor itself.

For a humble systems engineer with a mutt lab at home and a highly rationalized stack at work, I’m struggling with whether network virtualization’s benefits outweigh its risks. The goal is simple: #NetworkingGlory, the promised land where my most important /24 subnet follows the sun, hopping between datacenters over my existing MPLS network, or freeing me from paying for an MPLS network in the first place. Does NVGRE put me on the path to NetworkingGlory or is it a distraction?
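For the curious, those “few powershell cmdlets” look roughly like this on Server 2012 R2. This is a heavily trimmed sketch with made-up addresses, one tenant VM, and an arbitrary virtual subnet ID; a real deployment needs lookup records for every VM on every host plus provider routes, which is exactly where Ivan’s “complex and confusing” verdict starts to bite:

```powershell
# Sketch only: hypothetical addresses and IDs, not a working deployment.
# Tell the host which physical (provider) address carries the NVGRE tunnels
New-NetVirtualizationProviderAddress -ProviderAddress 192.168.100.10 `
    -InterfaceIndex 3 -PrefixLength 24

# Map the VM's customer address to that provider address inside virtual subnet 5001
New-NetVirtualizationLookupRecord -CustomerAddress 10.0.0.5 `
    -ProviderAddress 192.168.100.10 -VirtualSubnetID 5001 `
    -MACAddress "00155D012A00" -Rule TranslationMethodEncap

# Give the virtual subnet a route so its traffic has somewhere to go
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-000000000000}" `
    -VirtualSubnetID 5001 -DestinationPrefix "10.0.0.0/24" -NextHop "0.0.0.0"
```

Multiply that by every VM and every host, and without System Center VMM orchestrating the records for you, it gets old fast.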

NVGRE. Whoohooo, the same /24 on a single server, no conflict. Great for Azure, meh for me

My sense is from the last few days here at #VFD3 that the VMware guys are in the same boat. They’ve got VXLAN & NSX in their stack, but when I see those products mentioned, I get the feeling from some of them that they’re just as “meh” on NSX as I am on NVGRE.

Enter Spirent (pronounced Spy-Rent), a large technology & testing/validation firm that could, frankly, use a bit of work on their website (just run your mouse over “Products” and check out the dizzying list that results). I wasn’t too excited to visit Spirent, but I’m glad they sponsored the event because I left their facility in Mountain View impressed.

So what’s Spirent do? When you or I are shopping for a new top of the rack switch and we want to compare baseline fabric performance of each switch in packets per second, bandwidth, and switching capacity, it’s Spirent’s test equipment that’s often been used to populate the datasheets. If you’re familiar with IXIA at all, Spirent is in the same space, but more of a dominant player, and their client list is pretty impressive, spanning web service companies, telecom, mobile, and so many more. In fact, I wouldn’t be surprised if they have certain three letter government agencies as clients.

But what can they do for me?

Well, if I ever get to a place where I’m embracing NVGRE in Hyper-V, I’m going to give Spirent a call. The firm sells network virtualization products designed to help you test, tap, validate, and troubleshoot your virtual networking stack. You can purchase, today, a virtual machine that enables you to peer inside your NVGRE tunnels, and that’s important because in an encapsulated virtual network, WireShark isn’t going to tell you squat about what’s wrong.

They also sell some pretty neat software testing products. iTest Enterprise, a fat Win32 client, is able to capture your most complicated testing setups. Want to see what happens to your advanced caching storage array when you automate the deployment of 100 virtual machines? You can run it once and that will tell you something about the array, but true StorageGlory Wisdom will only be achieved when you’ve run that same tedious test a dozen times, which would be a major pain in the ass unless you have something like iTest Enterprise.

Wish I had that during my bakeoff with the Nimble and incumbent arrays earlier this year.

Spirent’s got more too: cloud testing products, cloud automation tools, and a slick-looking (but we couldn’t touch and play with, sadly) iPad application that looked like it could do all sorts of useful things.

These are some smart guys making some interesting products that allow you to tap into your hypervisor and find out exactly what’s going on.

Ping them if: Your virtualization environment is huge, you suffer automation & testing pains, or you want to peer inside your encapsulated virtual networks

Set Outlook Reminder for when: No need to wait for them on anything; they support VMware, OpenStack, Hyper-V, hell, even Xen.

Send them to /dev/null if: You don’t care about your users’ and company’s data integrity & security



#VFD3 Day One In the Books, What to Know

In the run-up to Tech Field Day – Virtualization Style #3 – a lot of the chatter in my inbox from my fellow delegates involved a bit of groaning. We’re IT guys, we bitch a lot, after all, and most of it went like this:

I’ll software-define them right into /dev/null if they mention cloud too many times. Please sponsors, don’t overload us with marketing speak, was the general sentiment.

Though I’m the newbie in this delegate crew, I mentally plus-oned many of these comments. None of us, certainly not me, jumped on an airplane and left our Child Partitions behind just to get powerpointed to death. We, as a group, are allergic to Silicon Valley marketing.

So imagine how refreshing it was for me -and perhaps my colleague Delegates- that on Day One of #VFD3, the hype-words were kept to a minimum, the sponsors made available to us the Smartest Guys in the Room, and Powerpoint, though present, played second fiddle to live demos and whiteboards. Yes, whiteboards, those wondrous things in front of which all the IT zoo animals go to ingest data, chew on it, and then come up with a way out of the fix we’re in.

I think if anything, Day One’s theme -and perhaps the theme of #VFD3- wasn’t cloud, or software-defined this, agile that. No, the theme for Day One for me was something much more down to earth, something very basic, something we can all touch and hold in our hands and comprehend.

#VFD3 Day One was all about the DRAM, baby:

I got me a hundred gigabytes of ram – Weird Al

Hell yeah.

RAM. The one constituent part of my enterprise stack that’s consistently fast, trouble-free, wickedly small, and easy to provision, care for, and cool. RAM…the one thing I’ll always take more of yet can never have enough of. RAM: costly but mostly drama-free save for one thing: power cycling the server makes the RAM go blank, so obviously we can’t use it for pain points like storage, right?


And yet, two of the three vendors yesterday told us that yes, not only can you use RAM to multiply the benefits of already-fast SSD, or just bypass those slow-ass SSD drives in the first place, but that this once-heretical idea was ready for primetime, for your enterprise and maybe even your SMB.

Let’s dig in:

#VFD3 Day One – Cloud Physics

#VFD3 Day One – Atlantis Computing

#VFD3 Day One – Pure Storage


Dear Syslog, regarding NAT & #VFD3 Day 0

Dear Syslog,

It seems like only yesterday that I was listening to Tom Hollingsworth discuss IPV6 on some podcast or other. It went something like this:

Host: So with v6, we’re free of *insert nasty habit tacked onto v4 that we all came to think of as normal, but which is really not normal*

Hollingsworth: Yes. It’s the internet the way it was meant to be.

Me: *kernel panic/bsod/head explodes* 


Syslog, how crazy is it that a few months ago I was only listening to such debates on recorded podcasts, and then last night, at the #VFD3 kick-off dinner, I participated in one? And it was in Silicon Valley itself, not the technology sticks of Los Angeles!

Yeah, it’s great Syslog. Hollingsworth held forth on v6 for what felt like a good hour or two, and we #VFD3 delegates, all of us comfortable with our v4 subnetting, our 192.168s and our 10.10s and 172.16s and the whole RFC-1918 spec and our NATs, were at turns skeptical, outraged, excited, amused, or confused by Hollingsworth and we let him have it.

I have to say syslog, I’m convinced. I’m ready to make the leap. And no, I’m not talking about Microsoft DirectAccess (finally understood the proper context of Teredo tunneling last night, syslog), or other half-measures, no….syslog, I want the real v6…I want full 128-bit routable addresses…yeah that’s it. 128-bit routable IP addresses on everything!
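And syslog, to put 128 bits in perspective, here’s the address math (nothing vendor-specific, just arithmetic):

```python
# IPv4: 32-bit addresses; IPv6: 128-bit addresses
v4_total = 2 ** 32
v6_total = 2 ** 128

print(f"IPv4 addresses: {v4_total:,}")   # ~4.3 billion total, hence NAT
print(f"IPv6 addresses: {v6_total:.3e}")  # ~3.4 x 10^38

# A single standard /64 subnet holds the entire IPv4 address space, squared
print(2 ** 64 == v4_total ** 2)
```

Four billion addresses sounded infinite in 1981; one /64 handed to a single home network dwarfs the whole v4 internet.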

Yeah I know I’ve flirted with v6 in the past. Felt cool and 1337 when I hooked up a tunnel to HE.net. But then I turned it off not knowing what to do with it, fearing the unknown.

Now syslog, regarding NAT…it strikes me that Network Address Translation is not to be hated or despised, but rather, understood in the proper context of the development of the internet. All us #VFD3 guys were struggling for metaphors last night as we contemplated v6 in our own enterprises, but after sleeping on it, I think I’ve figured it out.

This is NAT + IPV4:


This is v6:

The Cannibal, Eddy Merckx
The Cannibal, Eddy Merckx

ergo syslog, NAT+IPV4 = learning to bicycle with training wheels, dad’s steady hand on your back and his encouraging voice in your ears.

ipv6 = Eddy Merckx demolishing a 7% grade somewhere in the Pyrenees during a Tour de France, no helmet, no training wheels, just poise, power and determination. Full of Win.

And so it goes in technology, syslog. NAT had its place and its time, it delayed the onset of Peak IP/v4 address exhaustion, and it let us all get comfortable in this new, hyper-connected world.

But we should have removed those rickety, rusting training wheels long ago and used the internet as it was meant to be used syslog. Instead, we’re inventing all sorts of contraptions and strange tools just to keep the training wheels on indefinitely.

Syslog, the lesson from #VFD3 Day 0 is this: There is no Eddy Merckx destroying his opponents and the mountain until we take the training wheels off. Or, to go all Biblical on you syslog:

When I was a child, I talked like a child, I thought like a child, I reasoned like a child. When I became a man, I put the ways of childhood behind me

Sincerely, Agnostic Node 1

In other news, packed day ahead with #VFD3 events. Follow the Twitter tag #VFD3, or maybe you want to tune into the live-stream, in which case I’ve ctrl-c’d/ctrl-v’d the feed below:


Check back later for more!

The ABCs of #VFD3

When I’m not in the lab or at work, I’m with the Child Partition and having fun. Lately he’s been grabbing my finger as if it were a mouse and pointing it at the alphabet flip board below to retrieve on-demand information about it from his Parent Partition.



And so goes my life: at work it’s all about hypervisors and wringing the last bit of performance & value out of all the equipment that’s been entrusted to me, in the lab at home it’s about testing & playing with advanced ideas, concepts, and technologies and in the evening, it’s about teaching the Child Partition the ABCs.

Until today, that is, because today I’m leaving for Silicon Valley as a Delegate of GestaltIT’s awesome Tech Field Day, Virtualization Field Day 3 to be specific. I’m so excited I can barely contain myself: the sponsor line up is large and varied and I can’t wait to meet Stephen Foskett, Tom Hollingsworth and the Gestalt IT crew as well as the other delegates, some of whom are flying in from as far away as New Zealand & Europe!

On Twitter we’ve all been discussing what we’re looking forward to in the next few days, but there’s just too much, so I decided to list out what I’m excited about in Alphabet-style for the benefit of the Child Partition.

A is for Atlantis Computing: There’s been a lot of buzz about Atlantis’ new ILIO USX product, a technology the company says is the “biggest game changer since server virtualization.” That’s a bold claim and I can’t wait to dig into it. I also want to thank Atlantis for hosting a great & technical blog; back in January I used one of their blog posts to create some IOMeter workloads to test a Nimble array

B is for the Bay Area:  I’m a Southern Californian travelling to Silicon Valley who works as an IT engineer for a company with roots in California’s other great industry: entertainment. Also always love visiting the North; it’s hella cool up there.

C is for Coho Data:  “Open commodity hardware is used to create modular storage building blocks to scale out from TBs to PBs in a single system,” Coho says. And they have a cool slider demo on their website. You know what interests me about Coho? They promise storage at “web-scale” which is pretty much what my users think I have.

D is for the Delegates: I’ve been reading the blogs of my fellow delegates for quite a while, so it’s going to be great to finally talk in the “meatspace.”

E is for Enterprise: Not NCC-1701D, rather Enterprise Technology. I’ll be among my peers and the learning will be intense & focused.

F is for Field Day: Still honored and amazed to be a part of Tech Field Day. It’s been hard to describe what it is to family: a professional organization of new & returning Delegates that meets periodically in the field to get hands-on time with some of the most advanced enterprise technology out there and face time with the guys who build it and sell it, organized by the tireless GestaltIT staff.

G is for Gestalt: I’ll just rip from their website: “an organized field having properties that cannot be derived from the sum of component parts; a unified whole”

H is for Hyper-V: No secret here, I’m a Hyper-V guy, and many of my esteemed Delegate colleagues are some of the sharpest minds in the VMware ecosystem. For a Microsoftie, it’s unfamiliar territory being in second place, and so I struggle to frame it correctly: is Hyper-V the Rebellion and VMware the Galactic Empire? Feels like it sometimes….VMware is everywhere and Hyper-V still struggles for traction. Happily, the vendors sponsoring #VFD3 seem to get that there is another hypervisor out there besides VMware, and yes, people actually run enterprises on it. As for me and my house lab, I’ve got VMware here

I is for IT Blogging: We happy few who blog on technology and the enterprise!

J is for Just look at this note from previous delegates: It’s great. I wish every vendor would read it. How to Present to Engineers who blog. TFD

K is for Kicking bad habits: Whenever IT guys gather round, stories get told, lessons get learned, and laughter ensues. And when it’s all over, maybe you’ve learned enough to kick some of your own bad habits to the curb.

L is for Like buttons: You know them and you should click them as the Delegates and I bring you fresh on-the-scene reports, views, and thoughts from VFD3

M is for Management: I’m hoping to learn from the best and maybe offer my own insight to others as well. We’re engineers first but also, I’d argue, resource managers.

N is for Network Time Protocol: The Delegate Schedule is jam packed and without NTP, I’d be hopelessly lost.

O is for Operations: I’m down with the DevOps movement, I’m flexible and will flip printers with the help desk if needed, but at the end of the day I enjoy being an operations guy and I’ll be around others like me

P is for Pain Points: Let’s share them and overcome, shall we?

Q is for the q in FAQs: Struggling here.

R is for reporting: In another life, I might have been a reporter. I aim to be fair, concise, and balanced in all the blogging I do this week.

S is for Simpana: CommVault, one of the sponsors, is not your daddy’s backup vendor anymore. This Simpana product seems pretty interesting, and CommVault is Azure-friendly.

S is also for: Vendor schwag. And I’m bringing my own schwag too

T is for Twitter: I struggle to be brief, as you can probably tell. Writing tweets focuses the mind. Follow #VFD3 or #TFD to catch it all live!

U is for Utility: My master’s degree is public policy-oriented and if there is such a thing as an amateur & totally untrained economist, I aspire to be it. Utility therefore, “represents satisfaction experienced by the consumer of a good.” I’m producing a good in the form of this blog, and I want my VFD posts to be useful for you, the reader. I hold the same standard to vendors. I demand satisfaction, sir!

V is for VFD 3 and virtualization: “Virtualization, ladies and gentlemen, is good. Virtualization is right. Virtualization works. Virtualization clarifies, cuts through, and captures the essence of IT,” to borrow from Gordon Gekko.

X is for Purple Extreme Network switches: Just because.

Y is for Yet Another Single Pane of Glass? Sort of defeats the purpose.

Z is for ZFS: I think I know why Pure Storage, one of the sponsors, can deliver what they say they can deliver, and why their arrays come with so damn much RAM. ZFS has its drawbacks, but done right, it’s pretty amazing, and just might be at the heart of the disruption that’s occurring in the storage industry. And it’s a form of virtualization too. Good stuff!

That’s it. I’m off to BUR for a flight to SJC. See you there!