
Sponsor #3 : Pure Storage


I love the smell of storage disruption in the morning.

And this morning smells like a potpourri of storage disruption. And it’s wafting over to the NetApp & EMC buildings I saw off the freeway.

I was really looking forward to my time with Pure, and I wasn’t disappointed in the least. Pure, you see, offers an all-flash array, a startlingly simple product lineup, and an AJAX-licious UI, and makes such bold IOPS claims that their jet-black-and-orange arrays are considered illegal and immoral south of the Mason-Dixon line.

This doesn’t work anymore, or so newer storage vendors would have us believe

Pure also takes a democratic approach to flash. It’s not just for the rich guys anymore; in fact, Pure says, they’re making the biggest splash in SMB markets like the one I play in. Whoa, really. Flash for me? For everyone?

When did I die and wake up in Willy Wonka’s Storage Factory?

It’s an attractive vision for storage nerds like me. Maybe Pure has the right formula and their growth and success portends an end to the tyranny of the spindle, to rack U upon rack U of spinning 3.5″ drives and the heat and electrical spend that kind of storage requires.

So are they right? Is it time for an all-flash storage array in your datacenter?

I went through this at work recently, and it came down to this: there’s an element of suspending your disbelief when it comes to all-flash arrays, and even the newer hybrid arrays. There’s some magic to this thing, in other words, that you have to accept, or at least get past, before you’d consider a Pure.

I say that because even if you were to use the cheapest MLC flash drives you could find, and you were to buy them in bulk at a volume discount, I can’t see a way you’d approach the dollar-per-GB cost of spinning drives in a given amount of rack U, nor could you match the GB-per-U of 2.5″ 1TB spinning disks (though you can come close on the latter). At least not in 2014, or perhaps even 2015.

So here, in one image, is the magic, Pure’s elevator pitch for the crazy idea that you can get an affordable, all-flash array that beats any spinning disk system on performance and meets/exceeds the storage capacity of other arrays:


Pure’s arrays leverage CPU and RAM to maximize capacity & performance. A typical storage workload on a Pure gets compressed wherever it can be compressed and deduped inline; blocks of zeros (and other recognizable patterns) aren’t written to the array at all, with metadata recorded instead; and thin provisioning goes from being a debatable storage strategy to a way of life in the Pure array.

Pure says all this inline processing lets the array avoid 70-90% of the writes it would otherwise have to commit to its consumer-grade MLC SSDs, which aren’t built for write endurance the way enterprise-grade SLC SSDs are.
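To make that claim a bit more concrete, here’s a toy PowerShell sketch of the two cheapest tricks in such a pipeline: zero-block elimination and inline dedupe. The names and logic are mine, not Pure’s, and Purity’s real pipeline (inline compression, pattern removal, and so on) is vastly more sophisticated:

```powershell
# Toy model of inline data reduction: fingerprint each block before it is
# written; only unique, non-zero blocks ever cost a flash write.
$sha  = [System.Security.Cryptography.SHA256]::Create()
$seen = @{}   # block fingerprint -> already lives on flash

function Write-Block {
    param([byte[]]$Block)

    # An all-zero block never touches the SSDs; just record metadata.
    $nonZero = $false
    foreach ($b in $Block) { if ($b -ne 0) { $nonZero = $true; break } }
    if (-not $nonZero) { return 'zero block: metadata only' }

    # A duplicate block becomes a metadata pointer, not a write.
    $fp = [BitConverter]::ToString($script:sha.ComputeHash($Block))
    if ($script:seen.ContainsKey($fp)) { return 'duplicate: metadata pointer' }

    $script:seen[$fp] = $true
    return 'unique: written to flash'   # the only case that burns MLC endurance
}

# The same non-zero 4KB block written ten times costs exactly one flash write.
$blk = New-Object byte[] 4096
$blk[0] = 0xFF
1..10 | ForEach-Object { Write-Block $blk }
```

Run a few hundred nearly-identical Windows VMs through logic like that and you can see where all those avoided writes come from.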

Array tech specs. The entry-level array has only 2.7TB of raw SSD, but at a 4:1 compression/dedupe ratio, Pure says 11TB effective is possible.

What’s more, Pure includes a huge amount of RAM in even their entry-level array (96GB), which it uses as a ZFS-like hot cache to accelerate IO.

Dual Westmere-class six-core Intel CPUs outfit the entry array, and Pure’s philosophy on their use is simple: if the CPUs aren’t fully utilized at all times, something’s wrong and performance is being left on the table.

These clever bits of tech (inline compression, dedupe, and more) add up to a pretty compelling array that draws only 400-450 watts, takes up only 2U of your rack, and, I’m told, starts at a bit under six figures.

Pure really took some time with us, indulging all our needs. I asked for, and was shown, the CLI for the Purity OS, and I liked what I saw. Pure also had a Hyper-V guy on deck to talk about integration with Microsoft & System Center, which made me feel less lonely in a room full of VMware folks.

Overall, Pure is the real deal. After asking them some tough questions and hearing from their very sharp senior science/data guys, I think I do believe in magic.

Ping them if: You suffer from spinning disks and want a low-cost entry to all-flash

Set Outlook reminder for when: No need. Feels pretty complete to me. Plugins for vCenter & System Center to boot

Send to /dev/null if: You believe there is no replacement for displacement (spindles)

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Eric Shanks

* GestaltIT, the creator and organizer of Tech Field Day events like this one, paid for airfare, lodging, and some pretty terrific meals and fine spirits for #VFD3 Delegates like me. No other compensation was given and no demands were made upon me other than to listen to Sponsors like this one. 


Sponsor #2 : Atlantis Computing*

Agnostic Computing.com doesn’t sugarcoat things, and neither do my fellow delegates. We all agreed that the sharp guys at Atlantis Computing had a plan for us; all sponsors of #VFD3 have an agenda, but Atlantis really wanted to hold us to theirs. They didn’t dodge our probing and shouted questions, but at times they did ask us to wait a sec for the answer on the next slide.

And if you know Tech Field Day, then you know #VFD3 delegates aren’t virtuous… we don’t even understand what the word “patience” means, and we make sport out of violating the seven deadly sins. So when the Atlantis folks asked us again and again to wait, in effect asking us to add some latency to our brains in the HQ of a company designed to defeat latency once and for all, I felt like the meeting was about to go horribly off the rails.

But answer they did, and I think the session with Atlantis Computing’s data guys was, overall, quite enlightening, even if it did get a tad combative at times. On reflection, and after talking to my fellow delegates, I think we measured Atlantis with our collective virtual brains and found them… splendid.

So what’s Atlantis pitching?

Oh, just a little VM for VMware, Hyper-V, and Xen that sits between your stupid hypervisor and your storage. (Quote of the Day: “What hypervisors end up doing is doling out IO resources in a socialist and egalitarian way,” which is further proof of my thesis about the utility of applying economics to thinking about your datacenter.)

Wait, what? Why would I want a VM between my other VMs and my storage?

Because, Atlantis argues, the traditional Compute<–>Switch<–>Storage model sucks. Fibre Channel, iSCSI, InfiniBand… what is this, 2007? We can’t wait a whole millisecond for our storage anymore; we need microsecond latency.

Atlantis says they have a better way: let their virtual machine pool all the storage resources on your physical hosts together: SSD, HDD… all the GBs are invited. And then, and here’s the mindfuck, Atlantis’ VM is going to take some of your hosts’ RAM, and it’s going to allow you to park your premium VMs in a datastore (or CSV) inside that pool of RAM, across your HA VMware stack or Hyper-V cluster, Nutanix-style but without the hardware capex.

Then this ballsy Atlantis VM is going to apply some compression to the inbound IO stream, politely ask you for only one vCPU (plus one in reserve), and when all is said and done, you can hit the deploy button on your Windows 7 VDI images and bam: scalable, ultra-fast VDI, so fast that you’ll never hear complaints from the nagging Excel jockey in accounting.
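For a back-of-napkin feel for the core idea, a RAM-resident block store that compresses on ingest, here’s a toy PowerShell sketch. The function names and the Deflate choice are mine, not Atlantis’, and their real IO path obviously does a great deal more:

```powershell
# Hypothetical sketch: blocks live as compressed byte arrays in host RAM.
$ramStore = @{}   # block ID -> Deflate-compressed bytes; no disk IO anywhere

function Write-RamBlock {
    param([int]$Id, [byte[]]$Data)
    $ms = New-Object System.IO.MemoryStream
    $ds = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Compress)
    $ds.Write($Data, 0, $Data.Length)
    $ds.Dispose()                            # flush and close the compressor
    $script:ramStore[$Id] = $ms.ToArray()    # the compressed copy, parked in RAM
}

function Read-RamBlock {
    param([int]$Id)
    $ms  = New-Object System.IO.MemoryStream(,$script:ramStore[$Id])
    $ds  = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Decompress)
    $out = New-Object System.IO.MemoryStream
    $ds.CopyTo($out)
    $ds.Dispose()
    ,$out.ToArray()   # leading comma keeps PowerShell from unrolling the byte array
}
```

The interesting (and scary) part is everything that sketch leaves out: what happens to those bytes when a host power-cycles, which is exactly what we grilled them on below.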

Kind of far-fetched, if you ask the #VFD3 crew: there’s technology, and there’s science fiction. But Atlantis was prepared. They brought out a bright engineer who sat down at the table, spun up an Iometer demo, clicked the speedo button, and looked up at us as we watched the popular benchmark utility hit 1 million IOPS.

Yes. One.Million.IOPS

I think the engineer even put his pinky in his mouth, just before he dropped the mic.

It was a true #StorageGlory moment.

Ha! Just kidding. I don’t think we even smiled.

“What’s in that Iometer file, a bunch of zeros?” I trolled.

“What if one of my hosts just falls over while the datastore is in the RAM?” asked another.

“Yeah, when do the writes get committed to the spinners?” chimed in another delegate.

Then the Other Scott Lowe, a savvy IT guy who can speak to the concerns of the corner office, spoke: “You want to talk about CIOs? I’ve been a CIO. Here’s the thing you’re not considering.”

You don’t get invited to a Tech Field Day event unless you’re skeptical, willing to speak up, and have your bullshit filter set on Maximum, but I have to say, the Atlantis guys not only directly answered our questions about the demo but pushed back at times. It was some great stuff.

I’ll let my colleague @Discoposse deep-dive this awesome tech for you and sum Atlantis up this way: they say they were doing software-defined storage before it was a thing, and, stepping back, they’re convinced this model of in-memory computing, for VDI and soon for server workloads, is the way forward.

And, more than that, ILIO USX is built on the same stuff they’ve already deployed en masse, in 50,000+ desktop VDI estates at giant banks, the US military, and a whole bunch of other enterprises. This thing they’ve built scales, and at only $200-300 per desktop, with no hardware purchases required.

If you asked me before #VFD3 whether I’d put virtual machines inside of a host’s RAM outside of a ZFS adaptive replacement cache context, I’d have said that’s crazy.

I still think it’s crazy, but crazy like a fox.

Ping them if: There’s even a hint of VDI in your future or you suffer through login storms in RDS/XenApp but can’t deploy more hardware to address it

Set Outlook reminder for when: Seems pretty mature. This works in Hyper-V and even Xen, the one Hypervisor I can actually pick on with confidence

Send to /dev/null if: You enjoy hearing the cries of your VDI users as they suffer on 25-IOPS virtual machine instances

Other Links/Reviews/Thoughts from #VFD3 Delegates:

Eric Wright

Marco Broeken



Sponsor #1 : CloudPhysics

In reviewing the sponsors and their products ahead of #VFD3, I admit CloudPhysics didn’t get me very excited. They offered something about operations and monitoring. In the cloud.

Insert screenshot of dashboards, tachometers, and single pane of glass here. Yawn.

But I was totally wrong. The CloudPhysics CEO put his firm into some great context for me. CloudPhysics, he told us, is about building what our industry has built for every other industry in the world but never achieved for our own datacenters: aggregate data portals that help businesses make efficiency gains by looking at inputs, measuring outputs, and comparing it all at huge scale.

Not clear yet? OK, think of Nimble Storage’s InfoSight, something I heart in my own stack. Nimble takes anonymized performance data from every one of their arrays in the field, smashes it all together, applies some logic, heuristics, and intelligence, and produces a pretty compelling & interesting picture of how their customers are using their arrays. With that data, Nimble can proactively recommend configurations for your array, alert customers before a particular bug strikes their production systems, and produce a storage picture so interesting that some argue it should be open-sourced for the good of storage jockeys everywhere.

CloudPhysics is like that, but for your VMware stack. And soon, Hyper-V.

Only CloudPhysics is highly customizable, RESTful, and easily queried. What’s more, the guys who built CloudPhysics were bigshots at VMware, giving CloudPhysics important bona fides with my virtualization colleagues who run their VMs inside datastores & NFS shares.
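As a flavor of what “RESTful and easily queried” could look like in practice, here’s a purely hypothetical PowerShell sketch. The endpoint, auth scheme, and property names are invented for illustration (note the .example domain); only Invoke-RestMethod itself is real, so check CloudPhysics’ actual API docs before copying anything:

```powershell
# Hypothetical query against a CloudPhysics-style analytics API.
$headers = @{ Authorization = 'Bearer YOUR-API-TOKEN' }
$resp = Invoke-RestMethod -Uri 'https://api.cloudphysics.example/v1/datastores' -Headers $headers

# Surface the five datastores with the worst observed latency.
$resp | Sort-Object -Property avgLatencyMs -Descending | Select-Object -First 5
```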

For the lone Hyper-V guy in the room (me), it was a pretty cool vision: like System Center Operations Manager, only better and actually usable, and operating at a huge, macro scale.

And CloudPhysics isn’t just for your on-prem stuff, either. They can apply their tech to AWS workloads (to some extent), and I think they have Azure in their sights. They get the problem (it’s tough to pull meaning and actionable intel out of the syslogs of a hundred different hosts), and I think they have an interesting product.

CloudPhysics Summary:

Ping them if: You know the pain of trying to sort out why your datastore is so slow and which VM is to blame and you think it’s always the storage’s fault

Set Outlook reminder for when: They can apply the same stuff to Hyper-V, Azure, or your OpenStacks, KVMs, and Xens

Send to /dev/null if: You enjoy ignorance

CommVault

I’d like to start this post off with an apology.

To every backup software vendor I’ve worked with in my 15-year IT career: I’m sorry. I’m sorry for hating you all these years, for looking at you as an “Oh God… must I?” part of my stack. For resenting you and your product just a little less, and only a little, than anti-virus products. I’m sorry for abusing your support personnel, in mind and thought and via voodoo dolls & dartboards festooned with prints of your logo. I take back everything mean I said about you and your product and your software engineers. My bad. You’re people too, after all.

Somewhere, a tape library robot is quietly, insistently requesting that old backup tape you found in your garage

Backups. I don’t want to deal with them. I don’t ever want to have to restore them. I have confidence in them… mostly… but in my view, if backups are anything but the last bullet in the chamber, then I haven’t planned my stack correctly in the virtual age.

When you’re 100% virtual, when you’ve already taken that step to free yourself from hardware and change your thinking, why be dependent on the tape machine, its so-called robot, and a failure-oriented practice from the ’90s? I want to WIN, and winning means redundancy, portability, & speed, qualities that will empower the business.

So pitch your product as a “backup” solution and, right out of the gate, you’re making me think of failure & pain. Sorry, it’s just my programming. You and your software product make me think of failure… how do you like them apples? How can you look at yourself in the mirror? Dammit, sorry, there I go again.

So CommVault was on deck for #VFD3 Day 2, and this flaw in my psyche was bound to collide with their pitch unless, by some miracle, CommVault could show me a better way, a way out of the fail swamp and into the IT Hall of Fame.

Show me the way home, CommVault, or get thee into my tech dustbin.

Mission accomplished? We’ll see. But I’m intrigued.

What CommVault’s pitching is backup software, but it isn’t just backup software; it’s much more than that. Do yourself a favor and don’t pigeonhole them like I did.

Let CommVault archive that old Windows 2003 VM for you

Simpana, the product we spoke with them about at length yesterday, is better thought of as a sort of auxiliary storage or resource manager, parallel to and complementary with your file system, your virtual machine manager, your storage system, and, yes, your old stank-ass backup software.

What’s interesting about Simpana is that it’s so complementary to those things that it’s almost (almost!) a stand-in for them. These folks are really thinking outside the box here. Simpana can give you this:

  • A Self-Service Portal for your dev team, giving them the ability to provision new VMs in vCenter or Hyper-V and back them up or restore them
  • A DASH engine, which functions almost as a WAN accelerator, easing your pain during backups and file operations at remote sites
  • The ability to restore a backed-up VM’s data to a fresh, naked, just-provisioned VM
  • Backup of on-prem VMs, with restore to public clouds like Azure
  • De-duplication of your backup stores
  • Auto-power-off or archival of old & unused virtual machines

Each of those points is available for VMware, of course, but the cool thing about CommVault is that they’re highly aligned with the Microsoft stack & vision. In fact, they seem to offer a few more nuanced features for Hyper-V than they do for VMware, and if you are a Microsoft shop, CommVault is heavily plugged into Azure, which makes them interesting in a number of ways.

Self-Service Portal for the dev in your life who breaks his VM all too often

So if it’s something more than backup software, it’s got to be more expensive than backup software too, right? Yes, and let’s give a *golf clap* to the CommVault folks, because they talked honestly about pricing, even if I think it’s kind of a crazy model: CommVault charges per terabyte. They even put a graphic up on the screen: if you’re backing up 11TB now for $25,000/year, CommVault’s model will cost you $49,500, which works out to $4,500 per terabyte per year.

Twice as much? That’s crazy, isn’t it? What if you already have Microsoft’s Data Protection Manager, which does synthetic, space-saving backups like CommVault; System Center’s VMM, which has the self-provisioning VM system; and Windows Server 2012, which offers native de-duplication of Windows drives?

I challenged the CommVault folks on each of those points, and it was highly comforting that they had an answer for every one, even if I think some of those answers are disputable. It means they’re really familiar with the Windows environment, with Hyper-V, and with how Simpana might slot into System Center & a hybrid on-prem/cloud enterprise.

In short: I’m not sold, but I’m interested and curious, and when the sad and tedious topic of backups comes my way again, CommVault’s going to be at the top of my list.

Ping them if: You suffer from an old backup model that you can’t seem to break free from and want to enable new possibilities with something more than just backup software

Set Outlook Reminder for when: You can justify an increased cost or can make the ROI numbers work

Send to /dev/null if: You’re spiritually yoked to the grandfather-father-son model

Spirent

One of the recent developments in virtualization I’m thinking of exploring in the home lab is NVGRE, or Network Virtualization using Generic Routing Encapsulation. NVGRE is Microsoft’s offering in the nebulous, ill-defined software-defined networking space, and it’s just a few PowerShell cmdlets away from being turned on in my lab.
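For the curious, those “few cmdlets” look roughly like the sketch below on a Windows Server 2012 R2 host. The adapter name, addresses, and IDs are hypothetical lab values, and a real deployment needs lookup records maintained for every VM on every host:

```powershell
# Bind the Windows Network Virtualization filter driver to the physical NIC.
Enable-NetAdapterBinding -Name "Ethernet" -ComponentID "ms_netwnv"

# Tag the VM's vNIC with a Virtual Subnet ID (the NVGRE VSID).
Set-VMNetworkAdapter -VMName "DC01" -VirtualSubnetId 5001

# The Provider Address: the underlay IP that GRE-encapsulated packets ride on.
New-NetVirtualizationProviderAddress -InterfaceIndex 12 -ProviderAddress "192.168.50.10" -PrefixLength 24

# Map the VM's Customer Address to the Provider Address of its current host.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" -VirtualSubnetID 5001 -MACAddress "00155D010A05" -ProviderAddress "192.168.50.10" -Rule "TranslationMethodEncap"

# Give the virtual subnet a route inside its routing domain.
New-NetVirtualizationCustomerRoute -RoutingDomainID "{11111111-2222-3333-4444-000000005001}" -VirtualSubnetID 5001 -DestinationPrefix "10.0.0.0/24" -NextHop "0.0.0.0"
```

Multiply that bookkeeping by every VM on every host and you can see why Microsoft really expects System Center VMM, not a human at a console, to drive this.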

But should I bother?

No less august an authority than Ivan Pepelnjak has called the network virtualization model & NVGRE in Windows Server 2012 R2 “simply amazing,” but he’s also remarked on how complex and confusing it is: truly a Layer 3 NV product between hypervisor hosts, but a muddled L2/L3 product within the hypervisor itself.

For a humble systems engineer with a mutt lab at home and a highly rationalized stack at work, I’m struggling with whether network virtualization’s benefits outweigh its risks. The goal is simple: #NetworkingGlory, the promised land where my most important /24 subnet follows the sun, hopping between datacenters over my existing MPLS network, or, better yet, freeing me from paying for MPLS in the first place. Does NVGRE put me on the path to #NetworkingGlory, or is it a distraction?

NVGRE. Whoohooo, the same /24 on a single server, no conflict. Great for Azure, meh for me

My sense from the last few days here at #VFD3 is that the VMware guys are in the same boat. They’ve got VXLAN & NSX in their stack, but when I see those products mentioned, I get the feeling some of them are just as “meh” on NSX as I am on NVGRE.

Enter Spirent (pronounced Spy-rent), a large technology testing & validation firm that could, frankly, use a bit of work on their website (just run your mouse over “Products” and check out the dizzying list that results). I wasn’t too excited to visit Spirent, but I’m glad they sponsored the event, because I left their facility in Mountain View impressed.

So what does Spirent do? When you or I shop for a new top-of-rack switch and want to compare baseline fabric performance in packets per second, bandwidth, and switching capacity, it’s often Spirent’s test equipment that populated the datasheet. If you’re familiar with Ixia at all, Spirent is in the same space, but more of a dominant player, and their client list is pretty impressive, spanning web service companies, telecom, mobile, and many more. In fact, I wouldn’t be surprised if they have certain three-letter government agencies as clients.

But what can they do for me?

Well, if I ever get to a place where I’m embracing NVGRE in Hyper-V, I’m going to give Spirent a call. The firm sells network virtualization products designed to help you test, tap, validate, and troubleshoot your virtual networking stack. You can purchase, today, a virtual machine that lets you peer inside your NVGRE tunnels, and that’s important because once your traffic is wrapped in GRE encapsulation, Wireshark alone isn’t going to tell you squat about what’s wrong.

They also sell some pretty neat software testing products. iTest Enterprise, a fat Win32 client, is able to capture your most complicated testing setups. Want to see what happens to your advanced caching storage array when you automate the deployment of 100 virtual machines? You can run it once, and that will tell you something about the array, but true #StorageGlory wisdom will only be achieved when you’ve run that same tedious test a dozen times, which would be a major pain in the ass unless you have something like iTest Enterprise.

Wish I had that during my bakeoff with the Nimble and incumbent arrays earlier this year.

Spirent’s got more, too: cloud testing products, cloud automation tools, and a slick-looking iPad application (which, sadly, we couldn’t touch and play with) that looked like it could do all sorts of useful things.

These are some smart guys making some interesting products that allow you to tap into your hypervisor and find out exactly what’s going on.

Ping them if: Your virtualization environment is huge, you suffer automation & testing pains, or you want to peer inside your encapsulated virtual networks

Set Outlook reminder for when: No need to wait for them on anything; they support VMware, OpenStack, Hyper-V, hell, even Xen.

Send them to /dev/null if: You don’t care about your users’ and your company’s data integrity & security

Coho Data

So it’s a big day for the NFS and VMware guys here at #VFD3; they can’t stop talking about the VSAN announcement and the #VFD3 awesomeness that was the last two and a half hours at Coho Data with some of Silicon Valley’s great Storage Philosopher Kings.

For your Hyper-V blogger, it’s time to put on a brave face and soldier on. Coho’s gotta launch their array (“get to startup escape velocity,” as someone on Twitter put it), and that means focusing on NFS first. And that’s OK; my delegate friends here seem really interested in and excited by this product, and when any virtualization engineer is excited about some new tech, I’m excited with them, even if I have to return home to my tired CSVs.

So what is Coho Data? Aside from having the greatest vendor schwag present ever (I kid!) and the actual best vendor schwag present so far (a Chrome bike bag with the Coho logo; seriously a nice bag, thanks!), Coho is a startup with a unique storage product.

And I mean unique. Not sure I even understand it fully.

The Coho Storage architecture, borrowed from another blogger in the image below, looks like any other storage solution, except that it’s completely and totally different. First, it involves a software-defined switch: more or less a switching model in which you let the Coho controller push your storage packets around so that your storage sits closer to your hypervisor.

It’s real software-defined switching here; even Tom Hollingsworth was tweeting his approval of the messaging around these switches. For virtualization admins who touch on and worry about storage, compute, and network, it was refreshing to hear that Coho’s really putting some thought & interesting tech into the switch, even if I’m wary of letting go of my precious ASICs and my show fabric utilization.

Coho Data rack layout (marketing reference architecture)

On the storage side, Coho sparked my interest for a few reasons: cheap, rebranded Supermicro chassis; SATA spinners; and, unlike anyone else we’ve talked to at #VFD3 so far, PCIe SSD rather than SATA/SAS SSD.

Coho’s performance model isn’t RAM-driven like Atlantis’ and Pure’s from yesterday. This is not a ZFS-derived model; it seems to have grown organically in response to two things: the difficulty of managing and correctly using SSD, and the flexibility of cloud storage models. Coho has thought hard about maximizing SSD performance, about “not leaving any SSD performance on the table,” as the CTO put it, and in response to cloud flexibility, Coho’s model is designed to scale the same way.

Hearing my delegate colleagues talk about Coho, I’ve realized they’ve got something unique and potentially game-changing here. It’s all we could talk about on the VMBus afterward, and I want to congratulate Coho on the general availability of their new product, something they savvily used #VFD3 to announce today.

Ping them if: IO blending problems send you into a cold sweat, or you hate the ASIC on your switch

Set Outlook reminder for when: They get Hyper-V SMB 3.0 support, or iSCSI, or OpenStack

Send them to /dev/null if: You aren’t brave enough to challenge storage paradigms

In the run-up to Tech Field Day, Virtualization Style #3, a lot of the chatter in my inbox from my fellow delegates involved a bit of groaning. We’re IT guys, we bitch a lot, after all, and most of it went like this:

“I’ll software-define them right into /dev/null if they mention cloud too many times. Please, sponsors, don’t overload us with marketing speak.” That was the general sentiment.

Though I’m the newbie in this delegate crew, I mentally plus-oned many of these comments. None of us, certainly not me, jumped on an airplane and left our Child Partitions behind just to get PowerPointed to death. We, as a group, are allergic to Silicon Valley marketing.

So imagine how refreshing it was for me, and perhaps for my fellow delegates, that on Day One of #VFD3 the hype-words were kept to a minimum, the sponsors made the Smartest Guys in the Room available to us, and PowerPoint, though present, played second fiddle to live demos and whiteboards. Yes, whiteboards: those wondrous things in front of which all the IT zoo animals gather to ingest data, chew on it, and then come up with a way out of the fix we’re in.

I think, if anything, Day One’s theme, and perhaps the theme of #VFD3, wasn’t cloud, or software-defined this, agile that. No, the theme of Day One for me was something much more down-to-earth, something very basic, something we can all touch and hold in our hands and comprehend.

#VFD3 Day One was all about the DRAM, baby:


“I got me a hundred gigabytes of RAM” – Weird Al

Hell yeah.

RAM. The one constituent part of my enterprise stack that’s consistently fast, trouble-free, wickedly small, and easy to provision, care for, and cool. RAM… the one thing I’ll always take more of yet can never have enough of. RAM: costly but mostly drama-free, save for one thing: power-cycling the server makes the RAM go blank, so obviously we can’t use it for pain points like storage, right?


And yet, two of the three vendors yesterday told us that yes, not only can you use RAM to multiply the benefits of already-fast SSD, or just bypass those slow-ass SSD drives in the first place, but that this once-heretical idea is ready for primetime, for your enterprise, and maybe even your SMB.

Let’s dig in:

#VFD3 Day One – CloudPhysics

#VFD3 Day One – Atlantis Computing

#VFD3 Day One – Pure Storage