
Morpheus Data was our first sponsor at #CFD3 and, as is my custom before Tech Field Day events, I had done zero prep work on Morpheus. I had never heard of the firm, and as first-at-bat sponsors for #CFD3, they were facing 12 delegates full of energy and with decades of Information Technology experience between them. So how’d they do? I came away impressed. Let me tell you why: they have a heart for operations, and I’m an operations guy.

Morpheus Data – Background

I found Morpheus Data’s story pretty compelling when I read up on it later. The company started off more or less as an internal product inside a cost center of Bertram Capital, a private equity firm in the Bay Area. Now, every company has a founding mythology, but Morpheus’s rang true to me. Here, I’ll quote from their site:

Bertram Labs is a world-class team of software developers and ops professionals whose sole purpose is to rapidly implement IT solutions to fuel the growth of the Bertram portfolio. In 2010, that team needed a 100% infrastructure agnostic cloud management platform which would integrate with the DevOps tools they were using to develop and deploy applications for a range of customers on an unpredictable mix of heterogeneous infrastructure. Such a tool didn’t exist so Bertram Labs created their own solution…

Just that phrase right there -an unpredictable mix of heterogeneous infrastructure- captures the je ne sais quoi of my success as an 18-year IT Pro. Using ratified standards sent to us from on high by the greyhairs in the IETF & IEEE ivory towers, a competent IT Pro like myself can string together disparate hardware systems into something rational, because most vendors sometimes follow those standards.

But it’s very hard work. It’s not cheap either. And that act -integrating a Cisco PoE switch with an Aruba access point, or an iSCSI storage array with a bunch of Dell servers- isn’t bringing much value to the business. Perhaps it would be different if IT shops could just start over with a rational greenfield infrastructure design, but that’s rare in my experience, because the needs of IT aren’t necessarily aligned with the needs of the business.

Morpheus Data says they grew out of that exact scenario, which is immediately familiar to me as an ops guy. I find that story pretty encouraging; an internal DevOps team working for a private equity firm was able to productize its in-house scripts & techniques and is now a separate company. Damn near inspiring!

So what are they selling?

It’s Glue, basically. But well-articulated & rational glue

Morpheus’ pitch is that their suite of products can take the pain out of managing & provisioning services from your stack of heterogeneous stuff, whether it’s on-premises, in one cloud, or in several clouds. And by taking the pain out, you can move faster and bring more value to the business.

I’m not going to get into each product because, frankly, I think they’re poorly named and not very exciting (SharePoint-esque in a way: Analytics, Governance, Automation, Evolution, Integrations). But don’t let the naming confuse or dissuade you; it’s an exciting product, and the pricing model is simple to understand.

Instead, let me describe what I saw during Morpheus’ demo at #CFD3:

  • Performance data from on-premises virtualization servers running Hyper-V, VMware, and even Citrix’s XenServer, all in one part of the Morpheus web-based portal
  • You can drill down from each host to look at VM performance data too. Morpheus says they’re able to hook into both Hyper-V and VMware performance counters. That’s pretty awesome for a heterogeneous shop
  • Performance & controls over IaaS & PaaS instances in both Azure & AWS, again in the same screen
  • Menu-driven wizards that let you instantly provision a new virtual machine pre-configured for whatever service you want to run on it. Again, this can be done in the same tool, and you can pick where you want it to go
  • Cost data from each public cloud
  • Rich RBAC controls, which are very important to me from a security & integrity standpoint
  • A composable, role-based interface. For example, you can let your dev team log into Morpheus and not worry about them offlining a .vhdx on a Hyper-V server

This chart from their website sums up their offering nicely in comparison with other vendors in this space.


Concluding Thoughts

I’ve worked in IT environments where purchasing was less than rational. Indeed, I’ve worked at places where we had the very best equipment from multiple vendors, but nobody had the time or talent to integrate it all into a smooth & functional machine in service to the business.

Stepping back, the very nature of the integration puzzle has changed. I mentioned above that a competent IT Pro could stitch together infrastructure that used IETF, IEEE, W3C, and other standards-based technologies. Indeed, that’s been the story of my career.

But in 2018, the world’s moved on from that, for better and worse. The world’s moved on to proprietary Application Programming Interfaces (APIs), and so I’ve moved with it, creating my own PowerShell functions and Python scripts to interact with cloud-based APIs. You can do this too, given enough time & study.
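To give you a flavor of the glue I’m talking about, here’s a minimal PowerShell sketch. The endpoint and token are made up for the example (no particular vendor’s API), but the pattern is the same everywhere: call the REST API, then treat the response like any other object.

    # A minimal sketch of cloud-API glue in PowerShell. The endpoint and the
    # environment variable holding the token are placeholders, not a real API.
    $token   = $env:CLOUD_API_TOKEN
    $headers = @{ Authorization = "Bearer $token" }

    # Pull the instance list from the (hypothetical) provider
    $vms = Invoke-RestMethod -Uri "https://api.example-cloud.com/v1/instances" `
                             -Headers $headers -Method Get

    # Filter and shape the response like any other PowerShell object
    $vms | Where-Object { $_.state -eq "running" } |
        Select-Object name, region, size |
        Format-Table -AutoSize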

But let’s be honest: it’s hard enough to manage & integrate a heterogeneous stack of best-of-breed stuff on-premises. Now your boss comes to you and wants you to add some Azure services & Office 365. And then someone on the business side orders up some Lambdas in AWS, surprise! Or perhaps a distant IT group at your company just went and bought Cloudflare or Rackspace. If you’re still trying to solve the standards-based puzzles of yesteryear while learning how to develop scripts & tools for a world of proprietary APIs, you’re probably not bringing much value to the business.

And that’s where Morpheus sees itself slotting in nicely…they’ve done the hard work of integrating with both your legacy on-premises standards-based systems and the API-driven cloud ones, and they release new integrations ‘every two or three weeks.’ They even take requests, so if you’ve got a bespoke stack of stuff that doesn’t surface SNMP properly, you can propose Morpheus build an integration for it.

Sidenote: One of the more dev-focused delegates at #CFD3 criticized the product as too ops-friendly (nobody cares to see all that stuff! he said), but I had to push back on him, because details are important for ops teams, and Morpheus can surface an interface that’s safe for devs to use. And that’s why I say they’ve got a heart for operations teams.

On pricing: the products, which again have somewhat confusing names, at least offer simplified pricing. To get workloads & ‘core features’ running on a VM in your datacenter, you’ll need to spend $25k to start. That seems high to me, but you’re essentially buying a DevOps integrator & engineer who can work 24/7 and doesn’t need health insurance or take vacation, which is pretty cool, and which helps you bring value to the business.

Disclosures
This blog post was written by me, Jeff Wilson, for publication on my blog, wilson.tech. I was not compensated by Morpheus Data to compose this blog post, and Morpheus did not see it prior to publication. I learned about the Morpheus Data products during Cloud Field Day 3, an event for IT & Enterprise influencers held in April 2018 in Santa Clara, California. The Gestalt IT group paid for my airfare, accommodations, and meals during my time in Santa Clara. Morpheus and other sponsors paid Gestalt IT to bring delegate influencers like me to #CFD3.
Morpheus Data schwag I took home:
  • Cool stickers
  • A t-shirt

Can’t recommend the latest Packet Pushers podcast enough. I mean, normally Packet Pushers (Where Too Much Networking Would Never Be Enough) is great, but their storage networking episode this week was excellent.

Whether you’re a small-to-medium enterprise with a limited budget whose only dream is getting your jumbo frames to work end-to-end on your 1GigE (“a 10 year old design,” one of the panelists snarked), or you’re a die-hard Fibre Channel guy and will be until you die, the episode has something for you.

Rock star line-up too: Chris Wahl, Greg Ferro and J. Metz, a Cisco PhD.

The only guy missing is Andrew Warfield of Coho Data, who blew my mind and achieved Philosopher King of Storage status during his awesome whiteboarding session at #VFD3.

Check it out.


So it’s a big day for the NFS and VMware guys here at #VFD3; they can’t stop talking about the VSAN announcement and the #VFD3 awesomeness that was the last two and a half hours at Coho Data with some of Silicon Valley’s great Storage Philosopher Kings.

For your Hyper-V blogger, it’s time to put on a brave face, and soldier on. Coho’s gotta launch their array (“get to startup escape velocity” as someone on twitter put it) and that means focusing on NFS first. And that’s ok; my delegate friends here seem really interested and excited by this product, and when any virtualization engineer is excited for some new tech, I’m excited with them, even if I have to return home to my tired CSVs.

So what is Coho Data? Aside from having the greatest vendor schwag present ever (I kid!) and the actual best vendor schwag present so far (Chrome bike bag with the Coho logo, seriously a nice bag, thanks!), Coho is a startup with a unique storage product.

And I mean unique. Not sure I even understand it fully.

The Coho Storage architecture, borrowed from another blogger below, looks like any other storage solution, except that it’s completely and totally different. First, it involves a software-defined switch: more or less a switching model in which you let the Coho controller push your storage packets around so that your storage is closer to your hypervisor.

It’s real software-defined switching here; even Tom Hollingsworth was tweeting his approval for the messaging around these switches. For virtualization admins who touch on and worry about storage, compute, and network, it was refreshing for me to hear that Coho’s really putting some thought & interesting tech into the switch, even if I’m wary of letting go of my precious ASICs and my show fabric utilization.

[Image: Coho Data rack layout reference architecture]

On the storage side, Coho sparked my interest for two reasons: cheap, rebranded Supermicro arrays with SATA spinners, and -unlike anyone else we’ve talked to at #VFD3 thus far- PCIe SSD, not SATA/SAS SSD.

Coho’s performance model isn’t RAM-enabled like the Atlantis & Pure models we saw yesterday. This is not a ZFS-derived model; it’s seemingly been grown organically in response to two things: the difficulty of managing and correctly using SSD, and the flexibility of cloud storage models. Coho has thought hard about maximizing SSD performance, about “not leaving any SSD performance on the table,” as the CTO put it, and in response to cloud flexibility, Coho’s model is designed to scale the same way.

Hearing my delegate colleagues talk about Coho, I’ve realized they’ve got something unique and potentially game-changing here. It’s all we could talk about on the VMBus following, and I want to congratulate Coho on the general availability of their new product, something they savvily used #VFD3 to announce today.

Ping them if: IO blending problems send you into a cold sweat, or you hate your ASIC on your switch

Set an Outlook reminder for when: They get Hyper-V SMB 3.0 support, or iSCSI, or OpenStack

Send them to /dev/null if: You aren’t brave enough to challenge storage paradigms

As Southern California is the center of the universe as far as I’m concerned, I know you’re all worried sick about me, this website, and other Southern Californians as we endure a frightening precipitation event of some kind and scale. The Live MegaDoppler 7000 StormSageRadar XXTreme v2.0 Beta can’t tell us yet whether the great California Dampening of 2014 is Noahic in nature or a $deity-punishment ripped straight from the Book of Revelation, but one thing is for certain: the water is everywhere.

Yes, it’s so wet out there that even the mighty Los Angeles River is flowing once again. It may even be navigable.*

But fret not! Let me calm your nerves. Your blogger Jeff, @agnostic_node1 on the twitters, is OK. So are the Child Partition and the Supervisor Module spouse. We’re all safe, our flood pants all fit, and we’ve got buckets and bags of some sort of pre-silicon material at the ready.

The Converged Fabric Agile DevOps ITIL Waterfall Software-Defined Lab @ Home, however? That I worry about.

See I built it in the garage. The Supervisor Module would never permit such equipment inside the living spaces.

Not only is it in the garage, but it’s close to the garage door, bolted down properly to the wooden workbench.

It sits a few inches outside of, but very close to being under, the garage door tracks that lift what’s now become a very wet garage door.

*Gulp*

I may be able to push 4,000 or 5,000 IOPS to the home-built ZFS array sitting in the garage. I’m quite confident in my ability to take my home lab, and my skillset too, to new heights. I can spin up a dozen VMs on this handsome 12U stack at once…no problem at all. I can build a lab that’s agnostic and welcoming to all type 1 and type 2 hypervisors; no discrimination here!
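And “a dozen VMs at once” is not an exaggeration. A quick sketch like the one below does it, though the names, paths, sizes, and switch name are all made up for my lab, so adjust to taste.

    # Batch-create a dozen lab VMs; names, paths, sizes, and the switch name
    # are placeholders from my lab, not a recommendation.
    1..12 | ForEach-Object {
        New-VM -Name ("LabVM{0:D2}" -f $_) `
               -MemoryStartupBytes 1GB `
               -NewVHDPath ("V:\VMs\LabVM{0:D2}.vhdx" -f $_) `
               -NewVHDSizeBytes 40GB `
               -SwitchName "ConvergedSwitch" `
               -Generation 2
    }

    # Start them all and let the ZFS box in the garage earn its keep
    Get-VM -Name "LabVM*" | Start-VM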

What I can’t do is anticipate inclement weather that seems to come at me sideways sometimes.

So what’s an Agile guy to do?

Pull out his IT MacGyver manual: bungee cord, two sticks, and some plastic sheeting:

[Photo: the contingency rig]

Agility Defined. Waterfall no more.

Like I said, not my finest hour in contingency planning, but it’s working. So far. I won’t be putting this lab work on my resume, however.

And yes, it’s okay to laugh.

* By kayak

Your enterprise’s mileage may vary, but in every place I’ve ever worked, I’ve taken a pretty dogmatic approach to disk space utilization on VMs, especially ones hosting specialty workloads, such as Engineering or financial applications.

And that dogma is: no workload is special enough that it needs more than 15% free disk space on its attached, non-boot volume.

This causes no end of consternation and panic among technicians who deploy & support software products.

“Don’t fence me in!” they shout via email. “Oh, give me space, lots of space on your stack of spindles; don’t fence me in. Let me write my .isos and .baks till the free space dwindles! Please, don’t fence me in,” they cry.

“I can’t stand your fences. Let my IO wander over yonder,” they bleat, escalating now to their manager and mine.

Look, I get it. Seeing that the D: drive is down to 18% free space makes such techs feel a bit claustrophobic. And I mean no disrespect to my IT colleagues who deploy/support these applications. I know they are finicky, moody things, usually ported from a *nix world into Windows. I get it. You are, in a sense, advocating for your customer (the Engineering department, or Finance) and you think I’m getting in your way, making your job harder and your deployment less than optimal.

But from my seat, if you’ve got more than 15% free space on your attached volume in production, you’re wasting my business’ money. I know disk space is cheap, but if I gave all the specialty software vendors what they asked for when deploying their products in my stack, my enterprise would:

  • Still have a bunch of physical servers doing one workload each, consuming electricity and generating heat instead of being hyper-rationalized onto a few powerful hosts
  • Waste lots of RAM & disk resources. 400GB free on this one, 500GB free on that one, and pretty soon we’re talking about real storage space

One of the great things about the success of virtualization is that it killed off the sacred cows in your 42U rack. It gave us in the Infrastructure side of the house the ability to economize, to study the inputs to our stack and adjust the outputs based not on what the vendor wanted, or even what us in IT wanted, but on what the business required of us.

And so, as we enter an age in which virtualization is the standard (indeed, some would argue we passed that mark a year or two ago), we’ve seen various software vendors remove the “must be a physical server” requirement from their product literature. Which is a great thing, because I got tired of fighting that battle.

But they still ask for too much space. If you need more than 15% free on any of the attached, MPIO-based, highly-available, high-performing LUNs I’ve given you, you didn’t plan something correctly. Here’s a hint: in modern IT, discrimination is not only allowed, but encouraged. I’m not going to provision you space on the best disk I have for backups, for instance. That workload will get a secondary LUN on my slow array!
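If you want to measure your own estate against that 15% dogma, a quick sketch like this will name and shame the offenders. The threshold is just my rule, and excluding the boot volume by assuming it’s C: is an assumption you may need to adjust.

    # Flag non-boot volumes carrying more than 15% free space (my dogma, not gospel).
    # Assumes C: is the boot volume; adjust the filter for your layout.
    $threshold = 15
    Get-Volume |
        Where-Object { $_.DriveLetter -and $_.DriveLetter -ne 'C' -and $_.Size -gt 0 } |
        Select-Object DriveLetter,
            @{ Name = 'SizeGB';      Expression = { [math]::Round($_.Size / 1GB, 1) } },
            @{ Name = 'PercentFree'; Expression = { [math]::Round(($_.SizeRemaining / $_.Size) * 100, 1) } } |
        Where-Object { $_.PercentFree -gt $threshold } |
        Sort-Object PercentFree -Descending |
        Format-Table -AutoSize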

Agnostic Computing is brand new as tech blogs go, rolled out on a whim in August 2013 just to vent some angst and wax philosophical on some high-technology magic (would you believe my first post was about SharePoint 2013? Uhhh, yeah).

My thinking in starting the site was simple: I wanted to write a blog that was as fun and as passionate as the tech debates my friends & colleagues and I enjoyed at work for years. These are debates that start innocently enough (“Check out my new 1080p Android phone”…or “Do you really buy music from the iTunes store?”) but soon escalate into 45-minute verbal fisticuffs, where low blows & sucker punches are not only permitted, but encouraged.

The geekier the reference, the harder the punch: “That’s a user interface only the mother of Microsoft Bob could love,” “You’re just a sad and broken man because both BeOS & WebOS died and you were the only one who noticed,” “We can’t trust someone who buys music off iTunes to be able to program a switch,” or “You’re acting pretty confident for a guy who broke Exchange just last month.”

Good times. I love those debates, and not just because the normals don’t get them. They’re genuinely fun, so I set out to capture a bit of that spirit on this blog and, I hoped, post some genuinely interesting stuff: a storage bakeoff between bitter rivals, a sincere, screenshot- or gifcam-heavy how-to sent from my virtualized stack to yours, and more.

And so it goes for bloggers, who, like chefs, try a little of this, test a little of that, mix it all up and then taste what’s in the pot. And most of the time, it’s forgettable at best, shame-inducing at worst.

Which makes it all the more surprising for me because apparently I’m doing something right.

You see, I’ve been invited as a delegate to Virtualization Field Day #3. It’s in the Disneyland of High Tech, Silicon Valley, where the combined brainpower is bound to rub off on me. I mean, how can it not?

That’s right. Me. Agnostic Computing guy. Going to Enterprise Tech’s Woodstock.

If you don’t know what #VFD is, then you haven’t been paying close enough attention. From all the interviews I’ve heard with delegates from past Tech Field Days (Storage, Network, Wireless Network…it’s spreading into all our sacred sub-disciplines and dark arts; surely the ERP & SQL guys will be next), going to a TFD as a delegate puts you face to face with the companies, and more importantly the engineers, who designed the stuff you deploy, support, break, fix, and depend on to keep your enterprise running.

Notice I said engineers. Not sales people. Or not just sales people at any rate.

Deep dives, white papers, new horizons opened, the potential to leave behind painful memories of broken processes and old ways of doing things by meeting the other delegates, some of whom I’ve been reading for years…these are the things I’m looking forward to as a #VFD delegate.

Oh and challenging vendors and discerning which product is the right one for the business, which is among the most important jobs we as IT pros have.

As a former boss of mine put it memorably: “We’re only as good as our vendors.” And he was right: whether the device in your rack is amazing and incredible or prone to failure, and whether the service you’ve contracted is game-changing or more trouble than it’s worth, managing “the stack” and interfacing with the stack builders and stack sellers is important to your success, and the business’ success.

Two of the sponsor firms at this year’s #VFD already have me excited. I just finished buying a Nimble array at work (gamechanger! no regrets!), but I won’t lie: I’m Coho-Curious. And Atlantis Computing: sharp guys, A++++ on the blogs, would read again, eager to hear about the products.

Thanks to the Gestalt IT group (add them to your RSS feed, stat!) for the invite, and be sure to check back here -as well as the other delegates’ blogs- for some #VFD thoughts in the weeks ahead.

All the sweat equity, money, and time I’ve put into the home lab is finally paying off at the AgnosticComputing.com HQ.

In fact, it’s been great: satisfying and pleasing little green health icons are everywhere, I read with satisfaction the validated Microsoft cluster configuration reports without any warnings at all, and the failover testing? Let’s just say I can remove the “ish” from the end of the word “redundant.” This stack is as solid as it’s going to get on my low-budget, single-PSU setup designed to draw fewer than 5 amps and less than 500 watts (I’m at about 325W & 3.5 amps, more or less).
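For anyone playing along at home, the validation and failover drills are nothing exotic; roughly the lines below, with HV01/HV02 and the VM name standing in as placeholders for my real ones.

    # Roughly how I exercise the cluster; HV01/HV02 and DC01 are placeholder names.
    Import-Module FailoverClusters

    # The full validation pass that produces those warning-free reports
    Test-Cluster -Node HV01, HV02

    # Failover drill: live-migrate a running VM role to the other node and back
    Move-ClusterVirtualMachineRole -Name "DC01" -Node HV02 -MigrationType Live
    Move-ClusterVirtualMachineRole -Name "DC01" -Node HV01 -MigrationType Live

    # Sanity check that the CSVs are online and owned where I expect
    Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode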


But standing up Hyper-V clusters on consumer-grade hardware isn’t exactly expanding my portfolio, even if all my storage is parked in a (new to me) ZFS box. So last weekend it was time to tackle Hyper-V’s nemesis: VMware’s market-dominating ESXi 5.5, which I’ve got running on a stable 2-core Athlon II box with 12GB of RAM and an Intel 2x1GbE NIC.

For a Hyper-V guy who hasn’t touched ESXi since probably 2011, building out the ESXi box involved some trips down memory lane.

A memory lane called Pain Street.

The last time I worked in ESXi on anything meaningful was an eight-month span in 2011 in which my colleagues and I were charged with replacing ESXi with Hyper-V 2.0, baked into the just-released 2008 R2 edition.

We had Hyper-V 2.0, a few brand-new PowerEdge servers with quad-Nehalem CPUs, something like 512GB of RAM, a FAS 2210, System Center Virtual Machine Manager, 2007 Edition, and a brand new file system-like layer on top of NTFS called Cluster Shared Volumes.

Oh, and a handful of V2V tools & .vmdk-to-.vhd conversion scripts with which we planned to stick it to VMware.

I mentioned that this was a painful time in my life, right?

I’ll save the Hyper-V war stories and show you my scars (Hyper-V virtual switch ARP storms, oh my!) another time, but here’s what I learned from that experience: Hyper-V 2.0 was in all ways inferior to ESXi when it debuted in Server 2008 R2. And not just a little inferior. No, we are talking NBA vs. 8th-grade boys’ basketball team levels of inferiority.

The Hyper-V 2.0 guys will know what this is.

The Hyper-V 2.0 guys will know what this is.

It was half-baked, not entirely thought out, difficult to scale, prone to random failures, and hard to back up (even risky…sometimes the CSVs would just drop off when IO was supposed to be redirected to another host), and the virtual drivers written by Microsoft for Microsoft Hyper-V virtual machines running on Microsoft virtual synthetic NICs weren’t stable. It was a hypervisor that made you pound your keyboard, sit back in your chair, scratch your head, and ask, “Has anyone at Microsoft ever tried to use this thing?”

And you couldn’t team it and expect Microsoft support. I had to delay my love letter to LACP for years because of that.

Even so, I loved Hyper-V 2.0. Wore the admin hat like a badge of honor. Proud and boastful of the things I could make Hyper-V 2.0 do in the face of so much adversity, so much genetic disadvantage. Yeah the other guys had Ferraris tuned up by Enzo himself and all I had was a leaky Fiesta with a suspect axle, but that Fiesta could, in the right hands, make it across the finish line.

We, we happy few, we band of brothers, who persisted in our IT careers through the days of Hyper-V 2.0 and even excelled.

[Image: Backing up your VMs in 2008 R2 involved this, which worked…mostly. But pucker factor was high. In 2012 I never worry.]

All that to say the heyday of VMware, ESXi, the Nexus 1000v, and now VSAN has kind of passed me by. I just can’t seem to get exposed to it, to sink my teeth into that whole wondrous stack. It’s expensive.

But it’s been alright with me, because in the same span I’ve adopted Hyper-V 3.0 with relish and become convinced that we Microsofties finally had a hypervisor worthy of respect. “Feature parity” is a term that’s been bandied about, and with 2012 R2, it got even better. EMC, parent company of VMware, even called SMB 3.0 “the future of storage.” Haha, take that, NFS!

So has it reached parity?

It’s not easy for me to admit this, but after playing with ESXi at home, I have to concede that while I like Hyper-V much more in some areas and feel it can scale and serve any enterprise well, it still has deficits purely from a hypervisor perspective (System Center is a different animal).

Deficits other virtualization bloggers are eager to demonstrate, with barely-concealed glee. Take Mike Laverick, a sharp ESXi guy, for instance. This February readers of his blog have been treated to post after post In Which the ESXi Guy Plays with Hyper-V 3.0.

I’m always up for a good tech debate, but after devouring his posts, letting them sink in, I got nothin’ except a few meek responses and maybe some envy.

He concludes at the bottom of this great screenshot-by-screenshot comparison:

I guess to be fair – taken individually this lack of hotness of the Gen2 Windows 2012 Hyper-VM might not be a deal breaker for some. For me personally, they collectively add up big pain in the rear, especially if you coming off the back of virtualization product like VMware vSphere that does have them. For me the whole point of virtualization is it liberates us from the limitations of the physical world. What’s the point of software-defined-virtual-machines, when it feels more like the hardware-defined-physical-machines….

’Tis true in some respects. I have long wanted to stop mapping LUNs directly from the SAN, through the Hyper-V switch, to a virtual machine, but it was not possible to resize .vhdx drives on a live VM until October 2013, when R2 was released. And even now in R2, it’s not simple enough or -more importantly- reliable enough to depend on in production, at least not compared to resizing an RDM on a NetApp or Nimble, or even on my ZFS array.
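For what it’s worth, the R2 mechanics look simple enough on paper. Here’s a sketch with made-up paths and sizes; note the online expand only works when the .vhdx hangs off the VM’s SCSI controller.

    # Online expand in 2012 R2; the .vhdx must be attached to the VM's SCSI
    # controller (not IDE) for this to work while the VM is running.
    # Path and target size are made up for the example.
    Resize-VHD -Path "C:\ClusterStorage\Volume1\AppServer\data.vhdx" -SizeBytes 200GB

    # Then, inside the guest, extend the partition to claim the new space
    $max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
    Resize-Partition -DriveLetter D -Size $max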

I will offer some resistance in the following two areas though.

Hyper-V runs on whatever piece of junk you throw at it. That’s interesting news if you’re a value-oriented enterprise, and really great news if you’re building a home lab or trying to learn the trade. VMware, in contrast, won’t even install without supported NICs…the cheap Realtek in your Asus? Not supported. The Ferrari metaphor is apt: you’ve got to shell out some bucks for the high-octane stuff before you can stand up ESXi in a meaningful way.

My second observation is that I’m not comprehending the switching model very well. I was really excited to see Cisco Discovery Protocol just work on mouse-hover with zero configuration, but this 1:1 stuff feels archaic, devoid of the abstract fabric goodness:

What am I missing here?

[Screenshot: the ESXi vSwitch topology view]

On my ESXi box, I’ve got two Intel GigE adapters. I have the option to make them active/passive (cool) or to team them, but I’m not seeing the same converged fabric concept that’s liberated me in Hyper-V 3.0 from, guess what, worrying about hardware.

The three NICs on my Hyper-V host, for instance, are joined in an LACP team, which then is used to build a true & advanced virtual switch for both the host & the guests. And an LACP-capable switch is not a requirement here; I could use the dumb switch in my rack and have the same fault-tolerant (though lower performing) converged team.

Some very simple PowerShell lines later, and you’ve got vEthernet adapters on the management OS tagged with the appropriate VLANs.
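Those “very simple PowerShell lines,” more or less. The team name, adapter names, switch name, and VLAN IDs below are from my lab and purely illustrative.

    # The gist of my converged setup; team/switch/adapter names and VLAN IDs
    # are placeholders from my lab.
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3" `
                    -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

    # One external vSwitch on top of the team
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
                 -AllowManagementOS $false -MinimumBandwidthMode Weight

    # Carve out vEthernet adapters for the host and tag them
    Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt" -Access -VlanId 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20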

All ports on the physical Cisco switch? Trunked.

Freedom.

[Screenshot: the Hyper-V converged virtual switch and team]

I know I’m missing something here…PowerCLI? I’ll be testing that tonight.
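If tonight’s PowerCLI homework pans out, I’d expect the first pass to look something like this; consider it an untested sketch, with a placeholder host address.

    # Untested first pass at poking ESXi networking from PowerCLI;
    # the host address is a placeholder.
    Add-PSSnapin VMware.VimAutomation.Core
    Connect-VIServer -Server 192.168.1.50

    # What vSwitches and port groups did the installer leave me?
    Get-VirtualSwitch | Select-Object Name, Nic, NumPorts
    Get-VirtualPortGroup | Select-Object Name, VLanId

    # And how are the uplinks teamed on the default switch?
    Get-VirtualSwitch -Name "vSwitch0" | Get-NicTeamingPolicy |
        Select-Object LoadBalancingPolicy, ActiveNic, StandbyNic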