Going full Marxist up in the datacenter

Suppose you are:

  • An IT engineer at a small-to-medium enterprise, responsible for the company’s enterprise stack, from Cisco to storage to compute
  • Suppose further that next year’s budget includes a modest five-figure sum, allocated to you to upgrade and care for the datacenter stack that the entire SME runs on 24/6
  • Finally suppose that in the absence of strategic direction from the business, you have the inordinate and unusual power of determining how to use the money to modernize your stack.

Further, suppose you actually do believe your stack needs some TLC, as it’s a faithful but aging unit. What’s more, you feel you owe it to her; she’s been there giving it her all every time Captain Kirk called upon you to jump her to warp. True, she doesn’t run quite as fast as she used to, but she’s reliable and keeps the lights on for you. And though you’d never admit this in her presence, you can hardly trade her in for the hottest, latest, slimmest stack for the five figures you’ve been given.

So, pop quiz, what do you do hotshot? What do you do?

If you’re like me, you go all Marxist on the problem and borrow from your policy/economics courses in grad school and central plan the ever-loving shit out of the next phase of your virtualization stack.

I invoke Marx & the idea of central planning because I’m convinced that planning out a virtualization stack is a lot like being an old-school, Cold War era Party Secretary in some dreary Eastern European capital, slogging out your miserable life in the Central Economic Planning Bureau, trying to decide if this year’s harvest should go to socks or hand grenades.

Or as Wikipedia puts it:

Different forms of economic planning have been featured in various models of socialism. These range from decentralized-planning systems, which are based on collective-decision making and disaggregated information, to centralized-systems of planning conducted by technical experts who use aggregated information to formulate plans of production. In a fully developed socialist economy, engineers and technical specialists, overseen or appointed in a democratic manner, would coordinate the economy in terms of physical units without any need or use for financial-based calculation.

That last sentence describes the modern virtualization engineer almost to a T, does it not? We aren’t just technicians, but technocrats: balancing inputs, measuring outputs, carefully calibrating an entire highly complex system (perhaps not rivaling an economy, but surely it’s up there), using imperfect but useful aggregated information (the business’ strategy, workflow, the calendar, our own instruments & measurements), against the backdrop of real hard-stop supply constraints and sometimes outrageous and unpredictable demand. That’s somehow more than just what an engineer does, is it not?

And so from your technocrat’s seat, how do you keep the good times rolling yet still make sensible upgrades when funding becomes available? Where do you put your spend when no one’s telling you how to spend it?

Don your central planner’s hat and forget the old virtualization rule book: you need to think like an economist, because that toolset offers the best utility in planning your virtualization spend.

De-Abstract and Assign Values

A modern, fully-abstracted datacenter is still made up of just a few constituent elements at its core, and I maintain you can assign values to those elements and see which upgrade path makes the most sense. For my situation, it came down to storage or compute, with network a distant but potentially disruptive and game-changing third.

So you simply take Mr. Pareto’s amazingly useful technique and plot units of storage vs units of compute (I know, I know, how dare I do this on the CPU side, but bear with me!) just like the guns vs butter charts:

[Image: storage vs compute plotted as a Pareto frontier, guns-vs-butter style]
Notice that I’ve generalized these resources even though there’s a vast array of different storage technologies, speeds, cores per CPU and such. That’s all fine; the Pareto exercise requires you, at some point, to de-abstract each item you’re deciding between, so that you can compare them and find the most efficient mix. From your lofty seat in the Central Planning Bureau of your IT Department, you’re still engineering against resource depletion, but at a different scale and from a different perspective than when you’re loading up CPU Ready or watching context switching in perfmon.
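The guns-vs-butter exercise can be sketched in a few lines of code. This is a toy illustration, not my actual procurement math: the bundle names, TB/core counts, and prices below are all invented, and the only real idea is the Pareto test itself, which keeps a bundle only if no other affordable bundle beats it on both storage and compute.

```python
def pareto_efficient(bundles, budget):
    """Return the bundles within budget that no other affordable bundle dominates."""
    affordable = [b for b in bundles if b["cost"] <= budget]
    frontier = []
    for b in affordable:
        # b is dominated if some other bundle is at least as good on both
        # axes and strictly better on at least one
        dominated = any(
            o["storage_tb"] >= b["storage_tb"]
            and o["compute_cores"] >= b["compute_cores"]
            and (o["storage_tb"] > b["storage_tb"]
                 or o["compute_cores"] > b["compute_cores"])
            for o in affordable
        )
        if not dominated:
            frontier.append(b)
    return frontier

# Hypothetical upgrade bundles a vendor might quote against a five-figure budget
bundles = [
    {"name": "all-storage",  "storage_tb": 48, "compute_cores": 0,  "cost": 40_000},
    {"name": "balanced",     "storage_tb": 24, "compute_cores": 32, "cost": 45_000},
    {"name": "all-compute",  "storage_tb": 0,  "compute_cores": 64, "cost": 42_000},
    {"name": "bad-deal",     "storage_tb": 12, "compute_cores": 16, "cost": 44_000},
    {"name": "exotic-pcie",  "storage_tb": 2,  "compute_cores": 8,  "cost": 120_000},
]

frontier = pareto_efficient(bundles, budget=50_000)
print([b["name"] for b in frontier])
# → ['all-storage', 'balanced', 'all-compute']
```

The "bad-deal" bundle drops out because "balanced" beats it on both axes for similar money, and the exotic PCIe card never even makes the affordable list, which is exactly the outlier story below.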

Notice too that I went a little beyond Pareto’s example by including the blue “outliers” and the yellow “Game Changer/Value Multiplier.”

Outliers, in this scenario, are the shiny new hotness. You know. The Nutanixes of the world (not that I have anything against them, but they are shiny, new and hot), the million+ IOPS solid state PCIe card that’s super expensive, but promises to make your database as fast to read and write as DDR3 RAM itself. These outliers are the exotic playboy bunnies of the Virtualization World: neat to read about, and you’d definitely like to get your hands all over one and benchmark it again and again, but you’re just a Central Planning virtualization nerd, stuck in a cramped office trying to get the job done. Come back down to earth, big fellow.

The Game Changer/Value Multiplier, however, is another story. This is a potential element in your datacenter with such amazing potential that it threatens to tear up the Pareto efficiency rule-set altogether and force you to write a new one. For something to be a value multiplier in my datacenter today, it’d have to be as significant as server virtualization was in a datacenter of ten years ago. What could that possibly be at this point?

In my case, I know vendors will try to convince me that their specialty niche product is that yellow game-change button on my chart. But I’ve already determined, to an extent, what that game-change element would be by putting the various elements into a cheesy but effective “Value Pyramid” that rips off the celebrated and, for this post, very appropriate MoSCoW Method:

[Image: the Value Pyramid]

For your bread-and-butter virtual stack, stuck on 2010-era hardware that, while still fast, can’t take advantage of some of the new stuff in Hyper-V, I reckon this pyramid is pretty accurate and perhaps useful.

The pyramid shows that what I need most is storage, but plain old iSCSI storage is also of the least value to me, as it doesn’t enable anything new; it just throws TBs at an old stack. No sorry NetApp, I don’t want the one with the bigger GBs.

Much more interesting to me, and probably to a lot of IT engineers out there, is what happens as you go up the list. SMB3 offers near game-change levels of disruption, but I’ve already got it in Windows Server 2012. What I don’t have is the space or compute to use it with: to build out a Windows Server 2012 R2 scale-out file server SAN killer (not that I’d run production on that…yet), or at the least to do real shared-nothing live migrations.

Give me more storage and compute and suddenly we’re in serious, high-value territory, which is as close as a Central Planning Technocrat comes to unadulterated, mathematically-pure joy.
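One way to keep the pyramid honest is to put rough numbers on it. Here’s a toy sketch of that: each candidate upgrade gets a MoSCoW tier, an invented value score, and an invented cost, and we rank by weighted value per dollar. None of these figures are real; the point is only that the ranking exercise forces you to say out loud why the iSCSI shelf sits at the bottom of the pyramid and SMB3 near the top.

```python
# MoSCoW tiers translated into crude numeric weights (an assumption,
# not part of the formal MoSCoW method, which doesn't prescribe weights)
MOSCOW_WEIGHTS = {"must": 4, "should": 3, "could": 2, "wont": 1}

# Hypothetical upgrade candidates: value scores (1-10) and costs are invented
upgrades = [
    {"name": "plain iSCSI shelf",          "tier": "must",   "value": 3,  "cost": 15_000},
    {"name": "SMB3 scale-out file server", "tier": "should", "value": 8,  "cost": 25_000},
    {"name": "new network underlay",       "tier": "could",  "value": 9,  "cost": 35_000},
    {"name": "exotic PCIe flash",          "tier": "wont",   "value": 10, "cost": 60_000},
]

def score(u):
    # weighted value per dollar, scaled up so the numbers are readable
    return MOSCOW_WEIGHTS[u["tier"]] * u["value"] / u["cost"] * 10_000

for u in sorted(upgrades, key=score, reverse=True):
    print(f'{u["name"]}: {score(u):.1f}')
```

Note what the weighting does: the dazzling "wont" tier item scores last no matter how high its raw value, which is the Value Pyramid’s whole argument in four lines of arithmetic.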

I’ve already got software-defined networking in my System Center suite and I’m using elements of it, but at this heady level, to really use it well, to start thinking about geo-independence, ingress and egress to Azure, or VDI, perhaps I need to start thinking about replacing my 6509e switch, or “underlay” as the fancypants network virtualization guys call it now. Or at least I may need to get some new blades. Or maybe not…I’m not sure. Part of the exercise is to put a value on features and find out what you don’t know.

At the very tip of the pyramid, our mythical vendor would be able to supply every element from top to bottom, scaling back capacity the further up the pyramid he goes to keep costs down in your five-figure range.

The top of the pyramid, a sum of all the parts below, represents a true game-change scenario, one in which the old Pareto efficiency rules get torn up and you have the fun task of thinking up a new ruleset.

One last tool/visualization crutch I’ll leave you with, if you’re in a similar situation, is this: chart the rise in capacity, speed, or feature-set over time against your company’s own business cycle, then try to map out new technologies that could disrupt the whole equation, getting you and your business to your destination more quickly and for less money, but with more risk.

What do you aim for? How do you prioritize? That awesome new disruptive game-changing technology could leapfrog you past IPv6 implementation hurdles and beyond 10GbE, but how do you hit it? Do you even bother aiming?

[Image: capacity and feature-set over time, charted against the business cycle]

I’ll know in a few weeks whether my approach to upgrading my Hyper-V farm succeeds, or at least bears the kind of solution I’m aiming at. In the meantime, I hope you found some utility in reading about Pareto and Marx on a tech blog.

Author: Jeff Wilson

20 yr Enterprise IT Pro | Master of Public Admin | BA in History | GSEC #42816 | Blogging on technology & trust topics at our workplaces, at our homes, and the spaces in between.
