Look ma, no MPLS!

One of the big-dollar technology items organizations like mine will likely look to kill in the next few years is the MPLS network: private lines, point-to-point T1s, T3s, you know, the 1990s-2000s way corporations connected HQ and branch offices securely, without touching the public internet. I’ve worked on such networks for all of my career, from being nervous around the dusty old Cisco router with a T-1 WIC card at my first post-college job to being part of a team that deployed 100MegE, 10MegE and T-1s to branch offices in dozens of spots around the world.


For all the hype about the “Cloud,” this is one area that doesn’t get a lot of attention. And it should. Because in many cases, emerging and established technologies could lead the way to saving thousands, tens of thousands or even hundreds of thousands of dollars per month.

Take a look at your IT spend. I bet leasing private lines over commercial carriers is a big part of it, and potentially a huge part of it if you use a managed MPLS service. In some cases, it might even cost as much as one or two FTEs! Certainly the business would be happy to get some of that spend back if it were possible to merge the security, privacy and SLA-backed service of a leased line with the rapid time-to-deploy, ubiquity and provisioning ease of a standard internet circuit or two at a remote office.
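To see the scale of that spend, here’s a back-of-the-envelope sketch in Python. Every figure in it (per-site circuit prices, site count) is an invented placeholder, not a quote from any carrier; plug in your own numbers from the actual invoices:

```python
# Back-of-the-envelope MPLS vs. commodity internet comparison.
# All dollar figures are made-up placeholders, not carrier quotes.
mpls_per_site = 2_500       # managed MPLS circuit, per site per month
internet_per_site = 150     # business-class internet circuit, per month
sites = 20
redundant_links = 2         # A/B internet links per site for resiliency

mpls_monthly = mpls_per_site * sites
internet_monthly = internet_per_site * redundant_links * sites

savings_monthly = mpls_monthly - internet_monthly
savings_annual = savings_monthly * 12

print(f"MPLS:     ${mpls_monthly:,}/mo")
print(f"Internet: ${internet_monthly:,}/mo")
print(f"Savings:  ${savings_monthly:,}/mo (${savings_annual:,}/yr)")
```

Even with made-up numbers, the shape of the result is the point: redundant commodity circuits are an order of magnitude cheaper, which is exactly why the business would take this meeting.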

This is the model you grew to love and hate over the last 10-15 years if you cut your teeth in corporate IT with Microsoft. Providing software for this topology that was redundant and survivable was Microsoft’s bread and butter during the late Gates era and much of the Ballmer era.

A typical Active Directory instance spread over a WAN using ipv4, private lines, firewalls, NAT, and routers. A focus on keeping the Internet out, the duality of LAN vs WAN, NAT rules and DMZs. All the classics are here. If you were lucky, in the early days before people really understood QoS, you got to experience the joy of backhauling Internet from Site B to Site A and the resulting crush on business traffic.

Models like this had their problems: expensive, prone to failure, and slow in the days before Ethernet circuits. You had to buy a bunch of equipment and outfit each site too, which meant more licenses. But this model could scale relatively well, at least for SMEs.

And while the architecture above looks positively archaic if you’ve got your head buried deep in SDN and such, it’s still in use in a lot of SMEs around the world. I’d even go further and say 9 out of 10 enterprises still think of network architecture in the context of Inside vs Outside. And who can blame them? At least you can control what’s inside your network, and it’s useful to think of it in that context.

But cloud providers from Amazon to Google to Azure have failed to abstract this model to the cloud, or to build a hybrid model that offsets its shortcomings. Oh sure, you could move your domain to Google Apps today and be done with it, but you’ve got a bunch of IT generalists and employees who are aces on Microsoft products. And you like the control and manageability of AD.

All you want to do is kill your expensive monthly leased circuits and effectively put your AD on the internet with proper security & robust A/B internet links, or hire Azure to do that for you. But you’re out of luck because believe it or not, this is how you go from on-prem AD to something else with Azure, ipv6, and all the new shiny stuff we’ve been talking about for the last few years:


You see that? This graphic, ripped off from Azure somewhere, shows how you move your enterprise to the cloud. You tack on another f*(#$#$ VPN device and federate against Azure! And your remote workers? They VPN into Azure or via Remote Access! Hurray, our problems are solved! Why didn’t I think of adding another VPN point-to-point device!

O365 with Azure offers much the same:


Not one, but two clouds to federate against now! What’s not shown in this topology is that your end users aren’t sitting in an Azure cloud as in the diagram; they’re on-prem, behind your old ipv4 firewall & router, fat, dumb and happy to be “at work” where their “work stuff” is located. And you’re in your office, jamming through TechNet links on provisioning, assigning and deploying certs correctly, tearing your hair out.

Is this the best Azure and all the rest can do? Can’t the Cloud guys figure out a way for me to have my cake and eat it too, to move my Active Directory instance to a cloud provider, kill my premium, high-cost, inflexible, slow-to-deploy leased circuit inventory, end the LAN/WAN duality that haunts us all, and save me from buying server iron for offices with only a handful of people?

So far I don’t think Azure is compelling enough and it’s for the reason above alone. Cheap storage? Sure. Scalable compute? Take my credit card! But while the spillover effect from MS’ experience running Azure is evident in 2012 R2, it’s all one way. Microsoft is learning a bunch of stuff about how to run multi-tenant data facilities that ends up in my hands, but their knowledge of plain vanilla Active Directory on a WAN isn’t being reproduced in a compelling way in Azure.

End result: Keep my expensive leased lines. What a fail.

That’s why I’m excited and optimistic about network startups like Pertino. Pertino offers a brain-dead simple ipv6 service that traverses consumer or enterprise NATs, connects computers over an ipv6 network, and even allows you to run Active Directory over it. Genius!

They’re a startup, yes, and they require a piece of software on the PC, which skeptics would point out is no different from a VPN client (they’re right), and I don’t think this particular product could scale far and wide yet. But it works. You can run AD and get to domain resources from a remote device on the internet. No DirectAccess needed, no VPN devices, no routers, no goddamn certs, no worrying about subject alternative names and no waiting on some provider to stand up a VPN between my house and the server in Virginia.

If you’re an IT Generalist, the potential is this: It’s the Active Directory you know and love. On the fucking internet. Right now.

Last night I stood up a demo of 2012 R2 on my Hyper-V box at home: built a domain behind my Netgear wifi router, then built another Windows box on AWS somewhere in Virginia, installed the Pertino client on both of them and bam! Just like that -for free- I had two domain controllers pinging, authenticating, and routing over ipv6, no leased lines necessary. It just worked.

I’m not a networking guy (to the extent that any virtualization engineer is not a networking guy), so I don’t know exactly how it worked. I couldn’t tell you if 6to4 was used or pure ipv6; all I can tell you is that I have an Active Directory instance on the internet with just a small client application.
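If you want more evidence than a successful ping, one quick sanity check is to probe the standard AD ports from the remote box and confirm the overlay is actually carrying that traffic. This is just a generic TCP reachability sketch; the hostname is a placeholder for your own DC, and it tells you nothing about which transition technology the overlay uses underneath:

```python
import socket

# Placeholder -- substitute your own domain controller's name or address.
DC_HOST = "dc01.corp.example.com"
AD_PORTS = {53: "DNS", 88: "Kerberos", 389: "LDAP", 445: "SMB"}

def probe(host, port, timeout=3):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        # AF_UNSPEC lets getaddrinfo return ipv6 and ipv4 candidates alike.
        infos = socket.getaddrinfo(host, port,
                                   socket.AF_UNSPEC, socket.SOCK_STREAM)
    except OSError:
        return False  # name didn't resolve at all
    for family, stype, proto, _, addr in infos:
        try:
            with socket.socket(family, stype, proto) as s:
                s.settimeout(timeout)
                s.connect(addr)
                return True  # first successful connection wins
        except OSError:
            continue  # try the next candidate address
    return False

for port, name in AD_PORTS.items():
    status = "open" if probe(DC_HOST, port) else "unreachable"
    print(f"{name:>8} ({port}): {status}")
```

If LDAP, Kerberos, SMB and DNS all answer from across the internet, your DCs really are talking over the overlay and not some leftover VPN tunnel.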


If I can figure out how to engineer this with existing stuff, or if Pertino can scale and really build this technology out, I could eventually kill my leased lines. Game change.

Going full Marxist up in the datacenter

Suppose you are:

  • An IT engineer at a small to medium enterprise, responsible for the company’s enterprise stack, from Cisco to storage to compute
  • Suppose further that in your budget for next year is a modest five figure sum allocated to you to upgrade/care for your datacenter stack that the entire SME runs on 24/7
  • Finally suppose that in the absence of strategic direction from the business, you have the inordinate and unusual power of determining how to use the money to modernize your stack.

Further, suppose you actually do believe you need some TLC for your stack as it’s a faithful but aging unit. What’s more, you feel you owe it to her; she’s been there giving it her all every time Captain Kirk called upon you to jump her to warp. True, she doesn’t run quite as fast as she used to, but she’s reliable and keeps the lights on for you. And though you’d never admit this in her presence, you can hardly trade her in for the hottest, latest, slimmest stack for the five figures you’ve been given.

So, pop quiz, what do you do hotshot? What do you do?

If you’re like me, you go all Marxist on the problem and borrow from your policy/economics courses in grad school and central plan the ever-loving shit out of the next phase of your virtualization stack.

I invoke Marx & the idea of central planning because I’m really convinced that planning out a virtualization stack is a lot like being an old school, Cold War-era Party Secretary in some dreary Eastern European capital, slogging out your miserable life in the Central Economic Planning Bureau, trying to decide if this year’s harvest should go to socks or hand grenades.

Or as Wikipedia puts it:

Different forms of economic planning have been featured in various models of socialism. These range from decentralized-planning systems, which are based on collective-decision making and disaggregated information, to centralized-systems of planning conducted by technical experts who use aggregated information to formulate plans of production. In a fully developed socialist economy, engineers and technical specialists, overseen or appointed in a democratic manner, would coordinate the economy in terms of physical units without any need or use for financial-based calculation.

That last sentence describes the modern virtualization engineer almost to a T, does it not? We aren’t just technicians, but technocrats, balancing inputs, measuring outputs, carefully calibrating an entire highly complex system (perhaps not rivaling an economy, but surely, it’s up there), with imperfect but useful aggregated information (the business’ strategy, workflow, the calendar, our own instruments & measurements) against the backdrop of hard-stop supply constraints and sometimes outrageous and unpredictable demand. That’s somehow more than just what an engineer does, is it not?

And so from your technocrat’s seat, how do you keep the good times rolling yet make sensible upgrades when funding becomes available? Where do you put your spend when no one’s telling you how to spend?

Don your central planner’s hat and forget the old virtualization rule book: you need to think like an economist, because that toolset offers the best utility in planning your virtualization spend.

De-Abstract and Assign Values

A modern, fully-abstracted datacenter is still made up of just a few constituent elements at its core, and I maintain you can assign values to those elements and see which upgrade path makes the most sense. For my situation, it came down to storage or compute, with network a distant but potentially disruptive and game-changing third.

So you simply take Mr. Pareto’s amazingly useful technique and plot units of storage vs units of compute (I know, I know, how dare I do this on the CPU side, but bear with me!) just like the guns vs butter charts:


Notice that I’ve generalized these resources even though there’s a vast array of different storage technologies, speeds, cores per cpu and such. That’s all fine; the Pareto exercise requires you at some point to de-abstract each item you’re deciding between, so that you can compare them and find the most efficient mix. From your lofty seat in the Central Planning Bureau of your IT Department, you’re still engineering against resource depletion, but at a different scale and from a different perspective than when you’re loading up CPU Ready or watching context switching in perfmon.
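The guns-vs-butter exercise sketches out easily as code. Here’s a toy Pareto frontier in Python; the budget and unit prices are invented placeholders, and real vendor quotes would obviously move the curve:

```python
# Toy Pareto frontier: what a fixed budget buys in storage vs. compute.
# The budget and unit prices below are invented placeholders.
BUDGET = 50_000        # the "modest five figure sum"
COMPUTE_UNIT = 6_000   # $ per dual-socket host (placeholder)
STORAGE_UNIT = 2_500   # $ per shelf of plain iSCSI TBs (placeholder)

frontier = []
max_hosts = BUDGET // COMPUTE_UNIT
for hosts in range(max_hosts + 1):
    remaining = BUDGET - hosts * COMPUTE_UNIT
    shelves = remaining // STORAGE_UNIT  # spend the remainder on storage
    frontier.append((hosts, shelves))

# Each point is an efficient mix: you can't add compute without
# giving up storage, and vice versa.
for hosts, shelves in frontier:
    print(f"{hosts} compute hosts + {shelves} storage shelves")
```

Every point on that list is Pareto-efficient within the budget; the central planning question is which point best matches the demand you forecast.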

Notice too that I went a little beyond Pareto’s example by including the blue “outliers” and the yellow “Game Changer/Value Multiplier.”

Outliers, in this scenario, are the shiny new hotness. You know. The Nutanixes of the world (not that I have anything against them, but they are shiny, new and hot), the million+ IOPS solid state PCIe card that’s super expensive but promises to make your database as fast to read and write as DDR3 RAM itself. These outliers are the exotic playboy bunnies of the virtualization world: neat to read about, and you’d definitely like to get your hands all over one and benchmark it again and again, but you’re just a Central Planning virtualization nerd, stuck in a cramped office trying to get the job done. Come back down to earth, big fellow.

The Game Changer/Value Multiplier, however, is another story. This is a potential element in your datacenter that has such amazing potential, it threatens to tear up the Pareto efficiency ruleset altogether and force you to write a new one. For something to be a value multiplier in my datacenter today, it’d have to be as significant as server virtualization was in a datacenter of ten years ago. What could that possibly be at this point?

In my case, I know vendors will try and convince me that their specialty niche product is that yellow game-change button on my chart. But I’ve already determined, to an extent, what that game-change element would be by putting the various elements into a cheesy but effective “value pyramid” that rips off the celebrated and, for this post, very appropriate MoSCoW method:


For your bread and butter virtual stack, stuck on 2010-era hardware that, while still fast, can’t take advantage of some of the new stuff in Hyper-V, I reckon this pyramid is pretty accurate and perhaps useful.

The pyramid shows that what I need most is storage, but plain old iSCSI storage is also of the least value to me as it doesn’t enable anything new; it just throws TBs at an old stack. No, sorry NetApp, I don’t want the one with the bigger GBs.

Much more interesting to me, and probably to a lot of IT engineers out there, is what happens as you go up the list. SMB3 offers near game-change levels of disruption, but I’ve already got it in Windows Server 2012. What I don’t have is the space or compute to use it with, to build out a Windows Server 2012 R2 scale-out file server SAN killer (not that I’d run production on that…yet) or at the least do real shared-nothing live migrations.

Give me more storage and compute and suddenly we’re in serious, high-value territory, which is as close as a Central Planning Technocrat comes to unadulterated, mathematically-pure joy.
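For the curious, the pyramid reduces to crude arithmetic. Here’s a toy MoSCoW-style scoring pass in Python; the tiers, weights and 1-5 “enablement” scores are my own invented placeholders, not vendor data, but they reproduce the pyramid’s punchline that plain storage is needed most yet valued least:

```python
# Toy MoSCoW-style scoring of upgrade options. Weights, tier
# assignments and enablement scores are invented placeholders.
MOSCOW_WEIGHT = {"must": 4, "should": 3, "could": 2, "wont": 1}

# (option, MoSCoW tier, enablement 1-5: how much new capability it unlocks)
options = [
    ("plain iSCSI TBs",            "must",   1),
    ("compute hosts",              "must",   3),
    ("SMB3 scale-out file server", "should", 5),
    ("10GbE underlay refresh",     "could",  4),
]

# Value = tier weight x enablement, sorted high to low.
ranked = sorted(options,
                key=lambda o: MOSCOW_WEIGHT[o[1]] * o[2],
                reverse=True)

for name, tier, enablement in ranked:
    print(f"{MOSCOW_WEIGHT[tier] * enablement:>2}  {name} ({tier})")
```

The interesting part of the exercise isn’t the exact scores, it’s arguing with yourself (or your vendors) about what belongs in each tier.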

I’ve already got software-defined networking in my System Center suite and I’m using elements of it, but at this heady level, to really use it well, to start thinking about geo-independence, ingress and egress to Azure, or VDI, perhaps I need to start thinking about replacing my 6509e switch, or “underlay” as the fancypants network virtualization guys call it now. Or at least I may need to get some new blades. Or maybe not…I’m not sure. Part of the exercise is to put a value on features and find out what you don’t know.

At the very tip of the pyramid, our mythical vendor would be able to supply every element from top to bottom, scaling back capacity the further up the pyramid he goes to keep costs within your five-figure range.

The top of the pyramid -a sum of all the parts below- represents a true game change scenario, one in which the old Pareto efficiency rules get torn up and you have the fun task of thinking up a new ruleset.

One last tool/visualization crutch I’ll leave you with, if you’re in a similar situation, is this: chart the rise in capacity, speed, or feature-set over time against your company’s own business cycle, then try to map out new technologies that could disrupt the whole equation, getting you and your business to your destination more quickly and for less money, but with more risk.

What do you aim for? How do you prioritize? That awesome new disruptive gamechanging technology could leapfrog you past ipv6 implementation hurdles and beyond 10GbE, but how do you hit it? Do you even bother aiming?


I’ll know in a few weeks if my approach to upgrading my Hyper-V farm is successful or bears the kind of solution I’m aiming for. In the meantime, I hope you found some utility in reading about Pareto and Marx on a tech blog.

A Chromebook defiled

So I was one of the lucky ones (68,000+ according to Wikipedia) to get one of the original prototype Chromebooks from Google, the legendary, all black, totally murdered-out CR-48 Chromebook.

I had forgotten that I even signed up for it when it showed up on my doorstep several weeks later, about this time three years ago.

One look. One click. One foray into the browser-as-an-OS concept and I was smitten. I resolved then and there to hold the CR-48 near and dear to my heart, to keep it forever and treasure it as another item in my huge junk heap of outdated computers, er, nascent computer museum.

Of course, the CR-48 wasn’t much to write home about. This was no Model 100 or Apple Macintosh. No, this was more like a Lisa, Apple Mac Cube or Windows Me. Nice to look at, neat concept, but once you turned it on, it kind of sucked. It was slow, and back in 2010-2011, ChromeOS was truly just a browser. There were no “apps” for the desktop, NaCl hadn’t been implemented yet, and this thing ran on a single-core Atom, a CPU architecture so slow that you had time to curse it and every Intel exec responsible for fumbling the mobile revolution so badly (by name!), all while waiting for the wimpy Atom to render a single website.

Neat novelty laptop, and I’m glad I didn’t pay a dime for it, but really, I couldn’t do work on this thing, as my colleagues relentlessly teased. So after several months of non-use, I cleaned it up, looked up instructions on how to wipe/format it, and prepped it for sale on eBay.

Alas I’m a man of conscience. Google gave this laptop to me for free. How could I go and turn a profit on it? What kind of Google fanboi would I be if I did that?!? A pretty shitty one, I reflected.

So I didn’t sell it. I couldn’t. And so back into the box it went until this weekend, when an acute need for an extra laptop arose in my house after family members took some of my old ragged and spare netbooks.

“What’s that in the colorful cardboard box,” I thought when I came across the CR-48’s original packaging. “No. It couldn’t be!”

And yes, there it was, just as black and menacing and monolithic as the day I got it: the CR-48. Still looking good, three MacBook Air cycles later.

My other Chromeboxen, all of which have been used and abused in multiple ways.

I’m an experienced Chromehead, owning not one or two, but three (possibly four if you count the Chromecast) Chrome devices, including a Google IO 2012 edition Chromebox and the ubiquitous, best-selling series 3 ArmBook. And so I didn’t really need this CR-48, but I couldn’t sell it either…what to do what to do.

And so naturally, since I’m the sort who would really enjoy the irony, I resolved then and there to build myself a Windows laptop, and not just any Windows laptop, but a Windows 7 Thin Client laptop on my free Chromebook CR-48.

That’s right. Win 7 Thin PC.  Not just the familiar closed-source proprietary operating system of yesteryear, but the thin client version, the version that is/was pined after by PC efficiency nuts & custom system builders for so long, the version that isn’t available to the public, even more closed, locked away, and protected than Windows 7 itself.

Let me just repeat that and let it sink in: this is an exclusive, hard to get version of Microsoft’s last real successful operating system that weighs in at just a hair over 2.5GB installed!

The 16GB SSD has slower random r/w than my first 512MB USB stick. Not even worth an upgrade, really. And I’m a sucker for hard disk upgrades.

Truly putting this OS on this type of laptop would be a tech sin of the worst order. And so, naturally, I dived into the process post haste, tearing the CR-48 apart, placing masking tape over the BIOS safety switch (no electrical tape in my house and no time to get some!), losing a few screws in the process before finally booting the black beast up, sliding in the USB drive with the Win7TPC .iso and formally sticking some Redmond code way up where it didn’t belong.

Felt good. For a while anyway. A sort of tech high, a mega byte of temporary euphoria. Yum.

Of course even Win7 TPC can’t make up for the horribleness that is the Atom. You can’t really term what the CR-48 does as “performance”; it’s more like the opposite of performance. Perhaps “level of degradation off baseline” is more accurate. “Is it any faster?” is replaced with “How much and to what extent is it slower?” even with a thin PC operating system.

And what’s even worse about this experience is that I skipped the section on remapping the ChromeOS keyboard to Windows and sat for a good 5 minutes trying to figure out how to do CTRL-ALT-DEL after joining the machine to my domain. Thank god for the on-screen keyboard.

So that was the highlight of my weekend. I defiled my free Chromebook with an OS straight out of Linus’ dystopian hell-scape, experienced the thrill of doing something so naughty followed by the inevitable disappointment & headaches such experimentation is bound to yield.