In defense of pizza boxes

Lately on the Twitters there has been much praise among my friends and colleagues for what I like to think of as datacenters on dollies: Cisco’s UCS, FlexPod, Dell’s vStart, etc. You know what these are, as I’m sure you’ve come across them: pre-configured, pre-engineered datacenters you can roll out to the datacenter floor, align carefully, and then -put your back into it, lads!- drop onto the elevated tiles. Then you grab that bulky L14-30P and jack the stack into your 220v 30 amp circuit with A/B power and bam! #InfrastructureGlory achieved.

Support’s not a concern because the storage vendor, the compute vendor, and the network vendor are simpatico under the terms of an MOU…you see, the vendors engineered it out so you don’t have to download and memorize the mezzanine architecture PDF. All you have to do now is turn it on and build some VMs in vSphere or VMM or what-have-you.

Where’s the fun in that?

Don’t get me wrong, I think UCS is awesome. I kind of want an old one in my lab.

But in my career, it’s always been pizza boxes. Standard 2U, 30″ deep enclosures housing drives & fans up front, two or four CPU sockets in the middle surrounded by gobs of RAM, and NICs…lots and lots of NICs guarding the rear.

And I wonder why that is. Maybe it’s just the market & space I tend to find employment in, but it seems to me that most IT organizations aren’t purchasing infrastructure in a strategic way…they don’t sit down at a table and say, “Right. Let’s buy some compute, storage, and network, let’s make it last five years, and then, this time five years from now, we’ll buy another stack. Hop to it, lads!”

A good IT strategic planner would do that, but that’s not the reality in many organizations.

So I’ve come to love pizza boxes because they are almost infinitely configurable. Like so:

  • Say you buy five pizza boxes in year 1 but in year 2, a branch office opens and it’s suddenly very critical to get some local infrastructure on-prem. Simple: strip a node out of your handsome 10U compute cluster and drop-ship it to the branch office. Even better: you contemplated this branch when you bought the pizza boxes and pre-built a few of them with offlined but sufficiently large direct attached storage.
  • You buy a single pizza box with four sockets but only two are populated in year 1. By year three, headcount is surging and demand on the server -for whatever reason- is extraordinary. What do you do hotshot, what do you do? Easy: source some second-hand Xeons & heatsinks, drop them into the server and watch your CPU queue lengths fall (not quite in half, but still). But check your SQL licensing arrangements first and be prepared to consolidate and reduce your per-socket VMs!
  • Or maybe you need to reduce your footprint in the datacenter. If you bought pizza boxes in a strategic way, you just dump the CPUs and memory out of node 1 into node 2, node 3 into node 4 and so on. You won’t achieve the same level of VM density, but maybe you don’t need to.
  • Or maybe you don’t want or need 10GbE this year; that would require new switching. But in year 2? Break a node out and drop in some PCIe SFP+ cards and Bob’s your uncle.

I guess the thing about Pizza boxes I like the most is that they are, in reality, just big, standardized PCs. They are whatever architecture you decide you want them to be in whatever circumstances you find yourself in.

A FlexPod or vStart, in contrast, feels more constricting, even if you can break an element or two out and use it in another way. I know I’d be hesitant to break apart the UCS fabric.

You’d think a FlexPod would be perfect for small to medium enterprises, and in many cases, it is. Just not in the ones I’ve worked at, where costs are tight, strategic planning rare, and the business’ need for agility outstrips my need for convenience.

Also, isn’t it interesting that when you compute at “Google-scale” (love that term, is it still en vogue with VARs?) or if you’re Facebook, you pick a simple & flexible architecture (in-house x86/64 pizza boxes) with very little or no shared storage at all? You pick the seemingly more primitive architecture over the highly-evolved pod architecture.

30 Days hands-on with VMTurbo’s OpsMan #VFD3

Stick figure man wants his application to run faster. #WhiteboardGlory courtesy of VM Turbo's Yuri Rabover

So you may recall that back in March, yours truly, Parent Partition, was invited as a delegate to a Tech Field Day event, specifically Virtualization Field Day #3, put on by the excellent team at Gestalt IT especially for the guys who like V.

And you may recall further that as I diligently blogged the news and views to you, by day 3 I was getting tired and grumpy. Wear leveling algorithms intended to prevent failure could no longer cope with all this random tech field day IO, hot spots were beginning to show in the parent partition, and the resource exhaustion section of the Windows event viewer, well, she was blinking red.

And so, into this pity-party I was throwing for myself walked a Russian named Yuri, a Dr. named Schmuel and a product called a “VMTurbo” as well as a Macbook that like all Mac products, wouldn’t play nice with the projector.

You can and should read all about what happened next because 1) VMTurbo is an interesting product and I worked hard on the piece, and 2) it’s one of the most popular posts on my little blog.

Now the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation wasn’t just that it played into my fevered fantasies of being a virtualization economics czar (though it did), or that it promised to bridge the divide via reporting between Infrastructure guys like me and the CFO & corner office finance people (though it can), or that it had lots of cool graphs, sliders, knobs and other GUI candy (though it does).

No, the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation was that they said it would work with that other great Type 1 Hypervisor, a Type-1 Hypervisor I’m rather fond of: Microsoft’s Hyper-V.

I didn't even make screenshots for this review, so suffer through the annotated .pngs from VMTurbo's website and imagine it's my stack

And so in the last four or five weeks of my employment with Previous Employer (PE), I had the opportunity to test these claims, not in a lab environment, but against the stack I had built, cared for, upgraded, and worried about for four years.

That’s right baby. I put VMTurbo’s economics engine up against my six node Hyper-V cluster in PE’s primary datacenter, a rationalized but aging cluster with two iSCSI storage arrays, a 6509E, and 70+ virtual machines.

Who’s the better engineer? Me, or the Boston appliance designed by a Russian named Yuri and a Dr. named Schmuel? 

Here’s what I found.

The Good

  • Thinking economically isn’t just part of the pitch: VMTurbo’s sales reps, sales engineers and product managers, several of whom I spoke with during the implementation, really believe this stuff. Just about everyone I worked with stood up to my barrage of excited-but-serious questioning and could speak literately to VMTurbo’s producer/consumer model, this resource-buys-from-that-resource idea, the virtualized datacenter as a market analogy. The company even sends out Adam Smith-themed emails (famous economist…wrote The Wealth of Nations, if you’re not aware). If your infrastructure and budget are similar to what mine were at PE, if you stress over managing virtualization infrastructure, if you fold paper again and again like I did, VMTurbo gets you.
  • Installation of the appliance was easy: The install process was simple: download a zipped .vhd (not .vhdx), either deploy it via VMM template or put the VHD into a CSV and import it, connect it to your VM network, and start it up. The appliance was hassle-free as a VM; it’s running SUSE Linux, and quite a bit of Java code from what I could tell, but for you, it’s packaged up into a nice http:// site, and all you have to do is pop in the 30 day license XML key.
  • It was insightful, peering into the stack from top to nearly the bottom and delivering solid APM:  After I got the product working, I immediately made the VMturbo guys help me designate a total of about 10 virtual machines, two executables, the SQL instances supporting those .exes and more resources as Mission Critical. The applications & the terminal services VMs they run on are pounded 24 hours a day, six days a week by 200-300 users. Telling VMTurbo to adjust its recommendations in light of this application infrastructure wasn’t simple, but it wasn’t very difficult either. That I finally got something to view the stack in this way put a bounce in my step and a feather in my cap in the closing days of my time with PE. With VMTurbo, my former colleagues on the help desk could answer “Why is it slow?!?!” and I think that’s great.
  • Like mom, it points out flaws, records your mistakes and even puts a $$ figure on them, which was embarrassing yet illuminating: I was measured by this appliance and found wanting. VMTurbo, after watching the stack for a good two weeks, surprisingly told me I had overprovisioned -by two- virtual CPUs on a secondary SQL server. It recommended I turn off that SQL box (yes, yes, we in Hyper-V land can’t hot-unplug vCPU yet; save it, VMware fans!) and subtract two virtual CPUs. It even said my over-provisioning cost about $1,200 (I didn’t have time to figure out how it calculated this). Yikes.
  • It’s agent-less: And the Windows guys reading this just breathed a sigh of relief. But hold your golf clap…there’s color around this from a Hyper-V perspective I’ll get into below. For now, know this: VMTurbo knocked my socks off with its superb grasp & use of WMI. I love Windows Management Instrumentation, but VMTurbo takes WMI to a level I hadn’t thought of, querying the stack frequently, aggregating and massaging the results, and spitting out its models. This thing takes WMI and does real math against the results, math and pivots even an Excel jockey could appreciate. One of the VMTurbo product managers I worked with told me that they’d like to use PowerShell, but PowerShell queries were still too slow, whereas WMI could be queried rapidly.
  • It produces great reports I could never quite build in SCOM: By the end of day two, I had PDFs on CPU, storage & network bandwidth consumption, top consumers, projections, and a good sense of current state vs desired state. Of course you can automate report creation and delivery via email, etc. In the old days it was hard to get simple reports on CSV space free/space used; VMTurbo needed no special configuration to see how much space was left in a CSV.
    vFeng Shui for your virtual datacenter

  • Integrates with AD: Expected. No surprises.

  • It’s low impact: I gave the VM 3 CPU and 16GB of RAM. The .vhd was about 30 gigabytes. Unlike SCOM, no worries here about the Observer Effect (always loved it when SCOM & its disk-intensive SQL back-end would report high load on a LUN that, you guessed it, was attached to the SCOM VM).
  • A Eureka! style moment: A software developer I showed the product to immediately got the concept. Viewing infrastructure as a supply chain, the heat map showing current state and desired state, these were things immediately familiar to him, and as he builds software products for PE, I considered that good insight. VMTurbo may not be your traditional operations manager, but it can assist you in translating your infrastructure into terms & concepts the business understands intuitively.
  • I was comfortable with its recommendations: During #VFD3, there was some animated discussion around flipping the VMTurbo switch from a “Hey! Virtualization engineer, you should do this” mode to a “VMTurbo Optimize Automagically!” mode. After a few weeks of watching it, once I had the APM pieces in place, I followed its recommendations closely. I didn’t flip the switch, but it’s there. And that’s cool.
  • You can set it against your employer’s month end schedule: Didn’t catch a lot of how to do this, but you can give VMTurbo context. If it’s the end of the month, maybe you’ll see increased utilization of your finance systems. You can model peaks and troughs in the business cycle and (I think) it will adjust recommendations accordingly ahead of time.
  • Cost: Getting sensitive here but I will say this: it wasn’t outrageous. It hit the budget we had. Cost is by socket. It was a doable figure. Purchase is up to my PE, but I think VMTurbo worked well for PE’s particular infrastructure and circumstances.
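The poll-aggregate-recommend loop those bullets describe can be sketched in miniature. To be clear, this is a hypothetical model and not VMTurbo’s actual math: the per-vCPU dollar figure, the 40% utilization target, and the `rightsize` function are all my own invented illustration of the idea.

```python
import math
from statistics import mean

COST_PER_VCPU_YEAR = 600.0  # assumed dollar figure, purely illustrative

def rightsize(samples, vcpus, target_util=0.40):
    """samples: total-CPU utilization observations (0.0-1.0) for one VM
    over the polling window. Suggest how many vCPUs could be reclaimed
    so the remaining cores average at or below target_util."""
    avg_busy_cores = mean(samples) * vcpus          # average cores actually busy
    needed = max(1, math.ceil(avg_busy_cores / target_util))
    removable = max(0, vcpus - needed)
    return removable, removable * COST_PER_VCPU_YEAR

# A quiet secondary SQL box: 6 vCPUs, idling around 15% total CPU.
remove, savings = rightsize([0.12, 0.18, 0.15, 0.14, 0.16], vcpus=6)
```

Run against a lazy six-vCPU SQL box like that, a model of this flavor suggests reclaiming cores and attaches a dollar figure, the same kind of recommendation (and dollar shaming) the appliance gave me.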

The Bad:

  • No sugar coating it here, this thing’s built for VMware: All vendors please take note. If VMware, nomenclature is “vCPU, vMem, vNIC, Datastore, vMotion.” If Hyper-V, nomenclature is “VM CPU, VM Mem, VMNic, Cluster Shared Volume (or CSV), Live Migration.” It should be simple enough to change, or to give us 29%ers a toggle. Still works, but it’s annoying to see “Datastore” everywhere.
  • Interface is all Flash: It’s like Adobe barfed all over the user interface. Mostly hassle-free, but occasionally a change you expected to register on screen took a manual refresh to become visible. Minor complaint.
  • Doesn’t speak SMB 3.0 yet: A conversation with one product engineer more or less took the route it usually takes. “SMB 3? You mean CIFS?” Sigh. But not enough to scuttle the product for Hyper-V shops…yet. If they still don’t know what SMB 3 is in two years…well I do declare I’d be highly offended. For now, if they want to take Hyper-V seriously as their website says they do, VMTurbo should focus some dev efforts on SMB 3 as it’s a transformative file storage tech, a few steps beyond what NFS can do. EMC called it the future of storage!
  • Didn’t talk to my storage: There is visibility down to the platter from an APM perspective, but this wasn’t in scope for the trial we engaged in. Our filer had direct support; our Nimble, as a newer storage platform, did not. So IOPS weren’t part of the APM calculations, though free/used space was.

The Ugly:

  • Trusted Install & taking ownership of reg keys is required: So remember how I said VMTurbo was agent-less, using WMI in an ingenious way to gather its data from VMs and hosts alike? Well, yeah, about that. For Hyper-V and Windows shops who are at all current (2012 or R2, as well as 2008 R2), this means provisioning a service account with sufficient permissions, taking ownership of two reg keys away from Trusted Installer (a very important ‘user’) in HKLM\CLSID and one further down in WOW64, and assigning full control permissions to the service account on the reg keys. This was painful for me, no doubt, and I hesitated for a good week. In the end, Trusted Installer still keeps full control, so it’s a benign change, and I think the payoff is worth it. A senior VMTurbo product engineer told me VMTurbo is working with Microsoft to query WMI without making the customer modify the registry, but as of now, this is required. And the Group Policy I built to do this for me didn’t work entirely. On 2008 R2 VMs, you only have to modify the one CLSID key.

Soup to nuts, I left PE pretty impressed with VMTurbo. I’m not joking when I say it probably could optimize my former virtualized environment better than I could. And it can do it around the clock, unlike me, even when I’m jacked up on 5 Hour Energy or a triple-shot espresso with house music on in the background.

Stepping back and thinking of the concept here, divesting myself from the pain of install in a Hyper-V context: products like this are the future of IT. VMTurbo is awesome and unique in an on-prem context as it bridges the gap between cost & operations, but it’s also kind of a window into our future as IT pros.

That’s because if your employer is cloud-focused at all, the infrastructure-as-market-economy model is going to be in your future, like it or not. Cloud compute/storage/network, to a large extent, is all about supply, demand, consumption, production and bursting of resources against your OpEx budget.

What’s neat about VMTurbo is not just that it’s going to help you get the most out of the CapEx you spent on your gear, but also that it helps you shift your thinking a bit, away from up/down, latency, and login times to a rationalized economic model you’ll need in the years ahead.
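If the market analogy sounds abstract, here’s a toy sketch of the idea, with the caveat that the 1/(1-u)^2 price curve, the host names, and both functions are my own assumptions, not VMTurbo’s published formula: a resource gets more “expensive” as it approaches capacity, so a VM shopping for CPU naturally lands on the least-contended host.

```python
def resource_price(utilization, base_cost=1.0):
    """Toy supply/demand price for a shared resource: cheap while idle,
    rising steeply as it nears capacity. The 1/(1-u)^2 shape is a
    queuing-inspired guess, not necessarily the vendor's actual formula."""
    u = min(max(utilization, 0.0), 0.99)  # clamp to dodge division blow-up
    return base_cost / (1.0 - u) ** 2

def cheapest_host(host_util):
    """A consumer (VM) shops for the host selling CPU at the lowest price."""
    return min(host_util, key=lambda h: resource_price(host_util[h]))

hosts = {"node1": 0.85, "node2": 0.40, "node3": 0.60}
best = cheapest_host(hosts)  # node2: the least-utilized host is cheapest
```

The steep curve is the whole point: a host at 85% isn’t twice as “expensive” as one at 40%, it’s an order of magnitude more, which is what pushes placement decisions away from hot spots before they become outages.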

Hyper-V: 29% of Hypervisors Shipped and Second Place Never Felt So Good


I couldn’t help but cheer and raise a few virtual fist bumps to the Microsoft Server 2012 and 2012 R2 team as I read the latest report out of some industry group or other. Hyper-V 3.0, you see, is cracking along with just a tick under 1/3rd of the hypervisor market.

Meanwhile, VMware -founder of the genre, much respect for the Pater v-Familias- is running about 2/3rds of virtualized datacenters.

And that’s just fine with me. 

Hyper-V is still in a distant second place. But second place never felt so good as it does right now. And we got some vMomentum on our side, even if we don’t have feature parity, as I’ve acknowledged before.

Hyper-V is up in your datacenter and it deserves some V.R.E.S.P.E.C.T.

Testify IDC, testify:

A growing number of shops like UMC Health System are moving more business-critical workloads to Hyper-V. In 2013, VMware accounted for 53 percent of hypervisors deployed last year, according to data released in April by IT market researcher IDC. While VMware still shipped a majority, Hyper-V accounted for 29 percent of hypervisors shipped.

The Redmond Magazine report doesn’t get into it beyond some lame analyst comments, but let me break it down for you from a practitioner point of view.

Why is Hyper-V growing in marketshare, stealing some of the vMomentum from the sharp guys at VMware?

Four reasons from a guy who’s worked it:

  • The Networking Stack: It’s not that Windows Server 2012 & 2012 R2 and, as a result, Hyper-V 3.0, have a better network stack than VMware does. It’s that the Windows Server team rebuilt the entire stack between 2008 R2 & Server 2012. And it’s OMG SO MUCH BETTER than the last version. Native support for teaming. Extensible VM switching. Superb layer 3 and layer 2 cmdlets. You can even do BGP routing with it. It’s built to work, with minimal hassle, and it’s solid across a large number of NICs. I say that as someone who ran 2008 R2 Hyper-V clusters, then upgraded the cluster to 2012 in the space of about two weekends. Trust me, if you played around with Windows Server 2008 R2 and Hyper-V and broke down in hysterics, it’s time for another look.
  • SMB 3.0 & Storage Spaces/SOFS…don’t call it CIFS and also, it’s our NFS: There’s a reason beyond the obvious why guys like Aidan Finn, the Hyper-Dutchman and DidierV are constantly praising Server Message Block Three dot Zero. It kicks ass. Out of the box, multichannel is enabled on SMB 3.0, meaning that anytime you create a Hyper-V-Kicks-Ass file share on a server with at least two distinct IP addresses, you’re going to get two distinct channels to your share. And that scales. On Storage Spaces and its HA (and fault tolerant?) big brother Scale-Out File Server: what Microsoft gave us was a method by which we could abstract our rotational & SSD disks and tier them. It’s a storage virtualization system that’s quite nifty. It’s not quite VSAN, except that both Storage Spaces/SOFS & VSAN seem to share a common cause: killing your SAN.

    "Turn me on!" Hyper-V says to the curious
  • Only half the licensing headaches of VMware: I Do Not Sign the Checks, but there’s something to be said for the fact that the features I mention above are not SKUs. They are part & parcel of Server 2012 R2 Standard. You can implement them without paying more, without getting sign-off from Accounts Payable or going back to the well for more spend. Hyper-V just asks that you spend some time on TechNet but doesn’t ask for more $$$ as you build a converged virtual switch.
  • It’s approachable: This has always been one of Microsoft’s strengths and now, with Hyper-V 3.0, it’s really true. My own dad -radio engineer, computer hobbyist, the original TRS-80 fan- is testing versions of radio control system software within Windows 7 32-bit & 64-bit VMs right from his Windows 8.1 Professional desktop. On the IT side: if you’re a generalist with a Windows server background, some desire to learn & challenge yourself, and, most importantly, you want to Win #InfrastructureGlory, Hyper-V is a tier one hypervisor that’s approachable & forgiving if you’re just starting out in IT.
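The multichannel behavior I praise above can be modeled crudely. This is a toy sketch of the concept, not SMB 3.0’s actual channel-selection logic (real SMB also weighs RSS capability and link speed, and the IP addresses here are invented): one share’s I/O simply fans out across every usable NIC/IP pair.

```python
from itertools import cycle

def distribute_io(requests, channels):
    """Fan I/O requests round-robin across the available channels, the
    way SMB 3.0 Multichannel spreads one share's traffic over every
    usable NIC/IP pair. Grossly simplified model."""
    assignment = {ch: [] for ch in channels}
    for req, ch in zip(requests, cycle(channels)):
        assignment[ch].append(req)
    return assignment

# Two IPs on the file server = two channels; six requests split evenly,
# so aggregate throughput scales with the number of usable links.
work = distribute_io(list(range(6)), ["10.0.1.5", "10.0.2.5"])
```

That even split is why adding a second NIC/IP to a file server gives you real extra bandwidth with zero configuration: the protocol does the spreading for you.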

It’s also pretty damn agnostic. You can now run *BSD on it, several flavors of Linux, and more. And we know it scales: Hyper-V, or some variant of it, powers the Xbox One (A Hypervisor in Every Living Room achieved), it can power your datacenter, and it’s what’s in Azure.

Turning the page

Today (Thursday) I voluntarily concluded my employment with a well-known Southern California company where I’ve worked as Sr. Systems Engineer for the last four years. On Monday, I open a new page in my IT career with another firm, and I’m very excited to start.

But tonight, I’m in a mood to reminisce and reflect.

I know it’s cliché, but truly, when I consider where I was four years ago this night compared with where I am professionally & personally tonight, this was the job opportunity of a lifetime. It lifted me out of the IT ghetto and put me on a track on which I could, if I executed properly, end up in the IT Hall of Fame, clutching my #InfrastructureGlory trophy as if it were the Stanley Cup.

And I capitalized on it in just about every way I knew how, both for myself, and for the infrastructure I fretted over constantly.

Parting is always bittersweet, but I’m resting tonight knowing that -thanks to some IT strategery from the IT management guys who hired me- I have left my former employer a higher-performing, more durable, and more cost-effective infrastructure stack than the one I had when I started.

Some superlatives & memories from my time with this company for the enjoyment of other engineers like me:

  • Proudest engineering feat: Planning, wargaming and executing -in concert with my former boss- on an overnight virtual datacenter relocation involving two Dell R810s running Windows Server 2008 R2 & Hyper-V in Denver and four 2008 R2 nodes in Los Angeles over a 100meg Layer 2 VPWS circuit, with two NetApp DoT 7.3.x filers at each end doing SnapMirrors of CSVs & RDMs by the hour, then the half-hour, then by the minute during Go-Live week. Sixty+ VMs, countless direct-mapped iSCSI LUNs, 8 vFilers & the entire /24 subnet moved in the space of about four hours in spring 2012 with minimal consultant help, in a plan I nicknamed the “Double Trident” (don’t ask). And yeah. This was in Hyper-V 2.0 days, when there was nothing awesome about Hyper-V switching.
  • Most humbling defeat: Missing a key “but….” in a Technet article about Exchange 2010 to 2013 migration. And no, it didn’t involve the basics. And yes, I’m sorry I didn’t spot the queues filling up sooner.
  • If I could make a bumper sticker from my time here: “Virtualization Engineers Find ’em Physical and Leave ’em Virtual,” or “Give me spindles or give me death,” or “Oh me, oh my NUMA Nodes” or, of course, “I Heart LACP”
  • Funnest project: Storage refresh & bakeoff. Picked the best array under the circumstances and achieved #StorageGlory. No regrets, and I like that Nimble is as hungry for glory & success as I am.
  • The Work/Blog effect: After storage bakeoff post, got noticed by the GestaltIT crew and invited to Virtualization Field Day #3. Sat among some incredibly sharp VMware-certified & OpenStack-familiar engineers and architects in the heart of Silicon Valley where we, in the best traditions of agnostic computing, challenged vendors on the products they try to sell guys like you and me (well, mostly guys like you if you’re VMware). And yes, we made fun of each other’s stacks. #PurpleScreenofDeath
  • Racked gear I’ll miss the most: My old, power-hungry 6509E and its twin WS-X6748-GE-TX blades, onto which I mapped out Hyper-V 3.0’s awesome converged switching architecture. Sure, it may not be a distributed vSwitch, but I made it purr like a kitten, and I extended iSCSI to the limit. Also, Wargaming Live Storage Migration is one of my most popular posts, so I suppose it’s a somewhat famous 6509E.
  • The 3am call that woke me up the most: Session virtualization (RDS/XenApp)
  • Dipped into dev on: .net, Visual Studio & ClickOnce architecture. Also SOAP & REST, which aren’t so dev anymore and are actually quite critical for operations guys
  • Engineering focus: Value.
  • Started With/Ended With Pairs: ESXi 4.5/Hyper-V 3.0, Motorola Droid/Lumia Icon, TDM & Analog Circuits/Cloud-hosted VoIP, 100Mbit Cisco/Gigabit Dell
  • Worst mobile phone I used for work: Toss-up. Windows Phone 7 (HTC Trophy) or Palm Pixi. But they had ActiveSync so there you are.
  • Most Favoritiest visualization I created: A 24-hour clock arrayed against NetFlow egress data on my 6509E, filtered by iSCSI & Live Migration VLANs, with flags representing the regions as they put load on the infrastructure. Average Gb/s & GB/hr calculated with Excel pivot tables via a spider chart tool & 30 days of data, averaged out hour-by-hour. NetFlow v7 & ManageEngine. Wish I hadn’t left the image on the work laptop.
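That hour-by-hour rollup is simple to reproduce. Here’s a sketch of the same aggregation I did with Excel pivot tables, with made-up sample data standing in for the NetFlow exports I left on the work laptop:

```python
from collections import defaultdict

def hourly_averages(flow_records):
    """flow_records: (hour_of_day, gbps) samples spanning many days.
    Returns {hour: average Gb/s} -- the hour-by-hour rollup behind the
    24-hour clock visualization."""
    buckets = defaultdict(list)
    for hour, gbps in flow_records:
        buckets[hour].append(gbps)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

# Two days of invented samples from the iSCSI VLAN.
records = [(2, 0.4), (2, 0.6), (14, 3.1), (14, 2.9)]
averages = hourly_averages(records)
```

Group by hour, average within each bucket: that’s the whole pivot, and once it’s a dict you can chart it however you like.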

Those are some of my fondest memories from this employer, but of course, above & beyond the technology, the hardware, the underlay and the storage are the people. I’m leaving friends, colleagues and fellow veterans behind, and it’s hard…I can’t believe how thoughtful they were at my going-away lunch. The photo at top is of my nameplate, plus one they made for me. Hashtag Sickburn was something I ripped from The Vergecast and used liberally in our wild technology debates.

Most of all I’m thankful for this awesome time in my professional life and I wish my friends, colleagues and former colleagues the best.

On Monday I start a new chapter. I’m not sure where that leaves this blog, but I at least want to finish up my Cloud Praxis series and post a hands-on review of VMTurbo, and more, so look for that over the days ahead.

Cloud Praxis lifehax: Disrupting the consumer cloud with my #Office365 E1 sub

E1, just like its big brothers E3 & E4, gives you real Microsoft Exchange 2013, just like the one at work. There’s all sorts of great things you can do with your own Exchange instance:

  • Practice your Powershell remoting skills
  • Get familiar with how Office 365 measures and applies storage settings among the different products
  • Run some decent reporting against device, browser and fat client usage

But the greatest of these is Exchange public-facing, closed-membership distribution groups.

Whazzat, you ask?

Well, it’s a distribution group. With you in it. And it’s public facing, meaning you can create your own SMTP addresses that others can send to. And then you can create Exchange-based rules that drop those emails into a folder, delete them after a certain time, run scripts against them…all sorts of cool stuff before the mail ever hits your device or Outlook.
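The rule idea boils down to an address-to-folder map. This is an illustration of the concept, not Exchange’s actual rule engine: the example.com domain and the folder names are invented (though caveat_emptor@ and general@ are aliases I really use).

```python
# Toy model of a server-side rule: route mail addressed to a
# public-facing distribution-group alias into a folder before it
# ever reaches a device. Domain and folder names are invented.
RULES = {
    "caveat_emptor@example.com": "Shopping",
    "general@example.com": "Services",
}

def route(message, rules=RULES, default="Inbox"):
    """message: dict with a 'to' address; returns the destination folder,
    mimicking an Exchange rule that matches on recipient address."""
    return rules.get(message["to"].lower(), default)

folder = route({"to": "General@example.com"})  # matched alias
```

One alias per vendor or purpose, one rule per alias, and your real mailbox stays quiet.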

All this for my Enterprise of One, Daisetta, for $8/month.

You might think it’s overkill to have a mighty Exchange instance all for yourself, but the ability to create a public-facing distribution group is a killer app that can help you rationalize some of your cloud hassles at home and take charge & ownership of your email, which, I argue, is akin to your birth certificate in the online services world.

My public facing distribution groups, por ejemplo:



There are others, like career@, blog@ and such.

The only free service that offers something akin to this powerful feature is Microsoft’s own. If the prefixed email address is available, you can create aliases that are public-facing and use them in a similar way as I do.

But that’s a big if. Names must be running low.

Another, perhaps even better use of these public-facing distribution groups: exploiting cloud offerings that aren’t dependent on a native email service like Gmail. You can use your public-facing distribution groups to register and rationalize the family cluster’s cloud stack!


It doesn’t solve everything, true, but it goes a long way. In my case, the problem was a tough one to crack. You see, ever since the child partition emerged out of dev, into the hands of a skilled QA technician, and thence, under extreme protest, into production, I’ve struggled to capture, save & properly preserve the amazing pictures & videos stored on the Supervisor Module’s iPhone 5.

Until recently, Supe had the best camera phone in the cluster (My Lumia Icon outclasses it now). She, of course, uses Gmail so her pics are backed up in G+, but 1) I can’t access them or view them, 2) they’re downsized in the upload and 3) AutoAwesome’s gone from being cool & nifty to a bit creepy while iCloud’s a joke (though they smartly announced family sharing yesterday, I understand).

She has the same problems accessing the pictures I take of Child Partition on the Icon. She wants them all, and I don’t share much to the social media sites.

And neither one of us wants to switch email providers.


Consumer OneDrive via a Microsoft account registered with MFA. Checks all the boxes. I even got 100GB just for using Bing for a month.

Available on iPhone, Windows phone, desktop, etc? Check.

Easy to use, beautifully designed even? Check.

Can use a public-facing distribution group SMTP address for account creation? Check.

All tied into my E1 Exchange instance!

It works so well I’m using it to sync Windows 8.1 between home, work & the lab. Only thing left is to convince the Supe to use OneNote rather than Evernote.

I do the same thing with Amazon (caveat_emptor@), finance stuff, Pandora (general@), some Apple-focused accounts…basically anything that doesn’t require a native email account, I’ll re-register with an O365 public-facing distribution group.

Then I share the account credentials among the cluster, and put the service on the cluster’s devices. Now the Supe’s iPhone 5 uploads to OneDrive, which all of us can access.

So yeah. E1 & public-facing distribution groups can help soothe your personal cloud woes at home, while giving you the tools & exposure to Office 365 for #InfrastructureGlory at work.

Good stuff!

vSympathy under vDuress

An engineer in a VMware shop that’s using VMware’s new VSAN converged storage/compute tech had a near 12 hour outage this week. He reports in vivid detail at Reddit, making me feel like I’m right there with him:

At 10:30am, all hell broke loose. I received almost 1000 alert emails in just a couple minutes, as every one of the 77 VM’s in the cluster began to die – high CPU, unresponsive, applications or websites not working. All of the ESXi hosts started emitting a myriad of warnings, mostly for high CPU. DRS attempted to start migrating VM’s but all of the tasks sat “In progress”. After a few minutes, two of the ESXi hosts became “disconnected” from vCenter, but the machines were still running.

Everything appeared to be dead or dying – the VM’s that didn’t immediately stop pinging or otherwise crash had huge loads as their IO requests sat and spun. Trying to perform any action on any of the hosts or VM’s was totally unresponsive and vCenter quickly filled up with “In progress” tasks, including my request to turn off DRS in an attempt to stop it from making things worse.

I’m a Hyper-V guy and (admittedly) barely comprehend what DRS is, but wow. I’ve got 77 VMs in my 6 node cluster too. And I’ve been in that same position, when something unexpected…rare…almost impossible to wargame…happens and the whole cluster falls apart. For me it was an ARP storm in the physical switch, thanks in part to an immature understanding of 2008 R2’s virtual switching.

I’m not ashamed to say that in such situations intuition plays a part. Logs are an incomprehensible firehose, not useful, and may even distract you from the real problem. Your ops manager VM, if stored within the cluster (cf. observer effect), is useless, and so, what do you have?

You have what lots of us have, no matter the platform. A support contract. You spend valuable minutes explaining your situation to a guy on the phone who handles many such calls per day. Minutes, then a half hour, then a full hour tick by. The business is getting restless & voices are being raised. If your IT group has an SLA, you’re now violating it. Your pulse is rising, you’re sweating now.

So you escalate. Engage the sales team who sold you the gear…you’re desperate. This guy got a vExpert on the phone. At times, I’ve had MVPs helping me. Yet with some problems, there are no obvious answers, even for the diligent & extraordinary.

But if you’re good, you’ve a keen sense of what you know versus what you don’t know (cf. Donald Rumsfeld for the win), and you know when to abandon one path in favor of another. This engineer knew exactly the timing of his outage…what he did, when he finished the work he did, and when the outage started. Maybe he didn’t have it down in a spreadsheet, and proving it empirically in court would never work, but he knew: he was thinking about what he knew during his outage, and he was putting all his knowns and unknowns together and building a model of the outage in his head.

I feel simpatico with this guy…and I’m not too proud to say that sometimes, when nothing’s left, you’ve got to run to the server room (if it’s near, which it’s not in my case, or in this engineer’s case I think) and check the blinky lights on the hard drives on each of your virtualization nodes. Are they going nuts? Does it look odd? The CPUs are redlined and the putty session on the switch is slow…why’s that?

Is this signal, or is this noise?

Observe the data, no matter how you come by it. Humans are good at pattern recognition. Observe all you can, and then deduce.

Bravo to this chap for doing just that and feeling -yes feeling at times- his way through the outage, even if he couldn’t solve it.

High five from a Hyper-V guy.