This Valentine’s Day, a Love Note for a Protocol

Dear Link Aggregation Control Protocol,

How do I love thee? Let me count the ways. I love thee to the depth and breadth and height of a 42U rack. A love so deep and vast it rivals the IPv6 address space. Yes, this is how I heart you, LACP.

We’ve known each other for quite a long time and I have to say, you’re still the #1 Protocol in my heart, even after all these years and all those Layer 7 load balancers, DNS Round Robins and all the rest competed with you for my affection. You and I have been through some tight spots together, haven’t we LACP?

Hey LACP, remember that time the CIO was going to tour the datacenter on short notice, which sent waves of panic throughout the IT Department? Yeah, we all rushed to the server room to clean up the lazy patch cable runs we had made over the last several months.

The other guys were nervous…”Can we do this during production?” they shrieked while holding patch cables strung across the back of three racks, connecting the local Domain Controllers, Print Servers and SMB shares to the switch stack.

“Can we team it?” asks Bob the System Engineer. “Yes we can!” replies LACP.

“Relax,” I told them. I knew you had our back, LACP. I didn’t even have to look at the switch, I just knew, in my bones, that you would work.

“LACP,” I told the junior guys. “Learn it, love it, live it,” I said as I removed one, then two, then three patch cables out of a server during production hours.

“LACP,” I repeated to the slack-jawed and frightened help desk guys, emphasizing each letter like it was sacred, special, even secret.

“It’s a way of life,” I said seriously.

I acted as though I was the guy who wrote you, but really, you were the one that saved us that day LACP, you were the one that allowed us to re-patch much of the server room just minutes before the CIO rolled up with zero whining from the users.

It was all you LACP. You’re the sort of quiet, low-level Layer 2 protocol that a guy like me could get along with.

You know what else I like about you LACP? You’re one cool, calm, collected and low-drama protocol. Not like those other protocols.

MPIO, that bitch got nothing on you. IP Multipathing? Ha! OpenSolaris is buggy, and Apple is too expensive. I’d take you LACP over MPIO or IP Multipathing any day, even for iSCSI, even if it’s against every vendor’s manual. Damn the best practice guides, damn the fancy new stuff, I trust you LACP. I get you and you get me.

And forget Layer 7 load balancers. Talk about high-maintenance drama queens. Always demanding this policy or that update, whining when the stuff behind them fails. Who needs that LACP, who? Yeah, I know L7 load balancers do some things you can’t do, but so what? Everything up there in Layer 7 can do more than you, but no one can do their job quite as well as you, LACP.

LACP, when I think of you, I want to break out in song. You’re just so beautiful, so perfect, so harmonious. You’re what Plato was going on and on about in that cave, you’re the real technology equivalent of that Footprints poem, you’re what inspired Bill Withers to write this:

Lean on me when you’re not strong
And I’ll be your friend, I’ll help you carry on
For it won’t be long
‘Til I’m gonna need somebody to lean on


You’re always there LACP, in the background, just a few lines in my switch config, just working in the background getting the job done, bringing balance to the force. You never ask for recognition or praise, you don’t need renewal contracts or support agreements, no vendor has a lock on you.
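And those few lines in the switch config really are few. A sketch of what the switch side looks like on a Cisco-style CLI (the interface range and channel-group number are made up; `mode active` is what makes it true LACP rather than a static bundle):

```
! Bundle two ports into an LACP port-channel
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
! The logical interface carries the VLANs; members inherit its config
interface Port-channel1
 switchport mode trunk
```

Yank a member cable and traffic just shifts to the surviving link. That's the whole trick.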

You even work on Dell PowerConnect switches. Damn!

IEEE 802.1AX? More like IEEE 802.1AWESOME!

Yeah LACP, you’ve got all that and a bag of chips, and a humble attitude too.


Humanity could learn a thing or five from you LACP:

  • Teaming is better than going solo
  • Balance the load among all your colleagues and team members
  • To each according to his need, from each according to his ability
  • The needs of the many outweigh the needs of the one, even if that means failing over the other 7 members of the team to the one that remains while the staff re-patches everything to make the switch stack look neat

Damn. I’m getting weepy here LACP.

This Valentine’s Day, I want you to know that you’ll always have my heart LACP. You are the greatest of the Layer 2 protocols, the first among all, you inspire me and you complete me.

I heart you LACP and you keep at it, ok?

It’s Satya Nadella for Microsoft CEO

This is really interesting. Microsoft picked Satya Nadella, a total Microsoft insider, not to mention a younger, fitter man (at least when compared to Ballmer), for its next CEO.

And in what feels like a huge setback for those on the Microsoft board who wanted to sideline Bill Gates, put him out to pasture and what not, Gates is getting stronger in the new regime, becoming “Founder and Technology Advisor” in line with other neat titles bestowed on Senior Microsofties (Microsoft has all these cool, almost ritualistic & academic titles for its thought leaders: Technical Fellow, Distinguished Engineer…it’s all very Knights of Columbus/Skull and Bones what have you).

I have that same t-shirt

Was he pushed or did he jump into the new role, asks TechCrunch. Feels to me like he cracked some skulls and emerged victorious:

Microsoft notes that Gates “will devote more time to the company, supporting Nadella in shaping technology and product direction.” He will remain a part of the board of directors.

But enough about Gates.

I hadn’t really thought much about Nadella until last fall, when he became a useful foil to me and my company as we negotiated an Enterprise Agreement with Microsoft. We weren’t quite ready to jump whole hog into the cloud (Azure or O365) but we were, you could say, cloud-curious. But we were also a bit upset at some quality problems in Microsoft’s enterprise stack in 2013 and how MS was pitching Azure to us.

So it was great that the very same week we were sitting down with MS, Nadella got a nice profile in the NY Times Bits blog, where he basically said Microsoft doesn’t make any money on its old crap (Windows, Office, Server), only on its new hotness, Azure. An Enterprise Agreement is all about buying the old crap:

“You can rightfully criticize us on mobile and tablets,” Mr. Nadella said.


“What does Windows Azure have to do with Windows Server?” he said, naming the older server product. “Nothing, besides the name ‘Windows.’ Azure is a new operating system, designed not just for our cloud, but for anybody to build a cloud with.”


“You look at our earnings,” he said. “You don’t see the old stuff growing. You see the new stuff growing.”


Yep. That guy is the CEO of Microsoft. The guy who thinks of Office, Server, hell probably even XBox, as the “old stuff,” and Azure as the new hotness. Nadella’s the guy who wants to empower corporations to lay waste to the IT staff and put Azure, essentially a technology utility provider, in our place.

And that’s okay by me. I like a Microsoft that’s focused on business and the enterprise, that realizes it can compete in that space far more effectively than in the consumer space with the likes of Apple and Google. We’ve all got to adapt to the environment, and by picking Nadella, I think Microsoft is saying that it will be focused on business first, consumer second. This is a win for those who want Microsoft to continue some of the cool things it’s doing in the enterprise and have watched in horror as Microsoft stumbles around like a blind man as it tries desperately to sling Surfaces to consumers.

Here’s hoping Nadella can right some of the wrongs Microsoft made in 2013 (like their disastrous QA track record). Maybe he can start by reading my piece on how Microsoft can win in 2014.


Converged Fabric Lab @ Home

The Home IT Lab is just about finished and boy does it feel good after almost three months of scavenging for parts, buying certifiable junk off eBay, crawling through a dusty attic to run cable, and constructing something robust & redundant(ish) enough that I could use it as an experimental tech lab while providing four, maybe 5, 9s of reliability for my most important users, the fam.

At times it felt like one step forward and two steps back, but this is the final configuration. Yes, I’m done with the bare metal provisioning:

SG300 at top, dumb switch in the middle; the first 1U box is intended for VMware, the second 1U box is a Hyper-V host, and the 2U server is running NexentaStor

Not bad eh? Sure the cabling could be dressed up a bit, but overall, I’m pretty happy with how it came together.

So, here’s the setup (so far):


Already I’ve learned so much that’s applicable at work and in the lab.

The first thing I’ve learned: ZFS is awesome, and the open source guys and companies who labor over these amazing storage operating systems are heroes in my book.

Child Partition testing fit 'n finish of my cabling

Over the last 30 days, I’ve tested FreeNAS, OpenFiler, NAS4Free, and NexentaStor. Initially I didn’t want to sacrifice an entire physical host for simple storage duties, but building a Scale-Out File Server running Windows Server (i.e. the Windows SAN idea that’s legitimately interesting yet completely frightening at the same time) wasn’t a possibility here, so I had to sacrifice.

Now let’s be honest: what kind of performance would you expect out of a bunch of 2009-era AMD Semprons, some REALTEK Gbit adapters, consumer grade SSD & HDD, and Cat5e cabling pulled through this attic over the course of a few weekends?

Well, I think I’m hitting this stack about as hard as it can be hit without enabling jumbo frames or going RAID 0. FreeNAS and the others, while feature-rich and sporting great Cacti reporting, fell over at times during some really harsh and cruel SQLIO benchmark runs.

But NexentaStor didn’t. Here’s what the Sempron + 4xHDD, 1xSSD cache, 1xSSD log and 16GB RAM turned in last night:


Yeah, I know, kind of unbelievable. Like, so unbelievable I had to run it a few times to make sure, and even now, I’m worried something isn’t quite right with the stats I’ve gathered. I mean I don’t know squat about L2ARC caching (except that it costs a lot of RAM) and logs, but I do know quite a bit about optimizing switches, LACP, and Hyper-V.
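For the curious, a pool shaped like mine is only a few commands in ZFS land. A rough sketch (pool and device names are hypothetical; NexentaStor wraps all of this in its own management UI, but the underlying layout is the same idea):

```
# 4 spinners in a RAIDZ vdev, hypothetical Solaris-style device names
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

# One SSD as L2ARC read cache -- the thing that eats RAM for its headers
zpool add tank cache c0t4d0

# One SSD as the log device (SLOG), absorbing synchronous writes
zpool add tank log c0t5d0

# Sanity check the layout
zpool status tank
```

The cache and log devices are what let a humble Sempron box punch way above its weight on small random IO.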

This is what it looked like from the Nexenta dashboard and one of my vEthernet NICs. Notice the read rate on Drive E/Disk 13:

OMG the poor Sempron.

Perfmon recorded this on Node1, comprised of a 3x1GbE team, with two Intel server NICs and a RealTek:

170,000+ packets per second. Keepin’ it RealTek

Fun times indeed.

One of my favorite virtualization bloggers, Everything Should Be, also found Nexenta to be a great fit for his home lab.

So the lab is built now. And it’s solid and robust enough for my needs. I have Hyper-V virtual machine failover between Node1 (office) and Node2 (garage), I have passed cluster validation, and I’ve thrown a ton of packets at the NexentaStor without anything catching fire; the Nexenta gave me some CPU warnings but was still able to write & read more data than a single 1GbE NIC would have allowed otherwise.

I still have to get the vSphere host up. 5.5 doesn’t like Realtek NICs, and I’m cheap, but I’ll figure something out. And if I don’t, Xen or KVM will go on that box.

Shitbox SAN under construction

Next up: I have about 80 days left on these Windows trials. I’m going to take full advantage and spin up a System Center instance. Then I’m going to play with Splunk and start again on building an OwnCloud VM to capture, index and app-ify all my personal family photos, music, videos, etc.

Also thinking of making the SG-300 my default gateway, or getting around to finally using a Vyatta VM Router or pfSense VM as my gateway. Following that: ipv6 experiments with Pertino, Hurricane Electric and more.

Good times!

Storage Glory at 30,000 IOPS

It’s been a bit quiet here on the AC blog because I’m neck deep in thinking about storage at work. Aside from the Child Partition at home (now 14 months and beginning to speak and make his will known, love the little guy), all my bandwidth over the last three weeks has been set to Priority DSCP values and directed at reading, testing, thinking, and worrying about a storage refresh at work.

You could say I’m throwing a one-man Storage Field Day, every day, and every minute, for the last several weeks.

And finally, this week: satisfaction. Testing. Validation. Where the marketing bullshit hits the fan and splashes back onto me, or magically arrays itself into a beautiful Van Gogh on the server room wall.

Yes. I have some arrays in my shop. And some servers. And the pride of Cisco’s 2009 mid-level desktop switching line connecting them all.

Join me as I play.

My employer is a modest-sized company with a hard-on for value, so while we’ve tossed several hundred thousand dollars at the incumbent in the last four years (only to be left with a terrifying upgrade to the clustered incumbent OS that I’ll have to fit into an 18 hour window), I’m being given a budget of a well-equipped mid-level Mercedes sedan to offset some of the risk our stank-ass old DS14MK2 shelves represent.

We’re not replacing our incumbent, we’re simply augmenting it. But with what?

After many months, there are now only two contenders left. And I racked/stacked/cabled them up last week in preparation for a grand bakeoff, a battle royale between Nimble & incumbent.

Meet the Nimble Storage CS260 array. Sixteen total drives, comprised of 12x3TB 7.2K spinners + 4x300GB SSDs, making for around 33TB raw, and depending on your compression rates, 25-50TB usable (crazy I know).

Nimble appeals to me for so many reasons: it’s a relatively simple, compact and extremely fast iSCSI target, just the kind of thing I crave as a virtualization engineer. The 260 sports dual controllers with 6x1GbE interfaces on each, and has a simple upgrade path to 10GbE, new controllers and more, if I ever get there. On the downside, the controllers are Active/Passive, there’s no native support for M/CS (but plenty for MPIO) and, well, it doesn’t have a big blocky N logo on the front, which is a barrier to entry, because who ever got fired for buying the blue N?
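Since MPIO is the multipathing story here, wiring a Windows host up to an array like this is only a few lines of PowerShell. A sketch, assuming the built-in MPIO and iSCSI cmdlets on Server 2012 R2 (the portal address is made up):

```
# Let the Microsoft DSM automatically claim iSCSI devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Point the initiator at the array's discovery portal (hypothetical IP)
New-IscsiTargetPortal -TargetPortalAddress 10.0.66.10

# Connect every discovered target with multipathing, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

Repeat the connect per path and the DSM round-robins IO across them; no M/CS required.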

On top of the Nimble is a relatively odd array: an incumbent (incumbent now) array with 2x800GB SanDisk Enterprise SSDs & 10x1TB 7200 RPM spinning disks. This guy sports dual controllers as well, 2x1GbE iSCSI interfaces & 2x1GbE MGMT interfaces per controller and something like 9TB usable, but each volume you create can have its own RAID policy. Oh, did I mention it’s got an operating system very different from good old Data ONTAP?

So that’s what I got. When you’re an SME with a very limited budget but a very tight and Microsoft-oriented stack, your options are limited.

Anyway, onto the fun and glory at 30,000 IOPS.

Here’s the bakeoff setup, meant to duplicate as closely as possible my production stack. Yes it’s a pretty pathetic setup, but I’m doing what I can here with what I got:

Nimble v. incumbent Bakeoff

  • 1x Cisco Catalyst 2960s with IOS
  • 1x 2011 Mac Pro tower with 2x Xeon 5740, 16GB RAM, and 2xGbE
  • 1x Dell PowerEdge 1950 with old-ass Xeon (x1), 16GB RAM, and 2xGbE
  • 1x Dell PowerEdge R900, 4x Xeon 5740, 128GB RAM, 4xGbE
  • OS: Server 2012 R2
  • Hypervisor: Hyper-V 3.0
  • VMs: 2012 R2
  • NICs: I adopted the Converged Fabric architecture that’s worked out really well for us in our datacenter, only instead of clicking through and building out vSwitches in System Center VMM, I did it in Powershell without System Center. So I essentially have this:
    • pServer 1 &2: 1 LACP team (2x1GbE) with converged virtual switch and five virtual NICs (each tagged for appropriate VLANs) on the management OS
    • pServer3: 1 LACP team (4x1GbE) on this R900 box, which is actually a production Hyper-V server at our HQ. So pServer3 is not a member of my Hyper-V Cluster, but just a simple host with a 4Gb teamed interface and a f(#$*(@ iSCSI vswitch on top (yes, yes, I know, don’t team iSCSI they say, but haven’t you ever wanted to?)
    • All the virtual switch performance customization you can shake a stick at. Seriously, I need to push some packets. And I want angry packets, jacked up on PCP, ready to fight the cops. I want to break that switch, make smoke come out of it even. The Nimble & incumbent sport better CPUs than any of these physical servers so I looked for every optimization on virtual & physical switches
  • Cisco Switch: Left half is Nimble, right half is incumbent, host teams are divided between the two sides with -hopefully- equal amounts of buffer memory, ASIC processing power etc. All ports trunked save for the iSCSI ports. incumbent is on VLAN 662, Nimble is on VLAN 661. One uplink to my MDF switches.
  • VM Fleet: Seven total (so far) with between 2GB and 12GB RAM, 2-16vCPU and several teams of teams. Most virtual machines have virtual nics attached to both incumbent & Nimble VLANs
  • Volumes: 10 on each array. 2x CSV, 4xSQL, and 4xRDM (raw disk maps, general purpose iSCSI drives intended for virtual machines). All volumes equal in size. The incumbent, as I’m learning, requires a bit more forethought into setting it up, so I’ve dedicated the 2x800GB SSDs as SSD cache across a disk pool, which encompasses every spinner in the array
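The converged-fabric piece above boils down to a handful of PowerShell commands on each host. A sketch of roughly what I mean, with hypothetical NIC, team, switch, and VLAN names substituted for my real ones:

```
# Team the physical NICs with LACP (the switch ports must be in a matching port-channel)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort

# One virtual switch on top of the team; the management OS gets vNICs, not the switch itself
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Carve out tagged vNICs for the management OS, one per traffic class
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt" -Access -VlanId 10

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
```

Five vNICs per host is just three more Add/Set pairs in the same pattern.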

The tests:

I wish I could post a grand sweeping and well-considered benchmark routine à la AnandTech, but to be honest, this is my first real storage bakeoff in years, and I’m still working on the nice and neat Excel file with results. I can do a follow-up later, but so far, here are the tools I’m using and the concepts I’m trying to test/prove:

  • SQLIO: Intended to mimic as closely as possible our production SQL servers, workloads & volumes
  • IOMETER: same as above plus intended to mimic terminal services login storms
  • Robocopy: Intended to break my switch
  • Several other things I suffer now in my production stack
  • Letting the DBA have his way with a VM and several volumes as well

All these are being performed simultaneously. So one physical host will be robocopying 2 terabytes of ISO files to a virtual machine which is parked inside a CSV on the Nimble, in the same CSV as another VM which is running a mad SQLIO test on a Nimble RDM. You get the idea. Basically everything you always wanted to do but couldn’t on your production SAN.
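For the SQLIO runs specifically, the invocations look roughly like this (the file path, durations, and block sizes are illustrative, tuned to mimic our SQL workloads rather than anything canonical):

```
rem 8 threads, 8 outstanding IOs, 8KB random reads for 2 minutes, no buffering
sqlio -kR -frandom -t8 -o8 -b8 -s120 -BN E:\testfile.dat

rem same shape for writes, 64KB sequential, to approximate log traffic
sqlio -kW -fsequential -t8 -o8 -b64 -s120 -BN E:\testfile.dat
```

Stack a few of those against different volumes at once and you get the simultaneous carnage described above.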

So far, from the Nimble, I’m routinely sustaining 20,000 IOPS with four or five tests going on simultaneously (occasionally I toss in an ATTO 2GB random throughput test just for shits, grins, and drama) and sometimes peaking at 30,000 IOPS.

The Nimble isn’t missing a beat:



What else can we throw at this thing? ATTO, the totally non-predictive, non-enterprise storage benchmarking application! I ran this Saturday night in the midst of 2 SQLIO runs, one IOMETER SQL-oriented run, and two robocopies.

So yeah. The Nimble is taking everything my misfit army can throw at it: Mac hardware, a PowerEdge 1950 that practically begs us to send it to a landfill in rural China every time I power it on, and a heavyweight R900 whose glory days were last decade. And it’s laughing at me.

Choke on this I say.

Please sir may I have another? the Nimble responds.

So what did we do? What any mid-career, sufficiently caustic and overly-cynical IT Pro would do in this situation: yank drives. Under load. 2xSSD and 1xHDD to be specific.

And then pull the patch cables out of the active controller.

Take that Nimble!  How you like me now and so forth.


And lo, what does the Nimble do?

Behold the 3U wonder box that you can set up in an afternoon, sustains 25,000-30,000 IOPS, draws about 5.2 amps, and yet doesn’t lose a single one of my VMs after my boss violently and hysterically starts pulling shit out of its handsome, SuperMicro-built enclosure.

Sure some of the SQLIO results paused for about 35-40 seconds. And I still prefer M/CS over MPIO. But I can’t argue with the results. I didn’t lose a VM in a Nimble CSV. I dropped only one or two pings during the handover, and IO resumed post-gleeful drive pulls.



Storage Glory.

I mean, this is crazy, right? There are only 16 drives in there, 12 of which spin. I can feel the skepticism in you right now… there’s no replacement for displacement, right? Give me spindle count or give me death. My RAID-DP costs me a ton of spindles, but that’s the way God intended it, you’re thinking.

So in the end (incumbent tests forthcoming), what I/we really have to choose is whether to believe the Nimble magic.

I’m sold on it and want that array in my datacenter post-haste. Sure, it’s not a Filer. I’ll never host a native SMB 3.0 share on it. I’ll miss my breakfast confection command line (the Nimble CLI feels like BusyBox, by the way, but I can’t confirm), but I’ll have CASL to play with. I can even divvy out some “aggressive” cache policies to my favorite developer guys and/or my most painful, highest-cost user workloads.

As far as the business goes? From my seat, it’s the smart bet. The Nimble performs extremely well for our workloads and is cost effective.

For a year now I’ve been reading Nimble praise in my Enterprise IT feed. Neat to see that, for once, reality measured up to the hype.

More on my bakeoff at work and storage evolution at home later this week. Cheers.


Editor note: This post has been edited and certain information removed since its original posting on Jan 21.

It’s back! Linksys blue & black routers revealed at CES

My jaw dropped. Belkin nailed it. Feast your eyes on this:



As Ars Technica notes, the colors, shape and strangeness are immediately recognizable, even to people like my mom who still, to this day, looks for the “blue and black router box” when internet problems occur at her house.

Here’s the original in all its early 2000s, plastic-fantastic glory:



It’s rare for a tech company outside of One Infinite Loop to build iconic technology hardware. Even rarer for said device to be networking hardware.

And yet, the old WRT54G (still have one) has reached iconic stature, something Belkin was smart enough to recognize (and why they feel they can justify a $300 price tag on it!)

The look of the new one makes me want to buy it and mount it in a place that’s visible, just as a conversation piece. Crazy.

The only consumer networking gear to get a starring role in South Park

Going to be hard to turn down this one when it’s released soon. It’s coming with OpenWRT!

Home Lab Update

It’s been a while since I blogged about my home IT lab, the purpose of which is to 1) ensure near-HA levels of service for my most important and critical users -the wife and fam- and 2) build something with which I can approximate and simulate conditions at work while hopefully learning a thing or two.

In November I blogged:

Sucks to admit it, but I think I’ve got to spend. But what? I want a small footprint but capable PC running at least a Core i3 or i5 and that can support up to 32GB of RAM to make sure I can continue to use it in a few years (Lenovo tops out at 16GB in my current box).


I’m thinking Mac Mini (an apropos choice for the Agnostic Computing lab), a Gigabyte BRIX, or a custom PC inside a Shuttle case (offers 2xGigE built in) and have a total budget of about $700.


Boy how a couple of months, a birthday & holiday season changed that picture. I went from thinking I’d build a humble little lab -mostly virtual- to building this:

I’ve tagged each element of the stack to ease comprehension and foster the reader’s amusement.

  1. TrendNet 16 port Cat 5e patch panel: $20, Fry’s
  2. Cisco SG-200, no PoE: A gift from a vendor. Yes, I’m not above that kind of thing. Ten GbE ports; love this switch
  3. 1U Cable Management: $17 from a local business IT systems retailer. Great for hiding the shame
  4. 24 Port TP-Link GbE switch, unmanaged: Where I plug the stuff that shan’t be messed with. It’s a stupid switch, but it’s rack-mountable, and if something broke while I was away, I could, in the worst-case scenario, have my wife plug the blue “internet” cable into the TP-Link and all would be right again
  5. Frankencuda: Behold the depths I’ll go to. I’ve re-purposed and re-built a dead Barracuda Load Balancer 340. Not only that, but I bolted 3.5″ HDD trays & 2TB drives onto the top of the ‘Cuda’s modified 1U SuperMicro case. Frankencuda parts: Motherboard $50, 8GB RAM, $69, 2x128GB SanDisk SSD ($180, Amazon), re-used/borrowed all other parts including the dapper little AMD Sempron which can be unlocked into an Athlon II dual core
  6. TV Convergence, almost: With the over-weight ‘cuda threatening to collapse on it, this stack represents my home internet connection (Surfboard DOCSIS 3.0 modem on right) and television (Time Warner, via HD HomeRun Prime and shitty TWC tuning adapter). Cable Modem: $110 in 2012, HDHomeRun Prime: $99 on Woot.com
  7. Lenovo PC: My old standby, a 2011 M91p with a Core i7-2600, 16GB RAM, and a half-height 4x1GbE Broadcom NIC I’m borrowing from work. 2TB drive inside. $950 in 2011
  8. NetGear ReadyNAS 102 w/ Buffalo 3TB “Caching” External USB 3.0: I got the ReadyNAS in October when I was convinced I could do this cheaply and with a simple iSCSI box and adequate LUN management. Alas, I quickly overwhelmed the ReadyNAS; the poor thing falls over just booting three VMs simultaneously, but it’s freaking amazing as a DLNA media server and a general purpose storage device. The Buffalo is on-loan from work; decent performer, good for backups. $250
  9. StarTech USA two post 12U rack: Normally $60, I got mine used on eBay for $25. Great little piece of kit. It’s bolted down to my wooden workbench.
  10. Latest Fisher Price cable tester: It makes a smiley face and plays a happy sound when the four pairs are aligned. $10

Not pictured is my new desktop PC at home, a Core i5-4670K, Asus Z87 Premiere or Dope or Awesome line motherboard, 32GB RAM, 1x256GB Samsung EVO SSD and some cheap $50 mini-tower case.

So yeah, I blew past the $700 limit, but only if you count purchases made in 2012 and earlier, which really shouldn’t be counted. And much of this was funded via the generosity of friends and family vis-à-vis Christmas and birthday gift certificates.

Thank you everyone. You’ve only made it worse.

What have I learned from this experience? Building a home IT lab is not like the procurement processes you’re used to at pretty much any organized job you’ve ever been employed at. It basically involves you pestering vendors (or sucking up to them), nagging others for old parts, debasing yourself by dumpster diving for old, inferior gear, and generally just doing unsavory things.

But it’s all in pursuit of IT Excellence so it is justified.

So what have I got with this crazy stack? Well, material is only one piece; sweat-equity costs are very high as well. I’ve run about 17 Cat 5e cables of varying lengths through an attic that hasn’t seen this much human attention since the Nixon administration:

The mess on the left isn’t mine….entirely. I only claim the ones on the right

I spent three solid Saturdays (in between other chores) navigating this awful, dusty attic and its counterpart in the garage above my server stack, all in pursuit of this:


And though the cable management out of frame is obscene and not suitable for a family-friendly blog like AC, I will say I’ve accomplished something important here.

Who else can say they’ve unified TV & Compute resources in such a singular stack in their home? All those goddamned ugly black power bricks are located in one corner of the garage, the only area suitable for such things. The only non-endpoint device in the living quarters of the house is the Netgear Nighthawk AC wifi router (DD-WRT, currently my gateway).

Everything else in the living quarters -save for my computer which now has a nice 3x1GbE drop in the wall- is a simple endpoint device. Ethernet is my medium: data & Television are the payload, all from this one spot. Yes, even my wife can appreciate that.

And from an IT Lab perspective, I’ve got this:

  • Three compute nodes with a total of 10 cores
  • 50GB of DDR3 RAM, 1333MHz minimum
  • 2TB in the NetGear which runs some light iSCSI LUNs
  • 6TB in RAID 0 on the Frankencuda with 256GB SSD
  • 3Gb/s fabric-oriented networking to each node, LACP on the Cisco switch

So now the fun begins. Benchmarks are underway, followed by real workload simulation. I’ll update you diligently as I try to break what I just built.

How Microsoft can win in 2014, part 1

Love them or hate them, any fair observer of the tech industry has to admit that Microsoft -once the untouchable, intimidating, indomitable giant of tech- has stumbled badly in recent years. So before we ponder how they can win next year, let’s review 2013 and see where they succeeded and failed.

2013 was the nasty hangover from 2012

While 2012 was bad for Microsoft in the consumer space, I think 2013 was even worse. Debuting in 2012, Windows 8 flopped like no other Microsoft OS since Windows Me (yes, worse even than Vista). Consumers disliked it, enterprises shied away from 8 (though not Server 2012; I welcomed that with gusto), and even technology pros were confused by it.

2013, as a result, was spent repairing the damage done by the (lukewarm/disastrous, take your pick) reception to Windows 8. By the middle of the year, I was personally overjoyed to see MS correcting some of the flaws of 8 with Windows 8.1 Pro; 8.1 (and 2012 R2) felt like what Microsoft should have pushed out in 2012.

Windows Phone – in worse shape?

On the phone side of things, 2013 wasn’t much better for Microsoft until Q3 & Q4. I ended 2012 by ditching my Windows Phone 7 Series (or whatever it was called back then) HTC Trophy and embracing, once again, Android’s refreshed stack with a post-ICS Samsung Galaxy Note 2. I gave Microsoft a fair chance on WP7 yet it felt like they kicked me in the teeth: my Trophy would never run Windows Phone 8 (new NT kernel, apparently), so it felt, more than ever, like I was using a dead platform. I’ve seen that movie before with BeOS & WebOS and I didn’t want the heartbreak all over again. No thank you.

But that’s just me. How about WP8 and the market?

Windows Phone 8 & the acquisition of Nokia weren’t big winners in the first half of 2013 for those who stayed loyal or were attracted to the platform. Sure Nokia brought some real credibility & design chops to WP8; but you still couldn’t run Instagram or even Pandora (which runs on just about anything with a transistor) for much of the year. And as 2013 progressed, the Asian phone OEMs stopped caring about Windows Phone. Today, I think Nokia is the only Windows phone maker, which is a pretty bad outcome for the Windows Phone team. But at least they got Instagram, Pandora and a few other apps, though their bruising & public fight with Google has crippled the platform in other ways they haven’t recovered from yet. Sorry Microsoft, but YouTube owns the online video space; you best suck up to Google in 2014 if you want people to think about your phones again.

ARM : The continuing disaster of Windows RT

All the marketing muscle in the world can't hide the fact that 2013 was about washing the bad taste out of your mouth that Windows 8 & Windows RT have left behind

Microsoft’s 2012 strategy to attack the lower power/high battery-life ARM computing space yielded terrible results in 2013. One year ago, you could, if you were feeling daring, purchase a Windows RT device from Samsung, Asus, Microsoft, and a few other OEMs.

But today, as I type this?

There are only two manufacturers of Windows RT ARM devices. One is Microsoft itself (Surface 2), and the other is owned by Microsoft (Nokia).

In that same time-span, the Chrome Operating System has gained more OEM partners: Samsung (granted, they’ll build anything for anyone) has been joined by HP, Asus, LG, and, most surprising, Dell in building cheap, almost-disposable Chromebooks. That’s four heavyweight OEMs (one of which, Dell, built its entire empire on Windows) jumping on board the Chrome bandwagon and ditching Microsoft’s low-power ARM-based loser, probably for good.

This is why Bill Gates cried

I think Windows RT is as good as dead. If there is to be a product in which you do Windows computing on an ARM device, it’s likely going to run Windows Phone 8.1, which debuts next spring. And that upsets me, because I still see Best Buy guys pitching Surface 2 tablets to consumers who think they will run their Windows applications. Slinging a dead product that may not survive the winter? That’s bad karma, Microsoft!

Xbox One

I feel like Microsoft blew all the goodwill & positive feelings that it won shepherding the Xbox platform from long-shot to king of the consoles over the last ten years by mishandling the rollout of the One in 2013. It was a fumble so spectacular it almost calls to mind George W. Bush blowing the Clinton surplus in the space of a few years. Almost.

I’m not a console geek or gamer, but it’s hard to imagine Microsoft handling the Xbox One story any worse than it did. Couple that with the NSA story and it’s no surprise the One started at a disadvantage behind the PS4.

But it’s not dead; far from it. More on the One tomorrow.

Enterprise – some major wins

Finally, in the enterprise space, 2013 was mixed for Microsoft. As a virtualization admin, I couldn’t wait to be freed of Server 2008 R2 Hyper-V’s limitations; I upgraded my enterprise to 2012 Hyper-V 3.0 in Spring 2013 and immediately enjoyed the benefits.

While SMB 3.0 was released in 2012, I think it went mainstream in the enterprise in 2013, and by Q3/Q4, with further revisions courtesy of 2012 R2, it’s reached hero status. Not only is it Microsoft’s answer to NFS, it’s arguably superior to it. Indeed, by the latter half of the year, EMC (which owns VMware!) was calling SMB 3.0 the future of storage. Don’t call it CIFS anymore!


I think it’s fair to say that among Microsoft’s many product launches in the last 18 months, SMB 3.0 is the most underrated but game-changing product Redmond has pushed out. Yes, it’s that good.
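And for the skeptics, the whole pitch fits in a few lines of PowerShell. This is a rough sketch only; the folder, share name, and computer accounts below are made-up examples:

```powershell
# Carve out a folder and publish it as an SMB 3.0 share fit for
# Hyper-V or SQL storage (path, share, and accounts are hypothetical)
New-Item -Path "D:\VMStore" -ItemType Directory

# Grant the Hyper-V hosts' computer accounts full access to the share
New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
    -FullAccess "CORP\HV01$", "CORP\HV02$"

# From a client, confirm the negotiated dialect -- 3.0 or higher means
# multichannel, transparent failover, and the rest of the goodies
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```

Compare that to standing up an NFS export plus all the UID-mapping fun, and “arguably superior” starts to sound generous.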

And some meh

On the other hand: some enterprises may be benefiting from Microsoft’s new & rapid release cycle and the much-hyped virtuous feedback loop whereby Microsoft actually uses its products at scale in Azure, then rolls fixes, updates and other goodies downstream to its enterprise customers. But I’m not sure where these enterprises are. Most of the enhancements to System Center VMM 2012 R2, for instance, would definitely appeal to me if I were running a hosting firm and needed to house the same two /24 subnets, for two different tenants, on my Hyper-V farm.

But guess what? I’m not in the hosting space. And probably most Hyper-V admins aren’t either. Yes, I’m down with fabric management, private/hybrid cloud deployments, and bare-metal Hyper-V server provisioning, but is NVGRE all there is to the Microsoft software-defined networking story? Do I have to buy a virtual Cisco Nexus switch to play with this and prove it out? The networking guys are gaga over Microsoft’s NVGRE and Layer 3 tech, but I’m not seeing the vision. How does this help me kill the MPLS and save my business money? Why do I still need a VPN to Azure or O365?

Speaking of System Center, I am more confused now about it than ever before. 2013 saw the debut and further refinement of core Microsoft technologies: PowerShell 4.0 + Windows Management Framework 4.0. These are some kick-ass developments I’m eager to master. Coupled together, WMF & PowerShell 4.0 make Windows more Unix-like, more agile, and faster to deploy, if you believe no less an authority than Jeffrey Snover, father of PowerShell and Microsoft Technology Fellow (really great interview). Snover and his team released Desired State Configuration (a document-based, declarative template system for Windows & other devices/operating systems) in 2013.

I haven’t tested it much yet, but DSC feels to me like it could be a big winner in 2014. In fact, if you drink the DSC kool-aid, you look around and wonder why you’d want System Center at all anymore. The Puppet & Chef guys don’t need a huge stack of virtual servers, database engines, and whatnot to configure & run their stack from switch to server to PC; why should we Windows guys? Puppet just needs documents; DSC says it can deliver the same type of simplicity to those of us on the Microsoft stack, dare I say to those of us with the System Center blues (VMM excepted of course; I’m looking at SCOM & Config Manager primarily).
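To make the “documents, not servers” idea concrete, here’s a minimal DSC sketch; the configuration name, node, and paths are invented for illustration:

```powershell
# A declarative DSC document: describe the state you want, and the
# Local Configuration Manager makes it so (all names hypothetical)
Configuration WebBaseline
{
    Node "SERVER01"
    {
        # IIS should be installed...
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # ...and the content directory should exist
        File ContentDir
        {
            Ensure          = "Present"
            Type            = "Directory"
            DestinationPath = "C:\inetpub\wwwroot\app"
        }
    }
}

# Compile the document into a .mof and push it to the node
WebBaseline -OutputPath "C:\DSC"
Start-DscConfiguration -Path "C:\DSC" -Wait -Verbose
```

No management servers, no database engines; just a document and an agent, which is exactly the Puppet trick.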

ADFS – the New Religion

If there’s one thing Microsoft enterprises learned in 2013, it’s the centrality of Active Directory Federation Services in the new regime. Whether you want to go to Office 365, Azure, or you’re in bed with other cloud providers who are Microsoft-centric, you need to bone up on your ADFS right quick, son.

mmmm. You know you want some of this – squiggly lines and all- in your enterprise. Come get some.

I think ADFS’ elevation points to the importance of identity management in 2013, and especially going forward into 2014. With the NSA scandal and the extension of the cloud into the enterprise (or is it the enterprise into the cloud, sans firewall?), identity management & security are among the biggest challenges facing IT. Who’s going to offer the best solution? Federating my workplace credentials to my cloud services feels like a half-measure when what I really want to do is just slap my Active Directory domains onto the internet, but ADFS is getting some momentum behind it, so what do I know?
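The broad strokes of the Office 365 federation dance look something like this in PowerShell; it’s a sketch, the domain name is a placeholder, and a pile of Install-AdfsFarm certificate and service-account work is elided in the middle:

```powershell
# Stand up the ADFS role on Server 2012 R2
Install-WindowsFeature ADFS-Federation -IncludeManagementTools

# (Configure the farm, certificates, and service account here...)

# Then, with the MSOnline module, convert your Office 365 domain from
# standard (cloud-managed) auth to federated auth against your ADFS farm
Connect-MsolService
Convert-MsolDomainToFederated -DomainName "example.com"
```

After that, every sign-in for that domain gets redirected back to your ADFS farm, which is precisely why it becomes the centerpiece of the new regime.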

Tomorrow I’ll opine a bit about how Microsoft can win in 2014 no matter which CEO wins the sweepstakes. I think they stand better-than-even odds in your living room, the one place more hotly contested by the giants of tech than any other space.

The Lean Back Computing Manifesto of 2014

Some great discussion on This Week in Tech last Sunday. Essentially the panelists, including my man Fr. Robert Ballecer, the Digital Jesuit and host of the solid This Week in Enterprise Tech podcast, gave more than passing consideration to the challenges inherent in creating a cohesive and stupid-proof lean back computing experience: a way to consume the stuff you and your family want in the living room without getting hassled by technology.

Ahh yeah. This is some fertile territory. Lean back computing, as I like to think of it, touches everything in tech: law, consumer technology, enterprise technology, cloud stuff, mobile, storage, everything! This is what Jobs “cracked” before he died; this is where the promise of high technology, its amazing potential, the Holodeck if you will, dies a sad and wretched death inside a rat’s nest of copper cables piled and twisted up behind your IKEA entertainment center.

Your living room. Your stuff. Your family. The Holy Grail of tech.

As the TWiT crew pointed out, Google is rumored to release a NexusTV early next year, their third (fourth, I guess, if you count the Nexus Q) solid assault on the standards-less, walled-off, crutch-dependent technology fortress that is the living room. Amazon is supposedly building a Roku-knockoff as well, hoping you’ll pony up $100 or so to get what most smart TVs come with already. The Xbox One, what Nilay Patel has mockingly called the world’s greatest GoogleTV thanks to its HDMI pass-through feature, has sold 2 million units, and of course Apple is in the space as well.

And that’s before you get to the big network TV providers, not to mention the consumer TV makers, the wannabe disruptors (Aereo!) and the content makers.

"An IR Blaster! That's a great idea to solve this problem," said no one ever

It’s as if your living room and your family’s digital stuff is the prom queen, all dolled up, with sweet perfume, rouge-colored cheeks, a knock-’em-dead smile and a hot mother, while Google, Microsoft, Amazon, Apple and the TWC, Comcast and Rokus of the world are the high school starting QB, the captain of the basketball team, and the clutch swimmer on the 4×400 relay, and all of them are competing just to get into your stuff. The analogy goes even further: like teenage boys, they make stupid, bone-headed mistakes in an attempt to impress you, to get you to surrender. A GoogleTV here, an IR blaster there, a rented SciAtlanta cable box with CIFS access here…you know the drill. Just so many guys revving their IROC Camaros in the school parking lot, trying to impress you with the new shiny.

But it’s 2013 and we’ve been bitten many times by the shiny and we’re jaded now. Boys are all liars! Men suck and all that.

If you’re the household technologist, then you’re like me and you’ve been through some serious battles on this front and have thought a lot about it. And just as a virtuous prom queen on prom night can call the shots for her potential suitors, so too am I going to lay out the ground rules for the competition when it comes to winning the lean back computing space…for scoring on prom night as it were. This is my manifesto but you can use it too if you like.

What We Want: 

  • Single sign on & on-demand access to our on-prem media, our app-based subscription media (whether streamed live or stored in the cloud), and all other forms of content we legally are allowed access to from the couch
  • A comprehensible and consistent UI. Don’t ask me to jump in and out of different UIs, and don’t overlay a nice Xbox One or GoogleTV UI on top of a shitty Comcast DVR 8-bit color interface. Don’t piss on my leg and call it rain, in other words.
  • A f*(*#$ remote control that lasts. Sorry, Microsoft, but my mother-in-law (64, speaks little English, cranky and paranoid; more on her later) will never tell the Xbox that she wants to watch HGTV. My wife will never lean back with a wireless Logitech keyboard either. My mom’s brain short-circuits if she has anything other than a Tivo remote. Do you hear me? Give the people what they want: a goddamned old-fashioned normal clicker. The channel paradigm will not die; people still love to just lean back and ‘content-flip’ even today. The solution is not to hope such people die off, but to give them what they want.
  • Drop “HD” from everything. It’s not special anymore: there is no HD. There is only normal 1080p content and shitty, 20th-century 480i content. I mean, at some point we stopped talking about color TV, right? You know what else was cool? Super VGA. How often do you think of Super VGA these days? You’re not fooling anyone, Time Warner.
  • There is no TV or computer or tablet, there are only screens: Does it have pixels? Is it flat or slightly curved? Is it big and hung on the wall, medium and on a stand, or small and in my pocket? Does it emit light, have mass and require electricity? Is it matte plastic and warm, or cool and highly reflective? If yes to any of these, then I should be able to get to the content I want with no hassle or fuss on that screen. Just work, baby, to borrow from Jobs & Al Davis.
  • Fewer black boxes: My cell phone can do some amazing things. Take pictures. Record a video at 60 frames per second. Act as a flashlight. Show me my email. It can even talk to me and tell me where I’m at on the planet when I’m lost or confused. And guess what? It’s only a little bit taller than a deck of cards, and quite a bit thinner. It lasts all day on a battery and is discreet enough I can take it to the bathroom. It has no f$#$*( wires, which is still incredible to me. And as Louis CK said, it’s going to outer space. It’s an amazing and wondrous device. So don’t expect me to be impressed by the eight-pound metal box (whose volume is only 35% filled) and its four-pound power brick that you’re trying to get me to put under my TV. I’m not. You know what gets me excited? Simplicity and fewer wires.

And because I’m a nice guy, here’s a helpful chart for Big Tech/Media/Last Mile providers to chew on as they roll out their next bag of crap for us starting at CES 2014. Styled in an If This/Then That way, it’s designed to help Samsung or Google or Apple or Time Warner kill a lean back gadget while it’s still in its cradle, so that you and I won’t have to deal with it when it drops into the living room, causing near-riotous conditions because the family hates switching inputs/doesn’t understand that concept:



Is this really too much to ask? I’m just about 85% of the way towards realizing all these goals and avoiding all those pitfalls in the lists above, and I’m just an average-intelligence IT dork with a knack for finding open-box items at Best Buy. I’m almost there…single pane of glass, single remote, single TV & box, no drama! I’ve got Windows Media Center + CableCARD + DVR for live TV, and some goofy but earnest WMC plugins for Pandora, YouTube and such (no input switching, finally!!!). I’ve got the family media on an SMB share on my little NAS, which is indexed (rather poorly) by WMC, I’m putting together an OwnCloud instance for the mobile presentation of the same data, and I’ve got two mediums via which all this is moved to the endpoint device: good old Cat5e, or 802.11n & ac on 2.4GHz and 5GHz respectively.

So close I can taste it. An end to the TV/VCR crutch. Just a few pieces out of place.

If I can do it with my limited resources -whilst building a lab across the same hardware mind you- why can’t these titans put something together?


Winning the Dongle Sweepstakes

I’ve had the intense misfortune lately of being tasked with deploying some high-end engineering software for two groups of engineers.

Now as anyone who’s been in IT since the Clinton or early Bush years knows, with engineering software comes licenses. And with licenses comes activation or licensing dongles. Or at least it did yesteryear.

Dongle. A word comical by its very nature. An appendage, seemingly out of place, begging to be cut off and thrown away. As useless as an appendix or your tailbone, a vestigial organ in your IT Department, ready to burst at any moment, leaking toxins all over your nascent IT career.

Dongles. Yeah, you looked around and saw 2013, software defined networks, cloud, virtual SANs, IT freedom and business agility and then bam!

You get dongled. Out of nowhere. Getting dongled is like getting slapped upside the head with the rotting carcass of an inedible fish. We’re talking some serious old school, non agnostic-computing shit here people.

But I plugged the dongle in. Why won’t NT4 recognize it?

Yes back in the days when I was running several bare-metal CAD servers like Ideas M8 and, if I recall correctly, even Mathematica, the software manufacturers required serial dongle devices to hang off the back of the gigantic NT4 box. The dongle served two purposes for IT: 1) seeing it hanging off the back of a server was like a huge neon warning sign that constantly blinked GET THE F*(*@ AWAY FROM ME NOW, DON’T TOUCH! and 2) it was a physical manifestation of your intense pain in setting it up, worrying about it falling over, and fretting over whether your backups of that server would really work on different hardware.

Oy vey.

Nowadays things are a bit easier. In 2013, we at least have the option to license our engineering software via USB dongles or via FlexLM, the industry standard licensing manager for engineering programs. You still have to tie your server product to the hardware in some way (in most cases, the activation or license file is tied to IP or MAC address), but that’s easy in a virtual world where we’ve been freed from the tyranny of hard-coded MAC addresses.
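That freedom is literal: on Hyper-V you can pin the license server VM’s MAC address so the FlexLM license file (keyed to that MAC) survives migrations from host to host. A quick sketch, where the VM name and MAC are hypothetical examples:

```powershell
# Give the license server VM a static MAC so the hardware-keyed
# FlexLM license file keeps working wherever the VM lands
# (VM name and MAC address are made-up examples)
Stop-VM -Name "LIC01"
Set-VMNetworkAdapter -VMName "LIC01" -StaticMacAddress "00155D010203"
Start-VM -Name "LIC01"

# Confirm what the license manager will see
Get-VMNetworkAdapter -VMName "LIC01" | Select-Object VMName, MacAddress
```

Request your license file against that MAC once, and the tyranny is over.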

Anyway, long introduction to say that there is something even worse than engineering software dongles. You might even call it the Dongle of Dongles, or perhaps the Head Dongle in Charge.

What is this device, this Super Dongle, this slayer of project plans, this inflexible technology, this digital equivalent of an Islamic Fatwa?


CableCARD baby. Yeah you know the name. To talk of CableCARD, the dongle of dongles, you need to invoke some religion, to go biblical as it were. For, as it is written in 1 Samuel 18:7:

 The women sang as they played, and said, “Dongles have slain their thousands, but CableCARD its tens of thousands.”

Even the Hebrews writing thousands of years ago knew about the evils of CableCARD, the supposedly consumer-friendly, agnostic-computing-ish device whose purpose was to free you from having to rent a goddamned six year old black Scientific Atlanta Cable box (likely sticky and with stains on it for good measure) from your local government-backed cable monopoly just so you could have the privilege to pay for broadcast & cable TV with commercials.

CableCARD: the dongle that fools even sharp technologists by its simplicity. “Why, what’s so complicated? It looks like one of those old-school PCMCIA laptop cards. How hard could this possibly be?”

CableCARD: The pain it inflicts on those who try to deploy it at home echoes around the internet, haunting tech, TV, and internet forums alike with tales of horror, letdown, dystopia and depression, and few (precious few!) stories of the brave persevering through the fire and passing through the eye of the needle into freedom.

CableCARD: a device so nefarious, it turns normal non-geeks into Giant-Slayers, like this guy from some Tivo forum who inspired me as I was wrestling the beast:

It’s 4:52, Halloween, late afternoon. I’ve been on the phone with either one of two Time Warner phone support services 3 times; TiVo’s phone support service (twice); getting in my car and driving 10 miles for the distinct privilege of waiting in line for 20 minutes with the deadbeat and disgraced to pickup a new CableCard; 3 times with the CableCard activation service; all while searching and posting to the Tivo Community Forum… since 7:30 this morning. I am on a mission that will be my legacy. I’m single-handedly taking-on The 21st Century Corporate, Media Empire. That’s who I am and I will not be denied justice.

CableCARD: A device…no, scratch that…a way of life with a purpose and a prize at the end. Unfortunately for the agnostic-computing minded, the prize you get at the end of your epic struggle against the Man is re-installing an old scratched DVD of Windows 7 Home Premium with Media Center Edition and watching TV on that, plus an unplanned, confused, emergency period in which you buy an MCE remote off Amazon only to realize, sadly, that it can’t even turn the TV off because it’s actually a USB HID device and not a proper remote, and guess what, you’re now an IR expert in addition to everything else.

CableCARD: It’s just the size of a PCMCIA adapter and in contrast to the Scientific Atlanta box the woman at TWC keeps mistakenly inputting on your account (Yes, I’m quite sure I said CableCARD, for the 1000th time ma’am), it’s so tiny and it’s not going to mess with the feng shui of your living room or the TV you mounted meticulously on the wall over the course of an entire afternoon and the spousal unit will be quite happy that getting TV doesn’t necessarily mean getting big black boxes with lots of wires to place under the tv and oh it’s going to be great, really, haha, really it will, just hang tough, you’ll see.

And then you step back, you pause, you take a deep breath, and you look at what your hatred for renting one black box hath wrought:




and this:

Yes. I did it. I bought an open-box PC from Best Buy for $230, or about 9 months’ worth of black cable box rental

which you had to buy at the last minute because your plans to build a home lab meant you can’t run Hyper-V or vSphere or even goddamned Virtual PC on a computer that needs to be frisked, patted down, and have its butt cheeks spread by the DRM Police that CableCARD brings with him to every party, because he’s a f#$*(#$ kill-joy party-pooper

and, taking it all in, you break down and cry out to the universe, “Why?!? Why, lord, why is it like this, why does it have to be so complicated, why can’t someone regulate this shit and make it better?” and then you fall to the ground sobbing because though you’ve met and defeated CableCARD, you’re still trying to conceal black boxes and wires. Only now you’re doing it by adopting the habits of a junkie: hiding your shame and purchases inside used IKEA magazine containers, hoping no one will see them, and asking your local dealer for a deal on some used merchandise; it doesn’t have to be Grade A, a D- will do, you just need it now.

And here is what you get from your titanic battle with CableCARD:


Software Defined Drinking

SDD is probably pretty common in our line of work, but it’s almost never a good mix…late night chat with my British colleague after some maintenance work.

Me [10:38 PM]:
here you go
a bridge from your world to mine
Him [11:30 PM]:
sounds similar to direct attachedd storage into the cloud
Me [11:30 PM]:
yeah but slower than a usb drive lol
Him [11:31 PM]:
thre is a big awakening for storage
not sure where its going but somebody needs to pick a side
Me [11:32 PM]:
funny thing is if you abstract it enough, you start not thinking about where the session box is
and that’s ok. just have to get used to it
Him [11:33 PM]:
there is a major shift at the moment and nobody knows where its going
what would you choose
Me [11:34 PM]:
yeah that’s true
Him [11:34 PM]:
if everybody was going in differeent directions
Me [11:34 PM]:
we already got used to idea of virtualizing compute. and SAN. next is the biggest of all: software defined networking.
i was listening to a great interview of a guy
ccie or what have you, juniper, big network routing expert
he doesn’t even refer to Cisco or Juniper or alcatel anymore
physical switches, to him, are “the underlay”
he essentially writes software that breaks all the rules and makes networking as portable and movable as storage and compute
Him [11:36 PM]:
yeah the physical switch is now fabric and malible
Me [11:36 PM]:
and you’re right there’s a dozen different ways to get there
my little “Pertino” ipv6 program is a software defined network. and it’s amazing
vmm has it too
and vmware and a zillion others
Him [11:38 PM]:
the big question is where is this all going
nobidy knows
Me [11:38 PM]:
lol are you drunk. why do you keep asking that?
it’s going to the matrix man. I’ll play Neo, you be Trinity
Him [11:39 PM]:
i;ll be the agent
Me [11:39 PM]:
Him [11:39 PM]:
break down everything
Me [11:40 PM]:
wherever it’s going i want to go with it and not be flipping printers when i’m 40 or 50
because printers.
will. never.
be. virtualized. ever
Him [11:40 PM]:
yeah – we’ll be old school
its a brave new world where the complexity of servers and networks are gone
virtual everything and across platforms
Me [11:42 PM]:
you WILL need to know how to program or at least script. that’s what scares me
Him [11:43 PM]:
were in our early 30’s and already dinos
Me [11:43 PM]:
i know. goddamnit. when did that happen
Him [11:43 PM]:
fuck knows
somewhere between growing up and the world passing us by
it was about 5 min i thnk


And scene. He just faded away after that. I asked him if he was singing to me through Lync, and he asked if that would make me happy, and I said, hell yeah, let’s put your new Lync SIP trunk (that goes through my converged Hyper-V switch) to the test.

I think he’ll be in late tomorrow.