Software Defined Drinking

SDD is probably pretty common in our line of work, but it’s almost never a good mix. Here’s a late-night chat with my British colleague after some maintenance work:

Me [10:38 PM]:
hmmmm
here you go
a bridge from your world to mine
http://arstechnica.com/information-technology/2013/10/seagate-introduces-a-new-drive-interface-ethernet/
Him [11:30 PM]:
sounds similar to direct attachedd storage into the cloud
Me [11:30 PM]:
yeah but slower than a usb drive lol
Him [11:31 PM]:
thre is a big awakening for storage
not sure where its going but somebody needs to pick a side
Me [11:32 PM]:
haha
funny thing is if you abstract it enough, you start not thinking about where the session box is
and that’s ok. just have to get used to it
Him [11:33 PM]:
there is a major shift at the moment and nobody knows where its going
what would you choose
Me [11:34 PM]:
yeah that’s true
Him [11:34 PM]:
if everybody was going in differeent directions
Me [11:34 PM]:
we already got used to idea of virtualizing compute. and SAN. next is the biggest of all: software defined networking.
i was listening to a great interview of a guy
ccie or what have you, juniper, big network routing expert
he doesn’t even refer to Cisco or Juniper or alcatel anymore
physical switches, to him, are “the underlay”
he essentially writes software that breaks all the rules and makes networking as portable and movable as storage and compute
Him [11:36 PM]:
yeah the physical switch is now fabric and malible
Me [11:36 PM]:
yeah
and you’re right there’s a dozen different ways to get there
my little “Pertino” ipv6 program is a software defined network. and it’s amazing
vmm has it too
and vmware and a zillion others
Him [11:38 PM]:
the big question is where is this all going
nobidy knows
Me [11:38 PM]:
lol are you drunk. why do you keep asking that?
it’s going to the matrix man. I’ll play Neo, you be Trinity
Him [11:39 PM]:
i;ll be the agent
Me [11:39 PM]:
hahaha
Him [11:39 PM]:
break down everything
Me [11:40 PM]:
wherever it’s going i want to go with it and not be flipping printers when i’m 40 or 50
because printers.
will. never.
be. virtualized. ever
Him [11:40 PM]:
yeah – we’ll be old school
its a brave new world where the complexity of servers and networks are gone
virtual everything and across platforms
Me [11:42 PM]:
exactly
you WILL need to know how to program or at least script. that’s what scares me
Him [11:43 PM]:
were in our early 30’s and already dinos
Me [11:43 PM]:
i know. goddamnit. when did that happen
Him [11:43 PM]:
fuck knows
somewhere between growing up and the world passing us by
it was about 5 min i thnk

 

And scene. He just faded away after that. I asked him if he was singing to me through Lync, and he asked if that would make me happy, and I said, hell yeah, let’s put your new Lync SIP trunk (the one that goes through my converged Hyper-V switch) to the test.

I think he’ll be in late tomorrow.

The Home Lab blues

I just moved to a new house that offers me about 500 sq. ft. more than my old condo, has an attached two-car garage, four bedrooms, an attic in which to run cable, and a top TWC circuit speed of 50 down/5 up.

I know what you’re thinking. What a great place for a home lab! Glad we’re on the same page because I need your help.

What I want in a home lab

A home IT lab ought to enable you to, at a minimum: 1) recreate a smaller-scale version of your work environment, so that you can catch that bug during an Exchange 2013 upgrade, for instance, or prove a colleague wrong; 2) experiment with competing technologies; and 3) get familiar enough with processes, issues, and technologies that you can at least say you’ve touched them in a lab environment, if not in production.

Also, your home lab ought to be considered production inasmuch as you need to serve data reliably to your family and loved ones.

If you look at a home lab that way (production & dev & qa & simulation), you quickly decide you want an IT lab version of this:

1376167359_Superlab1

but have less money than it takes to build something sub-standard like this:

download

You, my friend, have the home lab blues.

What I have:

labcomponents
A motley crue of misfit tech that I want to build into a home IT lab

All that, plus I have this:

wifeang

a wife who won’t tolerate adventures in IT spending at home and wants her husband to get work to pay for it.

Pretty sad, isn’t it? I’ve got to simulate some old but truly enterprise-class hardware with this bunch of gear, much of it the result of ill-advised eBay purchases, over-valuation of my own abilities, and drunken experimenting.

Virtualization to the Rescue? 

Perhaps. It depends on what you want to simulate. If you work in a shop that uses vSphere and your whole workload is Windows-based, you’re at an advantage over other solutions, because vSphere, to my knowledge, supports nested hypervisors. So you can build an entire Active Directory domain on top of two virtual machines that are themselves running Hyper-V (or VMware or XenServer, right?). Then you can build a virtual iSCSI or FC cluster, Exchange, anything you like, right within your single PC, no switching necessary. The only thing I’m having trouble figuring out is the storage piece (I’m just not that up to date on VMware these days, I’m afraid), but just about every cheap NAS out there (or FreeNAS) can do iSCSI or NFS shares, so you should be set.

Of course that’s great for the VMware crowd. What if you’re one of the poor slobs whose entire enterprise runs on Hyper-V (last I checked, we’re at 14% of the market)?

As best I can tell, to do Hyper-V + the Microsoft stack in your home lab, you need to scale your lab hardware up into something between the super meth lab at top and the basement-dwelling, mobile-homed smelly meth lab at bottom.

That’s because Hyper-V does not allow for nested hypervisors, or at least not the one you’re interested in as a Hyper-V engineer (that would be Hyper-V).

All this and your TechNet access expires by the end of the year! Damnit!

Reusing what I Have

Because the Agnostic Computer Lab is allergic to spending (except in the cases noted above), I’ve got to re-use or re-purpose my so-called stack for a Hyper-V lab.

And this is where the Home Lab Blues come in. You’re a creative guy, willing to break things and experiment, but after several days of mulling it over, you realize you can’t build a real Hyper-V lab at home with your crap that sufficiently simulates your work environment:

  • Compute: The Lenovo ThinkCentre is fine; in fact, you’re running client Hyper-V on it now. It’s adequate to run several VMs plus your home workload
  • Network: The Netgear R7000 is so new it doesn’t have DD-WRT (aka real router software) yet, or at least not a version you would trust
  • Network, Compute, or Storage: The Raspberry Pi is a shoo-in for one of two roles: 1) gateway device to replace the R7000, which can’t do much, with DNS, DHCP, DNS caching, and routing all built in to one of those boutique RPi packages, or 2) FreeNAS + USB 3.0 drive = iSCSI or NFS target. Sadly, FreeNAS doesn’t do SMB 3.0 yet (indeed, they still call it CIFS, a violation of the rules!), so experimenting with that kickass storage spec (EMC says it’s the future of storage protocols, naturally) in your home lab is probably out of the picture unless you attach the drive to your Lenovo. Plus, the RPi only has USB 2.0 ports
  • Compute: I’d love to re-purpose the Google Chromebox from Google I/O into a compute engine. A Core i5 Hyper-V box mated to my Lenovo would be more than enough for my purposes; all I’d have to do is buy a bit of RAM and use the USB drive as storage. Sadly, the Chromebox had its virtualization bits mistakenly turned off when it was built, and to get the standard Intel virtualization switch turned back on, you have to hack the damned BIOS. There are instructions, but I’m not feeling confident after reading over them for a week, and I can’t find many people online who have successfully re-purposed the Google I/O “Stumpy” Chromebox into anything other than a KVM hypervisor on RHEL.
  • Compute or Storage: The ARM-powered Chromebook is just not suited for x86 virtualization in any way that I can think of, save, potentially, as a storage host. Of course I’ve installed ChrUbuntu and Chronos and even ARM builds of some other Linux flavors, but aside from NFS shares, which I can’t really use in Hyper-V, what good would this device be?
  • Network: At least my 8-port GigE switch from Netgear is somewhat suitable for my home lab exercise. It can do LACP port channels (useful for Hyper-V hosts specifically), 802.1Q VLANs (very useful), and a couple of other great features for a small, < $100 switch.
  • Other laptops: The Frankenstein Windows 7 Thin Client laptop has no use in a virtualization lab, nor do the other junk laptops I have lying around: a Gateway LT303U with an AMD CPU that my mother-in-law is using, a Dell Latitude D610 with an ancient Intel Pentium, and a 2012 Asus laptop with an Intel Pentium M, which I was excited about until it turned out Intel turns off virtualization on their cheap-ass processors.
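Before spending anything, it’s worth verifying which of these boxes can even do hardware virtualization. Here’s a rough Python sketch (my own illustration, not part of any of the tools above) that checks a Linux /proc/cpuinfo dump for the Intel VT-x (vmx) or AMD-V (svm) flags. Boot a live Linux USB on the machine in question and feed it the file; note the flag can be absent even on capable silicon if, as with the Stumpy Chromebox, it’s been switched off in the BIOS.

```python
def has_virt_flags(cpuinfo_text: str) -> bool:
    """Return True if a /proc/cpuinfo dump advertises Intel VT-x (vmx)
    or AMD-V (svm). A missing flag can mean no support, or support
    disabled in firmware, as with the Google I/O Chromebox."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# On a real Linux box: has_virt_flags(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx sse sse2"
print(has_virt_flags(sample))  # True: the sample advertises vmx
```

Nothing fancy, but it beats hauling each laptop through a full hypervisor install just to find out the Pentium M says no.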

So yeah, I’m out of luck, primarily on the compute side. I have one computer capable of running Hyper-V. I could throw vSphere on it, I guess, losing my only capable desktop PC but gaining the ability to emulate a real datacenter. But what’s the wife going to say when she sits down at the Lenovo and vSphere comes up?

I got me some storage I can use, but nothing approaching the compute power required to test out 2012 R2’s nifty new Scale-Out File Server role (you can’t co-locate SOFS + Hyper-V, and to use SOFS you need expensive SAS storage). I got me loads of compute in my Chromebox, but I can’t re-purpose it without learning microcontroller programming, a truly dark art even I’m not interested in. I got me a nice switch that does, via web GUI, everything the work one does, but only one thing to plug into it that would take advantage of it (the Lenovo).

Spend

Sucks to admit it, but I think I’ve got to spend. But on what? I want a small-footprint but capable PC running at least a Core i3 or i5 that can support up to 32GB of RAM, to make sure I can continue to use it in a few years (my current Lenovo box tops out at 16GB).

I’m thinking a Mac Mini (an apropos choice for the Agnostic Computing lab), a Gigabyte BRIX, or a custom PC inside a Shuttle case (which offers dual GigE built in), and I have a total budget of about $700.

Any thoughts?

 

Look ma, no MPLS!

One of the big-dollar technology items organizations like mine will likely look to kill in the next few years is MPLS networks, private lines, point-to-point T1s, T3s: you know, the 1990s-2000s way corporations securely connected HQ and branch offices. I’ve worked on such networks for all of my career, from being nervous around the dusty old Cisco router with a T1 WIC card at my first post-college job to being part of a team that deployed 100MegE, 10MegE, and T1s to branch offices in dozens of spots around the world.

killmpls

For all the hype about the “Cloud,” this is one area that doesn’t get a lot of attention. And it should. Because in many cases, emerging and established technologies could lead the way to saving thousands, tens of thousands or even hundreds of thousands of dollars per month.

Take a look at your IT spend. I bet leasing private lines over commercial carriers is a big part of it, and potentially a huge part if you use a managed MPLS service. In some cases, it might even cost as much as one or two FTEs! Certainly the business would be happy to get some of that spend back if it were possible to merge the security, privacy, and SLA-backed service of a leased line with the rapid time-to-deploy, ubiquity, and ease of provisioning of a standard internet circuit or two at a remote office.

This is the model you grew to love and hate over the last 10-15 years if you cut your teeth in corporate IT with Microsoft. Providing software for this topology that was redundant and survivable was Microsoft’s bread and butter during the late Gates era and much of the Ballmer era.

adwan
A typical Active Directory instance spread over a WAN using ipv4, private lines, firewalls, NAT, and routers. A focus on keeping the Internet out, the duality of LAN vs. WAN, NAT rules and DMZs. All the classics are here. If you were lucky, in the early days before people really understood QoS, you got to experience the joy of backhauling Internet from Site B to Site A and the resulting crush on business traffic

Models like this had their problems: expensive, prone to failure, and slow in the days before Ethernet circuits. You had to buy a bunch of equipment and outfit each site too, which meant more licenses. But this model could scale relatively well, at least for SMEs.

And while the architecture above looks positively archaic if you’ve got your head buried deep in SDN and such, it’s still in use in a lot of SMEs around the world. I’d even go further and say 9 out of 10 enterprises still think of network architecture in the context of Inside vs Outside. And who can blame them? At least you can control what’s inside your network, and it’s useful to think of it in that context.

But cloud providers from Amazon to Google to Azure have failed to abstract this model to the cloud, or to build a hybrid model that offsets its shortcomings. Oh sure, you could move your domain to Google Apps today and be done with it, but you’ve got a bunch of IT generalists & employees who are aces on Microsoft products. And you like the control and manageability of AD.

All you want to do is kill your expensive monthly leased circuits and effectively put your AD on the internet with proper security & robust A/B internet links, or hire Azure to do that for you. But you’re out of luck because believe it or not, this is how you go from on-prem AD to something else with Azure, ipv6, and all the new shiny stuff we’ve been talking about for the last few years:

Azure_Network-610x319

You see that? This graphic, ripped off from Azure somewhere, shows how you move your enterprise to the cloud. You tack on another f*(#$#$ VPN device and federate against Azure! And your remote workers? They VPN into Azure or come in via Remote Access! Hurray, our problems are solved! Why didn’t I think of adding another point-to-point VPN device?

O365 with Azure offers much the same:

Azure-IaaS-Active-Directory

Not one, but two clouds to federate against now! What’s not shown in this topology is that your end users aren’t sitting in an Azure cloud as in the diagram; they’re on-prem, behind your old ipv4 firewall & router, fat, dumb, and happy to be “at work” where their “work stuff” is located. And you’re in your office, jamming through TechNet links on provisioning, assigning, and deploying certs correctly, tearing your hair out.

Is this the best Azure and all the rest can do? Can’t the Cloud guys figure out a way for me to have my cake and eat it too: to move my Active Directory instance to a cloud provider, kill my premium, high-cost, inflexible, slow-to-deploy leased-circuit inventory, end the LAN/WAN duality that haunts us all, and save me from buying server iron for offices with only a handful of people?

So far I don’t think Azure is compelling enough, and for the reason above alone. Cheap storage? Sure. Scalable compute? Take my credit card! But while the spillover effect from MS’ experience running Azure is evident in 2012 R2, it’s all one way. Microsoft is learning a bunch about how to run multi-tenant data facilities that ends up in my hands, but its knowledge of plain-vanilla Active Directory on a WAN isn’t being reproduced in a compelling way in Azure.

End result: Keep my expensive leased lines. What a fail.

That’s why I’m excited and optimistic about network startups like Pertino. Pertino offers a brain-dead simple ipv6 service that traverses consumer or enterprise NATs, connects computers over an ipv6 network, and even allows you to run Active Directory over it. Genius!

They’re a startup, yes, and they require a piece of software on the PC, which skeptics would point out is no different from a VPN client (they’re right), and I don’t think this particular product could scale far and wide yet. But it works. You can run AD and get to domain resources from a remote device on the internet. No DirectAccess needed, no VPN devices, no routers, no goddamn certs, no worrying about subject alternative names, and no waiting on some provider to stand up a VPN between my house and a server in Virginia.

If you’re an IT Generalist, the potential is this: It’s the Active Directory you know and love. On the fucking internet. Right now.

Last night I stood up a demo of 2012 R2 on my client Hyper-V box at home, built a domain behind my Netgear wifi router, then built another Windows box on AWS somewhere in Virginia, installed the Pertino client on both of them, and bam! Just like that, for free, I had two domain controllers pinging, authenticating, and routing over ipv6, no leased lines necessary. It just worked.
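For the curious, here’s a tiny sanity check I could have scripted along the way (my own illustration, nothing to do with Pertino’s actual tooling) to confirm that what DNS hands back for a host really is an ipv6 address, using only the Python standard library:

```python
import ipaddress
import socket

def is_ipv6(addr: str) -> bool:
    """True if the string parses as an IPv6 address."""
    return isinstance(ipaddress.ip_address(addr), ipaddress.IPv6Address)

def resolved_families(host: str) -> set:
    """Resolve a hostname (say, your remote DC's name) and report which
    address families DNS hands back. Requires network access."""
    return {"ipv6" if is_ipv6(info[4][0]) else "ipv4"
            for info in socket.getaddrinfo(host, None)}

print(is_ipv6("2001:db8::1"))   # an ipv6 literal -> True
print(is_ipv6("192.0.2.10"))    # plain ipv4 -> False
```

Point `resolved_families()` at your DC’s name from the far side of the NAT; if the set comes back as `{"ipv6"}`, your domain traffic has no ipv4 path to fall back on.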

I’m not a networking guy (to the extent that any virtualization engineer is not a networking guy), so I don’t know exactly how it worked; I couldn’t tell you if 6to4 was used or pure ipv6. All I can tell you is that I have an Active Directory instance on the internet with just a small client application.

pertino_ad

If I can figure out how to engineer this with existing stuff, or if Pertino can scale and really build this technology out, I could eventually kill my leased lines. Game change.

Going full Marxist up in the datacenter

Suppose you are:

  • An IT engineer at a small-to-medium enterprise, responsible for the company’s enterprise stack, from Cisco to storage to compute
  • Suppose further that in your budget for next year is a modest five-figure sum allocated to you to upgrade and care for the datacenter stack that the entire SME runs on 24/7
  • Finally suppose that in the absence of strategic direction from the business, you have the inordinate and unusual power of determining how to use the money to modernize your stack.

Further, suppose you actually do believe your stack needs some TLC, as it’s a faithful but aging unit. What’s more, you feel you owe it to her; she’s been there giving it her all every time Captain Kirk called on you to jump her to warp. True, she doesn’t run quite as fast as she used to, but she’s reliable and keeps the lights on for you. And though you’d never admit this in her presence, you can hardly trade her in for the hottest, latest, slimmest stack for the five figures you’ve been given.

So, pop quiz, what do you do hotshot? What do you do?

If you’re like me, you go all Marxist on the problem and borrow from your policy/economics courses in grad school and central plan the ever-loving shit out of the next phase of your virtualization stack.

I invoke Marx & the idea of central planning because I’m really convinced that planning out a virtualization stack is a lot like being an old school, Cold War era Party Secretary in some dreary eastern European capital, slogging out your miserable life in the Central Economic Planning Bureau, trying to decide if this year’s harvest should go to socks or hand grenades.

Or as Wikipedia puts it:

Different forms of economic planning have been featured in various models of socialism. These range from decentralized-planning systems, which are based on collective-decision making and disaggregated information, to centralized-systems of planning conducted by technical experts who use aggregated information to formulate plans of production. In a fully developed socialist economy, engineers and technical specialists, overseen or appointed in a democratic manner, would coordinate the economy in terms of physical units without any need or use for financial-based calculation.

That last sentence describes the modern virtualization engineer almost to a T, does it not? We aren’t just technicians but technocrats: balancing inputs, measuring outputs, carefully calibrating an entire highly complex system (perhaps not rivaling an economy, but surely it’s up there) with imperfect but useful aggregated information (the business’ strategy, workflow, the calendar, our own instruments & measurements) against the backdrop of hard supply constraints and sometimes outrageous, unpredictable demand. That’s somehow more than what an engineer does, is it not?

And so from your technocrat’s seat, how do you keep the good times rolling yet make sensible upgrades when funding becomes available? Where do you put your spend when no one’s telling you how to spend?

Don your central planner’s hat and forget the old virtualization rule book, because you need to think like an economist: their toolset offers the best utility in planning your virtualization spend.

De-Abstract and Assign Values

A modern, fully-abstracted datacenter is still made up of just a few constituent elements at its core, and I maintain you can assign values to those elements and see which upgrade path makes the most sense. For my situation, it came down to storage or compute, with network a distant but potentially disruptive and game-changing third.

So you simply take Mr. Pareto’s amazingly useful technique and plot units of storage vs. units of compute (I know, I know, how dare I generalize the CPU side like this, but bear with me!) just like the classic guns-vs-butter charts:

datacenter

Notice that I’ve generalized these resources even though there’s a vast array of different storage technologies, speeds, cores per CPU, and such. That’s all fine; the Pareto exercise requires you at some point to de-abstract each item you’re deciding between, so that you can compare them and find the most efficient mix. From your lofty seat in the Central Planning Bureau of your IT department, you’re still engineering against resource depletion, but at a different scale and from a different perspective than when you’re loading up CPU Ready or watching context switching in perfmon.
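To make the guns-vs-butter idea concrete, here’s a toy Python sketch of the frontier itself. The unit prices are invented for illustration; the mechanics are the point. Every mix that exhausts the budget sits on the Pareto frontier of this toy model: you can’t add a compute node without giving up storage.

```python
def frontier(budget, storage_price, compute_price):
    """Enumerate (storage_units, compute_units) mixes that spend the
    whole budget: the Pareto-efficient frontier of this toy model."""
    mixes = []
    compute_units = 0
    while compute_units * compute_price <= budget:
        leftover = budget - compute_units * compute_price
        mixes.append((leftover / storage_price, compute_units))
        compute_units += 1
    return mixes

# Made-up numbers: $50k budget, $2k per TB of storage, $5k per compute node
for storage_tb, nodes in frontier(50, 2, 5):
    print(f"{storage_tb:5.1f} TB storage, {nodes} compute nodes")
```

Real pricing is lumpier than this (you buy shelves and chassis, not fractional TBs), but even the cartoon version forces the useful question: which point on that line does your workload actually want?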

Notice too that I went a little beyond Pareto’s example by including the blue “outliers” and the yellow “Game Changer/Value Multiplier.”

Outliers, in this scenario, are the shiny new hotness. You know: the Nutanixes of the world (not that I have anything against them, but they are shiny, new, and hot), the million+ IOPS solid-state PCIe card that’s super expensive but promises to make your database as fast to read and write as DDR3 RAM itself. These outliers are the exotic playboy bunnies of the virtualization world: neat to read about, and you’d definitely like to get your hands all over one and benchmark it again and again, but you’re just a Central Planning virtualization nerd stuck in a cramped office trying to get the job done. Come back down to earth, big fellow.

The Game Changer/Value Multiplier, however, is another story. This is a potential element in your datacenter with such amazing potential that it threatens to tear up the Pareto efficiency rule-set altogether and force you to write a new one. For something to be a value multiplier in my datacenter today, it’d have to be as significant as server virtualization was in a datacenter of ten years ago. What could that possibly be at this point?

In my case, I know vendors will try to convince me that their specialty niche product is that yellow game-change button on my chart. But I’ve already determined, to an extent, what that game-change element would be by putting the various elements into a cheesy but effective “value pyramid” that rips off the celebrated, and very appropriate for this post, MoSCoW method:

Image 60

For your bread-and-butter virtual stack, stuck on 2010-era hardware that, while still fast, can’t take advantage of some of the new stuff in Hyper-V, I reckon this pyramid is pretty accurate and perhaps useful.

The pyramid shows that what I need most is storage, but plain old iSCSI storage is also of the least value to me, as it doesn’t enable anything new; it just throws TBs at an old stack. No, sorry, NetApp, I don’t want the one with the bigger GBs.

Much more interesting to me, and probably to a lot of IT engineers out there, is what happens as you go up the list. SMB3 offers near game-change levels of disruption, but I’ve already got it in Windows Server 2012; what I don’t have is the space or compute to use it with, to build out a Windows Server 2012 R2 Scale-Out File Server SAN killer (not that I’d run production on that…yet) or, at the least, do real shared-nothing live migrations.

Give me more storage and compute, and suddenly we’re in serious, high-value territory, which is as close as a Central Planning technocrat comes to unadulterated, mathematically pure joy.
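One way to keep vendors honest against the pyramid is to score their proposals MoSCoW-style. The tiers, weights, and line items below are entirely made up for illustration, but the mechanics are the point: a bigger pile of must-have TBs can still lose to a mix that lights up the higher tiers.

```python
# Hypothetical weights: the higher up the pyramid a feature sits,
# the more it counts. "wont" = won't-have this round.
WEIGHTS = {"must": 4, "should": 3, "could": 2, "wont": 0}

def score(proposal):
    """proposal: list of (feature, moscow_tier) pairs a vendor delivers."""
    return sum(WEIGHTS[tier] for _feature, tier in proposal)

plain_iscsi = [("raw iSCSI TBs", "must")]
sofs_mix = [("raw TBs", "must"),
            ("SMB3 / Scale-Out File Server", "should"),
            ("shared-nothing live migration", "could")]

print(score(plain_iscsi), score(sofs_mix))  # 4 9
```

Pick your own tiers and weights; the spreadsheet version of this is five minutes of work and surprisingly clarifying when the NetApp rep shows up with the one with the bigger GBs.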

I’ve already got software-defined networking in my System Center suite and I’m using elements of it, but at this heady level, to really use it well, to start thinking about geo-independence, ingress and egress to Azure, or VDI, perhaps I need to start thinking about replacing my 6509e switch, or “underlay,” as the fancypants network virtualization guys call it now. Or at least I may need to get some new blades. Or maybe not…I’m not sure. Part of the exercise is to put a value on features and find out what you don’t know.

At the very tip of the pyramid, our mythical vendor would supply every element from top to bottom, scaling back capacity the further up the pyramid he goes to keep costs within your five-figure range.

The top of the pyramid, a sum of all the parts below, represents a true game-change scenario, one in which the old Pareto efficiency rules get torn up and you have the fun task of thinking up a new ruleset.

One last tool/visualization crutch I’ll leave you with, if you’re in a similar situation, is this: chart the rise in capacity, speed, or feature-set over time against your company’s own business cycle, then try to map out new technologies that could disrupt the whole equation, getting you and your business to your destination more quickly and for less money, but at more risk.

What do you aim for? How do you prioritize? That awesome new disruptive gamechanging technology could leapfrog you past ipv6 implementation hurdles and beyond 10GbE, but how do you hit it? Do you even bother aiming?

Image 59

I’ll know in a few weeks whether my approach to upgrading my Hyper-V farm succeeds and yields the kind of solution I’m aiming for. In the meantime, I hope you found some utility in reading about Pareto and Marx on a tech blog.

A Chromebook defiled

So I was one of the lucky ones (68,000+ according to Wikipedia) to get one of the original prototype Chromebooks from Google, the legendary, all black, totally murdered-out CR-48 Chromebook.

I had forgotten that I’d even signed up for it when it showed up on my doorstep several weeks later, about this time three years ago.

One look. One click. One foray into the browser-as-an-OS concept, and I was smitten. I resolved then and there to hold the CR-48 near and dear to my heart, to keep it forever and treasure it as another item in my huge junk heap of outdated computers, er, nascent computer museum.

Of course, the CR-48 wasn’t much to write home about. This was no Model 100 or Apple Macintosh. No, this was more like a Lisa, an Apple Mac Cube, or Windows Me: nice to look at, neat concept, but once you turned it on, it kind of sucked. It was slow, and back in 2010-2011, ChromeOS was truly just a browser. There were no desktop “apps,” NaCl hadn’t been implemented yet, and this thing ran on a single-core Atom, a CPU architecture so slow that you had time to curse it, and every Intel exec responsible for fumbling the mobile revolution so badly (by name!), all while waiting for the wimpy Atom to render a single website.

A neat novelty laptop, and I’m glad I didn’t pay a dime for it, but really, I couldn’t do work on this thing, as my colleagues relentlessly teased. So after several months of non-use, I cleaned it up, looked up instructions on how to wipe and format it, and prepped it for sale on eBay.

Alas I’m a man of conscience. Google gave this laptop to me for free. How could I go and turn a profit on it? What kind of Google fanboi would I be if I did that?!? A pretty shitty one, I reflected.

So I didn’t sell it. I couldn’t. And so back into the box it went until this weekend, when an acute need for an extra laptop arose in my house after family members took some of my old ragged and spare netbooks.

“What’s that in the colorful cardboard box,” I thought when I came across the CR-48’s original packaging. “No. It couldn’t be!”

And yes, there it was, just as black and menacing and monolithic as the day I got it: the CR-48. Still looking good, three MacBook Air cycles later.

My other Chromeboxen, all of which have been used and abused in multiple ways.

I’m an experienced Chromehead, owning not one or two but three (possibly four, if you count the Chromecast) Chrome devices, including a Google I/O 2012 edition Chromebox and the ubiquitous, best-selling Series 3 ArmBook. So I didn’t really need this CR-48, but I couldn’t sell it either…what to do, what to do.

And so naturally, since I’m the sort who would really enjoy the irony, I resolved then and there to build myself a Windows laptop, and not just any Windows laptop, but a Windows 7 Thin Client laptop on my free Chromebook CR-48.

That’s right: Win 7 Thin PC. Not just the familiar closed-source, proprietary operating system of yesteryear, but the thin-client version, the version pined after by PC efficiency nuts & custom system builders for so long, the version that isn’t available to the public: even more closed, locked away, and protected than Windows 7 itself.

Let me just repeat that and let it sink in: this is an exclusive, hard-to-get version of Microsoft’s last truly successful operating system, and it weighs in at just a hair over 2.5GB installed!

The 16GB SSD has slower random r/w than my first 512MB USB stick. Not even worth an upgrade, really. And I’m a sucker for hard disk upgrades.

Truly, putting this OS on this type of laptop would be a tech sin of the worst order. And so, naturally, I dived into the process posthaste: tearing the CR-48 apart, placing masking tape over the BIOS safety switch (no electrical tape in my house and no time to get some!), losing a few screws in the process before finally booting the black beast up, sliding in the USB drive with the Win7TPC .iso, and formally sticking some Redmond code way up where it didn’t belong.

Felt good. For a while, anyway. A sort of tech high, a mega byte of temporary euphoria. Yum.

Of course, even Win7 TPC can’t make up for the horribleness that is the Atom. You can’t really term what the CR-48 does “performance”; it’s more like the opposite of performance. Perhaps “level of degradation off baseline” is more accurate. “Is it any faster?” is replaced with, “How much, and to what extent, is it slower?”, even with a thin-PC operating system.

And what’s even worse about this experience: I skipped the section on remapping the ChromeOS keyboard for Windows and sat for a good five minutes trying to figure out how to do CTRL-ALT-DEL after joining the machine to my domain. Thank god for the on-screen keyboard.

So that was the highlight of my weekend. I defiled my free Chromebook with an OS straight out of Linus’ dystopian hell-scape, experienced the thrill of doing something so naughty followed by the inevitable disappointment & headaches such experimentation is bound to yield.

Booting to Sharepoint…coming soon from Microsoft

I have seen the future, and for all the ChromeOS haters & MS fan boys out there, it’s a truly frightening one.

To be fair, it was actually my boss’s suggestion, but he’s since backed off his vision, and I’ve been developing it in my mad, mad mind.

I think I’ve figured out Microsoft’s (evolving) strategy in the consumer & enterprise desktop space. Let’s face it: Windows 8 has been a flop, perhaps not to the level Vista was, but really, it’s just not that great an operating system. It’s not very intuitive as a desktop OS, and it’s only a little bit better as a tablet. It’s downright awful if you’re a virtualization admin, like I am, trying to hover your little mouse cursor in the bottom right corner to get to the bloody start screen.

This was taken on a Mac, but it could have been a ChromeBox or Linux machine or a damned Cell phone with an HTML 5 browser. Point is I was editing rich Excel docs in Sharepoint, entirely divorced from the underlying OS. Keep it going MS.

Now 8.1 Enterprise, whose beta I’m running now, is a huge improvement, but where’s all this going? Is the hybrid tablet/desktop paradigm that Microsoft established last year really what they’re committed to for the next decade and beyond?

I seriously doubt it. Or, I should say, I think they’re going to borrow from ChromeOS and integrate Sharepoint right into the desktop.

Yeah, crazy right? But think about it. Here’s what Sharepoint 2013 -and it truly is a revolutionary product- offers Microsoft in the way of a desktop operating system:

  • Abstraction of the file/folder system, the paradigm Steve Jobs wanted so badly to kill off before he died. No longer would we have files & folders to worry about; everything will be contained, indexed, and walled off within Sharepoint sites. Your Skydrive is already like this, but I’m saying they’ll extend that and kill off C:\Users\My Docs and all the other shit we’ve had to deal with since Windows 95.
  • Individual & group sharing of documents, resources, and files by users themselves rather than heavy-handed, antique IT admins carefully crafting NTFS folder permissions and applying them to old-world style AD Groups. This will be fantastic and will kill DropBox creep in my enterprise, I hope. Put the onus on the users to secure & share their documents, with approval checkpoints & workflows in the loop, and you’ve effectively provided a good alternative to old-fashioned NTFS structures.
  • A touchable, app-friendly (yeah, you can buy Sharepoint apps now too), HTML 5-flavored, AJAX-friendly UI and operating environment that finally, truly works on all browsers
  • Office Web Apps 2013: a truly kick-ass suite that plugs right into Sharepoint 2013, Lync 2013, and Exchange 2013. Users today can open/edit/save/send files within OWA without downloading/editing/saving/reattaching documents. What’s the next generation going to look like? Just think about that for a second. Attaching files, as a concept, is or will soon be on the tech endangered species list as Office Web Apps + Sharepoint becomes the primary computing interface for many standard office workers
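A taste of that file/folder abstraction, for the curious: in Sharepoint 2013, a document library is just an HTTP endpoint, not a drive letter. A minimal sketch of building the REST URL a client would GET (the site URL and library title here are made up):

```python
def library_items_url(site_url, library_title):
    """Build the Sharepoint 2013 REST endpoint for a document library's items."""
    return "%s/_api/web/lists/getbytitle('%s')/items" % (
        site_url.rstrip("/"), library_title)

# A client GETs this URL with "Accept: application/json;odata=verbose"
# and whatever auth the farm uses (NTLM, forms, claims...).
print(library_items_url("https://intranet.example.com/sites/dev", "Documents"))
```

Every file, list, and site is addressable this way, which is exactly what you’d want if the browser were the whole operating environment.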

Put it all together, and what do you have? A compelling web-based operating system.

Now I haven’t built out Exchange 2013 for my enterprise yet, but it’s the last piece of the puzzle. Once I do, what reason do I have to keep giving Windows desktops to standard, run-of-the-mill users, folks who get by daily on Outlook, Excel, and a bit of IE and don’t need the full functionality of a fat Office client? The only reason to even bother with a desktop OS at that point -as far as I can tell- is that Microsoft isn’t building a fully web-capable Lync 2013 client. But they are bundling Skype into 8.1, so who the hell knows?

So yeah, picture that as your future and you begin to see where MS might be headed. Maybe 8.1 won’t be there, but mark my words. Windows 9 will not feature a desktop OS with a task bar, system tray, and start button -at least in the Pro or consumer versions- it will feature the “Metro” UI, with tiles populated by a consumer’s Windows account (as it is now) and/or seamlessly populated with content provided by their enterprise Sharepoint infrastructure.

It’s a new twist on MS’ ancient idea of an “Active Desktop,” but it’s actually quite compelling. Earlier this year, I filled out a corporate expense report on our Sharepoint 2013 dev environment entirely from my $250 ARM Chromebook. Microsoft is finally getting it, after decades of being stubborn.