As Southern California is the center of the universe as far as I’m concerned, I know you’re all worried sick about me, this website, and other Southern Californians as we endure a frightening precipitation event of some kind and scale. The Live MegaDoppler 7000 StormSageRadar XXTreme v 2.0 Beta can’t tell us yet if the great California Dampening of 2014 is Noahic in nature or a $deity-punishment ripped straight from the Book of Revelation, but one thing is for certain: the water is everywhere.
Yes, it’s so wet out there that even the mighty Los Angeles River is flowing once again. It may even be navigable.*
But fret not! Let me calm your nerves. Your blogger Jeff, @agnostic_node1 on the twitters, is ok. So is the Child Partition and the Supervisor Module spouse. We’re all safe, our flood pants all fit and we’ve got buckets and bags of some sort of pre-silicon material at the ready.
The Converged Fabric Agile DevOps ITIL Waterfall Software-Defined Lab @ Home, however, has me worried.
See I built it in the garage. The Supervisor Module would never permit such equipment inside the living spaces.
Not only is it in the garage, but it’s close to the garage door, bolted down properly to the wooden workbench.
It sits just a few inches shy of the tracks that lift what’s now become a very wet garage door.
I may be able to push 4,000 or 5,000 IOPS to the home-built ZFS array sitting in the garage. I’m quite confident in my ability to take my home lab, and my skillset too, to new heights. I can spin up a dozen VMs on this handsome 12u stack at once…no problem at all. I can build a lab that’s agnostic and welcoming to all type 1 and type 2 hypervisors, no discrimination here!
What I can’t do is anticipate inclement weather that seems to come at me sideways sometimes.
So what’s an Agile guy to do?
Pull out his IT MacGyver manual: bungee cord, two sticks, and some plastic sheeting:
Like I said, not my finest hour in contingency planning, but it’s working. So far. I won’t be putting this lab work on my resume, however.
Your enterprise’s mileage may vary, but at every place I’ve ever worked, I’ve taken a pretty dogmatic approach to disk space utilization on VMs, especially ones hosting specialty workloads, such as Engineering or financial applications.
And that dogma is: no workload is special enough that it needs greater than 15% free disk space on its attached, non-boot volume.
This causes no end of consternation and panic among technicians who deploy & support software products.
“Don’t fence me in!” they shout via email. “Oh, give me space lots of space on your stack of spindles, don’t fence me in. Let me write my .isos and .baks till the free space dwindles! Please, don’t fence me in,” they cry.
“I can’t stand your fences. Let my IO wander over yonder,” they bleat, escalating now to their manager and mine.
Look, I get it. Seeing that the D: drive is down to 18% free space makes such techs feel a bit claustrophobic. And I mean no disrespect to my IT colleagues who deploy/support these applications. I know they are finicky, moody things, usually ported from a *nix world into Windows. I get it. You are, in a sense, advocating for your customer (the Engineering department, or Finance) and you think I’m getting in your way, making your job harder and your deployment less than optimal.
But from my seat, if you’ve got more than 15% free space on your attached volume in production, you’re wasting my business’ money. I know disk space is cheap, but if I gave all the specialty software vendors what they asked for when deploying their product in my stack, my enterprise would:
Still have a bunch of physical servers doing one workload each, consuming electricity and generating heat instead of being hyper-rationalized onto a few powerful hosts
Waste lots of RAM & disk resources. 400GB free on this one, 500GB free on that one, and pretty soon we’re talking about real storage space
One of the great things about the success of virtualization is that it killed off the sacred cows in your 42U rack. It gave us in the Infrastructure side of the house the ability to economize, to study the inputs to our stack and adjust the outputs based not on what the vendor wanted, or even what we in IT wanted, but on what the business required of us.
And so, as we enter an age in which virtualization is the standard (indeed, some would argue we passed that mark a year or two ago), we’ve seen various software vendors remove the “must be physical server” requirement from their product literature. Which is a great thing cause I got tired of fighting that battle.
But they still ask for too much space. If you need more than 15% free on any of the attached, MPIO-based, highly-available, high-performing LUNs I’ve given you, you didn’t plan something correctly. Here’s a hint: in modern IT, discrimination is not only allowed, but encouraged. I’m not going to provision you space on the best disk I have for backups, for instance. That workload will get a secondary LUN on my slow array!
Agnostic Computing is brand new as tech blogs go, rolled out on a whim in August 2013 just to vent some angst, to wax philosophical on some high technology magic (would you believe my first post was about Sharepoint 2013? Uhhh yeah).
My thinking in starting the site was simple: I wanted to write a blog that was as fun and as passionate as the tech debates my friends & colleagues and I enjoyed at work for years. These are debates that start innocently enough (“Check out my new 1080p Android phone”….or “Do you really buy music from the iTunes store?”) but soon escalate into a 45 minute verbal fisticuffs, where low blows & sucker punches are not only permitted, but encouraged.
The geekier the reference, the harder the punch: “That’s a user interface only the mother of Microsoft bob could love,” “You’re just a sad and broken man because both BeOS & WebOS died and you were the only one who noticed,” or “We can’t trust someone who buys music off iTunes to be able to program a switch.” “You’re acting pretty confident for a guy who broke Exchange just last month.”
Good times. I love those debates, and not just cause the normals don’t get them. They’re genuinely fun, so I set out to capture a bit of that spirit on this blog, and, I hoped, post some genuinely interesting stuff, like a storage bakeoff between bitter rivals, a sincere, screenshot- or gifcam-heavy how-to sent from my virtualized stack to yours, and more.
And so it goes for bloggers, who, like chefs, try a little of this, test a little of that, mix it all up and then taste what’s in the pot. And most of the time, it’s forgettable at best, shame-inducing at worst.
Which makes it all the more surprising for me because apparently I’m doing something right.
You see, I’ve been invited as a delegate to Virtualization Field Day #3, in the Disneyland of High Tech, Silicon Valley, where the combined brainpower is bound to rub off on me. I mean, how can it not?
If you don’t know what #VFD is, then you haven’t been paying close enough attention. From all the interviews I’ve heard with delegates from past Tech Field Days (Storage, Network, Wireless Network…it’s spreading into all our sacred sub-disciplines and dark arts; surely the ERP & SQL guys will be next), going to a TFD as a delegate puts you face to face with the companies, and more importantly, the engineers who designed the stuff you deploy, support, break, fix, and depend on to keep your enterprise running.
Notice I said engineers. Not sales people. Or not just sales people at any rate.
Deep dives, white papers, new horizons opened, the potential to leave behind painful memories of broken processes and old ways of doing things by meeting the other delegates, some of whom I’ve been reading for years…..these are the things I’m looking forward to as a #VFD delegate.
Oh and challenging vendors and discerning which product is the right one for the business, which is among the most important jobs we as IT pros have.
As a former boss of mine put it memorably: “We’re only as good as our vendors.” And he was right: whether the device in your rack is amazing and incredible, or prone to failure, or the service you’ve contracted is game-changing or more trouble than it’s worth, managing “the stack” and interfacing with the stack builders and stack sellers is important to your success, and the business’ success.
Two of the sponsor firms at this year’s #VFD already have me excited. I just finished buying a Nimble array at work (gamechanger! no regrets!), but I won’t lie: I’m Coho-Curious. And Atlantis Computing: sharp guys, A++++ on the blogs, would read again, eager to hear about the products.
Thanks to the GestaltIT group (Add them to your RSS feed stat!) for the invite and be sure to check back here -as well as the other delegates’ blogs- for some #VFD thoughts in the weeks ahead.
All the sweat equity, money, and time I’ve put into the home lab is finally paying off at the Agnostic Computing.com HQ.
In fact, it’s been great: satisfying and pleasing little green health icons are everywhere, I read with satisfaction the validated Microsoft cluster configuration reports without any warnings at all, and the failover testing? Let’s just say I can remove “ish” from the end of the word “redundant.” This stack is as solid as it’s going to get on my low-budget, single-PSU setup designed to draw fewer than 5 amps and less than 500 watts (I’m at roughly 325W & 3.5 amps).
But standing up Hyper-V clusters on consumer-grade hardware isn’t exactly expanding my portfolio, even if all my storage is parked in a (new to me) ZFS box. So last weekend it was time to tackle Hyper-V’s nemesis: VMWare’s market-dominating ESXi 5.5, which I’ve got running on a stable 2-core Athlon II box, 12GB of RAM, and an Intel 2x1GbE NIC.
For a Hyper-V guy who hasn’t touched ESXi since probably 2011, building out the ESXi box involved some trips down memory lane.
A memory lane called Pain Street.
The last time I worked in ESXi on anything meaningful was an eight-month span in 2011 in which my colleagues and I were charged with replacing ESXi with Hyper-V 2.0, baked into the just-released 2008 R2 edition.
We had Hyper-V 2.0, a few brand-new PowerEdge servers with quad Nehalem CPUs, something like 512GB of RAM, a FAS 2210, System Center Virtual Machine Manager 2007, and a brand-new file-system-like layer on top of NTFS called Cluster Shared Volumes.
Oh, and a handful of V2V tools & .vmdk to .vhd conversion scripts with which we planned to stick it to VMWare.
I mentioned that this was a painful time in my life, right?
I’ll save the Hyper-V war stories and show you my scars (Hyper-V virtual switch ARP storms, oh my!) another time, but here’s what I learned from that experience: Hyper-V 2.0 was in all ways inferior to ESXi when it debuted in Server 2008 R2. And not just a little inferior. No, we are talking NBA vs. 8th-grade boys’ basketball scale inferiority.
It was half-baked, not entirely thought out, difficult to scale, prone to random failures, hard to back up (even risky…sometimes the CSVs would just drop off when the IO was supposed to be redirected to another host), and the virtual drivers written by Microsoft for Microsoft Hyper-V virtual machines running on Microsoft synthetic NICs weren’t stable. It was a hypervisor that made you pound your keyboard, sit back in your chair, scratch your head and ask, “Has anyone at Microsoft ever tried to use this thing?”
And you couldn’t team it and expect Microsoft support. I had to delay my love letter to LACP for years because of that.
Even so, I loved Hyper-V 2.0. Wore the admin hat like a badge of honor. Proud and boastful of the things I could make Hyper-V 2.0 do in the face of so much adversity, so much genetic disadvantage. Yeah the other guys had Ferraris tuned up by Enzo himself and all I had was a leaky Fiesta with a suspect axle, but that Fiesta could, in the right hands, make it across the finish line.
We, we happy few, we band of brothers, who persisted in our IT careers through the days of Hyper-V 2.0 and even excelled.
All that to say that the heyday of VMWare, ESXi, the Nexus 1000v, and now VSAN has kind of passed me by. I just can’t seem to get exposed to it, to sink my teeth into that whole wondrous stack. It’s expensive.
But it’s been alright with me because in the same span I’ve adopted Hyper-V 3.0 with relish and become convinced that we Microsofties finally had a Hypervisor worthy of respect. “Feature Parity” is a term that’s been bandied about, and with 2012 R2, it got even better. EMC, parent company of VMWare, even called SMB 3.0 “the future of storage.” Haha, take that NFS!
So has it achieved parity?
It’s not easy for me to say this, but while I like Hyper-V much more in some areas and feel it can scale and serve any enterprise well, I have to admit after playing with ESXi at home that Hyper-V still has deficits purely from a hypervisor perspective (System Center is a different animal).
Deficits other virtualization bloggers are eager to demonstrate, with barely-concealed glee. Take Mike Laverick, a sharp ESXi guy, for instance. This February readers of his blog have been treated to post after post In Which the ESXi Guy Plays with Hyper-V 3.0.
I’m always up for a good tech debate, but after devouring his posts, letting them sink in, I got nothin’ except a few meek responses and maybe some envy.
I guess to be fair – taken individually this lack of hotness of the Gen2 Windows 2012 Hyper-VM might not be a deal breaker for some. For me personally, they collectively add up big pain in the rear, especially if you coming off the back of virtualization product like VMware vSphere that does have them. For me the whole point of virtualization is it liberates us from the limitations of the physical world. What’s the point of software-defined-virtual-machines, when it feels more like the hardware-defined-physical-machines….
’tis true in some respects. I have long wanted to stop mapping LUNs directly from the SAN, through the Hyper-V switch, to a virtual machine, but it was not possible to resize .vhdx drives on a live VM until October 2013, when R2 was released. And even now in R2, it’s not simple or, more importantly, reliable enough to depend on in production, at least not compared to resizing an RDM on a NetApp or Nimble or even my ZFS array.
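For what it’s worth, the R2-era online resize is only a couple of cmdlets; here’s a minimal sketch, assuming a .vhdx attached to a virtual SCSI controller (the path and drive letter are made up for illustration):

```powershell
# Grow a data disk on a *running* VM (Server 2012 R2+). Online resize
# only works for .vhdx files on a virtual SCSI controller, not IDE.
Resize-VHD -Path 'C:\ClusterStorage\Volume1\FILESRV01\data.vhdx' -SizeBytes 500GB

# Then, inside the guest, extend the partition to claim the new space:
$max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
Resize-Partition -DriveLetter D -Size $max
```

Whether you’d trust that in production is, as noted above, another matter.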
I will offer some resistance in the following two areas though.
Hyper-V runs on whatever piece of junk you throw at it. That’s interesting news if you’re a value-oriented enterprise, and really great news if you’re building a home lab or trying to learn the trade. VMWare, in contrast, won’t even install without supported NICs…the cheap Realtek in your Asus? Not supported. The Ferrari metaphor is apt: you’ve got to shell out some bucks for the high-octane stuff before you can stand up ESXi in a meaningful way.
My second observation is that I’m not comprehending the switching model very well. I was really excited to see Cisco Discovery Protocol just work on mouse-hover with zero configuration, but this 1:1 stuff feels archaic, devoid of the abstract fabric goodness.
What am I missing here?
On my ESXi box, I’ve got two Intel GigE adapters. I have the option to make them active/passive (cool) or team them, but I’m not seeing the same converged fabric concept that’s liberated me in Hyper-V 3.0 from, guess what, worrying about hardware.
The three NICs on my Hyper-V host, for instance, are joined in an LACP team, which then is used to build a true & advanced virtual switch for both the host & the guests. And an LACP-capable switch is not a requirement here; I could use the dumb switch in my rack and have the same fault-tolerant (though lower performing) converged team.
A few very simple PowerShell lines later, and you’ve got vEthernet adapters on the management OS tagged with the appropriate VLAN.
All ports on the physical Cisco switch? Trunked.
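For the curious, the whole converged setup described above boils down to a handful of cmdlets. This is a sketch; the team name, switch name, adapter names, and VLAN ID are invented for illustration:

```powershell
# Bind the three physical NICs into an LACP team (switch-independent
# teaming also works if your physical switch is a dumb one).
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2','NIC3' `
    -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

# Build the virtual switch on top of the team; skip the auto-created
# management vNIC because we'll add our own.
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'ConvergedTeam' `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Carve out vEthernet adapters on the management OS and tag each with
# its VLAN. Repeat for Live Migration, cluster, iSCSI, and so on.
Add-VMNetworkAdapter -ManagementOS -Name 'Mgmt' -SwitchName 'ConvergedSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Mgmt' -Access -VlanId 10
```

The hardware never enters into it after the first cmdlet, which is exactly the liberation I’m talking about.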
I know I’m missing something here…PowerCLI? I’ll be testing that tonight.
And then Richard Campbell of the famous RunAs Radio & .NET Rocks did an entire show on staying sharp in IT a few weeks back. He and Kimberly Tripp, a SQL Server MVP, blogger, and tech coach, talked up the value of IT training conferences. Tripp in particular advocated that mid-career IT pros send themselves to out-of-town, week-long IT training conferences. It has to be out of town so that you’re free from the distractions of home life and able to soak in the learning & wisdom in a full-immersion nerd camp.
Okay so I supplied that last bit of color. But Tripp is right: Whether you’re a SQL admin, a storage jockey or you dream of BGP & ipv6, there are some awesome bootcamp-style, full-immersion conferences (not vendor conventions, mind you!) you can attend to learn a lot relatively quickly. I’d call this the Shock ‘n Awe approach to healthy IT career maintenance.
But if that doesn’t sound like your cup of tea, there’s loads of other ways to learn formally. For about $2,500 you can take a course at any one of a dozen or so different technology training academies, perhaps even getting a nice certificate you can put on your resume.
Or you can subscribe to professionally-made video lessons for about $40 a month and learn at your own pace. I doubt you’ll get a certificate but certainly, that’s a great option if you’re mid-career, with a family, and not a lot of free time. And distance/web-learning is not the joke it once was.
But I’m finding it’s not an easy choice to make. It’s gone more or less like this:
“Hurray!!! Finally, the recession blues are over and companies are investing in their workforce again!”
“Now, give me a whiteboard, a good instructor, a switch, a router, and a few hours a week and I’ll show ipv6 what’s what. Today, link-local; tomorrow: jokes about carrier grade NAT and knowing what that actually means!”
and then, after quiet, sober reflection:
“Or maybe I should do some PowerShell scripting, because mastering PowerShell will empower me in so many areas. But wait, that’s too Microsoft-centric. VMWare certification sounds like a good idea. Or maybe I should get into a Java scripting class? Web is huge after all.”
“On the other hand, I need to introduce myself to JSON, take a RESTful course and lather up with some SOAP knowledge. REST, SOAP & JSON are the lingua franca of our internet of things, after all; we’ll be using them to provision <Insert IT resource that used to be physical and tied to geographic location but is now free & agile by virtue of Software Defined somethingorother> by Thanksgiving for sure!”
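If you want a zero-commitment taste of that lingua franca, PowerShell 3.0+ already speaks JSON natively. A quick sketch; the payload fields and the endpoint below are entirely imaginary:

```powershell
# Round-trip a hypothetical provisioning request through JSON.
$request = @{
    Name     = 'lab-vm-01'
    MemoryGB = 4
    VlanId   = 10
}
$json = $request | ConvertTo-Json

# ...which is the sort of thing you'd POST to some REST endpoint:
# Invoke-RestMethod -Uri 'https://provisioner.example/api/vms' -Method Post -Body $json

# And parse the answer coming back the other way:
($json | ConvertFrom-Json).Name   # -> lab-vm-01
```

No SOAP envelope required, which is most of the reason everyone keeps betting on REST & JSON.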
“Then again, ipv6 could be a game changer.”
And on and on it goes: the alphabet soup of technology acronyms I want to familiarize myself with approaches ∞, but I have the bandwidth, time, and luxury to pick only one.
And that was before I recalled the cloud push and how it’s shaping IT as a career field.
What if the high-level engineering roles Working Hard in IT mentioned are declining in total numbers as well as percentages of a business’ IT staff payroll? What if, instead of needing skilled IT engineers, businesses decide (as many are) that monthly opex to AWS or Azure or OpenStack is a better way to spend their money?
What if -gasp- the IT department of today is getting disrupted out of existence?
In that case, the internal dialog goes like this:
Forget IPV6! Go big with ITIL or go home! Storage optimization, VM density, WAN performance…what is this, 2011? Businesses don’t want that, they don’t want a data center at all anymore. The smart bet for the future of IT is in process management, technical accounting, contract writing, service delivery, and SLA monitoring!
Suddenly picking something to study, something that sparks your interest, helps your career and also brings some value back to the business, isn’t such an easy choice, is it?
You want to stay sharp & agile, yet be aware enough about how market forces (animal spirits even!) are shaping IT & the business that you can adapt & respond to the new regime, whatever it is and will be in five years. Amen.
I’m mostly self-taught in IT, so maybe I’m biased that way, but I think there’s a lot to be said for building out a home lab where you can patch up some of the weak spots in your technology portfolio, perfect the stuff you’re already good at, play with the awesome new tech everyone is talking about, and have some fun too.
And unlike boot-camp style skills conferences, video subscriptions, or class room training, home labs don’t have to cost a lot, and you don’t need to leave your Child Partition and go on a road trip. The same technology that took your server room down from six racks of 2u servers to a few virtual machine hosts can help you along in your career: hypervisors!
Or maybe you’re a network guy. In that case, go crazy with Mininet young man. Build a network from a local LAN switch to a pair of virtual routers doing BGP, then stress & break it, all inside your PC, at home.
Or study the effect high latency & 1% packet loss have on application delivery by standing up a WAN emulator VM, then put WANem between a host & a ZFS-based storage VM on your iSCSI network. Because you can do that too!
And if the tech you want isn’t free, see if there’s a trial version. Or sweet talk your way past the sales guys into getting a free copy. Tech companies, after all, want people like you and me learning about their products and offerings. Use that to your advantage!
All you need is one capable, modern x86 box with 12GB-16GB RAM and a few hundred gigabytes of storage space. Windows 8.1 Professional ($200) has Hyper-V built right in, and it’s capable enough to be your home lab & home PC. But if you don’t like Windows, Ubuntu is a free download and KVM is an easy install. You don’t need to install a 12u rack in your garage like some people.
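On that Windows 8.1 route, turning the hypervisor on is a one-liner from an elevated prompt. A sketch (the VM name is just an example; expect a reboot, and you’ll need a SLAT-capable CPU):

```powershell
# Enable the built-in hypervisor on Windows 8.1 Pro (run elevated).
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# After the reboot, the Hyper-V cmdlets are available:
New-VM -Name 'LabVM01' -MemoryStartupBytes 2048MB -Generation 2
```

Ten minutes from desktop to lab, on the same box you read email on.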
And yes, for the skeptical, home IT labs are a thing, not just something crazy people do. Check out TinkerTry IT @ Home, or ServeTheHome.com, for instance. It’s perhaps even a movement, a response, if you will, to the “Cloud Disruption” story VC firms are fond of pushing.
Approach your home lab as real & vital infrastructure. And put some goals and structures into your lab environment: this month I’m going to learn X, and next month Y and stick with it. Record your progress, make a checklist. BUILD.
Of course a home lab probably won’t help you along towards ITIL certification. If that’s the skillset IT pros will need to develop, perhaps classroom training is best. Or maybe you should just get an MBA. I’m not certain. But having a good attitude & trying to understand how business is evolving will take you a long way, no matter what the IT department evolves into.
I just shipped my last ChromeOS device to a buyer on eBay, and while photographing the packaging just prior to shipment, I had cause to reflect on this device, the box it was in, and the year 2013, a strange year for me computing-wise, a year in which the Windows guy abandoned Microsoft completely.
Come, emote with me.
As 2012 ended, I slowly but steadily realized I hated Windows 8. Strike that. I reviled it. Its only saving grace was that Hyper-V came baked into Windows 8 Pro and Enterprise, but even that wasn’t enough to save it for me. I hated the tiles, the split-brained nature of the thing, the helter-skelter implementation, the awful Windows Store that was bereft of anything useful for work, or fun for home.
And I resented the shit out of Microsoft for making Server 2012 boot to the awful Start Screen, where it is about 10 times more useless than in Windows 8. I think my colleagues and I actually booed and hissed the first time we ran Server 2012 and had to hunt for the start screen activator thing like we were playing Enterprise Whack-A-Mole.
Windows, to borrow from Steve Jobs (and give another Hear! Hear! to Paul Thurrott for his essay last week), was my work truck. I dumped all my stuff in it, had built a toolbox for the truck bed, and knew exactly which levers and buttons to push to make my Windows boxes purr and perform. And while Microsoft had done some great upgrades to the truck for Server 2012 (the networking stack in particular), unless you ran Core (you should!) it was all masked by that goofy, wretched start screen.
Who among us didn’t get frustrated at being mouse/keyboard guys and suddenly facing a designed-by-committee touch interface on our dual or triple LCD displays?
Fed up, I went for the sugar high of using Mac OS X with Parallels, but that wore off after a few weeks.
So I ran kicking and screaming to the arms of Google. I stuck some rainbow-colored G way up into the emptiness of my heart, the place Microsoft had once occupied. I went deep, balls-deep, into the Chrome.
I loved the integration, the speed, the ubiquity & presence of all Chrome apps on all devices all in perfect sync, all my stuff living up there in the nebulous but omniscient Google “cloud.” It was a no-nonsense OS, the new operating system for people who just wanted to get shit done. I joined the Beta group, then dev, got familiar with the chrome://flags screen, and more.
A flurry of purchases ensued. The CR-48 came out of storage. I bought the Samsung ArmBook. Then a rare and prized Google I/O 2012 ChromeBox with a Core i5. The Windows box at home went into storage, the laptop at work went to an incoming exec, and I maintained my enterprise via the Chromebox for much of 2013.
My colleagues thought I was nuts (they’re right) but I made it work, and it wasn’t even (that) hack-ish. For remote desktop I bought ChromeRDP for $10 (A+++ would buy again) and in order to run my Windows applications on the Chromebox, I stood up a VM and built out the incredible RemoteSpark HTML 5 RDS server written by a small company in Canada (a solution so awesome that Google & VMware appear to be ready to copy it in 2014).
In my own mind, I was an IT Hero, pointing the way forward, demonstrating that with ChromeOS, you could have your cake and eat it too: A high performance, secure & cheap desktop platform giving you reliable access to your Windows-based server stack, the .nets and the asps and the IISes and the Exchanges and SQLs happily existing within my modern, fast and slick browser operating system. I was ecstatic.
“Don’t you see?!?” I cried out to my colleagues, as if I was John the Baptist, announcing the Messiah’s arrival.
“This is what John Gage of Sun was talking about so long ago. We’re here! The network is now the computer!” I wailed, sackcloth and ashes now, as the networking guy backed away slowly, and passwords to critical systems were changed.
But then, in perhaps the most spectacular IT Icarus story you’ve ever heard, I got too close to the promised land, too near the warm and beautiful future that awaits us (Agnostic Computing, where you don’t care what device you’re on). My wings burned off, and I fell to the server room floor in a pile of shattered dreams, cat 5 cable, and hopes.
Snowden. NSA. Compromised SSL certs, RSA the standard in security, but in reality a research branch of the NSA. The dawning realization that the cost was too high, that I was surrendering too much for this convenience. And oh yeah, the $$$ cost was probably about the same too as the on-prem stuff, and guess what? I got more 9s than the lot of them.
Disillusionment, despair, depression, all over again.
ChromeOS -and the stuff supporting it- not so shiny anymore.
And then, just like that, summer ended. I saw screenshots of Windows 8.1. I saw my beloved Start button return. I saw options to banish the Start Screen away for good if I liked. I saw Windows Management Framework 4, Powershell 4.0, and so many other goodies. Then Ballmer got sacked, following Sinofsky, and Gates was Alpha Dog once more.
Microsoft was still lost and confused, perhaps fatally, but at least I got my Start button back. Server 2012 R2, while not perfect, was what Server 2012 should have been, I thought. The “CloudOS” needn’t be so; you could keep all that stuff on-prem if you like. Yeah it’s not as elegant or complete as Chrome from a user standpoint, but it’s not as compromised either.
And so I resolved last fall to sharpen my skills rather than surrender to the cloud providers. I bought into a DIY & “Maker” aesthetic that seems to be, in my observations of the industry at least, getting some traction among IT pros lately.
Sources familiar with Microsoft’s plans tell The Verge that the company is seriously considering allowing Android apps to run on both Windows and Windows Phone. While planning is ongoing and it’s still early, we’re told that some inside Microsoft favor the idea of simply enabling Android apps inside its Windows and Windows Phone Stores, while others believe it could lead to the death of the Windows platform altogether. The mixed (and strong) feelings internally highlight that Microsoft will need to be careful with any radical move.
Radical is understating it a bit.
Linux/Android applications running natively on Microsoft Windows desktops and/or Windows 8 phones, not because some nerd went and accomplished a great feat of software engineering, but because Microsoft needs it?!?
That’s not just crazy. It’s almost heretical.
It’s a thought so wild that the phrase paradigm shift doesn’t do it justice. No, this is more like magnetic south switching to magnetic north. This is lions-lying-down-with-the-lambs territory, people, except in this case, Microsoft was the Lion, *nix the Lamb, and the Lion, as is its nature, bullied the Lamb around for a few decades, but the Lamb just ate the Lion and is now resting, a satisfied look on its face.
This is end-is-near-grab-the-sandwich-board-meet-you-on-the-corner news.
It’s like waking up one day, and holy crap, the dollar has crashed, and in order to maintain financial stability in the western hemisphere, Mexico bails the US out, air-dropping truck loads of pesos from C-130s all over America, rescuing us from ruin.
Step back 10-12 years, to when you were young and crazy, undersexed and over-curious with no money, and I bet you experimented with Linux. Remember those days? For me it was about the VAX machine…what was it, what did it do, why was my university email address so strange, and why did the guys in charge of that refrigerator-sized box all have beards, suspenders, and grumpy dispositions?
So I did what any geek did in 2000-2001. I downloaded/bought a copy of SUSE or OpenLinux or RH or whatever distro was in favor that month, used PartitionMagic to divide up my 16GB drive, and booted into some flavor of Linux, feeling like a stud. Penguins, man! Linux on the Desktop! It’s for real this time!
“Hey this isn’t that bad,” You thought. “Most stuff works here pretty good. I could get used to this. Now let me see if I can get that WINE thing to work.”
Four hours of cursing & violent threats to your PC later, you resign yourself to defeat, realizing you’ll never get Win32 to work inside this strange Linux thing; you’re just not smart enough. You do the forum crawl and the Linux nerds try to help, but you have no concept of what sudo is, apt-get isn’t around yet, only RPMs, and it’s all so chaotic.
Besides, at the end of the day, you have Photoshop available and all they can bring out is the GIMP, you think smugly.
So you reboot and join your friends in a campus CounterStrike party in Windows 98. And you go on to develop your moderately-successful career as an IT “knowledge worker” supporting Microsoft products until you die, hopefully sometime after Microsoft Office 2042 is released, but you never know.
Except it’s not playing out that way, and now, 13 years later, it’s us Windows guys -and Microsoft itself!- who are trying to figure out how to run Android applications on Windows, cause that’s where all the exciting stuff is happening and where all the cool kids are hanging out.