Fellow #VFD3 Delegate and Chicago-area vExpert Eric Shanks has recently posted two great pieces on how to set up an Active Directory Certificate Authority in your home lab environment.
Say what? Why would you want the pain of standing up some certificate & security infrastructure in your home lab?
Eric explains:
Home Lab SSL Certificates aren’t exactly a high priority for most people, but they are something you might want to play with before you get into a production environment.
Exactly.
Security & Certificate infrastructure are a weak spot in my portfolio so I’ve been practicing/learning in the Daisetta Lab so that I don’t fail at work. Here’s how:
As I was building out my lab, I knew three things: I wanted a routable Fully Qualified Domain Name for my home lab, I was focused on virtualization but should also practice for the cloud and things like ADFS, and I wanted my lab to be as secure as possible (death to port 80 & NTLM!)
With those loose goals in mind, I decided I wanted Daisetta Labs.net to be legit. To have some Certificate Authority bonafides…to get some respect in the strangely federated yet authoritarian world of certificate authorities, browser and OS certificate revocations, and yellow Chrome browser warning screens.
Too legit, too legit to quit
So I purchased a real wildcard SSL certificate from a real Certificate Authority back in March. It cost about $96 for one year, and I don’t regret it at all because I’m using it now to secure all manner of things in Active Directory, and I’ll soon be using it as Daisetta Labs.net on-prem begins interfacing with DaisettaLabs.net in Azure (it already is, via Office 365 DirSync, but I need to get to the next level and the clock is ticking on the cert).
Building on Eric’s excellent posts, I suggest to any Microsoft-focused IT Pros that you consider doing what I did. I know it sucks to shell out money for an SSL certificate, but labwork is hard so that work-work isn’t so hard.
So, go follow Eric’s outline, buy a cert, wildcard or otherwise (I got mine at Comodo; there’s also an Israeli CA that gives SSL certs for free, but it’s a drawn-out process), stand up a subordinate CA (as opposed to an on-prem-only Root CA), and get your 443 on!
Man it sucks to get something so fundamentally wrong. Reader Chris pointed out a few inaccuracies and mistakes about my post in the comments below.
At first I was indignant, then thoughtful & reflective, and finally resigned. He’s right. I built an AD Root -not a subordinate, as that’s absurd- Certificate Authority in the lab.
Admittedly, I’m not strong in this area. Thanks to Chris for his coaching, and I’m sorry if I misled anyone.
So the three of you who read this blog might be wondering why I haven’t been posting much lately.
Where’s Jeff, the cloud praxis guy & Hyper-V fanboy, who says IT pros should practice their cloud skills? you might have asked.
Well, I’ll tell you where I’ve been. One, I’ve been working my tail off at my new job where Cloud Praxis is Cloud Game Time, and two, the Child Partition, as adorable and fun as he is, is now 19 months old, and when he’s not gone down for a maintenance cycle in the crib, he’s running Parent Partition and Supervisor Module spouse ragged, consuming all CPU resources in the cluster. Wow that kid has some energy!
Yet despite that (or perhaps because of that), I found some time to re-think my storage strategy for the Daisetta Lab.
Recall that for months I’ve been running a ZFS array atop a simple NAS4Free instance, using the AMD-powered box as a multi-path iSCSI target for Cluster Shared Volumes. But continuing kernel-on-iscsi-target-service homicides, a desire to combine all my spare drives & resources into a new array, and a vacation-time cash infusion following my exit from the last job led me to build this for only about $600 all-in:
Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. There’s some serious storage utopia up in the Daisetta Lab
Here are some superlatives and other interesting curios about this new box:
It was born on the 4th of July, just like ‘Merica and is as big, loud, ostentatious and overbearing as ‘Merica itself
I would name it ‘Merica.daisettalabs.net if the OS would accept it
It’s a real server. With a real Supermicro X10SAT server/workstation board. No more hacking Intel .inf files to get server-quality drivers
It has a real server SAS card, an LSI 9218i something or other with SAS-SATA breakout cables
It doesn’t make me choose between file or block storage, and is object-storage curious. It can even do NFS or SMB 3…at the same time.
It does ex post facto dedupe -the old model- rather than the new hot model of inline dedupe and/or compression, which makes me resent it, but only a little bit
It’s combining three storage chipsets -the LSI card, the Supermicro’s Intel C226, and ASMedia 1061- into one software-defined logical system. It’s abstracting all that hardware away using pools, similar to ZFS, but in a different, more sublime & elegant way.
It doesn’t have the ARC –ie RAM AS STORAGE– which makes me really resent it, but on the plus side, I’m only giving it 12GB of RAM and now have 16GB left for other uses.
It has 16 disks: 12 rotational drives (6x1TB 5400 RPM & 6x2TB 7200 RPM) and four SSDs (3x256GB Samsung 840 EVO & 1x128GB Samsung 830), plus a boot drive (1x32GB SanDisk ReadyCache drive re-purposed as a general SSD)
Total capacity RAW: nearly 19TB. Usable? I’ll let you know. Asking “Do I need that much?” is like asking “Does ‘Merica need to stretch from Sea to Shining Sea?” No I don’t, but yes ‘Merica does. But I had these drives in stock, as it were, so why not?
It uses so much energy & power that it has, in just a few days, erased any greenhouse gas savings I’ve made driving a hybrid for one year. Sorry Mother Earth, looks like I’m in your debt again
But seriously, under load, it’s hitting about 310 watts. At idle, 150w. Not bad all things considered. Haswell + full C states & PCIe power management work.
It’s built as a veritable wind-tunnel since it lives in the garage. In Southern California. And it’s summer. Under load, the CPU is hitting about 65C and the south-bridge flirts with 80C, but it’s stable.
It has six, yes, six, 1GbE Intel NICs. Two are on the motherboard, and I’m using a 4 port PCIe 2 card. And of course, I’ve enabled Jumbo Frames (there’s a quick PowerShell sketch of that right after this list). I mean do you have to even ask at this point?
It uses virtual disks. Into which you can put other virtual disks. And even more virtual disks inside those virtual disks. It’s like Christopher Nolan designed this storage archetype while he wrote Inception…virtual disk within virtual disk within virtual disk. Sounds dangerous, but in the Daisetta Lab, Who Dares Wins!
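Since I mentioned Jumbo Frames above: here’s roughly how that’s done with the in-box NetAdapter cmdlets. The NIC names are made up and the exact registry keyword can vary by driver, so treat this as a starting point rather than gospel.

```powershell
# Enable ~9K jumbo frames on every lab NIC matching the (hypothetical) "LAN*" naming scheme.
Get-NetAdapter -Name "LAN*" | ForEach-Object {
    Set-NetAdapterAdvancedProperty -Name $_.Name -RegistryKeyword "*JumboPacket" -RegistryValue 9014
}
# Sanity check end-to-end: ping a storage target with an 8972-byte, do-not-fragment payload.
ping -f -l 8972 192.168.20.50
```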
So yeah. That’s what I’ve been up to. Geeking out a little bit like a gamer, but simultaneously taking the next step in my understanding, mastery & skilled manipulation of a critical next-gen storage technology I’ll be using at work soon.
Can you guess what that is?
Stay tuned. Full reveal & some benchmarks/thoughts tomorrow.
Stick figure man wants his application to run faster. #WhiteboardGlory courtesy of VM Turbo’s Yuri Rabover
So you may recall that back in March, yours truly, Parent Partition, was invited as a delegate to a Tech Field Day event, specifically Virtualization Field Day #3, put on by the excellent team at Gestalt IT especially for the guys who like V.
And you may recall further that as I diligently blogged the news and views to you, that by day 3, I was getting tired and grumpy. Wear leveling algorithms intended to prevent failure could no longer cope with all this random tech field day IO, hot spots were beginning to show in the parent partition and the resource exhaustion section of the Windows event viewer, well, she was blinking red.
And so, into this pity-party I was throwing for myself walked a Russian named Yuri, a Dr. named Schmuel and a product called a “VMTurbo” as well as a Macbook that like all Mac products, wouldn’t play nice with the projector.
You can and should read all about what happened next because 1) VMTurbo is an interesting product and I worked hard on the piece, and 2) it’s one of the most popular posts on my little blog.
Now the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation wasn’t just that it played into my fevered fantasies of being a virtualization economics czar (though it did), or that it promised to bridge the divide via reporting between Infrastructure guys like me and the CFO & corner office finance people (though it can), or that it had lots of cool graphs, sliders, knobs and other GUI candy (though it does).
No, the great thing about VMTurbo OpsMan & Yuri & Dr. Schmuel’s presentation was that they said it would work with that other great Type 1 Hypervisor, a Type-1 Hypervisor I’m rather fond of: Microsoft’s Hyper-V.
I didn’t even make screenshots for this review, so suffer through the annotated .pngs from VMTurbo’s website and imagine it’s my stack
And so in the last four or five weeks of my employment with Previous Employer (PE), I had the opportunity to test these claims, not in a lab environment, but against the stack I had built, cared for, upgraded, and worried about for four years.
That’s right baby. I put VMTurbo’s economics engine up against my six node Hyper-V cluster in PE’s primary datacenter, a rationalized but aging cluster with two iSCSI storage arrays, a 6509E, and 70+ virtual machines.
Who’s the better engineer? Me, or the Boston appliance designed by a Russian named Yuri and a Dr. named Schmuel?
Here’s what I found.
The Good
Thinking economically isn’t just part of the pitch: VMTurbo’s sales reps, sales engineers and product managers, several of whom I spoke with during the implementation, really believe this stuff. Just about everyone I worked with stood up to my barrage of excited-but-serious questioning and could speak literately to VMTurbo’s producer/consumer model, this resource-buys-from-that-resource idea, the virtualized datacenter as a market analogy. The company even sends out Adam Smith-themed emails (Famous economist…wrote the Wealth of Nations if you’re not aware). If your infrastructure and budget are similar to what mine were at PE, if you stress over managing virtualization infrastructure, if you fold paper again and again like I did, VMTurbo gets you.
Installation of the appliance was easy: download a zipped .vhd (not .vhdx), either deploy it via a VMM template or put the VHD into a CSV and import it, connect it to your VM network, and start it up. The appliance was hassle-free as a VM; it’s running SUSE Linux, and quite a bit of Java code from what I could tell, but for you, it’s packaged up into a nice http:// site, and all you have to do is pop in the 30-day license XML key.
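For the Hyper-V folks, the “put the VHD in a CSV and import it” half looks roughly like this with the Hyper-V module. A minimal sketch: the VM name, CSV path and switch name are invented, and the 3 vCPU / 16GB sizing just mirrors what I ended up giving mine.

```powershell
# Attach the downloaded appliance .vhd to a new Gen 1 VM sitting on a CSV, then fire it up.
$vhd = "C:\ClusterStorage\Volume1\VMTurbo\vmturbo.vhd"   # wherever you copied the appliance VHD
New-VM -Name "VMTurbo01" -Generation 1 -MemoryStartupBytes 16GB -VHDPath $vhd -SwitchName "VM-Network"
Set-VM -Name "VMTurbo01" -ProcessorCount 3
Start-VM -Name "VMTurbo01"
```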
It was insightful, peering into the stack from top to nearly the bottom and delivering solid APM: After I got the product working, I immediately made the VMturbo guys help me designate a total of about 10 virtual machines, two executables, the SQL instances supporting those .exes and more resources as Mission Critical. The applications & the terminal services VMs they run on are pounded 24 hours a day, six days a week by 200-300 users. Telling VMTurbo to adjust its recommendations in light of this application infrastructure wasn’t simple, but it wasn’t very difficult either. That I finally got something to view the stack in this way put a bounce in my step and a feather in my cap in the closing days of my time with PE. With VMTurbo, my former colleagues on the help desk could answer “Why is it slow?!?!” and I think that’s great.
Like mom, it points out flaws, records your mistakes and even puts a $$ on them, which was embarrassing yet illuminating: I was measured by this appliance and found wanting. VMTurbo, after watching the stack for a good two weeks, surprisingly told me I had overprovisioned -by two- virtual CPUs on a secondary SQL server. It recommended I turn off that SQL box (yes, yes, we in Hyper-V land can’t hot-unplug vCPU yet, Save it VMware fans!) and subtract two virtual CPUs. It even (and I didn’t have time to figure out how it calculated this) said my over-provisioning cost about $1200. Yikes.
It’s agent-less: And the Windows guys reading this just breathed a sigh of relief. But hold your golf clap…there’s color around this from a Hyper-V perspective I’ll get into below. For now, know this: VMTurbo knocked my socks off with its superb grasp & use of WMI. I love Windows Management Instrumentation, but VMTurbo takes WMI to a level I hadn’t thought of, querying the stack frequently, aggregating and massaging the results, and spitting out its models. This thing takes WMI and does real math against the results, math and pivots even an Excel jockey could appreciate. One of the VMTurbo product managers I worked with told me that they’d like to use PowerShell, but PowerShell queries were still too slow, whereas WMI can be queried rapidly.
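To give you a feel for the kind of agentless collection I’m talking about (not VMTurbo’s actual code, obviously, just the flavor of a remote WMI/CIM pull; host name and service account are placeholders):

```powershell
# Query a Hyper-V node over WinRM/CIM for CPU and memory counters, with no agent installed on it.
$cred = Get-Credential DAISETTALABS\svc-monitor        # hypothetical service account
$cim  = New-CimSession -ComputerName "hv-node01" -Credential $cred
Get-CimInstance -CimSession $cim -ClassName Win32_PerfFormattedData_PerfOS_Processor |
    Select-Object Name, PercentProcessorTime
Get-CimInstance -CimSession $cim -ClassName Win32_OperatingSystem |
    Select-Object CSName, FreePhysicalMemory, TotalVisibleMemorySize
```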
It produces great reports I could never quite build in SCOM: By the end of day two, I had PDFs on CPU, storage & network bandwidth consumption, top consumers, projections, and a good sense of current state vs desired state. Of course you can automate report creation and deliver via email, etc. In the old days it was hard to get simple reports on CSV space free/space used; VMTurbo needed no special configuration to see how much space was left in a CSV.
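If you’ve never scripted that CSV report yourself, here’s the sort of thing I mean, a quick stand-in using the FailoverClusters module from any cluster node:

```powershell
# Free/used space per Cluster Shared Volume, rounded to GB.
Import-Module FailoverClusters
Get-ClusterSharedVolume | ForEach-Object {
    $part = $_.SharedVolumeInfo.Partition
    [pscustomobject]@{
        Volume      = $_.Name
        SizeGB      = [math]::Round($part.Size / 1GB, 1)
        FreeGB      = [math]::Round($part.FreeSpace / 1GB, 1)
        PercentFree = [math]::Round($part.PercentFree, 1)
    }
}
```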
vFeng Shui for your virtual datacenter
Integrates with AD: Expected. No surprises.
It’s low impact: I gave the VM 3 CPU and 16GB of RAM. The .vhd was about 30 gigabytes. Unlike SCOM, no worries here about the Observer Effect (always loved it when SCOM & its disk-intensive SQL back-end would report high load on a LUN that, you guessed it, was attached to the SCOM VM).
A Eureka! style moment: A software developer I showed the product to immediately got the concept. Viewing infrastructure as a supply chain, the heat map showing current state and desired state, these were things immediately familiar to him, and as he builds software products for PE, I considered that good insight. VMTurbo may not be your traditional operations manager, but it can assist you in translating your infrastructure into terms & concepts the business understands intuitively.
I was comfortable with its recommendations: During #VFD3, there was some animated discussion around flipping the VMTurbo switch from a “Hey! Virtualization engineer, you should do this” mode to a “VMTurbo, Optimize Automagically!” mode. After watching it for a few weeks and putting the APM together, I followed its recommendations closely. I didn’t flip the switch, but it’s there. And that’s cool.
You can set it against your employer’s month end schedule: Didn’t catch a lot of how to do this, but you can give VMTurbo context. If it’s the end of the month, maybe you’ll see increased utilization of your finance systems. You can model peaks and troughs in the business cycle and (I think) it will adjust recommendations accordingly ahead of time.
Cost: Getting sensitive here but I will say this: it wasn’t outrageous. It hit the budget we had. Cost is by socket. It was a doable figure. Purchase is up to my PE, but I think VMTurbo worked well for PE’s particular infrastructure and circumstances.
The Bad:
No sugar coating it here, this thing’s built for VMware: All vendors please take note. If VMware, nomenclature is “vCPU, vMem, vNIC, Datastore, vMotion” If Hyper-V, nomenclature is “VM CPU, VM Mem, VMNic, Cluster Shared Volume (or CSV), Live Migration.” Should be simple enough to change or give us 29%ers a toggle. Still works, but annoying to see Datastore everywhere.
Interface is all flash: It’s like Adobe barfed all over the user interface. Mostly hassle-free, but occasionally a change you expected to register on screen took a manual refresh to become visible. Minor complaint.
Doesn’t speak SMB 3.0 yet: A conversation with one product engineer more or less took the route it usually takes. “SMB 3? You mean CIFS?” Sigh. But not enough to scuttle the product for Hyper-V shops…yet. If they still don’t know what SMB 3 is in two years…well I do declare I’d be highly offended. For now, if they want to take Hyper-V seriously as their website says they do, VMTurbo should focus some dev efforts on SMB 3 as it’s a transformative file storage tech, a few steps beyond what NFS can do. EMC called it the future of storage!
Didn’t talk to my storage: There is visibility down to the platter from an APM perspective, but this wasn’t in scope for the trial we engaged in. Our filer had direct support, our Nimble, as a newer storage platform, did not. So IOPS weren’t part of the APM calculations, though free/used space was.
The Ugly:
Trusted Installer & taking ownership of reg keys is required: So remember how I said VMTurbo was agent-less, using WMI in an ingenious way to gather its data from VMs and hosts alike? Well, yeah, about that. For Hyper-V and Windows shops who are at all current (2012 or R2, as well as 2008 R2), this means provisioning a service account with sufficient permissions, taking ownership of two reg keys away from Trusted Installer (a very important ‘user’) under HKLM\CLSID and one further down in WOW64, and assigning full control permissions to the service account on those keys. This was painful for me, no doubt, and I hesitated for a good week. In the end, Trusted Installer still keeps full control, so it’s a benign change, and I think the payoff is worth it. A senior VMTurbo product engineer told me VMTurbo is working with Microsoft to query WMI without making the customer modify the registry, but as of now, this is required. And the Group Policy I built to do this for me didn’t work entirely. On 2008 R2 VMs, you only have to modify the one CLSID key.
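For what it’s worth, the permissions half of that can be scripted once you’ve wrestled ownership away from Trusted Installer (I did the ownership part in regedit). A rough sketch only, with a placeholder GUID and a hypothetical service account; the real key is whatever VMTurbo’s docs point you at:

```powershell
# Grant the monitoring service account Full Control on the CLSID key (ownership already taken).
$key  = "HKLM:\SOFTWARE\Classes\CLSID\{00000000-0000-0000-0000-000000000000}"   # placeholder GUID
$acl  = Get-Acl -Path $key
$rule = New-Object System.Security.AccessControl.RegistryAccessRule("DAISETTALABS\svc-monitor", "FullControl", "ContainerInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path $key -AclObject $acl
```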
Soup to nuts, I left PE pretty impressed with VMTurbo. I’m not joking when I say it probably could optimize my former virtualized environment better than I could. And it can do it around the clock, unlike me, even when I’m jacked up on 5 Hour Energy or a triple-shot espresso with house music on in the background.
Stepping back, thinking of the concept here, and divesting myself of the pain of the install in a Hyper-V context: products like this are the future of IT. VMTurbo is awesome and unique in an on-prem context as it bridges the gap between cost & operations, but it’s also kind of a window into our future as IT pros.
That’s because if your employer is cloud-focused at all, the infrastructure-as-market-economy model is going to be in your future, like it or not. Cloud compute/storage/network, to a large extent, is all about supply, demand, consumption, production and bursting of resources against your OpEx budget.
What’s neat about VMTurbo is not just that it’s going to help you get the most out of the CapEx you spent on your gear, but also that it helps you shift your thinking a bit, away from up/down, latency, and login times to a rationalized economic model you’ll need in the years ahead.
The Apollo-Soyuz metaphor is too rich to resist. With apologies to NASA, astronauts & cosmonauts everywhere
Right. So if you’ve been following me through Cloud Praxis #1-3 and took my advice, you now have a simple Active Directory lab on your premises (Wherever that may be) and perhaps you did the right thing and purchased a domain name, then bought an Office 365 Enterprise E1 subscription for yourself. Because reading about contoso.com isn’t enough.
What am I talking about “if”. I know you did just what I recommended you do. I know because you’re with me here, working through the Cloud Praxis Program because you, like me, are an IT Infrastructurist who likes to win! You are a fellow seeker of #InfrastructureGlory, and you will pursue that ideal wherever it is, on-prem, hybrid, in the cloud, buried in a signed cmdlet, on your hybrid iSCSI array or deep inside an NVGRE-encapsulated packet, somewhere up in the Overlay.
Right. Right?
Someone tell me I’m not alone here.
You get there through this thing.
So DirSync. Or Directory Synchronization. In the grand Microsoft tradition of product names, DirSync has about the least sexy name possible. Imagine yourself as a poor Microsoft technology reseller; you’ve just done the elevator pitch for the Glories that are to be had in Office 365 Enterprise & Azure, and your mark is interested and so he asks:
Mark: “How do I get there?”
Sales guy: “DirSync”
Mark: “Pardon me?”
Sales Guy: “DirSync.”
Mark: “Are you ok? Your voice is spasming or something. Is there someone I can call?”
DirSync has been around for a long, long time. I hadn’t even heard of it or considered the possibility of using it until 2012 or 2013, but while prepping the Daisetta Lab, I realized this goes back to 2008 & Microsoft Online Services.
But today, in 2014, it’s officially called Windows Azure Active Directory Sync, and though I can’t wait to GifCam you some cool powershell cmdlets that show it in action, we’ve got some prep work to do first.
Lab Prep for DirSync
As I said in Cloud Praxis #3, to really simulate your workplace, I recommend you build your on prem lab AD with a fully-routable domain name, then purchase that same name from a registrar on the internet. I said in Cloud Praxis #2 that you should have a lab computer with 16GB of RAM and you should expect to build at least two or three VMs using Client Hyper-V at the minimum.
Now’s the time to firm this all up and prep our lab. I know you’re itching to get deep into some O365, but hang on and do your due diligence, just like you would at work.
Lab DHCP: What do you have as your DHCP server? If it’s a consumer-level wifi router that won’t let you assign an FQDN to your devices, consider ditching it for DHCP and standing up a DHCP instance on your lab Domain Controller. Your wife will never know the difference, and you can ensure 1) that your VMs (whether 1 or 2 or several) get the proper FQDN suffix assigned, and 2) that you can disable NetBIOS via MS DHCP.
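If you go the Windows DHCP route, the whole thing is a handful of cmdlets. A minimal sketch with made-up addresses and my lab’s domain name; the vendor-class line at the end is the usual trick for pushing “disable NetBIOS over TCP/IP” to clients, so double-check it against your environment:

```powershell
# Stand up DHCP on the lab DC, authorize it in AD, and hand out the lab FQDN suffix.
Install-WindowsFeature DHCP -IncludeManagementTools
Add-DhcpServerInDC
Add-DhcpServerv4Scope -Name "LabScope" -StartRange 192.168.1.100 -EndRange 192.168.1.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -Router 192.168.1.1 -DnsServer 192.168.1.10 -DnsDomain "daisettalabs.net"
# Microsoft vendor class, option 001, value 2 = disable NetBIOS over TCP/IP on clients
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -VendorClass "Microsoft Windows 2000 Options" -OptionId 1 -Value 2
```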
Get your on-prem DNS in order: This is the time to really focus on your lab DNS. I want you to test everything; make some A-records, ensure your PTRs are created automatically. Create some C-Names and test forwarding. Download a tool like Steve Gibson’s DNS Benchmark to see which public name servers are the closest to you and answer the quickest. For me, it’s Level 3. Set your forwarders appropriately. Enable logging & automatic testing.
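Here’s the flavor of that exercise with the DnsServer module. Record names and addresses are invented; the forwarders happen to be Level 3’s public resolvers since those benchmarked fastest for me:

```powershell
# Create an A record (with its PTR), a CNAME, set forwarders, then sanity-check resolution.
Add-DnsServerResourceRecordA -ZoneName "daisettalabs.net" -Name "nas01" -IPv4Address 192.168.1.50 -CreatePtr
Add-DnsServerResourceRecordCName -ZoneName "daisettalabs.net" -Name "storage" -HostNameAlias "nas01.daisettalabs.net"
Set-DnsServerForwarder -IPAddress 4.2.2.1, 4.2.2.2
Resolve-DnsName storage.daisettalabs.net -Server 192.168.1.10
```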
Build a second DC: Not strictly required, but best practice & wisdom dictate you do this ahead of DirSync. Do what I did; go with a Windows Core VM for your second DC. That VM will only need 768MB of RAM or so, and a 15GB .vhdx. But with it, you will have a healthier domain on-prem.
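Promoting that Core VM is only a couple of lines once it has an IP and a name. A minimal sketch against my lab domain (it will prompt you for the DSRM password):

```powershell
# Promote a Server Core VM to an additional DC in the existing lab domain.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName "daisettalabs.net" -InstallDns -Credential (Get-Credential DAISETTALABS\Administrator)
```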
Now over to O365 Enterprise portal. Read the official O365 Induction Process as I did, then take a look at the steps/suggestions below. I went through this in April; it’s easy, but the official guides leave out some color.
Office 365 Prep & Domain Port ahead of DirSync
Go to your registrar and prove to Microsoft that you own the domain via a TXT record: Process here
Pick from the following options for DNS and read this:
Easy but not realistic: Just hand DNS over to O365. I took the easy way, admittedly. Daisetta Labs.net DNS is hosted by O365. It’s decent as DNS hosting goes, but I wouldn’t have chosen this option for my workplace, as I use an Anycast DNS service that has fast CDN propagation globally.
More realistic: Create the required A Records, C Names, TXT and SRV records at your registrar or DNS host and point them where Microsoft says to point them
Balls of Steel Option: Put your Lab VM in your DMZ, harden it up, point the registrar at it and host your own DNS via Windows baby. Probably not advisable from a residential internet connection.
Keep your .onmicrosoft.com account for a week or two: Whether you’re starting out in O365 at work or just learning the system like I did, you’ll need your first O365 account for a few days, as the domain porting process takes 24-36 hours. Don’t assign your E1 licenses to your @domain.com account just yet.
I wouldn’t engage MFA just yet…let things settle before you turn on Multifactor authentication. Also be sure your backup email account (The oh shit account Microsoft wants you to use that’s not associated with O365) is accessible and secure.
Fresh start cause I couldn’t build out an Exchange lab :sadface:
If you are simulating Exchange on-prem to hybrid for this exercise, you’ll have more steps than I did. Sadly, I had to give O365 the easy way out and selected “Fresh Start” in the process.
Proceed with the standard O365 wizard setups, but halt at OnRamp: I’m happy to see the Wizard configuration method is surviving in the cloud. Setting all this up won’t take long; the whole portal is pretty easy & obvious until you get to Sharepoint stuff.
Total work here is a couple of hours. I can’t stress enough how important your lab DNS & AD health are. You need to be rock solid in replication between your DCs, your DNS should be fast & reliably return accurate results, and you should have a good handle on your lab replication topology, a proper Sites & Services setup, and a dialed-in Group Policy and OU structure.
Daisetta Labs.net looks like this:
and dcdiag /e & repadmin show no errors.
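If you want to run the same checks, these are the in-box commands I mean, straight from an elevated prompt on a DC:

```powershell
# AD health & replication sanity checks
dcdiag /e /q                 # run the test suite against every DC, show only errors
repadmin /replsummary        # one-screen replication health summary
repadmin /showrepl *         # per-partner replication status for all DCs
```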
Final Steps before DirSync Blastoff
With a healthy domain on-prem, you now need to create some A Records, C-Names and TXT records so Lync, Outlook, and all your other fat clients dependent on Exchange, Sharepoint and such know where to go. This is quite important; at work, you’ll run into this exact same situation. Getting this right is why we chose to use a routable domain; it’s a big chunk of the reason why we’re doing this whole Cloud Praxis thing in the first place. It’s so our users have an enjoyable and hassle-free transition to O365.
Follow the directions here. Not as hard as it sounds. For me it went very smoothly. In fact, the O365 Enterprise portal gives you everything you need in the Domain panel, provided you’ve waited about 36 hours after porting your domain. Here’s what mine looks like on-prem after manually creating the records.
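If you’d rather script those records than click through DNS Manager, here’s a sketch using the DnsServer module. The targets below are the commonly documented O365 ones from my go-round; use whatever your Domains panel actually lists, because Microsoft does change them:

```powershell
# Internal DNS records so Outlook & Lync clients find their O365 endpoints.
$zone = "daisettalabs.net"
Add-DnsServerResourceRecordCName -ZoneName $zone -Name "autodiscover" -HostNameAlias "autodiscover.outlook.com"
Add-DnsServerResourceRecordCName -ZoneName $zone -Name "lyncdiscover" -HostNameAlias "webdir.online.lync.com"
Add-DnsServerResourceRecordCName -ZoneName $zone -Name "sip" -HostNameAlias "sipdir.online.lync.com"
Add-DnsServerResourceRecord -ZoneName $zone -Srv -Name "_sip._tls" -DomainName "sipdir.online.lync.com" -Priority 100 -Weight 1 -Port 443
```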
And that’s it. We’re ready to Sync our Dirs to O365s Dirs, to get a little closer to #InfrastructureGlory. On one side: your on-prem AD stack, on the launch pad, in your lab ready for liftoff.
Sure, it’s a little hare-brained, admittedly, but if you’re like me, this is how you learn. And I’m learning. Aren’t you?
On the other launch pad, Office 365. Superbly architected by some Microsoft engineers, no longer joke-worthy like it was in the BPOS days, a place your infrastructure is heading to whether you like it or not.
I want you to be there ahead of all the other guys, and that’s what Cloud Praxis is all about: staying sharp on this cloud stack so we can keep our jobs and find #InfrastructureGlory.
DirSync is the first step here, and I’ll show it to you in the next Cloud Praxis. Thanks for reading!
It’s been a tough year for those of us in IT who engineer, deploy, support & maintain Microsoft technology products.
First, Windows 8 happened, which, as I’ve written about before, sent me into a downward spiral of confusion and despair. Shortly after that but before Windows 8.1, Microsoft killed off Technet subscriptions in the summer of 2013, telling Technet fans they should get used to the idea of MSDN subscriptions. As the fall arrived, Windows 8.1 and 2012 R2 cured my Chrome fever just as Ballmer & Crew were heading out the door.
Next, Microsoft took Satya Nadella out of his office in the Azure-plex and sat him behind the big mahogany CEO desk at One Microsoft Way. I like Nadella, but his selection spelled more gloom for Microsoft Infrastructure IT guys; remember it was Nadella who told the New York Times that Microsoft’s on-prem infrastructure products are old & tired and don’t make money for Microsoft anymore.
And then, this spring…first at BUILD, then TechEd, Microsoft did the unthinkable. They invited the Linux & open source guys into the tent, sat them in the front row next to the developers and handed them drinks and party favors, while more or less making us on-prem Infrastructure guys feel like we were crashing the party.
No new products announced for us at BUILD or TechEd, ostensibly the event built for us. Instead, the TechEdders got Azured on until they were blue in the face, leading Ars’ @DrPizza to observe:
So basically, not a single on-prem announcement. I wonder what IT people think about that….
We think it feels pretty shitty Dr. Pizza, that’s how. It feels like we’re about to be made obsolete, that we in the infrastructure side of the IT house are about to be disrupted out of existence by Jeffrey Snover’s cmdlets, Satya’s business sense and something menacingly named the Azure Pack.
And the guys who will replace us are all insufferable devs, Visual Studio jockeys who couldn’t tell you the difference between a spindle and a port-channel, even when threatened with a C#.
Which makes it hurt even more Dr. Pizza, if that is your real name.
But it also feels like a wake-up call and a challenge. A call to end the cynicism and embrace this cloud thing because it’s not going away. In fact, it’s only getting bigger, encroaching more and more each day into the DMZ and onto the LAN, forcing us to reckon with it.
The writing’s on the wall fellow Microsofties. BPOS uptime jokes were funny in 2011 and Azure doesn’t go down anymore because of expired certs. The stack is mature, scalable, and actually pretty awesome (even if they’re still using .vhd for VMs, which is crazy). It’s time we step up, adopt the language & manners of the dev, embrace the cloud vision, and take charge & ownership of our own futures.
I’d argue that learning Microsoft’s cloud is so urgent you should be exploring it and getting experienced with it even if your employer is cloud-shy and can’t commit. Don’t wait on them if that’s the case; do it yourself!
Because, if you don’t, you’ll get left behind. Think of the cloud as an operating system or technology platform and now imagine your resume in two, five, or seven years without any Office 365 or Azure experience on it. Now think of yourself actually scoring an interview, sitting down before the guy you want to work for in 2017 or 2018, and awkwardly telling him you have zero or very little experience in the cloud.
Would you hire that guy? I wouldn’t.
That guy will end up where all failed IT Pros end up: at Geek Squad, repairing consumer laptops & wifi routers and up-selling anti-virus subscriptions until he dies, sad, lonely & wondering where he went wrong.
Don’t be that guy. Aim for #InfrastructureGlory on-prem, hybrid, or in the cloud.
Over the coming days, I’ll show you how I did this on my own in a series of posts titled Cloud Praxis.
Cloud Praxis #2 (On-prem): General guidance on building an AD lab to get started
Cloud Praxis #3 (Cloud): Wherein I think about on-prem email and purchase an O365 E1 sub
Have you ever been in a position in IT where you’re asked to do what is, by rational standards, impossible?
As virtualization engineers, we operate under a kind of value-charter in my view. Our primary job is to continuously improve things with the same set of resources, thereby increasing the value of our gear & ourselves.
Looked at economically, our job isn’t much different from what some people view as the great benefit of a free market economy: we are supposed to be efficiency multipliers, just like entrepreneurs are in the market. We take a set of raw resources, manipulate & reshape them, and extract more value out of them.
I hate to go all TechCrunch on you, but we disrupt. In our own way. And it’s something you should be proud of.
Maybe you never thought of yourself like that, but you should…and you should never sell yourself short.
For guys and gals like us, compute, storage & network are raw resources at our disposal. Anything capable of being virtualized or abstracted has, or at least should have, some potential value, as there are so many variables we can fine-tune and manipulate.
That old Dell PowerEdge 2950 with some DDR2 RAM that shipped to you in 2007? Sure it’s old and slow, but it’s got the virtualization bits in its guts that can, in the right hands, multiply & extend its value. Sure it’s not ideal, but raise your hand if you’re an engineer who gets The Platonic Ideal all the time?
I sure don’t. Even when I think it’s inescapably rational & completely reasonable.
Old switches with limited backplane bandwidth & small amounts of buffers? It’s junk compared to a modern Arista 10GbE switch, but when push comes to shove, you, as a virtualization engineer, can make it perform in service to your employer.
This is what we do. Or I should say, it’s what some of us are forced to do.
We are, as a group, folding paper again and again, defying the rules & getting more & more value out of our gear.
It can be stressful and thankless. No one sees it or appreciates it, but we are engineers. Many have gone before us, and many will come after us. Resources are always going to be limited for people like us, and it’s our job to manage them well and extract as much as we can out of them.
This post was written as much as a pep-talk for myself as for others!
Greg Ferro, Philosopher King of networking and prolific tech blogger/personality, had me in stitches during the latest Coffee Break episode on the Packet Pushers podcast.
Coffee Breaks are relatively short podcasts focused on networking vendor & industry news, moves and initiatives. Ferro usually hosts these episodes and chats about the state of the industry with two other rotating experts in the “time it takes to have a coffee break.”
Some Coffee Breaks are great, some I skip, and then some are Vintage Greg Ferro: encapsulated IT wisdom with some .co.uk attitude.
Like April 25th’s, in which discussion centered around transitioning to public cloud services, Cisco’s new OpFlex platform, and other news.
During the public cloud services discussion, the conversation turned toward on-prem expertise in firewalls, which, somehow, touched Ferro’s IT Infrastructure Library (ITIL) nerve.
ITIL, if you’re not familiar with it, is sort of a set of standards & processes for IT organizations, or, as Ferro sees it:
ITIL is an emotional poison that sucks the inspiration and joy from technology and reduces us to grey people who can evaluate their lives in terms of “didn’t fail”. I have spent two decades of my professional life living in a grey zone of never winning and never failing.
Death to bloody ITIL. I want to win.
Classic.
Anyway on the podcast, Ferro got animated discussing a theoretical on-prem firewall guy operating under an ITIL framework:
“Oh give me a break. It’s all because of ITIL. Everybody’s in ITIL. So when you say you’re going to change your firewall, these people have a change management problem, a self change management problem, because ITIL prevents them from being clever enough. You’re not allowed to be a compute guy & a firewall guy [in an ITIL framework]. When you move to the public cloud, you throw away all those skills because you don’t need them.”
Ferro’s point (I think), was that ITIL serves as a kind of retardant for IT organizations looking to move parts of their infrastructure to the public cloud, but not in just the obvious ways you might think (ie it’d be an arduous process to redo the Change Management Database & Configuration Items involved in putting some of your stack in the cloud!)
It seems Ferro is saying that specialized knowledge (ie the firewall guy & his bespoke firewall config) are threatened by the ease of deploying public cloud infrastructure, and to get to the cloud, some organizations will have to break through ITIL orthodoxy as it tends to elevate and protect complexity.
Good stuff.
But that wasn’t all. Ferro also helped me understand the real difference between declarative & imperative programming. Whereas before I just nodded my head and thought, “Hell yeah, Desired State Configuration & Declarative Programming. That’s where I want to be,” now I actually comprehend it.
It’s all about sausage rolls, you see:
Let’s say you want a sausage roll. And it’s a long way to the shop for a sausage roll. If you’re going to send a six year old down to the shop to get you a sausage roll, you’re going to say, Right. Here is $2, here’s the way to the shop. You go down the street, turn right, then left. You go into the shop you ask the man for a sausage roll. Then you carry the sausage roll home very carefully because you don’t want the sausage roll to get cold.
That’s imperative programming. Precise instructions for every step of the process. And you get a nice sausage roll at the end.
Declarative programming (or promise theory as Ferro called it), is more like:
You have a teenager of 13 or 14, old enough to know how to walk to the shop, but not intelligent enough to fetch a sausage roll without some instructions. Here’s $10, go and fetch me a sausage roll. The teenager can go to the shop, fetch you a sausage roll and return with change.
See the distinction? The teenager gets some loose instructions & rules within which to operate, yet you still get a sausage roll at the end.
Jeffrey Snover, Microsoft Senior Technical Fellow (or maybe he’s higher in the Knights of Columbus-like Microsoft order), likened declarative programming to Captain Picard simply saying, “Make it so!” I was happy with that framework, but I think sausage rolls & children work better.
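If the sausage roll hasn’t done it for you, here’s a toy DSC configuration in that declarative spirit: you describe the end state and let the Local Configuration Manager work out the steps. Resource and path names are, of course, made up:

```powershell
# Declarative: "a file server feature and a drop folder exist" - not a list of steps to create them.
Configuration SausageRollShop {
    Node "localhost" {
        WindowsFeature FileServer {
            Name   = "FS-FileServer"
            Ensure = "Present"
        }
        File DropFolder {
            DestinationPath = "C:\SausageRolls"
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}
SausageRollShop                                                  # compile to a MOF
Start-DscConfiguration -Path .\SausageRollShop -Wait -Verbose    # "make it so"
```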
By the way, for Americans, a Sausage Roll looks like the image above and appears to be what I would think of as a Pig in a Blanket, with my midwestern & west coast American roots. How awesome is it that you can buy Pigs in a Blanket in UK shops?
In his famous essay The Myth of Sisyphus, French existentialist Albert Camus argued that though life is absurd & meaningless, man can achieve relative happiness by acknowledging the true nature of his existence, revolting against it, and enjoying his freedom.
Camus then discussed some examples of men who achieve happiness despite the absurdity, but the greatest Absurd Man of them all, Camus reckons, is Sisyphus, the Greek mythology figure who was doomed to pushing a rock up a mountain every day, only to have to repeat the same task the next day, on and on and on for eternity.
The trick to life, Camus famously said, is to “imagine Sisyphus happy.”
Sorry Camus, but that’s a load of bull. It really sucks being Sisyphus and there’s no way he’s happy pushing that boulder up the mountain day after day.
Especially if Sisyphus is mid-career IT guy on the Infrastructure side of the house. IT guys are supposed to hate repetitive tasks, and if we’re pushing boulders up the mountain again and again, it’s automatic #ITFail as far as I’m concerned. Button pushing monkey work drains the soul & harms the career.
So we automate the boulder push via a script or cron job or scheduled task and then we put some reporting & metrics around boulder performance, and then, just like that, IT Sisyphus can chill out at the bottom of the mountain and feel relatively happy & satisfied.
Yet the risk here for IT Sisyphus is that the care & feeding of the script or cron job becomes the new boulder.
And that’s where I’ve been at these last few difficult weeks at work. The time-saving techniques of yesteryear are the new boulder I’m pushing up the mountain everyday. I see the Absurdity in this, but no one wants to join me in a revolt; the organization is content with the new-boulder-same-as-the-old-boulder strategy.
But I’m not. I’m a Systems Engineer, I’m called to be more than a script-watching Systems Administrator. I’m supposed to hate boulder-pushing, but I aim higher.
I want to defeat gravity.
Sandboxes
As long as I’m pushing out five cent IT allegories & metaphors, I might as well mention this one.
Parent Partition has his Sandbox at home:
and soon, hopefully this weekend, the Child Partition will finally have his Sandbox as well:
This sandbox will increase Child Partition’s agility while relieving some of the strain on the Parent Partitions’ resource pool
I have a whole bunch of good blog posts on the warm burner, but lately my free time has gone to terraforming the side yard and extracting cartloads of dirt to create a large play area & sandbox for Child Partition.
You might think IT guys hate manual labor like this, but to be honest, getting outside and literally toiling in the soil with spades, hoes, rakes and my own sweat & muscle has been regenerative.
Ora et labora, certain Catholic monastic traditions say: pray and work.
Whatever your muse is, it’s good to occasionally step away from the keyboard and reflect on things while you struggle against the elements, as I have been for the last several weeks on Child Partition’s Sandbox. I’d like to blame the Supervisor Module Spouse and her tendency to move the goalposts, but really, this is my first landscape architecture project and to be frank, I’m not very good at it. I can’t even make the ground level.
But it’s still fun.
Nokia Reflections
Today, Microsoft’s purchase of Nokia closes. Nokia, as we know it, kind of ceases to exist. Or does it?
Paul Thurrott, ace Microsoft blogger/reporter/pundit at WinSuperSite.com, is worried. What makes Nokia special & interesting, he reckoned on Windows Weekly, is the fact that it’s Finnish, it’s old and, as Leo Laporte pointed out, Nokia owns its own supply chain & manufacturing force. From rare-earth mineral extraction that’s safe & socially responsible to device design & construction, Nokia is a classic, vertically-integrated device manufacturer. Everyone else uses Foxconn; Nokia is the exception.
And now they’re owned by Nadella & Microsoft.
As Microsoft ingests Nokia, what’s going to happen to our beloved Finnish phone maker?
I’m a longtime Nokia fan…most guys my age were exposed to smartphones in the late 90s early 2000s era. Some went for Blackberry and its legendary keyboard. Others went Palm & Treo with either PalmOS or Windows Mobile. I was always in the Nokia/Symbian S60 camp until Android arrived on scene.
And I miss it. I miss my old Nokia E51, its fast, secure, and unique S60 operating system, and yeah, sometimes I miss the keyboard. And it was made in Finland! By Finns.
So I hope Thurrott’s worries are misplaced, but I fear he might be right. Bland Pacific Northwest design sense & standardized Asian-outsourced product management are going to supplant the unique Finnish ethos that made Nokia, Nokia.
Lastly: today marks one week since I ditched Google, the Nexus 5 and went full Redmond with an Office 365 Enterprise E1 subscription + Nokia Lumia Icon running Windows Phone 8.1.
And I’m hooked. The ideal is closer today than it was a year ago, or even a month ago: agnostic computing. I loved Google for so long because I could get what I wanted on whatever device I had on me at the time; today I can do the same with Windows and it’s not so clunky.
Also, the Icon’s camera & optics are incredible. The Nokia camera software & effects produce some really stunning shots.
I finally have a phone with a camera that is superior to my wife’s iPhone 5. It’s great.
More on the transition to O365 next week; have a good weekend!
I’ve been pleased to hear a lot of talk and read a lot of thoughts lately about staying sharp & avoiding career rot in IT, even if it has raised more questions at times than it has answered.
Working Hard in IT (incidentally a great Hyper-V blog) kicked things off at the turn of the year with a nice post on IT workers and staying sharp enough throughout your career that you reach expert status, a place where you are “always in flow”, building for yourself a virtuous feedback loop of continuous improvement.
And then Richard Campbell of the famous RunAs Radio & .NET Rocks did an entire show on staying sharp in IT a few weeks back. He and Kim Tripp, a SQL Server MVP, blogger, and tech coach, talked up the value of IT training conferences. Tripp in particular advocated that mid-career IT pros send themselves to out-of-town, week-long IT training conferences. It has to be out of town so that you’re free from the distractions of home life and able to soak in the learning & wisdom in a full-immersion nerd camp.
Okay so I supplied that last bit of color. But Tripp is right: Whether you’re a SQL admin, a storage jockey or you dream of BGP & ipv6, there are some awesome bootcamp style, full-immersion conferences (not vendor conventions mind you!) you can go to to learn a lot relatively quickly. I’d call this the Shock ‘n Awe approach to healthy IT career maintenance.
But if that doesn’t sound like your cup of tea, there’s loads of other ways to learn formally. For about $2,500 you can take a course at any one of a dozen or so different technology training academies, perhaps even getting a nice certificate you can put on your resume.
Or you can subscribe to professionally-made video lessons for about $40 a month and learn at your own pace. I doubt you’ll get a certificate but certainly, that’s a great option if you’re mid-career, with a family, and not a lot of free time. And distance/web-learning is not the joke it once was.
For me this talk, while interesting and encouraging, hadn’t hit home until recently. Yes, I resolved to sharpen my skills following my 2013 computing flip-flop saga, but at work, we’re being given a rare chance at getting reimbursed or funded up-front for career-related education. Terrific!
But I’m finding it’s not an easy choice to make. It’s gone more or less like this:
“Hurray!!! Finally, the recession blues are over and companies are investing in their workforce again!”
And then:
“Now, give me a whiteboard, a good instructor, a switch, a router, and a few hours a week and I’ll show ipv6 what’s what. Today, link-local; tomorrow: jokes about carrier grade NAT and knowing what that actually means!”
and then, after quiet, sober reflection:
“Or maybe I should do some powershell scripting, because mastering powershell will empower me in so many areas. But wait, that’s too Microsoft-centric. VMWare certification sounds like a good idea. Or maybe I should get into a Java scripting class? Web is huge afterall.”
On the other hand, I need to introduce myself to JSON, take a RESTful course and lather up with some SOAP knowledge. REST, SOAP & JSON are the lingua franca of our internet of things, after all, we’ll be using them to provision <Insert IT resource that used to be physical and tied to geographic location but is now free & agile by virtue of Software Defined somethingorother> by Thanksgiving for sure!
“Then again, ipv6 could be a game changer.”
“What’s NoSQL?”
And on and on it goes: the alphabet soup of technology acronyms I want to familiarize myself with approaches ∞, but I have the bandwidth, time, and luxury to pick only one.
Cautionary tale: Sr. IT Engineer Rip van Winkle took a nap one day and woke up to find his datacenter in something called a “cloud,” his IT department unrecognizable, and his Excel skills severely lacking.
And that was before I recalled the cloud push and how it’s shaping IT as a career field.
What if the high-level engineering roles Working Hard in IT mentioned are declining in total numbers as well as percentages of a business’ IT staff payroll? What if, instead of needing skilled IT engineers, businesses decide (as many are) that monthly opex to AWS or Azure or OpenStack is a better way to spend their money?
What if -gasp- the IT department of today is getting disrupted out of existence?
In that case, the internal dialog goes like this:
Forget IPV6! Go big with ITIL or go home! Storage optimization, VM density, WAN performance…what is this, 2011? Businesses don’t want that, they don’t want a data center at all anymore. The smart bet for the future of IT is in process management, technical accounting, contract writing, service delivery, and SLA monitoring!
Suddenly picking something to study, something that sparks your interest, helps your career and also brings some value back to the business, isn’t such an easy choice, is it?
You want to stay sharp & agile, yet be aware enough about how market forces (animal spirits even!) are shaping IT & the business that you can adapt & respond to the new regime, whatever it is and will be in five years. Amen.
I’m mostly self-taught in IT, so maybe I’m biased that way, but I think there’s a lot to be said for building out a home lab where you can patch up some of the weak spots in your technology portfolio, perfect the stuff you’re already good at, play with the awesome new tech everyone is talking about, and have some fun too.
And unlike boot-camp style skills conferences, video subscriptions, or class room training, home labs don’t have to cost a lot, and you don’t need to leave your Child Partition and go on a road trip. The same technology that took your server room down from six racks of 2u servers to a few virtual machine hosts can help you along in your career: hypervisors!
Splunk is free to download and test out, for instance. Go deep into “machine data,” understand what it is, why Splunk has splashed, and maybe even put it on your resume under “Lab Experience.”
Or maybe you’re a network guy. In that case, go crazy with Mininet young man. Build a network from a local LAN switch to a pair of virtual routers doing BGP, then stress & break it, all inside your PC, at home.
Or study the effect high latency & 1% packet loss have on application delivery by standing up a WAN emulator VM, then putting WANem between a host & a ZFS-based storage VM on your iSCSI network. Because you can do that too!
And if the tech you want isn’t free, see if there’s a trial version. Or sweet talk your way past the sales guys into getting a free copy. Tech companies, after all, want people like you and me learning about their products and offerings. Use that to your advantage!
All you need is one capable, modern x86 box with 12GB-16GB of RAM and a few hundred gigabytes of storage space. Windows 8.1 Professional ($200) has Hyper-V built right in, and it’s capable enough to be your home lab & home PC. But if you don’t like Windows, Ubuntu is a free download and KVM is an easy install. You don’t need to install a 12u rack in your garage like some people.
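Getting that Windows 8.1 box lab-ready is about three elevated PowerShell lines. A sketch with invented adapter, path and VM names:

```powershell
# Turn on Client Hyper-V, create an external switch, and carve out the first lab VM.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All      # reboot required
New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
New-VM -Name "DC01" -MemoryStartupBytes 2GB -NewVHDPath "D:\Lab\DC01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "LabSwitch"
```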
And yes, for the skeptical, home IT labs are a thing, not just something crazy people do. Check out TinkerTry IT @ Home, or ServeTheHome.com, for instance. It’s perhaps even a movement, a response, if you will, to the “Cloud Disruption” story VC firms are fond of pushing.
Approach your home lab as real & vital infrastructure. And put some goals and structures into your lab environment: this month I’m going to learn X, and next month Y and stick with it. Record your progress, make a checklist. BUILD.
Of course a home lab probably won’t help you along towards ITIL certification. If that’s the skillset IT pros will need to develop, perhaps classroom training is best. Or maybe you should just get an MBA. I’m not certain. But having a good attitude & trying to understand how business is evolving will take you a long way, no matter what the IT department evolves into.