Buying a car is just like buying storage

Supermicro...the king of all storage disruptors
Would you like some underbody rust protection with your array sir?

So the family car (a tiny 2012 Mazda 3) lease is up in February which means it’s time to get a new Agnosto-ride for the Supe Module spouse, the Child Partition and -like dads everywhere know- all the heavy, awkwardly-shaped stuff that’s required to go everywhere the Child Partition goes.

It’s 2015, I’m nearing 40 and so I’m thinking Agnosto-ride 2.0 will be something bigger, safer, and because gas is so cheap and will never, ever, ever go up again, suitably powerful & commanding. Something established, something that says “Look upon and fear me,”  yet is soft, friendly and maneuverable enough that my wife and I can park it without effort.

Or hell, maybe it can park itself.

That’s right. Time to go car shopping, baby.

I love shopping for cars, almost as much as I love shopping for storage arrays. When you step back and think about it, the two industries (cars & storage arrays) are so similar I’m convinced a skilled salesman could make a great living selling cars in the morning and slinging shelves in the afternoon. ((Or perhaps NetApp could merge with Ford and the same guy who sells you a Taurus could sell you a filer out of the same dealership))

Think about it. Glen works for a dealer selling Camrys in the morning, and he’s really good at bumping his commission up by convincing his mark to buy something that really should be included: a spare tire. By late afternoon, he’s pitching the exact same thing (High Availability via Active/Passive controllers) in expensive recurring license form to some poor storage schlub who just needs a few more TBs so he can sleep at night without worrying about his backups.

What’s more, the customer victim can’t just go and purchase the car/array from the manufacturer himself,  he’s got to have some value added to that transaction by way of a VAR or a dealer, you see, else what reason is there for Glen?  The customer must have Glen’s guidance; he literally is incapable of picking the right car or array for himself, even if the mark produces his own storage podcast or subscribes to Auto Week & Consumer Reports. The mark’s hands are held until such time that he selects the right car/array, which is always either the car/array closest to Glen, or the car/array that offers Glen’s employer the most margin.

For this is the way of things, except during quarter or year end.

And in both industries, the true cost of the product is either really hard to find or it’s been hidden in plain sight, or it only applies in certain use cases, all of which makes determining a car/array’s value very hard to quantify. Yes, you can take all the variables, drop them in Excel, but pivot tables only go so far: the electric gets you an invaluable HOV sticker for 2x the cost of the range-anxiety-free hybrid, while the all-flash array that dedupes & compresses inline and goes like a bat out of hell costs twice as much as the compress-only hybrid array which has honest-to-God cheap ‘n deeps that you know and trust.

Lastly, no buyer of metal boxes with rotating round things ((usually)) is as biased & opinionated as car & storage buyers. “You’ll regret that POS Kia in a few years, it’ll let you down!” says the Honda snob to the dad trying to save a buck or two. “No one ever got fired for buying EMC!” shouts the storage traditionalist at his colleague who just wants a bunch of disks & software.

And in the end, all this …analysis, if you can call it that… is utterly worthless if your family doesn’t like the way the car handles or your DBA can’t quite grasp the concept of mounting a cloned snapshot of his prod LUN and insists on doing SQL backups the way he learned to do them in 19-diggity-7.

Don’t hate the player, hate the game, Jeff you’re thinking.

But I don’t! I love the player and the game. I just like winning and if that means Glen loses a point or two on his commission, so be it.

Which is why before I buy a car or a storage array, I arm myself as best I can. In the case of storage, it’s imperfect spreadsheets with complex formulas, some Greybeards on Storage, some SQLIO & IOMETER, and some caffeine. In the case of cars, it’s perfect spreadsheets + Clark Howard + myFico.com credit report ((Incidentally, it won’t be Myfico.com this time around since Fair Isaac apparently refuses to encrypt their entire site like a real bank would

For Shame Fair Isaac
For Shame, Fair, Fair Isaac, if that is your real name

)) + bank check just to let Glen know that I’m the real deal, that I could bolt and buy that other car he’s trash-talking if he doesn’t toss in the spare tire gratis.

Game on. Time to go hunting!

System Center is Dead, Long Live System Center?

Change is afoot for System Center, Microsoft’s stack of enterprise technology management applications that guys like me install, use, manage, and build great careers on top of. And not just little change. Big, sweeping change, I’m convinced, thanks largely to Satya Nadella, but also thanks to a new & healthy culture of pragmatism inside Microsoft.

But that pragmatic culture began with a bit of fear & intimidation for the System Center team. I’m told by a source ((Not really)) that it went down like this: Nadella strolled over to the office building where System Center is built by segregated development teams. I’m told that the ConfigMan & VMM teams, as creators of the most popular programs in the suite, get corner offices with views of the Cascades, while the Service Manager & DPM teams fight over cubes in the interior.

Anyway, Nadella walked in one day, called them all around a handsome, gigantic, rectangular redwood work table in the center of their space. He looked at each of them quietly, then -with a roar that’s becoming legendary throughout the greater Seattle metroplex- he bent over and with enormous strength, flipped the table on its side, spilling coffee, laptops, management packs, DPM replicas, System Center Visio shapes and the pride/pain of so many onto the cold, grey marble floor.

“Some of this is going to stay. And some of it’s going to go,” he said to them, motioning to the mess on the floor.

And then, he vanished, like a ninja.

But seriously, look at all the change happening at Microsoft. Surely the System Center we love/hate/want to name our kid after is not going to escape 2015 without some serious, deep, and heartbreaking/joy-inducing change, depending on your perspective. It’s already happening. To wit:

  • Parts of System Center are dead as of Windows Server Technical Preview: App Controller, the self-service Silverlight & HTTP front-end to VMM, has been dropped from System Center Technical Preview. Farewell, oddly-named App Controller, can’t say I’ll miss you. In its place? Azure Pack, baby.
  • In the last 45 days, the whole System Center team has been busy begging and pleading with us to give them some feedback. VMM put up a Survey Monkey, and the DPM, Orchestrator, and Service Manager blogs all have been asking readers to give them more feedback. VMM even has a Customer Panel whose purpose is to take the pulse of working virtualization stiffs like me. That’s awesome -and reflects the broader changes in the company- but it’s also a bit scary because I love my VMM & ConfigMan and I’m not used to being asked what I think of it, I’m used to just taking it, warts and all. ((Since they asked, I’m running SCVMM Technical Preview in the lab at home and though its changes mostly amount to removal of features in the production version, I view it as a great advancement for one reason: I can now automate the re-naming of vNICs through VMM itself, rather than some obscure netsh command/batch file thingy. Awesome))
  • There are many Configuration Management products out there, but ConfigMan is mine, and it has remained suspiciously absent from System Center Technical Preview. Now I’m not suggesting that MS is going to kill off the crown jewel of its System Center suite, but crazier things have happened. Jeffrey Snover, father of Powershell, isn’t giving up on his Desired State Configuration cmdlets, the DSC sect within the Microsoft professional community is gaining influence & strutting about the datacenter floor with some swagger, and DSC is a tool that with some maturity could largely make ConfigMan unnecessary in many environments. It probably scales to Azure better, though it doesn’t have anything in MDM as far as I know.
  • Though much improved, SCOM still strikes me as too hard to build out compared to Monitoring-as-a-Service offerings like New Relic. Granted, SCOM’s cloud story was pretty strong; just two months ago I got a taste of #MonitoringGlory when I piped an endless train of SCOM alerts/events directly into Azure Operational Insights and got, well, some insight into my stack. But guess what, SCOM fans? You no longer need SCOM for that. Ok then. Why would I use it?
  • There are no sacred cows at Microsoft anymore: My precious Lync? Gone, renamed Skype for Business. The Start Screen, which I was strangely beginning to like? Axed, and I’m suffering Stockholm Syndrome as I play with the latest Windows 10 build. SharePoint Online public-facing websites? Starting March 9, new customers won’t have to go through the crucible some of us have gone through to stand up a dynamic corporate website back-ended by SharePoint in Office 365. They get to go through someone else’s crucible, like Drupal or something.
  • Nadella has a talent for picking the obvious, and he’s clear: Apparently it was Nadella who told the Microsoft Holo Lens team that what they were building was more akin to the Enterprise’s Holodeck than a new way to play shooters in XBox Online. It’s been Nadella repeating the call that there should be One Windows across all products, not an RT here, and a Windows Phone there. Like him or not, the man has some clarity on where he wants Microsoft to be; and I think that’s exactly what MS needed.
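
Since I name-checked DSC above, here’s a minimal sketch of what a Desired State Configuration document looks like (the node name and the resources chosen are my own hypothetical lab example, not anything shipped by the ConfigMan team), which hints at why the DSC crowd is strutting:

```powershell
# Hypothetical, minimal DSC sketch -- declare the end state,
# let the Local Configuration Manager converge the node toward it.
Configuration LabBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'HV01' {                       # hypothetical lab host
        WindowsFeature HyperV {
            Name   = 'Hyper-V'
            Ensure = 'Present'          # install it if it's missing
        }
        Service WinRM {
            Name        = 'WinRM'
            State       = 'Running'
            StartupType = 'Automatic'
        }
    }
}

# Compiling the configuration emits a MOF file the LCM enforces:
# LabBaseline -OutputPath C:\DSC
```

No agents to push, no packages to sequence: you describe the state and walk away. That’s the part that could make ConfigMan feel heavyweight in smaller shops.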

So, I have no evidence that System Center is going to get all shook up in 2015 -and I mean seriously shaken up- but it seems pretty obvious to me that with Nadella came a healthy & powerful introspection that’s really bearing some fruit in parts of Microsoft’s business.

Now it’s System Center’s turn. And it’s good. We should look at that suite holistically, in the context of our time & the marketplace. Parts of it are undoubtedly great & market-leading; other parts of it are, in my opinion, beyond fixing. The former will be strengthened, the latter will be cut off and discarded. System Center, whether it lives on or gets swallowed up by the Azure Pack, will get better, and I’m pumped about that!

Live from #VFD4

It’s a little after 5am here in the capital of the great state of Texas and finally, after a furious #VFD4 day one, I’ve a chance to catch up and blog about my second Tech Field Day experience.

I’ll get around to posting about the vendors & the products I’m learning about, but first I’d like to impress upon the reader briefly how crazy this Tech Field Day thing is. It’s something of a unique beast, difficult to explain to outsiders and families, two parts professional conference, one part trade show, all parts fun.

Gestalt IT & Organizers

Stephen Foskett is a storage guy who for many years penned an influential storage column in one of those old snail mail periodicals we called magazines. Foskett dresses well (seriously, I half expect him to show up with an ascot most days), works out of Ohio but hails from Connecticut, likes fine watches, and is sort of the head chef in charge of defining how this crazy, multi-layered, complicated enchilada called Tech Field Day is going to taste. The man has his detractors I understand, but all men and women with vision & drive do. He’s been doing this Tech Field Day thing for about 5 years, is confident it’ll be around for another five, and is a good man to know in an industry (IT) that’s changing & shifting.

Tom Hollingsworth is a bonafide Cisco Certified Internetwork Expert (CCIE, not sure how many digits or if he tattooed them anywhere), calls Oklahoma home, and like all good networking guys, routes, switches and load-balances snark in addition to frames & packets. He’s genuine, quick-thinking, funny and brutally honest with vendors & delegates alike. He tells good stories and is deeply engaged on the state of the art (cf IPv6 debate at #VFD3).

Claire Chaplais isn’t with us at #VFD4, but her talent for bringing order to chaos is in evidence everywhere. I don’t know that much about her or her background, but I know that in addition to being the organizational brains of the operation, Claire brings balance to the troika.

That’s it. That’s the Gestalt IT organization: a former storage columnist who presents well, an OK CCIE, and Claire. Two technical guys and one sharp organizational doer working across geographies and delivering a product, Tech Field Day, that brings me to Austin today.

The Vendors & the Delegates

This, then, is the kernel of Tech Field Day, its raison d’être, its pitch or value prop if you will forgive my use of that abused term. Gestalt connects technology vendors -established, startups, mid-life etc- to influential IT practitioners who blog about technology. The vendors fund this thing and pay for our accommodations, travel, and all food, schwag, spirits & the venue. They bring their show to us, or we go to them in their workplace over the space of 72 hours.

Delegates, to invoke the great Tom Smykowski of Office Space, interface with the goddamned Vendors so that you can have an informed perspective on their product, its position among competing products, and its value. Sometimes these presentations are amazing & informative, full of #WhiteboardingGlory, and sometimes they suck… it’s a coin flip.

Delegates receive no compensation for this trip, and some of us, including me this time around, are losing income to serve as Delegates. We’re encouraged to write our views, but not forced, and no one approves or reviews content before I hit the big blue publish button in WordPress.

All this is choreographed, packaged, and produced into a frenetic 72-hour span, and Gestalt makes it work. Nearly 20 flights converged on AUS from all points of the globe in the space of just a few hours Tuesday; yesterday we heard from startups like Platform9 and saw #VFD3 friend & alum Eric Wright, who now works for VMTurbo; we’ll hear from Solarwinds, Commvault and StorMagic today before closing things up tomorrow with Scale and Dell, as establishment a player as there is.

Knowing your place

I relish Tech Field Day. It’s fun for me as I know my skillset and the environments I excel in. I practice IT in small to medium enterprises, organizations with 500-2000 employees, wide geographic footprints, usually private but sometimes public, places where a Converged IT Guy can touch a lot of things and have an outsized impact. I love fast-paced IT Shops, am not a fan of ITIL, and I’m DevOps-curious. My solutions are probably not a good fit for a 10,000 seat enterprise, and may be too complicated for really small IT shops.

There are Delegates here who are like me, but many are not. Some are rockstars who author respected technical books, and the string of certs behind their names is truly impressive. I’m more of a Generalist whose passions were lit up by virtualization, cloud, and rationalizing the stack in my space.

In the context of Virtualization Field Day, I’m again the only Hyper-V & System Center guy in a sea of sharp VMware experts. When the VMware Delegates say SRM, I think Failover Clustering & Azure Site Recovery. They vMotion, I Live Migrate, they moan about vSphere web client and I bitch about SCOM.

And we all complain about storage.

The Vendors build their products for VMware first and foremost and that is a reflection of the marketplace reality.  Yet as a Hyper-V & System Center guy, I still get a lot out of these presentations, understanding how the products are positioned and how colleagues solve some of the same problems I face.

And I’m just arrogant & confident enough that I don’t mind nagging and pushing vendors to support Hyper-V/System Center even as they’re excited to tell me about VMware solutions.

Maybe it’s hubris but I like to think I’m representing a bunch of IT guys and gals in the real world who are building and supporting durable infrastructure systems in smaller environments like the ones I come from and usually on Microsoft technologies. Hopefully they get something out of the blogs I’ll be posting over the next few days.

Should have used FQDN in your malware, North Korea

Bad technology habits are universal, even among the strange and isolated yet apparently elite hacker dev community of North Korea.

From the FBI statement this morning assigning blame for the Sony hack directly on the hermit kingdom:

  • Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.
  • The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.

Devs can be really lazy, hardcoding an IP address where they should put an FQDN, though I suppose for their purposes, North Korea didn’t really care to cover their tracks (perhaps by pointing the A record at someone else).
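
To make the point concrete, here’s a toy Python sketch (hostnames and addresses are invented, drawn from the RFC 5737 documentation ranges) of the indirection an FQDN buys you and a hardcoded IP doesn’t:

```python
# Toy illustration: a hostname is a layer of indirection you control
# after the code ships; a hardcoded IP is burned in forever.
# (Names and addresses here are invented for illustration.)

dns_zone = {"c2.example.org": "203.0.113.10"}  # the operator's A record

def connect_by_fqdn(zone):
    # Resolved at runtime -- repoint the A record and every old
    # binary in the field quietly follows.
    return zone["c2.example.org"]

def connect_by_hardcoded_ip():
    # Compiled into the binary -- every sample forever points at
    # infrastructure an investigator can catalog and attribute.
    return "198.51.100.77"

# The operator "moves" with a one-line DNS change:
dns_zone["c2.example.org"] = "203.0.113.99"
assert connect_by_fqdn(dns_zone) == "203.0.113.99"
assert connect_by_hardcoded_ip() == "198.51.100.77"  # unchanged, attributable
```

Which is exactly why those hardcoded addresses made such tidy evidence for the FBI.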

All kidding aside, this is really going to shake things up in IT environments small and large. I’m not sure if this is the first State-sponsored cyberattack on a private corporation on another nation’s soil, but it’s going to be the first one widely remembered.

Time to start implementing that which was once considered exotic and too burdensome: encrypting your data even when it’s at rest on the SAN’s spindles; off-lining your CA, encrypting its contents, and storing it on a USB stick inside a safe; governance procedures & paper-based chain-of-custody forms for your organization’s private keys.

Assume breach, in other words.

Nimble Storage now integrates with System Center VMM

Just as I was wrapping up my time at my last employer, Nimble Storage delivered a great big Christmas gift, seemingly prepared just for me. It was a gift that brought a bit of joy to my blackened, wounded heart, which has suffered so much at the hands of storage vendors in years gone by.

What was this amazing gift that warmed my soul in the bleak, cold Southern California winter? Something called SMI-S, or Smizz as I think of it. SMI-S is an open standard management framework for storage. But before I get into that, some background.

You may recall Nimble Storage from such posts as “#StorageGlory at 30,000 IOPS,” and “Nimble Storage Review: 30 Days at Ludicrous Speed.” It’s fair to say I’m a fan of Nimble, having deployed two of their mid-level arrays this year into separate production datacenter environments I was responsible for as an employee, not as a consultant. From designing the storage network & virtualization components, to racking & stacking the Nimble, to entrusting it with my VMs, my SQL volumes, and Exchange, I got to see and experience the whole product, warts and all, and came away damned impressed with its time-to-deploy, its flexibility, snapshotting, and speed.

But one of the warts really stood out, festered, itched and nagged at me. While there has been support for VMware infrastructure inside a Nimble array since day one, there was no integration or support for Microsoft’s System Center Virtual Machine Manager, or VMM as us ‘softies call it. What’s a Hyper-V & System Center fanboy to do?

Enter SMI-S, the Storage Management Initiative – Specification,

Connecting green blobs to other green blobs, SMI-S is now in release candidate form for your Nimble

a somewhat awkwardly-named but comprehensive storage management spec allowing you to provision/destroy volumes, create snapshots or clones, and classify your tiers via 3rd party tools, just the way $Deity intended it.

SMI-S is a product of the Storage Networking Industry Association and there’s a ton of in-depth, technical PDFs up on their site, but what you need to know is the specification has been maturing for a decade or longer, and it’s been adopted by a modest but growing number of storage vendors. The big blue N has it, for instance, as does HP and Hitachi Data Systems.

The neat thing about SMI-S is that it’s built atop yet another open management model, the Common Information Model, which, as MS engineers know, is baked right into Windows Server (both as a listener and provider).

And that has made all the difference.

I love SMI-S and CIM (as well as WBEM) because it’s a great example of agnostic computing theory working out to my benefit in practice. SMI-S and CIM are open-standards that save time, money & complexity, abstracting (in this case) the particulars of your storage array and giving you the freedom to purchase & manage multiple different arrays from one software interface, System Center via that other great agnostic system, https.
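
A rough sketch of what that abstraction looks like in code -- the class names below are illustrative, not any vendor’s actual SDK -- one client speaking a standard interface, many conforming providers behind it:

```python
from abc import ABC, abstractmethod

# Rough sketch of what a standard like SMI-S buys you: the management
# client codes against one interface, and each vendor ships a
# conforming provider. (Illustrative only, not a real SDK.)

class StorageProvider(ABC):
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

class NimbleProvider(StorageProvider):
    def create_volume(self, name, size_gb):
        return f"nimble:{name}:{size_gb}GB"

class NetAppProvider(StorageProvider):
    def create_volume(self, name, size_gb):
        return f"netapp:{name}:{size_gb}GB"

def provision(provider: StorageProvider, name, size_gb):
    # VMM plays this role: it speaks SMI-S, not Nimble-ese or NetApp-ese.
    return provider.create_volume(name, size_gb)

for p in (NimbleProvider(), NetAppProvider()):
    print(provision(p, "sql-data", 500))
```

Swap the array, keep the tooling. That’s the whole value proposition in fifteen lines.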

Or, to put it another way, SMI-S and CIM help keep your butt where it should be, in your chair, doing great IT engineering work, not in the CIO’s office meekly asking, “Please sir, may I have another storage system API license?”

Single Pane o’ glass in VMM with SMI-S for the Hyper-V set ***

Fantastic. No proprietary or secret or expensive API here, no extra licensing costs on the compute side, no new SKUs, no gotchas.*

And now Nimble Storage has it.

Nimble’s implementation of SMI-S is based on the Open Pegasus project**, the Linux/Unix world’s implementation of CIM/WBEM. All Nimble had to do to make me feel happy & warm inside was download the tarball, make it, and stuff it into NimbleOS version 2.2, which is the release candidate OS posted last week.

For IT organizations looking to reduce complexity & consolidate vendors, a Nimble Array that can be managed via System Center is a good play. For Nimble, that may only be a small slice of the market, but in that slice and among IT pros who focus on value-engineering just as much as they focus on convergence, System Center support enhances the Nimble story and puts them in league with the bigger, more established players, like the big blue N.

Which is just where they want to be, it appears.

Nimble’s on a roll and closing out 2014 strong, with Fibre Channel support, new all-flash shelves, faster models, a more mature OS (in fact, I believe it’s mostly re-written from the 1.4x days), stable DSMs for my Microsoft servers, and now, like icing on the cake, an agnostic standards-based management layer that plugs right into my System Center.

* Well, one gotcha. As the release notes say: “Note: SCVMM can only discover volumes that have the agent_type smis attribute. When logical units are created using SCVMM, the SMI-S provider ensures the agent_type smis attribute is added to the volumes. However, volumes created from the array do not automatically have the attribute. You must add the attribute when you create the volume; otherwise, SCVMM will not be able to discover it. For more information about the agent_type smis attribute, see Create a Starter Volume.” So existing volumes won’t show in your VMM, but it’s not too big of a headache as you can storage live migrate your VMs to volumes you’ve provisioned via VMM.

Also, as a footnote, I believe NetApp charges for SMI-S support. 
** Open Pegasus is itself affiliated with the Open Group, an unsexy but in my view exciting & important IT standards organization that 1) is legit as the official certifying body of the UNIX trademark, 2) is not ITIL-affiliated as best I can tell and 3) aligns very well with Microsoft’s servers & systems. SMI-S is just one piece of the puzzle; another is instrumentation & other infrastructure items. To that end, the Open Group oversees work being done on Open Management Infrastructure, which Microsoft supports and can utilize via WSMAN and WMI. Cisco, Arista and others are on board with this, and though I haven’t yet programmed a Nexus switch with Powershell, it is a real option and offers a compelling vision for infrastructurists like me: best-in-class storage, network, compute hardware, all managed & instrumented via System Center or whatever https front-end is suitable. Jeff Snover detailed the relationship over two years ago in this blog.
*** Incidentally, without SMI-S & CIM, there’d be no way for me to build a simulation SAN in the Daisetta Lab (#StorageGlory Achieved : 30 Days on a Windows SAN) and manage it via VMM, but as I detailed earlier this summer, you can: stand up a Windows file server box, turn on the feature “Standards Based Storage Management,” point VMM at it, and provision.

Hyper-V + VXLAN and more from Tech Ed Europe

If you thought -as I admittedly did- that on-prem Windows Server was being left for dead on the side of the Azure road, then boy were we wrong.

Not sure where to start here, but some incredible announcements from Microsoft in Barcelona, most of which I got from Windows Server MVP reporter Aidan Finn.

Among them:

  • VXLAN, NVGRE & Network Controller, courtesy of Azure: This is something I’ve hoped for in the next version of Windows Server: a more compelling SDN story, something more than Network Function Virtualization & NVGRE encapsulation. If bringing some of the best -and widely supported- bits of the VMware ecosystem to on-prem Hyper-V & System Center isn’t a virtualization engineer’s wet dream, I don’t know what is.
  • VMware meet Azure Site Recovery: Coming soon to a datacenter near you: fail over your VMware infrastructure via Azure Site Recovery, the same way Hyper-V shops can.

    Not sure what to do with this yet, but gimme!
  • In-place/rolling upgrades for Hyper-V Clusters: This feature was announced with the release of Windows Server Technical Preview (of course, I only read about it after I wiped out my lab 2012 R2 cluster) but there’s a lot more detail on it from TechEd via Finn: rebuild physical nodes without evicting them first. You keep the same Cluster Name Object, simply live migrating your VMs off your targeted hosts. Killer.
  • Single cluster node failure: In the old days, I used to lose sleep over clusres.dll, or clussvc.exe, two important pieces in Microsoft Clustering technology. Sure, your VMs will failover & restart on a new host, but that’s no fun. Ben Armstrong demonstrated how vNext handles node failure by killing the cluster service live during his presentation. Finn says the VMs didn’t fail over, but the host was isolated by the other nodes and the cluster simply paused and waited for the node to recover (up to 4 minutes). Awesome!
  • Azure Witness: Also for clustering fans who are torn (as I am) between selecting file or disk witness for clusters: you will soon be able to add mighty Azure as a witness to your on-prem cluster. Split brain fears no more!
  • More enhancements for Storage QoS: Ensure that your tenant doesn’t rob IOPS from everyone else.
  • The Windows SAN, for real: Yes, we can soon do offsite block-level replication from our on-prem Tiered Storage Spaces servers.
  • New System Center coming next year: So much to unpack here, but I’ll keep it brief. You may love System Center, you may hate it, but it’s not dead. I’m a fan of the big two: VMM, and ConfigMan. OpsMan I’ve had a love/hate relationship with. Well, the news out of TechEd Europe is that System Center is still alive, but more integration with Azure + a substantial new release will debut next summer. So the VMM Technical Preview I’m running in the Daisetta Lab (which installs to C:\Program Files\VMM 2012 R2 btw) is not the VMM I was looking for.
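
As an aside, the VXLAN encapsulation mentioned above is refreshingly simple at the wire level: per RFC 7348, it’s an 8-byte header riding inside a UDP datagram, carrying a 24-bit Virtual Network Identifier. A quick Python sketch of packing and unpacking that header:

```python
import struct

# VXLAN header per RFC 7348: 8 bytes inside a UDP datagram --
# 1 flags byte (0x08 = "VNI present"), 3 reserved bytes,
# a 24-bit Virtual Network Identifier, then 1 more reserved byte.

def pack_vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!II", 0x08000000, vni << 8)

def unpack_vni(header: bytes) -> int:
    flags, vni_field = struct.unpack("!II", header)
    if not flags & 0x08000000:
        raise ValueError("VNI-present flag not set")
    return vni_field >> 8

hdr = pack_vxlan_header(5001)
assert len(hdr) == 8
assert unpack_vni(hdr) == 5001
```

Sixteen million virtual networks from one little field -- that’s the whole trick, and it’s why every overlay vendor (and now Microsoft) speaks it.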

Other incredible announcements:

  • Docker, CoreOS & Azure: Integration of the market-leading container technology with Azure is apparently further along than I believed. A demo was shown that hurts my brain to think about: Azure + Docker + CoreOS, the Linux OS that has two OS partitions and is fault-tolerant. Wow.
  • Enhancements to Rights Management Service: Stop users from CTRL-Cing/CTRL-Ving your company’s data to Twitter
  • Audiocodes announces an on-prem device that appears to bring us one step closer to the dream: Lync for voice, O365 for the PBX, all switched out to the PSTN. I said one step closer!
  • Azure Operational Insights: I’m a fan of the Splunk model (point your firehose of data/logs/events at a server, and let it make sense of it) and it appears Azure Operational Insights is a product that will jump into that space. Screen cap from Finn

This is really exciting stuff.

Commentary

Looking back on the last few years in Microsoft’s history, one thing stands out: the painful change from the old Server 2008R2 model to the new 2012 model was worth it. All of the things I’ve raved about on this blog in Hyper-V (converged network, storage spaces etc) were just teasers -but also important architectural elements- that made the things we see announced today possible.

The overhaul* of Windows Server is paying huge dividends for Microsoft and for IT pros who can adapt & master it. Exciting times.

* unlike the Windows mobile > Windows Phone transition, which was not worth it

More than good hygiene : applying a proper cert to my Nimble array

So one of my main complaints about implementing a cost-effective Nimble Storage array at my last job was this:

Who is Jetty Mortbay and why does he want inside my root CA store?

I remarked back in April about this unfortunate problem in a post about an otherwise-flawless & easy Nimble implementation:

The SSL cert situation is embarrassing and I’m glad my former boss hasn’t seen it. Namely, that situation is this: you can’t replace the stock cert, which, frankly, looks like something I would do while tooling around with OpenSSL in the lab.

I understand this is fixed in the new 2.x OS version but holy shit what a fail.

Well, fail-file no more,  because my new Nimble array at my current job has been measured and validated by the CA Gods:

verified
Green padlocks. I want green padlocks everywhere

Oh yeah baby. Validated in Chrome, Firefox and IE. And it only cost me market rates for a SAN certificate from a respected CA, a few hours back ‘n forth with Nimble, and only a few IT MacGyver-style tricks to get this outcome.
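
If you want the same green padlocks, the legwork boils down to generating a key and a CSR whose subjectAltName matches the array’s management FQDN, then handing the CSR to your CA. A sketch with openssl (the hostname is a placeholder for your array’s real name; -addext needs OpenSSL 1.1.1 or newer):

```shell
# Generate a private key and CSR for the array's management FQDN.
# (array01.corp.example.com is a placeholder -- use your array's name.)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout array01.key -out array01.csr \
  -subj "/CN=array01.corp.example.com" \
  -addext "subjectAltName=DNS:array01.corp.example.com"

# Sanity-check the request before sending it off to the CA:
openssl req -in array01.csr -noout -subject
```

The browser warnings go away only if the name in the cert matches the name you browse to, which is one more reason to manage the array by FQDN, not IP.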

Now look. I know some of my readers are probably seeing this and thinking…”But that proves nothing. A false sense of security you have.”

Maybe you’re right, but consider.

I take a sort of Broken Windows Theory approach to IT. The Broken Windows Theory, if you’re not familiar with it, states that:

Under the broken windows theory, an ordered and clean environment – one which is maintained – sends the signal that the area is monitored and that criminal behavior will not be tolerated. Conversely, a disordered environment – one which is not maintained (broken windows, graffiti, excessive litter) – sends the signal that the area is not monitored and that one can engage in criminal behavior with little risk of detection.

Now I’m not saying that adding a proper certificate to my behind-the-firewall Nimble array so that Chrome shows me Green Padlocks rather than scary warnings is akin to reducing violent crime in urban areas. But I am saying that little details, such as these, ought to be considered and fixed in your environment.

Why? Well, fixing even little things like this amounts to something more than just good hygiene, something more than just ‘best practice.’

Ultimately, we infrastructurists are what we build, are we not? Even little ‘security theater’ elements like the one above are a reflection of our attention to detail, a validation of our ability not only to design a resilient infrastructure on paper at the macro level, but to execute on that design at the micro level.

It also shows we’re not lazy, and that we care enough to repair the ‘broken windows’ in our environment.

And besides: Google (and Microsoft & Mozilla & Apple) are right to call out untrusted certificates in increasingly disruptive & work-impairing ways.

*If you’re reading this and saying, “Why don’t you just access the array via IP address?” well: GoFQDNorGoHomeSon.com

Containers! For Windows! Courtesy of Docker


Big news yesterday for fans of agnostic cloud/on-prem computing.

Docker, the application virtualization stack that’s caught on like wildfire among the *nix set, is coming to Windows.

Yeah baby.

Mary Jo with the details:

Under the terms of the agreement announced today, the Docker Engine open source runtime for building, running and orchestrating containers will work with the next version of Windows Server. The Docker Engine for Windows Server will be developed as a Docker open source project, with Microsoft participating as an active community member. Docker Engine images for Windows Server will be available in the Docker Hub. The Docker Hub will also be integrated directly into Azure so that it is accessible through the Azure Management Portal and Azure Gallery. Microsoft also will be contributing to Docker’s open orchestration application programming interfaces (APIs).

When I first heard the news, my emotions were mixed.

On the one hand, I love it. Virtualization of all flavors -OS, storage, network, and application- is where I want to be, as a blogger, at home in my lab, and professionally.

Yet, as a Windows guy (I dabble, of course), Docker was just a bit out of reach for me, even with my lab, which is 100% Windows.

On the other hand, I also remembered how dreadful it used to be to run Linux applications on Windows. Installing GTK+ libraries on Windows isn’t fun, and the end result often isn’t very attractive. In my world, keeping the two separate on the application & OS side, and uniting them via Kerberos and/or HTTPS/REST, has always been my preference.

But that’s old world thinking, ladies and gentlemen.

Because you see, this announcement from Microsoft & Docker Inc sounds deep, rich, and functional. Microsoft’s going to contribute some of its Server code to the Docker folks, and the Docker crew will help build Container tech into Windows Server and Azure. I’m hopeful Docker will just be another Role in Server, and that Jeffrey Snover’s PowerShell cmdlets will hook deep into the Docker stuff.

This probably marks the death of App-V, which I wrote about in comparison to Docker just last month, but that’s fine with me.

Docker on Windows marks a giant step forward for Agnostic Computing…do we dare imagine a future in which our application stacks are portable? Today I’m running an application in a Docker Container on Azure, and tomorrow I move it to AWS?
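If the partnership delivers, the unit of portability would be something like the humble Dockerfile, which describes an application image independently of where it runs. A hypothetical, minimal sketch (the base image, package, and app path are illustrative placeholders, and Windows base images didn’t exist yet at the time of this announcement):

```dockerfile
# Hypothetical minimal application image. Base image, package,
# and app path are placeholders, not a real published stack.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app/app.py
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
```

The same image definition builds and runs wherever a Docker Engine lives, which is the whole point: the hosting environment, Azure or AWS or on-prem, becomes an implementation detail.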

Microsoft says that’s exactly the vision:

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Full announcement.

Microsoft releases new V2V and P2V tool

Do you smell what I smell?

Inhale it boys and girls because what you smell is the sweet aroma of VMware VMs being removed from the vSphere collective and placed into System Center & Hyper-V’s warm embrace.

Microsoft has released version three of its V2V and P2V assimilator tool:

Today we are releasing the Microsoft Virtual Machine Converter (MVMC) 3.0, a supported, freely available solution for converting VMware-based virtual machines and virtual disks to Hyper-V-based virtual machines and virtual hard disks (VHDs).

With the latest release, MVMC 3.0 adds the ability to convert a physical computer running Windows Server 2008 or above, or Windows Vista or above to a virtual machine running on a Hyper-V host (P2V).

This new functionality adds to existing features available including:

• Native Windows PowerShell capability that enables scripting and integration into IT automation workflows.
• Conversion and provisioning of Linux-based guest operating systems from VMware hosts to Hyper-V hosts.
• Conversion of offline virtual machines.
• Conversion of virtual machines from VMware vSphere 5.5, VMware vSphere 5.1, and VMware vSphere 4.1 hosts to Hyper-V virtual machines.

Download available here.

This couldn’t have come at a better time for me. At work (which is keeping me so busy I’ve been neglecting these august pages), my new Hyper-V cluster went into production in mid-September and has been running very well indeed.

But building a durable & performance-oriented virtualization platform for a small to medium enterprise is only 1/10th of the battle.

If I were a consultant, I’d have finished my job weeks ago, saying to the customer:

Right. Here you go, lads: your cluster is built, your VMM & SCCM are happy, and the various automation bits ‘n bobs that make life in Modern IT Departments not only bearable, but fun, are complete.

But I’m an employee, so much more remains to be done. So among many other things, I now transition from building the base of the stack to moving important workloads to it, namely:

  • Migrating and/or replacing important physical servers to the new stack
  • Shepherding dozens of important production VMs out of some legacy ESXi 5 & 4 hosts and into Hyper-V & System Center and thence onto greatness

So it’s really great to see Microsoft release a new version of its tool.

Going full Windows 10 Server in the Lab, part 1

So many new goodies in Windows Server 10.

So little time to enjoy them.

Highlights so far:

  • Command-line transparency is awesome. I want the same in my PowerShell windows
  • Digging the flat look of my windows when they’re piled atop one another. There’s a subtle 3D effect (really muted shadows, I think) that helps highlight window positions and focus. Nice work, UI team
  • Server 10 without Desktop mode looks just about 100% like Server 2012 R2. So yeah, if you’re using your PC as a server, definitely install the Desktop mode

On the agenda for today:

  • Build what has to be one of the few Windows Server 10 Hyper-V clusters in existence
  • Install the new VMM & System Center
  • Testing out the new Network Controller role on a 1U AMD-powered server I’ve had powered off but ready for just this moment (I never got around to building a Server 2012 R2 Network Virtualization Gateway server)
  • Maybe, just maybe, upgrading the two Domain Controllers and raising forest/domain functional level to “Technical Preview”, if it’s even possible.

What won’t be upgraded in the short term:

  • San.daisettalabs.net, the Tiered Storage box that hosts my SMB 3 shares as well as several iSCSI .vhdx drives
  • The VM hosting SQL 2012 SP2, IPAM, and other roles
  • The TV computer, which is running Windows 8.1 Professional with Media Center Edition. Yes, it’s a lab, but even in a lab environment, television access is considered mission critical

More later.