Enterprise Secrets + Privileged Account Management | CyberArk at #XFD1

Managing Enterprise Secrets & Privileged accounts has to be one of the most difficult jobs in Information Technology today, and one of the least transparent to the business. Bad guys have painted a target on admins’ backs, regulators are chomping at the bit as more consumer data is lost online, and Compliance officers are scrambling to understand the landscape and adapt to new rules from overseas. And yet the business may not even realize that unsung heroes in IT are still managing a stack of hardware & software designed to fulfill 1990s-era security models.

Take it from me: I know this pain well. Even if you do have an internal identity system, say Active Directory, it can be difficult to get all the bits from your Storage, Network, Compute & cloud systems to run a proper AAA model against your AD Forest. Even more difficult: figuring out how to audit the records of Active Directory (or NPS/RADIUS or ADFS or OAuth2/SAML glues) to present to your Compliance officers.
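
To give you a taste of that audit pain, here's a minimal sketch of the kind of pull a compliance officer ends up asking for. This is my own quick-and-dirty PowerShell, not any product's feature; it assumes Security auditing is already enabled and 'DC01' is a stand-in for one of your domain controllers:

```powershell
# Pull the last week of "special privileges assigned to new logon" events (4672)
# from a domain controller's Security log, a rough proxy for privileged activity.
# Assumes you have rights to read the remote Security log; 'DC01' is hypothetical.
Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4672
    StartTime = (Get-Date).AddDays(-7)
} |
    Select-Object TimeCreated,
        @{ Name = 'Account'; Expression = { $_.Properties[1].Value } } |  # typically the account name
    Export-Csv -Path '.\privileged-logons.csv' -NoTypeInformation
```

And that's one event ID, on one DC, for one week. Now do it for NPS, ADFS, and every SaaS login page the business uses, and you see why this is a full-time problem.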

Yet in the background, there’s a constant churn of news that only raises the pessimism bar higher: Target. Anthem. Maersk. Equifax. Facebook. Marriott. The goddamned CIA and the f****** National Security Agency. I made a Visio Timeline because I was having difficulty tracking all the breaches, and I’ve run out of room! And let’s not forget the business and your user colleagues’ need for secrets too as consumer technology continues to eat away at the Enterprise and as more of the economy is digitized. By 5pm most days, IT admins are just hoping to make it to retirement in 10 years without their orgs getting popped by a black hat.

Enter CyberArk. The company was founded in 1999, which is impressive to me. It’s not often you’ll find a company that’s been selling a product that handles Enterprise secrets + PAM for 20 years, at least a decade longer by my count than the popular consumer password management companies that are now sashaying their way into your Enterprise, as if they understand the challenge you’re facing. At Security Field Day 1 (#XFD1), CyberArk’s maturity & comprehension of the challenge of securing the enterprise really showed.

CyberArk’s Privileged Access Security Suite is a mature & fully-featured secrets + PAM tool. I was super-impressed with the demo their Global Director of Systems Engineering, Brandon Traffanstedt, gave us back in December 2018 in sunny San Jose. I came prepared to endure a boring password management demo; I left impressed at what I had seen, with only a single caveat.

Not only was CyberArk’s product comprehensive, it was bad-ass, with one exception. I saw:

  • An SSH session opened to a network device’s command line, with a second-factor prompt before access was granted
  • Full auditing + screen recordings of a Privileged Account accessing a protected server, just the kind of thing that reassures the business that you, as an admin, have nothing to hide, are not an ‘insider threat’ and are 100% transparent in your work.
  • Deep integration into Windows’ Win32 API, hooking into parts of the OS I’d not seen before outside of Microsoft products, including Credential Management
  • Full integration & support for MacOS
  • OAUTH2/SAML support and full support for your ADFS infrastructure
  • Cloud secrets & PAM management across AWS and (soon) Azure
  • Full support for your RADIUS infrastructure & 802.1X, whether via Microsoft’s NPS or some other solution
  • Automated credential rotation so that you don’t have to scramble when a fellow admin changes jobs, is fired for negligence, or joins Edward Snowden in Moscow (there’s a sketch of the DIY version of this chore just after this list)
  • Secure sharing of secrets among your privileged IT colleagues
  • An offline, secured, and high-entropy password in a sealed envelope you can hand to the business for peace of mind
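
About that rotation bullet: if you’ve ever tried to script it yourself, you know why it’s on the list. Here’s a minimal sketch of the DIY version for a plain AD service account. To be clear, this is my own PowerShell, not CyberArk’s mechanism, and ‘svc-backup’ is a made-up account name:

```powershell
# DIY rotation for an AD service account, the chore CyberArk automates at scale.
# Assumes Windows PowerShell, the ActiveDirectory (RSAT) module, and reset rights.
Import-Module ActiveDirectory

# Generate a long, random password (System.Web ships with .NET Framework)
Add-Type -AssemblyName 'System.Web'
$newPassword = [System.Web.Security.Membership]::GeneratePassword(32, 8)

# 'svc-backup' is a hypothetical service account
Set-ADAccountPassword -Identity 'svc-backup' -Reset `
    -NewPassword (ConvertTo-SecureString $newPassword -AsPlainText -Force)
```

And after that runs, you still have to chase down every service, scheduled task, and app pool that uses the old password, which is exactly the part you want a product to do for you.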

I’ve been working in IT for about as long as CyberArk’s been pounding the pavement and trying to convince IT Teams to invest in Enterprise Secrets & PAM software. I was impressed…particularly because CyberArk scratches an itch that many IT Teams don’t know they have: the security costs & technical debt from a legacy of tactical, rather than strategic, investments that tend to leave an org in arrears in 2019’s security landscape.

For example: say you’re a mid-market SMB IT shop in the healthcare sector that’s experienced a lot of turnover among its IT admin staff through the years. If you’re the business, you’ve watched as IT Admins come and go, and listened as they’ve pitched tactical solutions to various challenges facing the business. You’ve invested in a few, and most work well enough, but gluing them all together into a comprehensive, strategic, and business-enabling solution has been a challenge.

While your solutions are working, you’re paying a cost whether you know it or not, because more than likely the technical legwork needed to glue those solutions together into a comprehensive & auditable security framework hasn’t been done. Meanwhile, the regulators are knocking at your door, the pace of breaches quickens, and Brian Krebs’ pen is waiting to write about your company.

CyberArk is a good fit there. No, check that. It’s a *great* fit in that scenario. The product addresses threats to your business from both the inside and the outside. It protects Enterprise secrets -the very thing your admins are targeted for- while shining a bright light on your employees’ Privileged Accounts and how they are used.

It’s a product that’s far beyond anything the consumer password management companies are offering…trust me, I’ve looked at them all. It’s a true Enterprise solution. However….

I will say that one area where CyberArk felt a bit less than polished was in how they’ve architected the sharing & use of secrets with non-admin users working in the business. If we return to the healthcare example, think of a person in your business who needs the credentials to log in to a state Medicaid site in order to bill the payor for a medical product.

In fairness, this is a complicated problem…while it’s in the business’ interests to control/maintain/audit all secrets, including those for third-party sites & services outside of IT’s domain, the mix of devices & browsers here is a difficult puzzle to solve. Yet it’s here that CyberArk’s product left me perplexed. They propose intercepting TLS traffic on your users’ endpoints & injecting credentials into your business users’ browsers, whatever they may be.

This seemed to me -at the ass-end of 2018- to be a poor solution. For starters, we’ll soon see TLS 1.3 across more and more websites. TLS 1.3, as my fellow Delegate Jerry Gamblin pointed out, is not something you can intercept, decrypt, and inject credentials into. Indeed, other vendors in the security space seem to be steering Enterprise customers away from the expectation that we’ll be able to intercept/inspect/fiddle with TLS 1.3 connections. At best, we’ll be able to refuse TLS 1.3 connections in favor of the more Enterprise-friendly TLS 1.2, but even here, the Enterprise’s political power & ability to influence the market & standards bodies is lacking, and Google, for better & worse, rules the roost. Even Microsoft is playing second fiddle here: it announced in late 2018 that it would ditch its Edge browser’s EdgeHTML engine in favor of Chromium open source.
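
If you’re curious which protocol a given site negotiates with your endpoints today, a few lines of PowerShell against .NET’s SslStream will tell you. This is a quick probe, not an interception test, and what it can report tops out at whatever your OS & .NET build support (TLS 1.3 needs a newer build); the hostname is just an example:

```powershell
# Probe which TLS version a site negotiates with this machine's .NET stack.
$targetHost = 'www.example.com'
$tcp = New-Object System.Net.Sockets.TcpClient($targetHost, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false)
$ssl.AuthenticateAsClient($targetHost)     # performs the TLS handshake
"{0} negotiated {1}" -f $targetHost, $ssl.SslProtocol
$ssl.Dispose()
$tcp.Close()
```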

Secondly, CyberArk’s solution even here feels archaic. They propose that you put a middlebox in front of your users to accomplish this. This is definitely old-school, calling to mind the many nights & weekends I spent configuring & troubleshooting BlueCoat devices in server rooms across many Southern California businesses. If you’re going to tackle a problem like TLS intercept, you need to think 21st century and go with a cloud interception service that will follow your users around on the internet. Middleboxes often make your security posture worse, not better.

In my day job, I intercept/inspect TLS connections across several continents and on several thousand endpoints; it’s a tricky science and one that’s filled with compliance & policy questions above my pay grade. Microsoft’s move in the browser arena fills me with questions, and that’s before we consider mobile devices; so too should it fill you with questions if you are looking at CyberArk with an eye towards sharing secrets with non-admin users.

So, caveat emptor on this narrow point, friends: a significant selling point of CyberArk’s featured product (injecting secrets into an HTTPS session) may not work a year or two from now. We raised this issue at #XFD1 and CyberArk says they have a plan for it, but eyes open!

Other than that though, I was really impressed. CyberArk gets the challenge facing Enterprise IT in this Wild West era. It intuitively understands the complexities of Enterprise secrets, PAM, insider vs outsider threats, and auditing/compliance requirements. The only place it seems to fall short is in sharing credentials from the ‘Vault’ with non-privileged users.

Check it out if:

  • You’ve got a heterogeneous stack of best-of-breed IT hardware & software and you’ve neglected integrating AAA security across that stack
  • You’re in an environment requiring heavy compliance & auditable proof across your stack against both insider & outsider threats
  • You want 2FA/MFA on old network switches, Macs, and Windows Servers
  • You want screen captures of your admin’s work on devices, servers, and services that you consider privileged
  • You’ve got cloud/SaaS management challenges even as you’ve centralized identity in on-prem Active Directory or another system

Ignore it if:

  • You’ve only ever bought Microsoft, only have Windows PCs & servers and Microsoft applications, and you have an MCSE on staff who understands Kerberos, Active Directory, NPS, RADIUS, ADFS, and OAuth2/SAML, and has configured your AD environment to comply with various regulatory statutes and compliance regimes


Disclosures
This blog post was written by me, Jeff Wilson, for publication on my blog, wilson.tech. I was not compensated by CyberArk to compose this blog post, and CyberArk did not see it prior to its publication. I learned about the CyberArk products during Security Field Day 1 (#XFD1) an event for IT, Security, and Enterprise influencers that was held in December 2018 in & around Silicon Valley, California. The Gestalt IT group paid for my airfare, accommodations, and meals during the time I was in greater San Jose, CA area. CyberArk and other sponsors paid Gestalt IT to bring Delegate influencers like me to #XFD1. 
I received no monetary compensation otherwise, save for the swag listed below
CyberArk swag I took home:
  • A ballpoint pen
About Me: My name is Jeff Wilson. I am a 20 year IT Professional with a security focus. I hold a GSEC from the SANS Institute, as well as a Bachelor’s Degree in History & a Master’s in Public Administration, both of which are from CalState. I live & work in Southern California. You can reach me on twitter @jeffwilsontech or via email at blog@wilson.tech

Cloud Field Day 3 | Morpheus Data | #CFD3

Morpheus Data was our first sponsor at #CFD3 and, as is my custom before Tech Field Day events, I had done zero prep work on Morpheus. I had never heard of the firm, and as first-at-bat sponsors for #CFD3, they were facing 12 delegates full of energy and with decades of Information Technology experience between them. So how’d they do? I came away impressed. Let me tell you why: they have a heart for operations, and I’m an operations guy.

Morpheus Data – Background

I found Morpheus Data’s story pretty compelling when I read up on it later. The company started off more or less as an internal product inside a cost center of Bertram Capital, a private equity firm in the Bay Area. Now every company has a founding mythology, but Morpheus’s rang true to me. Here, I’ll quote from their site:

Bertram Labs is a world-class team of software developers and ops professionals whose sole purpose is to rapidly implement IT solutions to fuel the growth of the Bertram portfolio. In 2010, that team needed a 100% infrastructure agnostic cloud management platform which would integrate with the DevOps tools they were using to develop and deploy applications for a range of customers on an unpredictable mix of heterogeneous infrastructure. Such a tool didn’t exist so Bertram Labs created their own solution…

Just that phrase right there -an unpredictable mix of heterogeneous infrastructure- comprises the je ne sais quoi of my success as an 18-year IT Pro. Using ratified standards sent to us from on high by the greyhairs in the IETF & IEEE ivory towers, a competent IT Pro like myself can string together disparate hardware systems into something rational, because most vendors sometimes follow those standards.

But it’s very hard work.  It’s not cheap either. And that act -that integration of a Cisco PoE switch with an Aruba access point or an iSCSI storage array with a bunch of Dell servers- isn’t bringing much value to the business. Perhaps it would be different if IT Shops could just start over with a rational greenfield infrastructure design, but that’s rare in my experience because the needs of IT aren’t necessarily aligned with the needs of the business.

Morpheus Data says they grew out of that exact scenario, which is immediately familiar to me as an ops guy. I find that story pretty encouraging; an internal DevOps team working for a private equity firm was able to productize its in-house scripts & techniques and is now a separate company. Damn near inspiring!

So what are they selling?

It’s Glue, basically. But well-articulated & rational glue

Morpheus’ pitch is that their suite of products can take the pain out of managing & provisioning services from your stack of heterogenous stuff whether it’s on-premises, in one cloud, or several clouds. And by taking the pain out, you can move faster and bring more value to the business.

I’m not going to get into each product because frankly, I think they’re poorly named and not very exciting (SharePoint-esque in a way: Analytics, Governance, Automation, Evolution, Integrations). But don’t let the naming confuse or dissuade you; it’s an exciting product and the pricing model is simple to understand.

Instead, let me describe to you what I saw during Morpheus’ demo at #CFD3:

  • Performance data from on-premises virtualization servers running Hyper-V, VMware, and even Citrix’s XenServer, all in one part of the Morpheus web-based portal
  • You can drill down from each host to look at VM performance data too. Morpheus says they’re able to hook into both Hyper-V’s and VMware’s performance counters. That’s pretty awesome for a heterogeneous shop (the sketch after this list shows the do-it-yourself version)
  • Performance & controls over IaaS & PaaS instances in both Azure & AWS, again in the same screen
  • Menu-driven wizards that let you instantly provision a new virtual machine pre-configured for whatever service you want to run on it. Again -this could be done in the same tool and you can pick where you want it to go
  • Cost data from each public cloud
  • Rich RBAC controls, which is very important to me from a security & integrity standpoint
  • A composable role-based interface. For example, you can let your dev team log in to Morpheus and not worry about them offlining a .vhdx on a Hyper-V server
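
For context on that counters bullet, here’s the do-it-yourself version you’d otherwise be running host by host. This is just a sketch of the native Hyper-V counter paths, not Morpheus’ own collection code:

```powershell
# Sample the hypervisor's logical-processor counter and per-VM CPU counters
# on a single Hyper-V host. Run on (or remote into) the host itself.
Get-Counter -Counter @(
    '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
    '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'
) -SampleInterval 5 -MaxSamples 3 |
    ForEach-Object {
        # Flatten each sample set into instance name + value
        $_.CounterSamples | Select-Object InstanceName, CookedValue
    }
```

Multiply that by every host, plus the VMware equivalent, plus Azure and AWS metrics, and the appeal of one portal that does it for you is obvious.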

This chart from their website sums up their offering nicely in comparison with other vendors in this space.


Concluding Thoughts

I’ve worked in IT environments where purchasing has been less than rational. Indeed, I’ve worked at places where we had the very best equipment from multiple vendors, but nobody had the time or talent to integrate it all into a smooth & functional machine in service to the business.

Stepping back, the very nature of the integration puzzle has changed. I mentioned above that a competent IT Pro could stitch together infrastructure that used IETF, IEEE, W3C and other standards-based technologies. Indeed that’s been the story of my career.

But in 2018, the world’s moved on from that, for better and worse. The world’s moved on to proprietary Application Programming Interfaces (APIs), and so I’ve moved with it, creating my own PowerShell functions and Python scripts to interact with cloud-based APIs. You can do this too, given enough time & study.
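
The pattern is the same everywhere even though every vendor’s API is different. A hedged sketch, with a placeholder endpoint, token, and field names (not any real service):

```powershell
# The shape of nearly every proprietary-API script I write these days:
# authenticate, call a REST endpoint, work with the JSON that comes back.
$token   = 'REPLACE-WITH-A-REAL-API-TOKEN'          # placeholder
$headers = @{ Authorization = "Bearer $token" }

# Hypothetical endpoint returning a list of instances as JSON
$vms = Invoke-RestMethod -Uri 'https://api.example-cloud.test/v1/instances' `
                         -Headers $headers -Method Get

# Filter and shape the results like any other PowerShell objects
$vms | Where-Object { $_.state -eq 'running' } |
    Select-Object name, region, size
```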

But let’s be honest: it’s hard enough to manage & integrate a heterogeneous stack of best-of-breed stuff on-premises. Now your boss comes to you and wants you to add some Azure services & Office 365. And then someone on the business side orders up some Lambdas in AWS, surprise! Or perhaps a distant IT group at your company just went and bought Cloudflare or Rackspace. If you’re still trying to solve the standards-based puzzles of yesteryear while learning how to develop scripts & tools for a world of proprietary APIs, you’re probably not bringing much value to the business.

And that’s where Morpheus sees itself slotting in nicely…they’ve done the hard work of integrating with both your legacy on-premises standards-based systems and the API-driven cloud ones, and they release new integrations ‘every two or three weeks.’ They even take requests, so if you’ve got a bespoke stack of stuff that doesn’t surface SNMP properly, you can propose Morpheus build an integration for it.

Sidenote: One of the more dev-focused delegates at #CFD3 criticized the product as too ops-friendly (“nobody cares to see all that stuff!” he said), but I had to push back on him, because details are important for ops teams, and Morpheus can surface an interface that’s safe for devs to use. And that’s why I say they’ve got a heart for operations teams.

On pricing: the products, which again have somewhat confusing names, at least offer simplified pricing. To get workloads & ‘core features’ running on a VM in your datacenter, you’ll need to spend $25k to start. That seems high to me, but you’re essentially buying a DevOps integrator & engineer who can work 24/7 and doesn’t need health insurance or take vacation, which is pretty cool, and which helps you bring value to the business.

Disclosures
This blog post was written by me, Jeff Wilson, for publication on my blog, wilson.tech. I was not compensated by Morpheus Data to compose this blog post, and Morpheus did not see it prior to its publication. I learned about the Morpheus Data products during Cloud Field Day 3, an event for IT & Enterprise influencers that was held in April 2018 in Santa Clara, California. The Gestalt IT group paid for my airfare, accommodations, and meals during the time I was in Santa Clara. Morpheus and other sponsors paid Gestalt IT to bring Delegate influencers like me to #CFD3.
Morpheus Data swag I took home:
  • Cool stickers
  • A t-shirt

Storage Networking excellence in an easy-to-digest .mp3

Can’t recommend the latest Packet Pushers Podcast enough. I mean normally, Packet Pushers (Where too Much Networking would Never be Enough) is great, but their Storage Networking episode this week was excellent.

Whether you’re a small-to-medium enterprise with a limited budget whose only dream is getting your Jumbo frames to work end-to-end on your 1GigE (“a 10 year old design,” one of the panelists snarked), or you’re a die-hard Fibre Channel guy and will be until you die, the episode has something for you.

Rock star line-up too: Chris Wahl, Greg Ferro and J. Metz, a Cisco PhD.

The only guy missing is Andrew Warfield of Coho Data, who blew my mind and achieved Philosopher King of Storage status during his awesome whiteboarding session at #VFD3.

Check it out.

#VFD3 Day 2: Kicking it with Coho


So it’s a big day for the NFS and VMware guys here at #VFD3; they can’t stop talking about the VSAN announcement and the #VFD3 awesomeness that was the last two and a half hours at Coho Data with some of Silicon Valley’s great Storage Philosopher Kings.

For your Hyper-V blogger, it’s time to put on a brave face, and soldier on. Coho’s gotta launch their array (“get to startup escape velocity” as someone on twitter put it) and that means focusing on NFS first. And that’s ok; my delegate friends here seem really interested and excited by this product, and when any virtualization engineer is excited for some new tech, I’m excited with them, even if I have to return home to my tired CSVs.

So what is Coho Data? Aside from having the greatest vendor schwag present ever (I kid!) and the actual best vendor schwag present so far (Chrome bike bag with the Coho logo, seriously a nice bag, thanks!), Coho is a startup with a unique storage product.

And I mean unique. Not sure I even understand it fully.

The Coho Storage architecture, borrowed from another blogger below, looks like any other storage solution, except that it’s completely and totally different. First, it involves a software-defined switch; more or less a switching model in which you let the Coho controller push your storage packets around so that your storage is closer to your hypervisor.

It’s real software-defined switching here; even Tom Hollingsworth was tweeting his approval for the messaging around these switches. For virtualization admins who touch on and worry about storage, compute, and network, it was refreshing for me to hear that Coho’s really putting some thought & interesting tech into the switch, even if I’m wary of letting go of my precious ASICs and my show fabric utilization.

Coho Data rack layout (reference architecture)

On the storage side, Coho sparked my interest for a few reasons: cheap, rebranded Supermicro arrays, SATA spinners, and -unlike anyone else we’ve talked to at #VFD3- PCIe SSD, not SATA/SAS SSD.

Coho’s performance model isn’t RAM-enabled like Atlantis & Pure yesterday. This is not a ZFS-derived model; it’s seemingly been grown organically in response to two things: the difficulty of managing and correctly using SSD, and the flexibility of cloud storage models. Coho has thought hard about maximizing SSD performance, about “not leaving any SSD performance on the table,” as the CTO put it, and in response to cloud flexibility, Coho’s model is designed to scale out the same way.

Hearing my delegate colleagues talk about Coho, I’ve realized they’ve got something unique and potentially game-changing here. It’s all we could talk about on the VMBus afterward, and I want to congratulate Coho on the general availability of their new product, something they savvily used #VFD3 to announce today.

Ping them if: IO blender problems send you into a cold sweat, or you hate the ASIC on your switch

Set Outlook Reminder for When: They get Hyper-V SMB 3.0 support or iSCSI or OpenStack

Send them to /dev/null if: You aren’t brave enough to challenge storage paradigms

Not my finest hour in contingency planning, but it works

As Southern California is the center of the universe as far as I’m concerned, I know you’re all worried sick about me, this website, and other Southern Californians as we endure a frightening precipitation event of some kind and scale. The Live MegaDoppler 7000 StormSageRadar XXTreme v2.0 Beta can’t tell us yet if the great California Dampening of 2014 is Noahic in nature or a $deity-punishment ripped straight from the Book of Revelation, but one thing is for certain: the water is everywhere.

Yes, it’s so wet out there that even the mighty Los Angeles River is flowing once again. It may even be navigable.*

But fret not! Let me calm your nerves. Your blogger Jeff, @agnostic_node1 on the twitters, is ok. So are the Child Partition and the Supervisor Module spouse. We’re all safe, our flood pants all fit, and we’ve got buckets and bags of some sort of pre-silicon material at the ready.

The Converged Fabric Agile DevOps ITIL Waterfall Software-Defined Lab @ Home, however, has me worried.

See I built it in the garage. The Supervisor Module would never permit such equipment inside the living spaces.

Not only is it in the garage, but it’s close to the garage door, bolted down properly to the wooden workbench.

A few inches to the side of, but very close to being under, the tracks that lift what’s now become a very wet garage door.

*Gulp*

I may be able to push 4 or 5,000 IOPS to the home-built ZFS array sitting in the garage. I’m quite confident in my ability to take my home lab, and my skillset too, to new heights. I can spin up a dozen VMs on this handsome 12u stack at once…no problem at all. I can build a lab that’s agnostic and welcoming to all type 1 and 2 hypervisors, no discrimination here!

What I can’t do is anticipate inclement weather that seems to come at me sideways sometimes.

So what’s an Agile guy to do?

Pull out his IT MacGyver manual: bungee cord, two sticks, and some plastic sheeting:

Agility Defined. Waterfall no more.

Like I said, not my finest hour in contingency planning, but it’s working. So far. I won’t be putting this lab work on my resume, however.

And yes, it’s okay to laugh.

* By kayak

Free disk space > 15% = Wasted money

Your enterprise’s mileage may vary, but in every place I’ve ever worked, I’ve taken a pretty dogmatic approach to disk space utilization on VMs, especially ones hosting specialty workloads, such as Engineering or financial applications.

And that dogma is: No workload is special enough that it needs greater than 15% free disk space on its attached, non-boot volume.

This causes no end of consternation and panic among technicians who deploy & support software products.

“Don’t fence me in!” they shout via email. “Oh, give me space, lots of space on your stack of spindles, don’t fence me in. Let me write my .isos and .baks till the free space dwindles! Please, don’t fence me in,” they cry.

“I can’t stand your fences. Let my IO wander over yonder,” they bleat, escalating now, to their manager and mine.

Look, I get it. Seeing that the D: drive is down to 18% free space makes such techs feel a bit claustrophobic. And I mean no disrespect to my IT colleagues who deploy/support these applications. I know they are finicky, moody things, usually ported from a *nix world into Windows. I get it. You are, in a sense, advocating for your customer (the Engineering department, or Finance) and you think I’m getting in your way, making your job harder and your deployment less than optimal.

But from my seat, if you’ve got more than 15% free space on your attached volume in production, you’re wasting my business’ money. I know disk space is cheap, but if I gave all the specialty software vendors what they asked for when deploying their product in my stack, my enterprise would:

  • Still have a bunch of physical servers doing one workload, consuming electricity and generating heat instead of being hyper-rationalized on a few powerful hosts
  • Have lots of wasted RAM & disk resources: 400GB free on this one, 500GB free on that one, and pretty soon we’re talking about real storage space

One of the great things about the success of virtualization is that it killed off the sacred cows in your 42U rack. It gave us in the Infrastructure side of the house the ability to economize, to study the inputs to our stack and adjust the outputs based not on what the vendor wanted, or even what us in IT wanted, but on what the business required of us.

And so, as we enter an age in which virtualization is the standard (indeed, some would argue we passed that mark a year or two ago), we’ve seen various software vendors remove the “must be physical server” requirement from their product literature. Which is a great thing cause I got tired of fighting that battle.

But they still ask for too much space. If you need more than 15% free on any of the attached, MPIO-based, highly-available, high-performing LUNs I’ve given you, you didn’t plan something correctly. Here’s a hint: in modern IT, discrimination is not only allowed, but encouraged. I’m not going to provision you space on the best disk I have for backups, for instance. That workload will get a secondary LUN on my slow array!
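
For the curious, the audit itself is trivial. Here’s roughly what I run to find the offenders: a sketch using WMI so it works against older hosts too, and the server names are made up. Exempt the boot volume yourself if you want to honor my own caveat:

```powershell
# Flag fixed disks with more than 15% free space, i.e. money left on the table.
# DriveType 3 = local disk. 'APP01' and 'SQL01' are hypothetical server names.
$servers = @('APP01', 'SQL01')

Get-WmiObject -Class Win32_LogicalDisk -ComputerName $servers -Filter 'DriveType=3' |
    Select-Object SystemName, DeviceID,
        @{ Name = 'SizeGB';  Expression = { [math]::Round($_.Size / 1GB, 1) } },
        @{ Name = 'PctFree'; Expression = { [math]::Round(($_.FreeSpace / $_.Size) * 100, 1) } } |
    Where-Object { $_.PctFree -gt 15 -and $_.DeviceID -ne 'C:' } |
    Sort-Object PctFree -Descending
```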

#VFD3 or bust!

Agnostic Computing is brand new as tech blogs go, rolled out on a whim in August 2013 just to vent some angst, to wax philosophical on some high technology magic (would you believe my first post was about Sharepoint 2013? Uhhh yeah).

My thinking in starting the site was simple: I wanted to write a blog that was as fun and as passionate as the tech debates my friends & colleagues and I enjoyed at work for years. These are debates that start innocently enough (“Check out my new 1080p Android phone”….or “Do you really buy music from the iTunes store?”) but soon escalate into a 45 minute verbal fisticuffs, where low blows & sucker punches are not only permitted, but encouraged.

The geekier the reference, the harder the punch: “That’s a user interface only the mother of Microsoft Bob could love,” “You’re just a sad and broken man because both BeOS & WebOS died and you were the only one who noticed,” or “We can’t trust someone who buys music off iTunes to be able to program a switch.” “You’re acting pretty confident for a guy who broke Exchange just last month.”

Good times. I love those debates, and not just cause the normals don’t get them. They’re genuinely fun, so I set out to capture a bit of that spirit on this blog, and, I hoped, post some genuinely interesting stuff, like a storage bakeoff between bitter rivals, a sincere, screenshot- or gifcam-heavy how-to sent from my virtualized stack to yours, and more.

And so it goes for bloggers, who, like chefs, try a little of this, test a little of that, mix it all up and then taste what’s in the pot. And most of the time, it’s forgettable at best, shame-inducing at worst.

Which makes it all the more surprising for me because apparently I’m doing something right.

You see, I’ve been invited as a delegate to Virtualization Field Day 3, in the Disneyland of High Tech, Silicon Valley, where the combined brainpower is bound to rub off on me. I mean, how can it not?

That’s right. Me. Agnostic Computing guy. Going to Enterprise Tech’s Woodstock.

If you don’t know what #VFD is, then you haven’t been paying close enough attention. From all the interviews I’ve heard of delegates from past Tech Field Days (Storage, Network, Wireless Network…it’s spreading into all our sacred sub-disciplines and dark arts; surely the ERP & SQL guys will be next), going to a TFD as a delegate puts you face to face with the companies, and more importantly, the engineers who designed the stuff you deploy, support, break, fix, and depend on to keep your enterprise running.

Notice I said engineers. Not sales people. Or not just sales people at any rate.

Deep dives, white papers, new horizons opened, the potential to leave behind painful memories of broken processes and old ways of doing things by meeting the other delegates, some of whom I’ve been reading for years…..these are the things I’m looking forward to as a #VFD delegate.

Oh and challenging vendors and discerning which product is the right one for the business, which is among the most important jobs we as IT pros have.

As a former boss of mine put it memorably: “We’re only as good as our vendors.” And he was right: whether the device in your rack is amazing and incredible, or prone to failure, or the service you’ve contracted is game-changing or more trouble than its worth, managing “the stack” and interfacing with the stack builders and stack sellers is important to your success, and the business’ success.

Two of the sponsor firms at this year’s #VFD already have me excited. I just finished buying a Nimble array at work (gamechanger! no regrets!), but I won’t lie: I’m Coho-Curious. And Atlantis Computing: sharp guys, A++++ on the blogs, would read again, eager to hear about the products.

Thanks to the Gestalt IT group (add them to your RSS feed stat!) for the invite, and be sure to check back here -as well as the other delegates’ blogs- for some #VFD thoughts in the weeks ahead.

In which the Hyper-V guy plays with ESXi 5.5

All the sweat equity, money, and time I’ve put into the home lab is finally paying off at the Agnostic Computing.com HQ.

In fact, it’s been great: satisfying and pleasing little green health icons are everywhere, I read with satisfaction the validated Microsoft cluster configuration reports without any warnings at all, and the failover testing? Let’s just say I can remove the “ish” from the end of “redundant-ish.” This stack is as solid as it’s going to get on my low-budget, single-PSU setup designed to draw fewer than 5 amps and less than 500 watts (I’m at about 325W & 3.5 amps, more or less).


But standing up Hyper-V clusters on consumer-grade hardware isn’t exactly expanding my portfolio, even if all my storage is parked in a (new to me) ZFS box. So last weekend it was time to tackle Hyper-V’s nemesis: VMware’s market-dominating ESXi 5.5, which I’ve got running on a stable 2-core Athlon II box with 12GB of RAM and an Intel 2x1GbE NIC.

For a Hyper-V guy who hasn’t touched ESXi since probably 2011, building out the ESXi box involved some trips down memory lane.

A memory lane called Pain Street.

The last time I worked in ESXi on anything meaningful was during an eight-month span in 2011 in which my colleagues and I were charged with replacing ESXi with Hyper-V 2.0, baked into the then-new 2008 R2 edition.

We had Hyper-V 2.0, a few brand-new PowerEdge servers with quad Nehalem CPUs, something like 512GB of RAM, a FAS 2210, System Center Virtual Machine Manager 2007, and a brand-new file-system-like layer on top of NTFS called Cluster Shared Volumes.

Oh, and a handful of V2V tools & .vmdk-to-.vhd conversion scripts with which we planned to stick it to VMware.

I mentioned that this was a painful time in my life, right?

I’ll save the Hyper-V war stories and show you my scars (Hyper-V virtual switch ARP storms, oh my!) another time, but here’s what I learned from that experience: Hyper-V 2.0 was in all ways inferior to ESXi when it debuted in Server 2008 R2. And not just a little inferior. No, we are talking NBA-vs-8th-grade-boys-basketball-team scale inferiority.

The Hyper-V 2.0 guys will know what this is.

It was half-baked, not entirely thought out, difficult to scale, prone to random failures, hard to back up (even risky…sometimes the CSVs would just drop off when the IO was supposed to be redirected to another host), and the virtual drivers written by Microsoft for Microsoft Hyper-V virtual machines running on Microsoft virtual synthetic NICs weren’t stable. It was a hypervisor that made you pound your keyboard, sit back in your chair, scratch your head and ask, “Has anyone at Microsoft ever tried to use this thing?”

And you couldn’t team your NICs and expect Microsoft support. I had to delay my love letter to LACP for years because of that.

Even so, I loved Hyper-V 2.0. Wore the admin hat like a badge of honor. Proud and boastful of the things I could make Hyper-V 2.0 do in the face of so much adversity, so much genetic disadvantage. Yeah the other guys had Ferraris tuned up by Enzo himself and all I had was a leaky Fiesta with a suspect axle, but that Fiesta could, in the right hands, make it across the finish line.

We, we happy few, we band of brothers, who persisted in our IT careers through the days of Hyper-V 2.0 and even excelled.

Backing up your VMs in 2008 R2 involved this, which worked…mostly. But pucker factor was high. In 2012 I never worry.

All that to say that the heyday of VMware, ESXi, the Nexus 1000v, and now VSAN has kind of passed me by. I just can’t seem to get exposed to it, to sink my teeth into that whole wondrous stack. It’s expensive.

But it’s been all right with me, because in the same span I’ve adopted Hyper-V 3.0 with relish and become convinced that we Microsofties finally had a hypervisor worthy of respect. “Feature parity” is a term that’s been bandied about, and with 2012 R2, it got even better. EMC, parent company of VMware, even called SMB 3.0 “the future of storage.” Haha, take that, NFS!

So has Hyper-V actually caught up?

It’s not easy for me to admit this, but while I like Hyper-V much more in some areas and feel like it can scale and serve any enterprise well, after playing with ESXi at home I have to admit Hyper-V still has deficits purely from a hypervisor perspective (System Center is a different animal).

Deficits other virtualization bloggers are eager to demonstrate, with barely-concealed glee. Take Mike Laverick, a sharp ESXi guy, for instance. This February, readers of his blog have been treated to post after post of In Which the ESXi Guy Plays with Hyper-V 3.0.

I’m always up for a good tech debate, but after devouring his posts, letting them sink in, I got nothin’ except a few meek responses and maybe some envy.

He concludes at the bottom of this great screenshot-by-screenshot comparison:

I guess to be fair – taken individually this lack of hotness of the Gen2 Windows 2012 Hyper-VM might not be a deal breaker for some. For me personally, they collectively add up big pain in the rear, especially if you coming off the back of virtualization product like VMware vSphere that does have them. For me the whole point of virtualization is it liberates us from the limitations of the physical world. What’s the point of software-defined-virtual-machines, when it feels more like the hardware-defined-physical-machines….

’Tis true in some respects. I have long wanted to stop mapping LUNs directly from the SAN, through the Hyper-V switch, to a virtual machine, but it was not possible to resize .vhdx drives on a live VM until October 2013, when R2 was released. And even now in R2, it’s not simple enough or, more importantly, reliable enough to depend on in production, at least not compared to resizing an RDM on a NetApp or Nimble or even my ZFS array.
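
For the record, the R2-era dance looks simple enough on paper. This is just a sketch, with a made-up VM name and path, and it assumes the disk is attached to a SCSI controller, which is what online resize requires:

```powershell
# Online expansion of a .vhdx on a 2012 R2 Hyper-V host: grow the virtual disk,
# then extend the partition inside the guest. Path and VM name are hypothetical.
Resize-VHD -Path 'C:\ClusterStorage\Volume1\FILESRV01\data.vhdx' -SizeBytes 200GB

# Then, from inside the guest, grow the volume into the new space:
# Resize-Partition -DriveLetter D `
#     -Size (Get-PartitionSupportedSize -DriveLetter D).SizeMax
```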

I will offer some resistance in the following two areas though.

Hyper-V runs on whatever piece of junk you throw at it. That’s interesting news if you’re a value-oriented enterprise, and really great news if you’re building a home lab or trying to learn the trade. VMware, in contrast, won’t even install without supported NICs…the cheap Realtek in your Asus? Not supported. The Ferrari metaphor is apt: you’ve got to shell out some bucks for the high-octane stuff before you can stand up ESXi in a meaningful way.

My second observation is that I’m not comprehending the switching model very well. I was really excited to see Cisco Discovery Protocol just work on mouse-hover with zero configuration, but this 1:1 stuff feels archaic, devoid of the abstract fabric goodness:

What am I missing here?


On my ESXi box, I’ve got two Intel GigE adapters. I have the option to make them active/passive (cool) or team them, but I’m not seeing the same converged fabric concept that’s liberated me in Hyper-V 3.0 from, guess what, worrying about hardware.

The three NICs on my Hyper-V host, for instance, are joined in an LACP team, which then is used to build a true & advanced virtual switch for both the host & the guests. And an LACP-capable switch is not a requirement here; I could use the dumb switch in my rack and have the same fault-tolerant (though lower performing) converged team.

Some very simple PowerShell lines later, and you’ve got vEthernet adapters on the management OS tagged with the appropriate VLAN.
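
Here’s the gist of those lines. The team, switch, and adapter names plus the VLAN IDs are from my lab, so adjust to taste:

```powershell
# Build a converged fabric on a Hyper-V 3.0 host: LACP-team the physical NICs,
# hang one virtual switch off the team, then carve tagged vEthernet adapters
# out of it for the management OS.
New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1','NIC2','NIC3' `
    -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'ConvergedTeam' `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Management traffic on VLAN 10
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'ConvergedSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10

# Live Migration traffic on VLAN 20
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'ConvergedSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
```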

All ports on the physical Cisco switch? Trunked.

Freedom.


I know I’m missing something here…PowerCLI? I’ll be testing that tonight.

Irony & Memories in eBay packaging

I just shipped my last ChromeOS device to a buyer on eBay, and while photographing the packaging just prior to shipment, I had cause to reflect on this device, the box it was in, and the year 2013, a strange year for me computing-wise, a year in which the Windows guy abandoned Microsoft completely.

Shipping the Chromebook inside the box you just received a 2u server chassis in for your home lab? Nerd irony perfected.

Come, emote with me.

As 2012 ended, I slowly but steadily realized I hated Windows 8. Strike that. I reviled it. Its only saving grace was that Hyper-V came baked into Windows 8 Pro and Enterprise, but even that wasn’t enough to save it for me. I hated the tiles, the split-brained nature of the thing, the helter-skelter implementation, the awful Windows Store that was bereft of anything useful for work, or fun for home.

And I resented the shit out of Microsoft for making Server 2012 boot to the awful Start Screen, where it is about 10 times more useless than in Windows 8. I think my colleagues and I actually booed and hissed the first time we ran Server 2012 and had to hunt for the start screen activator thing like we were playing Enterprise Whack-A-Mole.

Dystopia in a Server UI

Windows, to borrow from Steve Jobs (and give another Hear! Hear! to Paul Thurrott for his essay last week), was my work truck. I dumped all my stuff in it, had built a toolbox for the truck bed, and knew exactly which levers and buttons to push to make my Windows boxes purr and perform. And while Microsoft had done some great upgrades to the truck for Server 2012 (the networking stack in particular), unless you ran Core (you should!) it was all masked by that goofy, wretched Start Screen.

Who among us didn’t get frustrated at being mouse/keyboard guys and suddenly facing a designed-by-committee touch interface on our dual or triple LCD displays?

Fed up, I went for the sugar high of using Mac OS X with Parallels, but that wore off after a few weeks. 

So I ran kicking and screaming to the arms of Google. I stuck some rainbow-colored G way up into the emptiness of my heart, the place Microsoft had once occupied. I went deep, balls-deep, into the Chrome.

I loved the integration, the speed, the ubiquity & presence of all Chrome apps on all devices all in perfect sync, all my stuff living up there in the nebulous but omniscient Google “cloud.” It was a no-nonsense OS, the new operating system for people who just wanted to get shit done. I joined the Beta channel, then Dev, got familiar with the chrome://flags screen, and more.

A flurry of purchases ensued. The CR-48 came out of storage. I bought the ARM-based Samsung Chromebook. Then a rare and prized Google I/O 2012 Chromebox with a Core i5. The Windows box at home went into storage, the laptop at work went to an incoming exec, and I maintained my enterprise via the Chromebox for much of 2013.

My colleagues thought I was nuts (they’re right) but I made it work, and it wasn’t even (that) hack-ish. For remote desktop I bought ChromeRDP for $10 (A+++ would buy again), and in order to run my Windows applications on the Chromebox, I stood up a VM and built out the incredible RemoteSpark HTML5 RDS server written by a small company in Canada (a solution so awesome that Google & VMware appear to be ready to copy it in 2014).

In my own mind, I was an IT Hero, pointing the way forward, demonstrating that with ChromeOS, you could have your cake and eat it too: a high-performance, secure & cheap desktop platform giving you reliable access to your Windows-based server stack, the .NETs and the ASPs and the IISes and the Exchanges and SQLs happily existing within my modern, fast and slick browser operating system. I was ecstatic.

“Don’t you see?!?” I cried out to my colleagues, as if I was John the Baptist, announcing the Messiah’s arrival.

Windows applications inside ChromeOS via RemoteSpark, an awesome little HTML5 JavaScript server package that plugs into Microsoft Remote Desktop Services. Google and VMware hope something like this will be the dagger that kills off Microsoft in the enterprise.

“This is what John Gage of Sun was talking about so long ago. We’re here! The network is now the computer!” I wailed, sackcloth and ashes now, as the networking guy backed away slowly, and passwords to critical systems were changed.

It all felt so right, so perfect, so wonderful. It is, after all, bliss at the top of the Gartner Hype Cycle chart.

But then, in perhaps the most spectacular IT Icarus story you’ve ever heard, I got so close to the promised land, so near the warm and beautiful future that awaits us (Agnostic Computing, where you don’t care what device you’re on), that my wings burned off and I fell to the server room floor in a pile of shattered dreams, Cat 5 cable, and hopes.

Snowden. The NSA. Compromised SSL certs. RSA, the standard in security, but in reality a research branch of the NSA. The dawning realization that the cost was too high, that I was surrendering too much for this convenience. And oh yeah, the $$$ cost was probably about the same as the on-prem stuff, and guess what? I got more 9s than the lot of them.

Disillusionment, despair, depression, all over again.

ChromeOS -and the stuff supporting it- not so shiny anymore.

And then, just like that, summer ended. I saw screenshots of Windows 8.1. I saw my beloved Start button return. I saw options to banish the Start Screen for good if I liked. I saw Windows Management Framework 4.0, PowerShell 4.0, and so many other goodies. Then Ballmer got sacked, following Sinofsky, and Gates was Alpha Dog once more.

Microsoft was still lost and confused, perhaps fatally, but at least I got my Start button back. Server 2012 R2, while not perfect, was what Server 2012 should have been, I thought. The “CloudOS” needn’t be so; you could keep all that stuff on-prem if you like. Yeah it’s not as elegant or complete as Chrome from a user standpoint, but it’s not as compromised either.

And so I resolved last fall to sharpen my skills rather than surrender to the cloud providers. I bought into a DIY & “Maker” aesthetic that seems to be, in my observations of the industry at least, getting some traction among IT pros lately.

 

Techtonic Shift – Android apps on Windows?

Tom Warren, The Verge’s ace Microsoft reporter, with a startling headline:

Sources familiar with Microsoft’s plans tell The Verge that the company is seriously considering allowing Android apps to run on both Windows and Windows Phone. While planning is ongoing and it’s still early, we’re told that some inside Microsoft favor the idea of simply enabling Android apps inside its Windows and Windows Phone Stores, while others believe it could lead to the death of the Windows platform altogether. The mixed (and strong) feelings internally highlight that Microsoft will need to be careful with any radical move.

Radical is understating it a bit.

Linux/Android applications running natively on Microsoft Windows desktops and/or Windows 8 phones, not because some nerd went and accomplished a great feat of software engineering, but because Microsoft needs it?!?

That’s not just crazy. It’s almost heretical.

It’s a thought so wild that the phrase paradigm shift doesn’t do it justice. No, this is more like magnetic south switching to magnetic north. This is lions-lying-down-with-the-lambs territory, people, except in this case, Microsoft was the Lion, *nix the Lamb, and the Lion, as is its nature, bullied the Lamb around for a few decades, but the Lamb just ate the Lion and is now resting, a satisfied look on its face.

This is end-is-near-grab-the-sandwich-board-meet-you-on-the-corner news.

It’s like waking up one day, and holy crap, the dollar has crashed, and in order to maintain financial stability in the western hemisphere, Mexico bails the US out, air-dropping truck loads of pesos from C-130s all over America, rescuing us from ruin.

Step back 10-12 years, when you were young and crazy, undersexed and over-curious with no money, and I bet you experimented with Linux. Remember those days? For me it was about the VAX machine…what was it, what did it do, why was my university email address so strange, and why did the guys in charge of that refrigerator-sized box all have beards, suspenders, and grumpy dispositions?

So I did what any geek did in 2000-2001. I downloaded/bought a copy of SuSE or Caldera OpenLinux or Red Hat or whatever distro was in favor that month, used PartitionMagic to divide up my 16GB drive, and booted into some flavor of Linux, feeling like a stud. Penguins, man! Linux on the Desktop! It’s for real this time!

“Hey, this isn’t that bad,” you thought. “Most stuff works here pretty good. I could get used to this. Now let me see if I can get that WINE thing to work.”

Four hours of cursing & violent threats to your PC later, you resign yourself to defeat, realizing you’ll never get Win32 to work inside this strange Linux thing; you’re just not smart enough. You did the forum crawl thing and the Linux nerds tried to help, but you don’t have any concept of what sudo is, apt-get isn’t around yet, there are only RPMs, and it’s all so chaotic.

Besides, at the end of the day, you have Photoshop available and all they can bring out is the GIMP, you think smugly.

So you reboot and join your friends in a campus Counter-Strike party in Windows 98. And you go on to develop your moderately successful career as an IT “knowledge worker” supporting Microsoft products until you die, hopefully sometime after Microsoft Office 2042 is released, but you never know.

The End.

Except it’s not, and now, 13 years later, it’s us Windows guys -and Microsoft itself!- who are trying to figure out how to run Android applications on Windows cause that’s where all the exciting stuff is happening and it’s where all the cool kids are hanging out.

Wow.

And here I thought Paul Thurrott’s devastating takedown of Windows 8 development was the biggest Windows story this week.