“Assume Breach” not just at work, but at home too

Security has been on my mind lately. I think that in the Spring of 2015, we’re in a new landscape regarding security, one that is much more sinister, serious and threatening than it was in years past. I used to think anonymity was enough, that there was safety in the herd. But the rules & the landscape have changed, even from just 12 or 24 months ago. So let’s do an exercise; let’s suppose for the sake of this post that the following are true:

  • Your credit history and your identity are objects in the marketplace that have value and thus are bought and sold between certain agents freely
  • These things are also true of your spouse or significant other’s credit history & identity, and even your child’s
  • Because these things are true, they are also true for malefactors (literally, bad actors) just like any other object that has value and can be traded
  • There is no legal structure in America aside from power of attorney that allows a single member of a family to protect the identity and credit history of another member of his/her family.
  • The same market forces that create innovation in enterprise technology are now increasing the potency of weaponized malware systems; that is to say, financial success attracts talent, which begets better results, which begets more financial success.
  • The engineers who build malware are probably better than you are at defending against them, and what’s more, they are largely beyond the reach of local, state, or national law enforcement agencies. ((Supposing that your local Sheriff’s Department even has the in-house know-how to handle security breaches, it still lacks jurisdiction in Ukraine))
  • The data breaches and mass identity theft of 2014 & 2015 are somewhat similar to a classic market failure, but no cure for this will be forthcoming from Washington, and the trial attorneys & courts who usually play a role in correcting market failures have determined your identity & credit history are worth about $0.14 (($10 million settlement ÷ 70 million victims of the Target breach ≈ $0.14))
  • Generally speaking, most IT departments are bad, suffering from poor leadership, poorly-motivated staff, conflicting direction from the business, an inability to meet the business’ demands, or a lack of C-level support. IT is broken, in other words
  • All of this means it’s open season on you and your family’s identity & credit history, which we have to assume rest unencrypted on unpatched SQL servers behind an ASA with a list of unmitigated CVEs maintained by some guys in an IT department who hate their job
Don’t be like these people. Secure your online identity now

There it is. That’s the state of personal identity & credit security in 2015 in America, in my view.

And worst of all, it’s not going to get better as every company in America with your data has done the math from the Target settlement and the beancounters have realized one thing: it’s cheaper to settle than to secure your information.

Assume breach at home

If this is truly the state of play -and I think it is- then you as an interested father/mother, husband/wife need to take action. I suggest an approach in which you:

  1.  Own your Identity online by taking SMTP back: Your SMTP address is to the online world what your birth certificate and/or Social Security number is to the meatspace world: everything. Your SMTP address is the de facto unique identifier for you online ((By virtue of the fact that these two things are true of SMTP but are not true of rival identity systems, like Facebook or Google profiles: 1) your SMTP address is required to transact business or utilize services online, or is required at some point in a chain of identity systems, and 2) SMTP is accepted by all systems and services as prima facie evidence of your identity because of its uniqueness & global acceptance, and rival systems are not)), which raises the question: why are you still using some hippy-dippy free email account you signed up for in college, and why are you letting disinterested third-party companies host & mine something so vital to your identity, for free? Own your identity and your personal security by owning and manipulating SMTP like corporations do: buy a domain, find a hosting service you like, and pay them to host your email. It doesn’t cost much, and besides, you should pay for that which you value. And owning your email has value in abundance: with your own domain, you can make alias SMTP addresses for each of the following: social media, financial, shopping, food, bills, bulk; then direct your accounts to them as appropriate. This works especially well in a family context, where you can point various monthly recurring accounts at a single SMTP address that you can redistribute via other methods and burn/kill as needed. ((Pretty soon, you and your loved ones will get the hang of it, and you and your family will be handing out food@domain.com to the grocery store checkout person, retail@domain.com for receipts, shopping@domain.com for the ‘etailers’ and apple@domain.com for the two iPhones & three other Apple devices you own.))
  2. Proxy your financial accounts wherever possible: Mask your finances behind a useful proxy, like Paypal, perhaps even Mint. The idea here is to put a buffer between your financial accounts and the services, people, and corporations that want access to them and probably don’t give two shits about protecting your identity or vetting their own IT systems properly. Whenever possible, I buy things online/pay people/services via Paypal or other tools so that use of my real accounts is minimized. Paypal even offers a business credit card backed by the Visa logo, which means you can use it in brick ‘n mortar stores like Target, where the infosec is as fast and loose as the sales and food quality.
  3. Filter the net at home and wherever else you can: Spyware, malware and viruses used to be an annoyance, the result of a global dick-measuring contest for geeks and nerds who liked to tinker and brag. But no more; today’s malware systems are weaponized and potent, and that puts you and your family at a huge disadvantage, as it’s difficult to secure all the devices creeping into your life, let alone worry about the bad IT departments stewarding your sosh, DOB, mother’s maiden name and home address at RetailCo. I suggest a heavy filtering strategy by whatever means you can: whitelist JavaScript filtering on Windows PCs, paid OpenDNS malware filtering, or a device like an ITUS Networks box or the ZyXEL I have. Get to know Privoxy well, as I think filtering ads from websites is even fair now, given that the major ad networks apparently can’t keep malware from creeping into their inventory. Finally, invest some time and study into certificates and periodically review their use, as there are Certificate Authorities out there that you should not trust.
  4. Use Burner Numbers: Similar to SMTP, your standard US 10-digit POTS/mobile number is a kind of unique identifier to companies, sitting in an unsecured table somewhere, no doubt. Use burners where you can, as your 10-digit mobile number is important both as a unique identifier and as an off-net secondary notification/authentication channel. If Google Voice is to be killed off, as it appears to be, consider Ooma, where for $100/year you can spawn burner numbers and use them the same way you use SMTP aliases. Otherwise, use a burner app on your phone for quick throwaway numbers.
  5. Consider Power of Attorney or Incorporation: This is admittedly a little crazy, but words can’t describe how furious you’ll be when a family member’s identity has been stolen and some scummy organization that calls itself a bank is calling to verify that you’ve purchased $1000 in Old Navy gift certificates in Texas -something completely out-of-sync with your credit history- but they refuse to stop the theft because it’s happening to your wife, not you, and your wife can’t come to the phone right now.  The solution to this problem is beyond me, but probably involves a “You can’t beat ’em, join ’em” approach coupled with an attorney’s threatening letter.
  6. Learn to Love Sandboxing: Microsoft has a free and incredibly powerful tool called the Enhanced Mitigation Experience Toolkit, or EMET, which allows you to select applications and essentially sandbox them so that they can’t pwn your entire operating system. Learn to use and love it (see the sketch after this list). But the idea here goes beyond Win32 to the heart of what we should be doing as IT Pros: standing-up and tearing-down instances of environments, whether those environments are Docker containers, Windows VMs, jails in BSD, or KVM virtual machines. Such techniques are useful beyond devops; they are also useful as operational security techniques at home, in my view.
  7. Go with local rather than national financial institutions: Where possible, consider joining a local credit union, where infosec practices might not be state of the art, but your family’s finances have more influence and weight than they do at a Bank of America.
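
Since item 6 mentions EMET, here’s roughly what opting applications into it looks like from the command line. A sketch only: the install path and the EMET_Conf switches below assume EMET 5.x, so verify them against your version’s docs.

# Enroll a couple of internet-facing apps in EMET's default mitigations (elevated prompt)
$emet = "${env:ProgramFiles(x86)}\EMET 5.2\EMET_Conf.exe"
& $emet --set "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe"
& $emet --set "${env:ProgramFiles(x86)}\Java\jre7\bin\java.exe"
& $emet --list    # confirm what's now protected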

I am not a security expert, but that’s how I see it. If we IT pros are to assume breach at work, as many experts advise us to, we should assume breach at home too, where our identities and those of our loved ones are even more vulnerable and even more valuable.

How to Superfish Your Users : SSL Proxy in a Windows Network

When in the course of IT events it becomes necessary to inspect all traffic that hits your users’ PCs, there is but one thing you can do in 2015: get a proxy server or service, deploy a certificate to your trusted root store, and direct all traffic through the proxy.

Why would you do what amounts to a Man in the Middle Attack on your users as a responsible & honest IT Pro? Why Superfish your users? ((

IT Shakespeare put it like this:

To proxy SSL or not to proxy, that is the question

whether ’tis nobler in the mind to suffer

the breaches and theft of outrageous malware

or to take Arms against a sea of digital foes

and by opposing, only mitigate the threat.

To protect via decrypt; Aye, there’s the rub

Thus Conscience does make Cowards of us all

and lose the name of Action))

Numbers are hard to pin down, ((I am not a security expert, and though I checked sources I respect like the Norse IP Viking security blog, Malwarebytes Unpacked blog, SearchSecurity.com etc, I found very few sources that put a percentage on how much malware is encrypted and thus difficult to detect. This NSS Labs report from summer 2013 comparing Next Gen Firewall SSL Decryption performance, for instance, says that “the average proportion of SSL traffic within a typical enterprise is 25-35%” and that only ~1% of malware is encrypted. A GWU security researcher named Andre DiMino has a series of good blog posts on the topic, showing what SSL-encrypted malware looks like in Wireshark. Team Cymru’s TotalHash database is probably the most comprehensive open DB of malware samples, but I don’t feel qualified to search it, frankly)) but it seems an increasing amount of virulent & potent malware is arriving at your edge encrypted. Because those packets are encrypted, you essentially can’t scan the contents. All you get is source/destination IP address, some other IP header information, and that’s about it.

No bueno.

One option, really your only option at that point, is to crack open those packets and inspect them. Here’s how.

1. You need a proxy server or service that does security inspection.

I’ve seen ZScaler used at several firms. ZScaler dubs itself the premier cloud-based SaaS proxy service, and it’s quite a nifty service.

For a fee per user, ZScaler will proxy most if not all of your internet traffic from several datacenters around the globe, sort of like how CloudFlare protects your websites.

The service scans all that http and https traffic, filters out the bad and malicious stuff, blocks access to sites you tell it to, and sends inspected http/https content to your users, wherever they are, on-prem or connected to the unsecured Starbucks access point.

2. You need to bundle those proxy settings up into a .pac file

Getting the service is one thing, you still need to direct your users and computers through it. The easiest way is via Group Policy & what’s called a .pac file.

A .pac file is a small piece of JavaScript, generated by ZScaler in this case, that contains your preferences, settings, and lists of sites you’d prefer to bypass the filter. It looks like this:


function FindProxyForURL(url, host)
{
    var resolved_host_ip = dnsResolve(host);

    // If the ZScaler gateway can't be resolved, fail open and go direct
    if (!isResolvable("gateway.zscaler.net"))
        return "DIRECT";

    // FTP doesn't transit the proxy
    if (url.substring(0, 4) == "ftp:")
        return "DIRECT";

    // If the requested website is hosted within the internal network, send direct
    if (isPlainHostName(host) ||
        isInNet(resolved_host_ip, "1.1.1.1", "255.0.0.0"))
        return "DIRECT";

    // If the requested website is SSL and associated with Microsoft O365, send direct
    // (host pattern illustrative)
    if (shExpMatch(host, "*.outlook.com"))
        return "DIRECT";

    // Everything else transits the ZScaler gateway
    return "PROXY gateway.zscaler.net:80; DIRECT";
}

3. Deploy the .pac file via Group Policy to Users

Next, you need to pick your favorite deployment tool to push the .pac file out and set Windows via IE to proxy through ZScaler. We’ll use Group Policy because it’s fast and easy.

Under User Configuration > Policies > Windows Settings > Internet Explorer Maintenance > Connection / Automatic Browser Configuration, select Enable.

Then point the Auto-proxy URL to your Zscaler .pac file URL. It looks like this:

grouppolicy

Keep Group Policy open, because we’re not done quite yet.
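
If you’d like to sanity-check the .pac on a single test machine before pushing the GPO, you can set the same auto-config URL by hand. A sketch; the .pac URL below is hypothetical, so substitute the one from your ZScaler admin portal:

# IE (and Chrome, which follows IE's settings) reads this per-user value
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings'
Set-ItemProperty -Path $key -Name AutoConfigURL -Value 'http://pac.zscaler.net/yourcompany.pac'
Get-ItemProperty -Path $key -Name AutoConfigURL    # verify it took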

4. Download the ZScaler Root CA certificates

You’ll find the certs in the administration control screen of ZScaler. There are two:

  • ZscalerRootCertificate-2048.crt
  • ZscalerRootCertificate-2048-SHA256.crt

The two certificates are scoped similarly; the only difference seems to be the SHA1 vs SHA256 signature hash.

Double-click the certificate you prefer to use, and notice that Windows alerts you to the fact that it’s not trusted. Good on ya Microsoft, you’re right.

To validate this setup, you’ll probably want to test before you deploy. So select Install Certificate, choose Local Machine (not Current User), and navigate to the Trusted Root CA store:

rootca

or you can do it via PowerShell:


PS C:\daisettalabs.net> Import-Certificate -FilePath C:\users\jeff\Downloads\ZscalerRootCerts\ZscalerRootCertificate-2048-SHA256.crt -CertStoreLocation Cert:\LocalMachine\Root

Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root

Thumbprint                                Subject
----------                                -------
thumbprint                                E=support@company.com, CN=Zed, OU=Zed Inc, O=Zed's Head, L=The CPT, S=CaliforniaLove, C=USA

5. Verify that the .pac file is in use

Now that you’ve installed the .pac file and the certificate, ensure that IE (and thus Chrome, but not necessarily Firefox) has been set to proxy through Zscaler:

Your settings will differ no doubt from my screenshot

6. SSL Proxy Achievement Unlocked:

Go to Google or any SSL/TLS encrypted site and check the Certificate in your browser.

You should see something like this:

googlewithz
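
Or skip the browser and ask the certificate store directly whether the ZScaler root landed where it should; a quick check:

# Lists any trusted roots whose subject mentions Zscaler
Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -like '*Zscaler*' }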


7. You can now deploy that same certificate via Group Policy to your Computers.

It’s trivial at this point to deploy the ZScaler certificates to end-user PCs via Group Policy. You’ll want to use Computer Preferences.

Once deployed, you’ll get comprehensive scanning, blocking and reporting on your users http/https use. You can of course exempt certain sites from being scanned ((Before you do this, make sure you get your Legal department or corporate controller’s sign-off on this. Your company needs to understand exactly what SSL Proxy means, and the Gordian Knot of encryption.

By making all SSL traffic visible to your proxy service, you may gain some ability to prevent potent malware attacks, but at the cost of your users’ privacy. When a user transacts business with their bank, their session will be secured, but only between the ZScaler cloud and the bank’s webserver. The same is true of Facebook or personal email sites.

By doing this, you’re placing an immense amount of trust in the proxy server/service of your choice. You’re trusting that they know what they’re doing with Certificates, that they didn’t use a weak password. You’re trusting that they have their act together, and you’re doing this on behalf of all your users who trust you. This is not to be taken lightly, so run it up the legal/HR flagpole before you do this. ))

Microsoft’s commitment to open initiatives & the riddle of whitebox networking

On Tuesday Microsoft surprised me by announcing an open switching/networking plan in partnership with Mellanox and as part of the Open Compute initiative.

Wait, what?

Microsoft’s building a switch?

Not quite, but before we get into that, some background on Microsoft’s participation in what I call OpenMania: the cloud & enterprise technology vendor tendency to prefix any standards-ish cooperative work effort with the word Open.

Microsoft’s participating in several OpenMania efforts, but I only really care about these two because they highlight something neat about Microsoft and apply or will soon apply to me, the Converged IT Guy.

Open Compute, or OCP, is the Facebook-led initiative to build agnostic hardware platforms on x86 for the datacenter. I like to think of OCP as a ground-up re-imagining of hardware systems by guys who do software systems.

As part of their participation in OCP, Microsoft is devoting engineering resources and talent into building out specifications, blueprints and full hardware designs for things like this, a 12U converged chassis comprised of storage and compute resources.

ocs
Are those brown Zunes in the blades?


Then there’s Open Management Infrastructure (OMI), an initiative of The Open Group (TOG). Microsoft joined OMI almost three years ago to align & position Windows to share common management frameworks across disparate hardware & software systems.

That’s a lot of words with little meaning, so let me break it down for the Windows guys and gals reading this. The promise of Microsoft’s OMI participation is this: you can configure other people’s hardware and software via the same frameworks your Windows Server runs on (CIM, the next-gen WMI) using the same techniques and tooling you manage other things with: Powershell.

All your management constructs are belong to CIM

I’ve been keenly interested in Microsoft & their OMI push because it’s an awesome vision, and it’s real, or real-close at any rate: SMI-S, for instance, is gaining traction as a management play on other people’s hardware/software storage systems ((cf Nimble Storage now integrates with SCVMM)), and is already baked into Windows Server as a feature you can install and use to manage Windows Storage Spaces, which itself is a first-class citizen of CIMville.

All your CIM classes -running as part of Windows or not- manipulated & managed via Powershell, the same ISE you and I use to deploy Hyper-V hosts, spin-up VMs, manage our tenants in Office 365, fiddle around in Azure, and make each day at work a little better and a little more automated than the last.
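
Concretely, here’s what that looks like: the same CIM cmdlets you use against Windows, pointed at any OMI-enabled device instead. A hedged sketch; the device address, port, namespace and class below are assumptions that vary by the vendor’s OMI provider.

# Build a WSMan-over-SSL CIM session to a non-Windows, OMI-enabled device
$opts   = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck
$switch = New-CimSession -ComputerName 10.0.0.2 -Port 5986 -SessionOption $opts -Credential (Get-Credential) -Authentication Basic

# Query a standard DMTF class, exactly as you would against a Windows box
Get-CimInstance -CimSession $switch -Namespace root/cimv2 -ClassName CIM_EthernetPort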

That’s the promised land right there, ladies and gentlemen.

Except for networking, the last stubborn holdout in my fevered powershell dream.

Jeff Snover, the architect of the vision, teases me with Powershell Leaf Spine Tweets like this:


but I have yet to replace Putty with Powershell; I still have to type show int status rather than Show-Interface -Status Connected on my switch, because I don’t have an Arista or N7K, and few other switch vendors seem to be getting the OMI religion.

All of which makes Microsoft’s Tuesday announcement that it is extending its commitment to OCP’s whitebox switching development really odd yet worthy of more consideration:

The Switch Abstraction Interface (SAI) team at Microsoft is excited to announce that we will showcase our first implementations of the specification at the Open Compute Project Summit, which kicks off today at the San Jose Convention Center. SAI is a specification by the OCP that provides a consistent programming interface for common networking functions implemented by network switch ASIC’s. In addition, SAI allows network switch vendors to continue to build innovative features through extensions.

The SAI v0.92 introduces numerous proposals including:

  • Access Control Lists (ACL)
  • Equal Cost Multi Path (ECMP)
  • Forwarding Data Base (FDB, MAC address table)
  • Host Interface
  • Neighbor database, next hop and next hop groups
  • Port management
  • Quality of Service (QoS)
  • Route, router, and router interfaces

At first glance, I wouldn’t blame you if you thought that this thing, this SAI, means OMI is dead in networking, that managing route/switch via Powershell is gone.

But looking deeper, this development speaks to Microsoft’s unique position in the market (all markets, really!).

  1. SAI is probably more about low-level interaction with Broadcom’s Trident II ((At least that’s my read on the Github repo material)) and Microsoft’s participation in this is more about Azure and less about managing networking stuff w/Powershell
  2. But this is also perhaps Microsoft acknowledging that Linux-powered whitebox switching is really enjoying some momentum, and Microsoft needs to have something in this space

So, let’s review: Microsoft has embraced Open Compute & Open Management. It breaks down like this:

  • Microsoft + OCP = contributions of hardware blueprints, but also low-level software code for things like ASIC interaction
  • Microsoft + OMI = a long-term strategic push to manage x86 hardware & software systems that may run Windows but just as likely run something Linuxy

In a perfect world, OCP and OMI would just join forces and be followed by all the web-scale players, the enterprise technology vendors, the storage guys & packet pushers. All would gather together under a banner singing kumbaya and praising agnostic open hardware managed via a common, well-defined framework named CIM that you can plug into any front-end GUI or CLI construct you like.

Alas, it’s not a perfect world and OCP & OMI are different things. In the real world, you still need a proprietary API to manage a storage system, or a costly license to utilize another switchport. And worst of all, in this world, Powershell is not my interface to everything, it is not yet the answer to all IT questions.

Yet Microsoft, by virtue of its position in so many different markets, is very close now to creating its own perfect world. If they find some traction with SAI, I’m certain it won’t be long before you can manage an open Microsoft-designed switch that’s a first-class OMI citizen and gets along famously with Powershell! ((Or buy one, as you can buy the Azure-in-a-box which is simply the OCP blueprint via Dell/Microsoft Cloud Platform System program))

The Value of Community Editions

I was excited to hear on the In Tech We Trust podcast this week that the godfather of all the hyperconverged things -Nutanix- may release a community edition of their infrastructure software this year.

That. Would. Be. Amazing.

I’ve crossed paths with Nutanix a few times in my career, but they’ve always remained just a bit out of reach in my various infrastructure projects. Getting some hands-on experience with the Google-inspired infrastructure system in my lab at home would be most excellent, not just for me, but for them, as I like to recommend product stacks I’ve touched above ones I haven’t.

Take Nexenta as an example. As Hans D. pointed out on the show, aside from downloading & running Oracle Solaris 12, Nexenta’s just about the only way one can experience a mature & enterprise-focused implementation of ZFS. I had a blast testing Nexenta out in my lab in 2014, and though I can’t say my posts on ZFS helped them move copies of NexentaStor, it surely didn’t hurt, in my view.

Veeam is also big in the community space, and though I’ve not tested their various products, I have used their awesome stencil collection.

Lest you think storage & hyperconvergence vendors are the only ones thinking ‘community’, today my favorite yellow load balancer Kemp announced, in effect, a community edition of their L4/L7 LoadMaster vAppliance. Kemp holds a special place in the hearts of Hyper-V guys; as long as I can remember, yes, even back in the dark days of 2008 R2, they’ve always released a LoadMaster that’s just about on-par with what they offer to VMware shops. In 2015 that support is paying off, I think; Kemp’s best-in-class for Microsoft shops running Hyper-V or building out Azure, and with this announcement you can now stress a Kemp at home in your lab, or in Azure with your MSDN sub. Excellent.

Speaking of Microsoft, I’d be remiss if I didn’t mention Visual Studio 2013, which got a community edition last fall.

I’d love to see more community editions, namely:

  • Nimble Storage: I’ve had a lot of success in the last 18 months racking/stacking Nimble arrays in environments with older, riskier storage. I must not be the only one; the company recently celebrated its 5,000th customer. Yet Nimble’s rapid evolution from storage startup with potential to serious storage player is somewhat bittersweet for me, as I no longer work at the places I’ve installed Nimble arrays and can’t tinker with their rapidly-evolving features & support. Come on guys, just give me the CASL caching system in download form and let me evaluate your Fibre Channel support and test out your support for System Center.
  • NetApp: A community release of clustered Data ONTAP 8.2x would accomplish something few NetApp products have accomplished in the last few years: create some genuine excitement about the big blocky blue N. I’m certain they’ve got a software-only release in-house, as they’ve already got an appliance for vSphere, and I’ve heard rumors about this from channel sources for years. So what are you waiting for, NetApp? Let us build out, support, and get excited about cDOT community-style, since it’s been too hard to see past the 7-Mode to clustered-mode transition pain in production.

On his Graybeards on Storage podcast, Howard Marks once reminisced about his time testing real enterprise technology products in a magazine’s tech lab. His observations became a column, printed on paper in an old-school pulp magazine which was shipped to readers. This was a beneficial relationship for all.

Those days may be gone but thanks to scalable software infrastructure systems, the agnostic properties of x86, bloggers & community edition software, perhaps they’re back!

Whitebox lab server

Node1.daisettalabs.net, my primary PC and the best-equipped server in the homelab, has received an upgrade.

A whitebox upgrade. Literally:

IMG_20150303_052455318


I’m a fan of metaphors and whitebox everything is a powerful one in our line of work, so I figured why not roll my own whitebox server in the lab?

Node1 vitals:

  • Motherboard: Supermicro X10SAT with all the PCIe 3.0 slots you’d need, Thunderbolt port, and integrated Haswell graphics plus a pair of Intel NICs
  • CPU: Intel Core i7-4770K (Haswell), quad core with hyperthreading
  • RAM: 4x8GB Kingston Hyper-X non-ECC
  • Storage (Boot): 2x Samsung 850 SSD (240GB) in RAID 0, because I like to live dangerously. I’ve just about automated the buildout of this server, and most of my data is in OneDrive for Business
  • Storage (Tiered Storage Spaces): 2x 128GB SanDisk Extreme + 2x1TB WD Red 2.5″
  • Graphics: AMD FirePro W4100 w/ 2GB RAM makes my Visio buttery smooth.
  • Networking: The Supermicro has a pair of Intels, an I210 and an I217V, both of which connect up to my Cisco 2960S in the garage. To that I’ve also added a Pro/1000 PCIe 2.0 card with dual ports, one of which also connects to the 2960S (I only ran 3 cables from the garage to my home office)
  • OS: Server 2012 R2 Standard, naturally, with the full Desktop GUI and the Windows Management Framework February 2015 preview so that I can tinker with DSC (a quick sketch follows after this list)
  • Case: NZXT 340 something or other. Very nice case for $70. I’ve never wanted to exhibit the inside of a PC I’ve built, but this case makes it so simple to hide the nasty PC underlay (power, SATA etc)
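
Speaking of DSC, here’s the flavor of what that tinkering looks like; a minimal sketch where the node name and features are just examples, not the whole build:

Configuration Node1Lab {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'node1' {
        # Declare the end state; the Local Configuration Manager makes it so
        WindowsFeature HyperV {
            Name   = 'Hyper-V'
            Ensure = 'Present'
        }
        WindowsFeature FileServer {
            Name   = 'FS-FileServer'
            Ensure = 'Present'
        }
    }
}

Node1Lab -OutputPath C:\DSC                     # compile to a .mof
Start-DscConfiguration -Path C:\DSC -Wait -Verbose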

#WhiteboxGlory shot of the innards that make the child partition go “wooooow!!”

IMG_20150303_052331601


Hunting Lettered Drives in a Microsoft Enterprise

Of all the lazy, out-dated constructs still hanging around in computing, SMB shares mapped as drive letters on client PCs have to be the worst.

Microsoft Windows is the only operating system that still employs these stubborn, vestigial organs of 1980s computing. Why?

Search me. Backwards compatibility perhaps, but really? It’s not like you can install programs to shares mapped as drive letters, block-storage style.

If you work in Microsoft-powered shops like me, then you’re all too familiar with lettered drive pains. Let’s review:

  1. Lettered drives are paradigms from another era: Back in the dial-up and 300-baud modem days, you got in your car and drove to Babbage’s to purchase a big box on a shelf. The box contained floppy diskettes, which contained the program you wanted to use. You put the floppy in your computer and you knew instinctively to type a: on your PC. Several hours later, after installing the full program to your C: drive, you took the floppy out of its drive and A: ceased to exist. If this sounds archaic to you (it is), then welcome to IT’s version of Back to the Future, wherein we deploy, manage and try to secure systems tied to this model
  2. Lettered drives are dangerous: The Crypto* malware viruses of the last two years have proven that lettered drives = file server attack vector. I have friends dealing with Gen 3 of this problem today; a drive map from one server to all client PCs must be a Russian crypto-criminal’s dream come true.
  3. Your Users Don’t Understand Absolute/Relative paths: When users want to share a cat video from the internet, they copy + paste the URL into an email, press send, and joyous hilarity ensues. But anger, confusion, despair & Help Desk tickets result when those same users paste a relative path of G:\FridayFun\DebsFunnyCatVids into an email and press send. Guess what, Deb? Not everyone in the world has a G: drive. This is frustrating for IT, and Deb doesn’t understand why they’re so mad when she opens a ticket.
  4. Lettered drives spawn bad-practice offspring: Many IT guys believe that lettered drives suck, but they end up making more of them out of laziness, fear or uncertainty. For instance: say the P:\HR_Benefits folder is mapped to every PC via Group Policy, and everyone is happy. Then one day someone in HR decides to put something on the P: drive that users in a certain department shouldn’t see. IT hears about this and figures, “Well! Isn’t this a pickle. I think, good sir, that the only way out of this storm of bad design is to go through it!” and either stands up a new share on a new letter (\\fs\SecretHRStuff maps to Q:) or puts an NTFS Deny ACL on the sub-folder rather than disabling inheritance. More Help Desk tickets result, twice as many if the drive mapping spans AD Sites and is dependent on Group Policy.
  5. Lettered drives don’t scale: Good on your company for surviving and thriving throughout the 90s, 2000s, and into the roaring teens, but it’s time for a heart-to-heart. That M:\Deals thing you stood up in 1997 isn’t the best way to share documents and information in 2015, when the company you helped scale from one small site to a global enterprise needs access to its files 24/7 from the nearest egress point.

I wish Microsoft would just tear the band-aid off and prevent disk mapping of SMB shares altogether. Barring that, they should kill it by subterfuge & pain ((Make it painful, like disabling signed drivers or something))

But at the end of the day, we the consumers of the Microsoft stack bear responsibility for how we use it. And unfortunately, there is no easy way to kill the lettered drive, but I’ll give you some alternatives. It’s up to you to sell them in your organization:

  1. OneDrive for Business: Good on Microsoft for putting advanced and updated OneDrive clients everywhere. This is about as close to a panacea as we get in IT. OneDrive should be your goal for files, and your project plan should go a little something like this: 1) classify your on-prem file shares, 2) upload those files & classification metadata to OneDrive for Business, 3) install OneDrive for Business on every PC, device, and mobile phone in your enterprise, and 4) unceremoniously kill your lettered-drive shares
  2. What’s wrong with wack-wack? Barring OneDrive, it’s trivial to map a \\share\folder to a user’s Library so that it appears in Windows Explorer in a universal fashion, just like a mapped drive would
  3. DFS: DFS is getting old, but it’s still really useful tech, and it’s on by default in an AD Domain. Don’t believe me? Type \\yourdomain and see DFS in action via your NETLOGON & SYSVOL shares. You can build out a file server infrastructure -for free- using Distributed File System tech, the same kit Microsoft uses for Active Directory. Say goodbye to mapping \\share\sharename to Site1 via Group Policy; say hello to automatically putting bits of data close to the user via Group Policy.
  4. Alternatives: If killing off the F: drive is too much of an ask for your organization, consider locking them down as a top priority with tools like SMB signing, access-based enumeration and the other security bits available in Server 2012 and 2012 R2 (a quick sketch below).
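
Here’s the flavor of those quick wins; a minimal sketch assuming Server 2012 R2’s SMB cmdlets, with a hypothetical share name:

# Access-based enumeration: users only see the folders their ACLs actually grant
Set-SmbShare -Name 'HR_Benefits' -FolderEnumerationMode AccessBased

# Require SMB signing server-wide
Set-SmbServerConfiguration -RequireSecuritySignature $true

# Verify
Get-SmbShare -Name 'HR_Benefits' | Select-Object Name, FolderEnumerationMode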

My Little Red Zed Edge – ZyXEL Zywall USG-50 Review

So I have a confession to make. I love Zyxel USG firewalls.

There, I said it. Feels good to finally admit it, to come out of the closet as a ZedHead, more or less.

I do not fear the judgment of the packet-pushing literati on twitter, because my little Red Zed edge device is loaded with features and packed with value.  Way more value than an ASA 5505 at any rate.

And after like six months of trying to understand the damn thing, I finally get it. Let me tell you a little about RedZed.daisettalabs.net, the edge device guarding the home lab, Child Partition, Supervisor Mod spouse and me from the big bad internet.

redzed2

The Good

It’s so loaded with features, it’s practically a hyperconverged play: For $200 and change, my ZyXEL USG-50 Zywall is packed with features other vendors would have sharded out as separate SKUs long ago. Just take a look at the feature list here. Granted, the sexier ones are subscriptions, but ZyXEL lets you take them for a test drive for 30 days, which I of course did the moment I got it. I haven’t subscribed to any since they expired, and frankly was disappointed with the BlueCoat implementation, but I’m considering the IDP subscription.

Even excepting all of the subscription programs, the Zed punches above its class with features that offer real value for a small/medium business, or even nerds guarding the LAN at home. The ones I really appreciate are listed below.

It’s PKI in a box, with some good identity integration: I like Public Key Infrastructure systems and so should you. The ZyXEL comes with one built-in. Though modest in scope (essentially you can generate/sign certs, no revocation/responder pieces) this is a nifty thing to have at this pricepoint, just the kind of value-add a small business might look for.

The Zed also capably integrates with AD directly, though in my testing it was a bit clunky & quite slow to authenticate against a 2012R2 domain. So, you can do what I did and switch to RADIUS, or LDAP if that’s your speed.

Easy WAN LBFO: With the USG-50, you get two WAN links with the easy ability to fail over or spill over between them.

I’m using this in the lab at home and it works quite well. Though I only have one consumer internet connection, I’ve found that my provider hands out two public, routable IP addresses if I connect two cables to my modem. This is awesome -worth its own post really- as I’ve been able to test WAN failure on Zed.

On WAN Port 1, I’ve got my last edge firewall device, a small PFsense box with an AMD Sempron and privoxy.

On WAN Port 2, I’m cabled directly to the modem. You get quite a few options to manage failover/spillover between the links, just like when you’re making an MPIO storage policy to your array! Perfect.

Both links work (double-natting behind pfsense works too, though I only ran it like that for a short while), and failover is pretty much transparent on general web stuff; even a VPN service I run on node1 maintains connectivity during the failover.

Time for some Gifcam action:

wanfailover

Zyxel seems to know its target market quite well, and that market has commodity internet circuits -not private leased lines- connecting branch to HQ and branch to internet. WAN failover (no aggregation here, but I’m not sold on WAN aggregation yet) is important, and it’s huge that the Zed rocks LBFO out of the box, no licenses needed, and a few clicks to configure.

Zone-based firewall: I am not a security guy, but I understand the state-of-the-art thinking to be less Internal/External than it used to be, and more segmentation-everywhere via zones, based on a sort of defense-in-depth concept; create checkpoints, or at least rules, between the external & internal segments of your network, in other words.

Zones come built in by default with ZyXEL, and figuring out the proper way to use them is what caused me so much pain & suffering with this device for so many months.

Now, I think I’ve got the concept down, but I’m not confident enough to talk about how well this device secures zones internally or externally, so just know this: it’s there. The firewall is ICSA certified, though reading through those docs it didn’t seem like that was much more than a rubber-stamp.

Object-Oriented ports, interfaces, zones, and VLANs: So this is the heart of the USG line, more or less. It’s why some dislike working with USGs, and others, like me, warm up to and eventually appreciate it. YMMV.


So what’s this OO thing about? I like to think of it as an abstraction, just like anything else in virtualization. Let’s take a look at how the docs define Zones, for instance:

zones

Oh. That’s not so bad, right? As long as I know the rules, I should just be able to click this thing here, hit apply on that thing there, and voila! ping my SVI…ahhh damnit!

Locked out again.

But seriously, when you go to configure this screen:

Untitled picture

lock yourself out again, and refer back to the manual to figure out what you did wrong and you see this:

zones2

then you hop on putty post-factory default and it shows you something like this:

zywalcli

you feel kind of stupid and you start to hate this device, which seems to suffer from an acute case of Layer 2/Layer 3 identity disorder.

But struggle through it, packeteer, because what awaits you on the other end is, if not the SDN you’ve been waiting for, then at least something pretty damn flexible.

Here’s a primer to help you through:

Zone: A group of interfaces + a security context. You get three on the USG-50 line, DMZ, LAN1, LAN2

Interfaces: Software-based, not hardware. Three: LAN1, LAN2, DMZ. RENAME THESE!

Ports: The physical RJ-45; you get four

Port Groups: Hardware-based links connecting ports with each other

And the soft bits:

VLANs: VLANs exist on interfaces and cannot span multiple Zones. They act like trunked ports: they tag outbound and look for tags inbound. Do you like SVIs? Good, because you’ve got to put an IP on each VLAN (it’s required)

Bridges: A software link between interfaces at Layer 2. More or less the traditional definition of a switch, right? But you can put an IP on a bridge and -strangely- span zones with Bridges.

Vifs: As you would expect, simple vifs can be created in the contexts above. Useful.

I’m a visual person, so I made a little chart to help me get it.

zywall-oo

The chart shows a couple of things: 1) There are three zones in the four boxes. All the things inside each box belong to that zone, but not other zones. 2) The center circle area shows re-named port-groups. My best advice is to rename LAN1 & LAN2 into something else, so that you don’t get mixed up as I consistently did. 3) VLANs can exist in only one Zone but effectively span ports. 4) Bridge is all sorts of Twilight Zone, as a Bridge can join a VLAN in Zone LAN2 with the DMZ port group in Zone DMZ (but not its VLAN). 5) Ports are really nothing, just agnostic Layer 1 interfaces, or at least you can turn them into that.

From my Cisco switch’s perspective, this is great, and it enabled me to finally do what I wanted to do in my lab: tagging, everywhere and always, from edge to core and back out again over the airwaves! From my Meraki (VLAN 420 is for 2.4GHz and devices I don’t really trust, 421 is laptop net + 5GHz) to the Zywall through my 2960S, all is tagged, all is controlled and segmented.

Was it worth the six month fight with Zed to get to this point?

Why yes, yes it was.

All-around decent performance: Again, punching at or a little above its weight in firewall performance, and still offering good bang-for-buck value on encryption & IPS compared to the ASA 5505 ($340 retail), the other device I see everywhere in SME. Performance table based on spec sheets below:

| Item | Zed USG-50 | Cisco ASA 5505 |
| --- | --- | --- |
| SPI firewall throughput | up to 225 Mb/s | up to 150 Mb/s |
| 3DES/AES VPN throughput | 90 Mb/s | up to 100 Mb/s |
| IPS throughput | 30 Mb/s | up to 75 Mb/s |
| RJ-45 ports | 2x GbE WAN + 4x GbE LAN | 8x FE, two with PoE |
| IPSec tunnels (max) | 10 | 10 (base) |

You can buy it at Fry’s, which is how I got mine: Oh man, I am really putting myself out there by admitting to occasionally shopping at Fry’s Electronics. Visiting Fry’s usually depresses me…as a retail experience, it’s not aged well and seeing one row after another filled with discarded, rejected & returned technology items is a real downer.

But sometimes the sales are really compelling. I had my eye on the USG-50 for months at $240, but I couldn’t pull the trigger until I saw it was on sale at Fry’s one weekend for $200. So I bought it, racked/stacked it in my lab that evening, and now, six months later, I’m astounded that you can just walk in and buy a value & feature-packed device like this without talking to a VAR first.

ZyXEL could probably make more money if they parsed out the features as SKUs & sold the USG exclusively through the channel, but they don’t. They sell it in places you can find consumer/prosumer equipment and pack it with some nice features an IT guy can appreciate.

Good Update Tempo: No gripes on the amount of firmware updates ZyXel continuously pushes out for free. I watch the CVE list for vulnerabilities, and while ZyXel has a spotty record in other product lines, it looks like you have to go back to 2008 to find a CVE that applies to the USG line.

No one knows how to pronounce the goofy name, so you can nickname it: Wikipedia’s description of the origin of the ZyXEL name is fun:

When ZyXEL unveiled its first chip-design (ZyXEL was originally a modem-chip design company) back in the late 1980s, the company only had a Chinese name (pronounced Her-Chin = “people work together very hard”). So it had to come up with an English name for a trade show in Asia. The original idea was ZyTEL (“Zy” means nothing, “TEL” for telecommunications). The problem was that someone already had this name announced for the show. So they played around with the letters and came up with ZyXEL instead.

The name does not actually mean anything, although some people claim “XEL” is a word-play on “excellence”.

The next challenge was how to pronounce it (everybody in the company was Chinese at that time).

So they fed the name into an old speech synthesizer (reportedly it was an Amiga), and the synthesizer pronounced it “Zai-Cel.”

I gave up and call it Zed, the proper British pronunciation of the letter Z.

Embrace color in your stack:  Everyone’s putting some flourish & color into rack-mounted equipment, but Zed’s been Red for years.

Great, readable, dense documentation: Though I poke fun at the documentation above, it’s actually very very good at this price range. Six hundred pages good. Well-written too, with adequate diagrams, organization and scenarios.

Links at the bottom.

The Bad

Don’t use it as your DG for everything: If you are using a USG-line device, my advice is not to think of those LAN-side ports as Layer 2 switch ports, and furthermore, not to use this device as the default gateway handed out to clients that need LAN performance. Why?

Simple. It’s not really a switch, and it doesn’t perform well if you use it as such at Layer 2, and especially at Layer 3. Remember the zones above? Well they are security contexts, which means that your packets must gate through them, which will -mark my words- slow them down.

Simple example: using Red Zed as the DG on my LAN, I tested large (4GB) SMB 3 file copies to my storage box. I peaked at about 180 megabits/second, a truly pathetic number, but within the performance spec listed for the inspection engine looking at packets flowing between zones. Even within the same zone (same port-group, so effectively switching at Layer 2), I couldn’t hit above 45 megabytes/second, far less than the 260MB/s transfers I can achieve with my switch & LACP.

If you need performance but you like the Zone model, I recommend you use your switch as DG for servers and make the USG the gateway of last resort on the switch. Assuming your packets are tagged, you stay in your VLAN context throughout.

For untrusted clients, or clients that don’t need wired performance, use the USG-50 as the DG.

The Ugly & Conclusion

I can’t find anything ‘ugly’ about the USG. It’s a great device with a ton of functionality and neat features that make it a superb value against a more traditional ASA 5505.


Zywall USG50 CLI

ZyWALL USG 50_v3 manual

Microsoft is the original & ultimate hyperconverged play

The In Tech We Trust podcast has quickly become my favorite enterprise technology podcast since it debuted late last year. If you haven’t tuned into it yet, I advise you to get the RSS feed into your podcast player of choice ASAP.

The five gents ((Nigel Poulton, Linux trainer at Pluralsight; Hans De Leenheer, datacenter/storage guy and one of my secret crushes; Gabe Chapman; Marc Farley; and Rick Vanover)) putting on the podcast are among the sharpest guys in infrastructure technology, have great on-air chemistry with each other, and consistently deliver an organized & smart format that hits my player on-time as expected every week. Oh, and they’ve equalized the Skype audio feeds too!

And yet… I can’t let the analysis in the two most recent shows slip by without comment. Indeed, it’s time for some tough love for my favorite podcast.

Guys, you totally missed the mark discussing hyperconvergence & Microsoft over the last two shows!

For my readers who haven’t listened, here’s the compressed & deduped rundown of 50+ minutes of good stimulating conversation on hyperconvergence:

  • There’s little doubt in 2015 that hyperconverged infrastructure (HCI) is a durable & real thing in enterprise technology, and that that thing is changing the industry. HCI is real and not a fad, and it’s being adopted by customers.
  • But if HCI is real, it’s also different things to different people; for Hans, it’s about scale-out node-based architecture, while for others on the show, it’s more or less the industry definition: unified compute & storage with automation & management APIs and a GUI framework over the top.
  • But that loose definition is also evolving, as Rick Vanover sharply pointed out: EMC’s new offering, vSpex Blue, offers something more than what we’d traditionally (like two weeks ago) think of as hyperconvergence

Good stuff and good discussion.

And then the conversation turned to Microsoft. And it all went downhill. A summary of the guys’ views:

  • Microsoft doesn’t have a hyperconverged pony in the race, except perhaps Storage Spaces, which few like/adopt/bet on/understand
  • MS has ceded this battlefield to VMware
  • None of the cool & popular hyperconverged kids, save for Nutanix and Gridstore, want to play with Microsoft
  • Microsoft has totally blown this opportunity to remain relevant, and Hyper-V is too hard. Marc Farley in particular emphasized how badly Microsoft has blown hyperconvergence

I was, you might say, frustrated as I listened to this sentiment on the drive into my office today. My two cents below:

The appeal of Hyperconvergence is a two-sided coin. On the one side are all the familiar technical & operational benefits that are making it a successful and interesting part of the market.

  • It’s an appliance: Technical complexity and (hopefully) dysfunction are ironed out by the vendor so that storage/compute/network just work
  • It’s Easy: Simple to deploy, maintain, manage
  • It’s software-based and it’s evolving to offer more: As the guys on the show noted, newer HCI systems are offering more than ones released 6 months or a year ago.

The other side of that coin is less talked about, but no less powerful. HCI systems are rational cost centers, and the success of HCI marks a subtle but important shift in IT & in the market.

    • It’s a predictable check cut to fewer vendors: Hyperconvergence is also about vendor consolidation in IT shops that are under pressure to make costs predictable and smoother (not just lower).
    • It’s something other than best-of-breed: The success of HCI systems also suggests that IT shops may be shying away from best-of-breed purchasing habits and warming up to a more strategic one-throat-to-choke approach ((EMC & VMware, for instance, are titans in the industry, with best-in-class products in storage & virtualization, yet I can’t help but feel there’s more going on than the chattering classes realize. Step back and think of all the new stuff in vSphere 6, and couple it with all the old stuff that’s been rebranded as new in the last year or so by VMware. Of all that ‘stuff’, how much is best of breed, and how much of it is decent enough that a VMware customer can plausibly buy it and offset spend elsewhere?))
    • It’s some hybrid of all of the above: HCI in this scenario allows IT to have its cake and eat it too, maybe through vendor consolidation, or cost-offsets. Hard to gauge but the effect is real I think.

((As Vanover noted, EMC’s value-adds on the vSpex Blue architecture are potentially huge: if you buy vSpex Blue architecture, you get backup & replication, which means you don’t have to talk to or cut yearly checks to Commvault, Symantec or Veeam. I’ve scored touchdowns using that exact same play, embracing less-than-best Microsoft products that do the same thing as best-in-class SAN licenses))

And that’s where Microsoft enters the picture as the original -and ultimate- Hyperconverged play.

Like any solid HCI offering, Microsoft makes your hardware less important by abstracting it, but where Microsoft is different is that they scope supported solutions to x86. VMware, in contrast, only hands out EVO:RAIL stickers to hardware vendors who dress x86 up and call it an appliance, which is more or less the Barracuda Networks model. ((I’m sorry. I know that was a cheapshot, but I couldn’t resist))

With your vanilla, Plain Jane whitebox x86 hardware, you can then use Microsoft’s Hyperconverged software system (or what I think of as Windows Server) to virtualize & abstract all the things from network (solid NFV & evolving overlay/SDN controller) to compute to storage, which features tiering, fault-tolerance, scale-out and other features usually found in traditional SAN systems.
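
If the storage piece sounds hand-wavy, here’s roughly what the whitebox-to-tiered-storage path looks like on 2012 R2; a sketch, with example pool names and sizes:

# Pool every eligible local disk, then define SSD & HDD tiers
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName '*Storage Spaces*' -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Carve a mirrored, tiered virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VD01 -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror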

But it doesn’t stop there. That same software powers services in an enormous IaaS/PaaS cloud, which works hand-in-hand with a federated productivity cloud that handles identity, messaging, data-mining, mail and more. The IaaS cloud, by the way, offers DR capabilities today, and you can connect to it via routing & IPsec, or you can extend your datacenter’s Layer 2 broadcast domain to it if you like.

On the management/automation side, I understand & sympathize with the ignorance of non-’softies. Microsoft fans enthuse about Powershell so much because it is -today- a unified management system across a big chunk of the MS stack, either masked by GUI systems like System Center & Azure Pack or exposed as naked cmdlets. Powershell alone isn’t cool, though; Powershell & Windows Server aligned with truly open management frameworks like CIM, SMI-S and WBEM is very cool, especially in contrast to feature-packed but closed APIs.

On the cost side, there’s even more to the MS hyperconverged story: customers can buy what is in effect a single SKU (the Enterprise Agreement) and get access to most if not all of the MS stack.

Usually, organizations pay for the EA in small, easier-to-digest bites over a three-year span, which the CFO likes because it’s predictable & smooth. ((Now, of course, I’m drastically simplifying Microsoft’s licensing regime and the process of buying an EA, as you can’t add an EA to your cart & checkout; it’s a friggin negotiation. And yes, I know everyone hates the true-up. And I grant that an EA just answers the software piece; organizations will still need the hardware, but I’d argue that de-coupling software from hardware makes purchasing the latter much, much easier, and how much hardware do you really need if you have Azure IaaS to fill in the gaps?))

Are all these Microsoft things you’ve bought best of breed? No, of course not. But you knew that ahead of time, if you did your homework.

Are they good enough in a lot of circumstances?

I’ll let you judge that for yourself, but, speaking from experience here, IT shops that go down the MS/EA route strategically do end up in the same magical, end-of-the-rainbow fairy-tale place that buyers of HCI systems are seeking.

That place is pretty great, let me tell you. It’s a place where the spend & costs are more predictable and bigger checks are cut to fewer vendors.  It’s a place where there are fewer debutante hardware systems fighting each other and demanding special attention & annual maintenance/support renewals in the datacenter. It’s also a place where you can manage things by learning verb-noun pairs in Powershell.

If that’s not the ultimate form of hyperconvergence, what is?

Snover re-factoring Windows Server & System Center

My last two posts on Microsoft were filled with angst and despair at Microsoft’s announcement that the next gen versions of Server & System Center would be delayed until sometime in 2016. Why, I cried out, why the delay on Server, and what’s to become of my System Center, I wondered?

I went a bit off-the-rails, imagining that Satya Nadella had shaken things up for the System Center team. Then I wrote a letter to him asking him what was up.

Snover & Microsoft love Linux

Well, I was wrong on all that, or perhaps I was only a little bit right.

There was a shakeup, but it wasn’t Nadella who had angrily overturned a gigantic redwood table at System Center HQ, spilling Visio shapes & System Center management packs as he did so, rather it was Mr Windows himself, the Most Distinguished of Distinguished Technical Fellows, Dr. Jeffrey Snover who had shaken things up.

Yes. The Padre of Powershell himself filled in the gaps for me on why System Center & Windows Server were delayed, during a TechDays Online session one day after my last post.

During that talk, he announced that the Windows Server Team has been meshed with the System Center Team and, even better, the Azure team. Hot dog.

Redmond mag:

[Snover] explained that the System Center team and the Windows Server team are now “a single organization,” with common planning and scheduling. He said that the integration of the two formerly separate organizations isn’t 100 percent, but it’s better than it’s been in the past. The team also takes advantage of joint development efforts with the Microsoft Azure team, he added.

That’s outstanding news in my view.

Microsoft’s private|hybrid|public cloud story is second to none as far as I’m concerned. No one else offers such deep integration between a cutting-edge public cloud (Azure) and your on-prem legacy infrastructure stack.

Yet that deep integration (not speaking of AAD Sync & ADFS 3 here) was becoming confused and muddled, with overlap between the older tools (System Center) and newer tools like Desired State Configuration, mixed in with Azure Pack, an on-prem/cloud management engine.

It sounds to me like Snover’s going to put together a coherent strategy using all the tools, and I can’t think of a better guy to do the job.

But what of Windows Server?

It’s getting Snovered too, but in a way that’s not as clear to me. Again, Redmond mag:

The next Windows Server product will be deeply refactored for cloud scenarios. It will have just the components for that and nothing else, Snover explained. Next, on top of that, Microsoft plans to build a server that will be the same as the Windows Servers that organizations currently use. This server will have two application profiles. One of the application profiles will target the existing APIs for Windows Server, while the other will target the subsets of the APIs that are cloud optimized, Snover explained. On top of the server, it will be possible to install a client, he added. This redesign is happening to better support automation, he explained.

I watched most of Snover’s talk, took a few days to think about it, and still have no idea what to make of the high-level architecture slide below that flashed on screen briefly:

[Slide: Windows Server vNext high-level architecture]

Some thoughts that ran through my head: is the cloud-optimized server akin to CoreOS, with active/passive boot partitions, something that will finally make Patch Tuesday obsolete? One could hope that with further abstraction, we’ll get something like that in Windows Server vNext.

In some sense, we already have parts of this: if you enable the Hyper-V feature on a bare-metal computer, you emerge, after a few reboots, running a Windows virtual machine atop a Type-1 Hypervisor.
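
If you've never done it, the whole transformation is a one-liner (plus a reboot); a quick sketch on Server 2012 R2:

# Promote the bare-metal install to a Type-1 hypervisor host; the Windows
# you were just using effectively becomes a special VM riding on top of it
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart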

Big deal, right? Well, Snover’s slide seems to indicate this will be the default state for the next generation of Windows Server, but more than that, it seems to indicate that what we think of as the Type-1 hypervisor is getting a bunch of new features, like container support.

We knew Docker support was coming, but at this level, and almost indistinguishable from the hypervisor itself?

That’s potentially all kinds of awesome.
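
If it shakes out the way Snover's slide hints, I'd guess the day-to-day ends up looking like the Linux Docker workflow; a purely speculative sketch, and the image name is my invention:

# Hypothetical: a Windows Server container pulled & run via the standard Docker CLI
docker run -it --rm microsoft/windowsservercore cmd.exe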

Interestingly, Server Roles & Features look like they’re being recast into a “Client” level that operates above a Windows Server.

Which, if we continue down the rabbit hole, means we have to ask the question: if my AD Domain Controller or my RemoteApp session host farm servers are now clients, what are they running on? It certainly doesn’t seem to be a Windows server anymore, but rather a kind of agnostic compute fabric, made up of virtual “Servers” and/or “Containers” operating atop a cloud-optimized server running on bare-metal…an agnostic computing ((Damn straight, had to work that in there)) fabric that stretches across my old on-prem Dells all the way up to the Azure cloud…right?!?

I’m like four levels deep into Jeffrey Snover’s subconscious so I’ll stop, but suffice it to say, the delay of Windows Server & System Center appears to be justified and I can’t wait to start testing it in 2016.

Open Letter to Satya Nadella re: Windows Server/System Center Delay

Seattle, WA (AP): Microsoft today postponed the release of its next generation computer server operating system, Windows Server 2015, as well as a companion app or program called “System Center” in a stunning move that left IT Pros throughout the world sad, angry, and in a state of bewilderment. The Redmond-based computing giant told reporters it had no further comment.

Whoa whoa whoa. This has got to be a mistake. Hey Cortana, why is the AP reporting that Server’s been delayed?

Cortana: That’s classified. 

Say what Cortana??

Cortana: Master Chief, that is classified information. 

Classified? Windows Server? System Center? Cortana, you’re buggy as hell. Take down this email addressed to Mr. Satya Nadella.

Cortana: Yes Master Chief. 

Dear Satya Nadella,

I read this about Server & System Center, and I’m in shock.

Microsoft today postponed the release of its next generation computer server operating system, Windows Server 2015, as well as a companion app or program called “System Center”

Like hell you are going to delay…come on, Mr. Nadella, I thought you were an enterprise guy, like me! What gives?

I’ve been computing on Server Technical Preview Build 9481 (a four-month-old operating system, for crying out loud), patiently waiting for something fresh and new, for some of the promised manna from the Azure clouds to drop onto me.

Waiting around like an unloved Android handset waiting in vain for its KitKat update: hopeful on release day, yet the update progress bar never comes.

It never comes, Mr. Nadella, and I am just a sad, jilted little robot, green & jealous of all the attention the Insiders receive on Windows 10. I just don’t understand…of all the things you could have punted on, all the useless consumer things like that health band, you decided to punt on Windows Server & System Center?

The Insiders complain about Windows 10; even as they download the new bits & enjoy the new features, they complain. How your team suffers these fools is beyond me.

Yet your Server fans can’t even install Windows Management Framework 5.0 and play with all the neat cutting-edge things available to WMF 5 on Server 2012 R2. Do we bitch and moan? Not as much!
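
(For the record, Mr. Nadella, here's the two-second check your Server fans keep running & sighing at; the automatic variable is real, no tricks:)

# On WMF 5 this reports a 5.x version; on a stock 2012 R2 box we're still staring at 4.0
$PSVersionTable.PSVersion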

That cuts me deep, Mr. Nadella, real deep.

Mary Jo says you’ve halted the release of System Center & Server this year because you want to get more feedback from customers, that woe-is-me IT Professionals are urging you to slow down, begging, “Please…mercy, sir!! We upgraded two years ago! We don’t want to do it again so soon! Please let us have Windows 2003 for a few more years!!”

If I may suggest, Satya (if I can call you by your first name): these are not the IT Professionals you are looking for. You should talk to IT Pros like me, or Aidan Finn, or Didier van Hoye; we’re the guys who got the memo that servers are not pets but cattle, and that, like all cattle, VMs have a certain natural lifecycle, a lifecycle, I might add, that you can make into a Cattle Template right in System Center VMM.*

Or is it something else? Does the Google/VMware cloud thing have you worried? I wouldn’t fret too much; it’s totally obvious those two each lack something important that can only be had by jumping in bed with the other. For VMware, it’s cloud credibility, for I have seen the vCloud Air, and yea, though I was impressed, it still lacks a certain je ne sais quoi compared to Azure. As for Google? They want into the brownfield enterprise, which, if it’s virtual, is about 2/3rds VMware and 25% Microsoft.

What they both want is what you have in spades, Satya: a story. A story that sounds like science fiction but is actually running in production around the world right now, today. A story with a hero whose name is Windows Server, a multi-talented, jack-of-all-trades/master-of-some genuine American hero that can beat just about any enterprise villain in the hands of a skilled IT Pro like me. Like so:

Windows Server. Not afraid to rationalize and tier your storage, just like a SAN. 

Windows Server & VMM: It’s the Type 1 Hypervisor & automation engine that ropes, rides, and gathers up all your stray VMs so that they can be put to work for you. 

Windows Server. With Exchange as his sidekick, makes for a best-in-class messaging platform, on-prem, hybrid, or up in Office 365. 

Windows Server, SCOM, and SCCM: You don’t need to hire a Sheriff to police things when Windows Server comes to town, for he’s packing all sorts of security heat: PKI, RMS, RBAC, identity, sChannel, AD, Defender, antimalware, Forefront. And his sidekick SCCM will keep tabs on all the PCs & mobile devices.

Windows Server: It’s Azure-scale, runs Nebula too, but is humble & approachable enough to slum it in a garage lab.

Windows Server: GUI?!? We don’t need no stinkin’ GUI, for Windows Server has Powershell.

Windows Server is all these things to me, and now you’ve delayed it Mr. Nadella. I’m crushed.

Please tell me it’s for a good reason, like you’re going to make all those cool things I learned at TechEd Barcelona come true (I need a VXLAN story, Mr. Nadella, and a Docker story would be nice too, and keep working the Storage Spaces replication storyline, ok?).

Please tell the devs to go back to their TFS Consoles, their swank Visual Cloud Studios and tell them not to forget about us Server fans, ok?

Sincerely,

Jeff Wilson

Agnostic Computing.com

* My physical servers & VMs are cattle, and I birth ’em, brand ’em, work ’em, drive ’em hard, slaughter ’em, and experiment on their leftover parts in my lab. Yeeehaaa!!