It’s been a while since I posted about my home lab, Daisettalabs.net, but rest assured, though I’ve been largely radio silent on it, I’ve been busy.

If 2013 saw the birth of Daisetta Labs.net, 2014 was akin to the terrible twos, with some joy & victories mixed together with teething pains and bruising.

So what’s 2015 shaping up to be?

Well, if I had to characterize it, I’d say it’s #LabGlory, through and through. Honestly. Why?

I’ve assembled a home lab that’s capable of simulating just about anything I run into in the ‘wild’ as a professional. And that’s always been the goal with my lab: practicing technology at home so that I can excel at work.

Let’s have a look at the state of the lab, shall we?

Hardware & Software

Daisetta Labs.net 2015 comprises the following:

  • Five (5) physical servers
  • 136 GB RAM
  • Sixteen (16) non-HT Cores
  • One (1) wireless access point
  • One (1) zone-based Firewall
  • Two (2) multilayer gigabit switches
  • One (1) Cable modem in bridge mode
  • Two (2) Public IPs (DHCP)
  • One (1) SiliconDust HDHomeRun
  • Ten (10) VLANs
  • Thirteen (13) VMs
  • Five (5) Port-Channels
  • One (1) Windows Media Center PC

That’s quite a bit of kit, as a former British colleague used to say. What’s it all do? Let’s dive in:

Physical Layout

The bulk of my lab gear is in my garage on a wooden workbench.

Nodes 2-4, the core switch, my ZyWALL edge device, modem, TV tuner, Silicon Dust device and Ooma phone all reside in a secured 12U two-post rack I picked up on eBay about two years ago for $40. One other server, core.daisettalabs.net, sits inside a mid-tower case stuffed with nine 2TB Hitachi HDDs and five 256GB SSDs below the rack.

Placing my lab in the garage has a few benefits, chief among them: I don’t hear (as many) complaints from the family cluster about noise. Also, because it’s largely in the garage, it’s isolated & out of reach of the Child Partition’s curious fingers, which, as every parent knows, are attracted to buttons of all types.

Power & Thermal

Of course you can’t build a lab at home without reliable power, so I’ve got one rack-mounted APC UPS, and one consumer-grade Cyberpower UPS for core.daisettalabs.net and all the internet gear.

On average, the lab gear in the garage consumes about 346 watts, or about 3 amps. That’s significant, no doubt, costing me about $38/month to power, or about 2/3rds the cost of a subscription to IT Pro TV or Pluralsight. 🙂
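
If you want to sanity-check that math for your own lab, it's a one-liner. A minimal sketch; the ~$0.15/kWh rate is an assumption based on my bill, so plug in your own:

# Rough monthly power cost: watts -> kWh/month -> dollars
$watts = 346
$kwhPerMonth = $watts * 24 * 30 / 1000              # ~249 kWh
$ratePerKwh = 0.15                                  # assumed $/kWh; check your utility bill
"{0:C2} per month" -f ($kwhPerMonth * $ratePerKwh)  # ~$37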

Thermals are a big challenge. My house was built in 1967, has decent insulation and holds temperature fairly well in the habitable parts of the space. But none of that is true of the garage, where my USB lab thermometer has recorded temps as low as 3C last winter and as high as 39C in Summer 2014. That’s air temperature at the top of the rack, mind you, not at the CPU.

One of my goals for this year is to automate the shutdown/powerup of all node servers in the garage based on the temperature reading of the USB thermometer. The $25 thermometer is something I picked up on Amazon a while ago; it outputs to .csv but I haven’t figured out how to automate its software interface with PowerShell… yet.
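
In case it helps anyone attempting the same, here's the rough shape of what I have in mind. A sketch only: the CSV path and column name are assumptions since every thermometer logs differently, and the node names are mine:

# Sketch: shut down the cluster nodes when garage air temp crosses a threshold
# CSV path & column name are assumptions -- adjust for your thermometer's output
$log = Import-Csv C:\TempLogger\temps.csv
$latest = [double]($log | Select-Object -Last 1).TempC

if ($latest -ge 38) {
    # core stays up so it can log & alert; the cluster nodes power down
    'node2', 'node3', 'node4' | ForEach-Object {
        Stop-Computer -ComputerName $_ -Force
    }
}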

Anyway, here’s my stack, all stickered up and ready for review:

[Photo: the stack, stickered up and ready for review]

Beyond the garage, the Daisetta Lab extends to my home’s main hallway, the living room, and of course, my home office.

Here’s the layout:

[Diagram: the Daisetta Lab 2015 layout]

Compute

On the compute side of things, it’s almost all Haswell with the exception of core and node3:

[table]

Server, Architecture, CPU, Cores, RAM, Function, OS, Motherboard

Core, AMD A-series, A8-5500, 2, 8GB, Tiered Storage Spaces & DC/DHCP/DNS, Server 2012 R2, Gigabyte D4

Node1, Haswell, i7-4770k, 4, 32GB, Main PC/Office/VM host/storage, 2012R2, Supermicro X10SAT

Node2, Haswell, Xeon E3-1241, 4, 32GB, Cluster node, 2012r2 core, Supermicro X10SAF

Node3, Sandy Bridge, i7-2600, 4, 32GB, Cluster node, 2012r2 core, Biostar

Node4, Haswell, i5-4670, 4, 32GB, Cluster node/storage, 2012r2 core, Asus

[/table]

I love Haswell for its speed, thermal properties and affordability, but damn! That’s a lot of boxes, isn’t it? Unfortunately, you just can’t get very VM dense when 32GB is the max amount of RAM Haswell E3/i7 chipsets support. I love dynamic RAM on a VM as much as the next guy, but even with Windows core, it’s been hard to squeeze more than 8-10 VMs on a single host. With Hyper-V Containers coming, who knows, maybe that will change?
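
For what it's worth, dynamic memory itself is a one-liner per VM in Hyper-V. A sketch with illustrative values and a hypothetical VM name, not a sizing recommendation (the VM needs to be off for some of these changes):

# Sketch: enable dynamic memory on a VM; name & values are illustrative
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB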

Node1, the pride of the fleet and my main productivity machine, boasting 2×850 Pro SSDs in RAID 0, an AMD FirePro, and Tiered Storage Spaces

While I included it in the diagram, TVPC3 is not really a lab machine. It’s a cheap Ivy Bridge Pentium with 8GB of RAM and 3TB of local storage. Its sole function in life is to decrypt the HD stream it receives from the Silicon Dust tuner and display HGTV for my mother-in-law with as little friction as possible. Running Windows 8.1 with Media Center, it’s the only PC in the house without battery backup.

Physical Network
About 18 months ago, I poured gallons of sweat equity into cabling my house. I ran at least a dozen CAT-5e cables from the garage to my home office, bedrooms, living room and to some external parts of the house for video surveillance.
I don’t regret it in the least; nothing like having a reliable, physical backbone to connect up your home network/lab environment!

Meet my underlay

At the core of the physical network lies my venerable Cisco 2960S-48TS-L switch. Switch1 may be a humble access-layer switch, but in my lab, the 2960S bundles 17 ports into five port-channels, serves as my default gateway, routes with some rudimentary Layer 3 functions ((Up to 16 static routes; no dynamic routing features are available)) and segments 9 VLANs and one port-security VLAN, a feature that’s akin to PVLAN.

Switch2 is a 10-port Cisco Small Business SG-300 running at Layer 3 and connected to Switch1 via a 2-port port-channel. I use a few ports on Switch2 for the TV and an IP cam.

On the edge is redzed.daisettalabs.net, the Zyxel USG-50, which I wrote about last month.

Connecting this kit up to the internet is my Motorola Surfboard router/modem/switch/AP, which I run in bridge mode. The great thing about this device and my cable service is that, for some reason, up to two LAN ports can be active at any given time. This means that CableCo gives me two public DHCP addresses, simultaneously. One of these goes into a WAN port on the Zyxel, and the other goes into a downed switchport.

Love Meraki’s RF Spectrum chart!

Lastly, there’s my Meraki MR-16, an access point a friend and Ubiquiti Networks fan gave me. Though it’s a bit underpowered for my tastes, I love this device. The MR-16 is trunked to switch1 and connects via an 802.3af power injector. I announce two SSIDs off the Meraki, both secured with WPA2 Personal ((WPA2 Enterprise is on the agenda this year)). Depending on which SSID you connect to, you’ll end up on the Device or VM VLANs.

Virtual Network

The virtual network was built entirely in System Center VMM 2012 R2. Nothing too fancy here, with multiple Gigabit adapters per physical host, one converged logical vSwitch and a separate NIC on each host fronting for the DMZ network:

Nodes 1, 2 & 4 are all Haswell, and are clustered. Node3 is standalone.

Thanks to VMM, building this out is largely a breeze, once you’ve settled on an architecture. I like to run the cmdlets to build the virtual & logical networks myself, but there’s also a great script available that will build a converged network for you.
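
For the curious, the build-out looks roughly like this in the VMM cmdlets. A simplified sketch, not my full script: the names and subnet are placeholders, and I've omitted the uplink port profile & logical switch plumbing you'd need in a real deployment:

# Sketch: a VLAN-backed logical network & VM network in VMM (names are placeholders)
$logicalNet = New-SCLogicalNetwork -Name "DaisettaLAN"
$subnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
New-SCLogicalNetworkDefinition -Name "DaisettaLAN_Garage" -LogicalNetwork $logicalNet `
    -SubnetVLan $subnetVlan -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")
$vmNet = New-SCVMNetwork -Name "VM VLAN10" -LogicalNetwork $logicalNet -IsolationType "VLANNetwork"
New-SCVMSubnet -Name "VM VLAN10 subnet" -VMNetwork $vmNet -SubnetVLan $subnetVlan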

A physical host typically looks like this (I say typically because I don’t have an equal number of adapters in all hosts):

I trust VLANs and VMM’s segmentation abilities, but chose to build what is in effect an air-gapped vSwitch for the DMZ/DIA networks

We’re already several levels deep in my personal abstraction cave, why stop here? Here’s the layout of VM Networks, which are distinguished from but related to logical networks in VMM:

[Diagram: VM Networks in VMM]

I get a lot of questions on this blog about jumbo frames and Hyper-V switching, and I just want to reiterate that it’s not that hard to do, and look, here’s proof:

[Screenshot: jumbo frames in action]
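
For those asking how, the host-side gist is below. A hedged sketch: the vNIC name is an assumption (check Get-NetAdapter for yours), and 8972 is a 9000-byte payload minus 20 bytes of IP header and 8 bytes of ICMP:

# Enable jumbo frames on a host vNIC (adapter name is an assumption)
Set-NetAdapterAdvancedProperty -Name "vEthernet (ConvergedNet)" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify end-to-end with a don't-fragment ping: 8972 = 9000 - 20 (IP) - 8 (ICMP)
ping 192.168.10.10 -f -l 8972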

Good stuff!

Storage

And last, but certainly most interestingly, we arrive at Daisetta Lab’s storage resources.

My lab journey began with storage testing, in particular ZFS via NexentaCore (Illumos), NAS4Free and Solaris 11. But that’s ancient history; since last summer, I’ve been all Windows, all the time in my lab, starting with SAN.Daisettalabs.net ((cf #StorageGlory : 30 Days on a Windows SAN)).

Now?

Well, I had so much fun -and importantly so few failures/pains- with Microsoft’s Tiered Storage Spaces that I’ve decided to deploy not one, or even two, but three Tiered Storage Spaces. Here’s the layout:

[table]Server, #HDD, #SSD, StoragePool Capacity, StoragePool Free, #vDisks, Function

Core, 9, 6, 16.7TB, 12.7TB, 6 (so far), SMB3/iSCSI target for entire lab

Node1, 2, 2, 2.05TB, 1.15TB, 2, SMB3 target for Hyper-V replication

Node4, 3, 1, 2.86TB, 1.97TB, 2, SMB3 target for Hyper-V replication

[/table]

I have to say, I continue to be very impressed with Tiered Storage Spaces. It’s super-flexible, the cmdlets are well-documented, and Microsoft is iterating on it rapidly. More on the performance of Tiered Storage Spaces in a subsequent post.
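
Since I said the cmdlets are well-documented, here's the skeleton of a tiered build on 2012 R2. A sketch only; pool/tier names and sizes are placeholders, not my production values:

# Sketch: build a two-tier Storage Space (names & sizes are placeholders)
$disks = Get-PhysicalDisk -CanPool $true
$subsystem = Get-StorageSubSystem -FriendlyName "*Spaces*"
New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName $subsystem.FriendlyName `
    -PhysicalDisks $disks

$ssdTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "vDisk1" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB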

Thanks for reading!

Sign of the Times or just the best PKI book ever?

Like a lot of IT Pros, I’ve been studying up on security topics lately, both as a reaction to the increasing amount of breach news (Who got breached this week, Alex?) and because I felt weak in this area.

So, I went shopping for some books. My goals were simply to get a baseline understanding of crypto systems and best-practice guidance on setting up Microsoft Public Key Infrastructures, which I’ve done in the past but without much confidence in the end result.

Well, it turns out there’s not a whole lot of literature on Microsoft PKI systems. It seems the best of the genre is Windows Server 2008 PKI & Certificate Security, a Microsoft Press book published in 2008 and authored by Brian Komar:

[Image: cover of Windows Server 2008 PKI & Certificate Security]

This 3.2lb, 800-page book is rated 4.9 out of 5 stars on Amazon, with reviewers calling it the best Microsoft PKI guide out there.

Great! I thought, as I prepared to shell out about $80 and One Click my way to PKI knowledge.

That’s when I noticed that the book is out of print. There are digital versions available from O’Reilly, but it appears most don’t know that.

For the physical book itself, the least expensive used one on Amazon is $749.99. You read that right. $750!

If you want a new copy, there’s one available on Amazon, and it’s $1000.

I immediately jumped over to Camelcamelcamel.com to check the history of this book, thinking there must have been a run on Mr. Komar’s tome as Target, Home Depot, JP Morgan, and Sony Pictures fell.

Result:

[Chart: Camelcamelcamel price history for the book]

The price of this book has spiked recently, but Peak PKI was a full three years ago.

I looked up security breaches/events of early 2012. Now correlation != causation, but it’s interesting nonetheless. Hopefully this means there’s a lot of solid Microsoft PKI systems being built out there!

Rather than shell out $750 for the physical book, I decided to get Ivan Ristic’s fantastic Bulletproof SSL and TLS, which I highly recommend. It’s got a chapter on securing Windows infrastructure, but is mostly focused on crypto theory & practical OpenSSL. I’ll buy Komar’s as a digital version next or wait for his forthcoming 2012 R2 revision.

Big Data for Server Guys : Azure OpsInsight Review

Maybe it’s just my IT scars that bias me, but when I hear a vendor push a “monitoring” solution, I visualize an IT guy sitting in front of his screen, passively watching his monitors & counters, essentially waiting for that green thing over there to turn red.

He’s waiting for failure, Godot-style.

That’s not a recipe for success in my view. I don’t wait upon failure to visit, I seek it out, kick its ass, and prevent it from ever establishing a beachhead in my infrastructure. The problem is that I, just like that IT Guy waiting around for failure, am human, and I’m prone to failure myself.

Enter machine learning, or Big Data for Server Guys, as I like to think of it.

Big Data for Server Guys is a bit like flow monitoring on your switch. The idea here is to actively flow all your server events into some sort of a collector, which crunches them, finds patterns, and surfaces the signal from the noise.

Big Data for Server Guys is all about letting the computer do what the computer’s good at doing: sifting data, finding patterns, and letting you do what you are good at doing: empowering your organization for tech success.

But we Windows guys have a lot of noise to deal with: Windows instruments just about everything imaginable in the Microsoft kingdom, and the Microsoft kingdom is vast.

So how do we borrow flow-monitoring techniques from the Cisco jockeys and apply them to Windows?

Splunk is one option, and it’s great: it’s agnostic and will hoover events from Windows, logs from your Cisco’s syslog, and can sift through your Apache/IIS logs too. It’s got a thriving community and loads of sexy, AJAX-licious dashboards, and you can issue powerful searches and queries that can help you find problems before problems find you.

It’s also pretty costly, and I’d argue not the best-in-class solution for Hoovering Windows infrastructure.

Fortunately, Microsoft’s been busy in the last few years. Microsoft shops have had SCOM, and MOM before that, but now there’s a new kid in town ((He’s been working out and looks nothing like the old kid, System Center Advisor)): Azure Operational Insights, and OpsInsight functions a lot like a good flow collector.

[Screenshot: the OpsInsight dashboard]

And I just put the finishing touches on my second Big Data for Server Guys/OpsInsight deployment. Here’s a mini-review:

The Good:

  • It watches your events and finds useful data, which saves you time: OpsInsight is like a giant Hoover in the sky, sucking up on average about 36MB/day of Windows events from my fleet of ~150 VMs in a VMware infrastructure. Getting data on this fleet via PowerShell is trivial, but building logic that gives insight into that data is not trivial. OpsInsight is wonderful in this regard; it saves you from spending time in SSRS, Excel, or diving through the Event Viewer haystack via the MMC or Get-WinEvent looking for a nugget of truth (see the sketch after this list).
  • It has a decent config recommendation engine: If you’re an IT Generalist/Converged IT Guy like me, you touch every element in your Infrastructure stack, from the app on down to the storage array’s rotating rust. And that’s hard work because you can’t be an expert in everything. One great thing about OpsInsight is that it saves you from searching Bing/Google (at worst) or thumbing through your well-worn AD Cookbook (at best) and offers best-practice advice and KB articles in the same tab in your browser. Awesome!
  • Query your data rather than surfing the fail tree: Querying your data is infinitely better than walking the Fail Tree that is the Windows Event Viewer looking for errors. OpsInsight has a powerful query engine that’s not difficult to learn or manipulate, and for me, that’s a huge win over the old-school method of Event Viewer Subscriptions. [Screenshot: the Event Viewer. Thanks OpsInsight for keeping me out of this thing]

  • Dashboards you can throw in front of an executive: I can’t overstate how great it is to have automagically configured dashboards via OpsInsight. As an IT Pro, the less time I spend in SSRS trying to build a pretty report the better. OpsInsight delivers decent dashboards I’m proud to show off. SCOM 2012 R2’s dashboards are great, but SCOM’s fat client works better than its IIS pages. Though it’s Silverlight-powered, OpsInsight wins the award for friction-free dashboarding.
  • Flexible Architecture: Do you like SCOM? Well then OpsInsight is a natural fit for you. I really appreciate how the System Center team re-structured OpsInsight late last year: you can deploy it at the tail end of your SCOM build, or you can forego SCOM altogether and attach agents directly to your servers. The latter offers you speed in deployment; the former allows you to essentially proxy events from your fleet, through your Management Group, and thence onto Azure. I chose the former in both of my deployments: let OpsInsight gate through SCOM, and let both do what they are good at doing.
  • It’s secure: The architecture for OpsInsight is Azure, so if you’re comfortable doing work in Azure Storage blobs, you should be comfortable with this. That + encrypted uploads of events, SCOM data and other data means less friction with the security/compliance guy on your team.
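
Here's the kind of thing OpsInsight spares me from running (and then parsing) myself. A minimal sketch of the old way, with a hypothetical fleet list:

# The old way: trawl the System log for errors (Level 2) across the fleet
$servers = Get-Content C:\scripts\fleet.txt    # hypothetical list of server names
foreach ($s in $servers) {
    Get-WinEvent -ComputerName $s -FilterHashtable @{ LogName = 'System'; Level = 2 } -MaxEvents 50 |
        Select-Object MachineName, TimeCreated, Id, Message
}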

The Bad:

  • It’s Silverlight, which makes me feel like I’m flowing my server events to Steve Ballmer: I’m sure this will be changed out at some point. I used to love Silverlight -and maybe there’s still room in my cold black heart for it- but it’s kind of an orphan media/web child at the moment.
  • There’s no app for iOS or Android…yet: I had to dig out my 2014 Lumia Icon just to try out the OpsInsight app for Windows Phone. It’s decent, just what I’d like to see on my 2015 Droid Turbo. Alas there is no app for Android or iOS yet, but it’s the #1 and #2 most requested feature at the OpsInsight feedback page (add your vote, I did!)
  • It’s only Windows at the moment: I love what Microsoft is doing with Big Data crunching: Machine Learning, Stream Analytics and OpsInsight. But while you can point just about any flow or data at AzureML or Stream Analytics, OpsInsight only accepts Windows, IIS, SQL, SharePoint, and Exchange. Which is great, don’t get me wrong, but limited. SCOM at least can monitor SNMP traps, interface with Unix/Linux and such, but that is not available in OpsInsight. However, it’s still in Preview, so I’ll be patient.
  • It’s really only Windows/IIS/SQL/Exchange at the moment: Sadface for the lack of Office 365/Azure intelligence packs for OpsInsight, but SCOM will do for now.
  • Pricing forecast is definitely…cloudy: Every link I find takes me to the general Azure pricing page. On the plus side, you can strip this bad boy down to the bare essentials if you have cost pressures.

The Ugly:

  • Where are my cmdlets? My interface of choice with the world of IT these days is PowerShell ISE. But when I typed get-help *opsinsight, only errors resulted. How’d this get past Snover’s desk? All kidding aside, SCOM cmdlets work well enough if you deploy OpsInsight following SCOM, and I’m sure it’s coming. I can wait.

All in all, this is shaping up to be a great service for your on-prem Windows infrastructure, which, let’s face it, is probably neglected.

System Center MVP Stanislav Zhelyazkov has a great 9-part deep dive on OpsInsight if you want to learn more.

“Assume Breach” not just at work, but at home too

Security has been on my mind lately. I think that in the Spring of 2015, we’re in a new landscape regarding security, one that is much more sinister, serious and threatening than it was in years past. I used to think anonymity was enough, that there was safety in the herd. But the rules & landscape have changed, and it’s different now than it was just 12 or 24 months ago. So, let’s do an exercise; let’s suppose for the sake of this post that the following are true:

  • Your credit history and your identity are objects in the marketplace that have value and thus are bought and sold between certain agents freely
  • These things are also true of your spouse or significant other’s credit history & identity, and even your child’s
  • Because these things are true, they are also true for malefactors (literally, bad actors) just like any other object that has value and can be traded
  • There is no legal structure in America aside from power of attorney that allows a single member of a family to protect the identity and credit history of another member of his/her family.
  • The same market forces that create innovation in enterprise technology are now increasing the potency of weaponized malware systems, that is to say that financial success attracts talent which begets better results which begets more financial success.
  • The engineers who build malware are probably better than you are at defending against them, and what’s more, they are largely beyond the reach of local, state, or national law enforcement agencies. ((Supposing that your local Sheriff’s Department even has the in-house know-how to handle security breaches, they lack jurisdiction in Ukraine))
  • The data breaches and mass identity theft of 2014 & 2015 are somewhat similar to a classic market failure, but no cure for this will be forthcoming from Washington, and the trial attorneys & courts who usually play a role in correcting market failures have determined your identity & credit history are worth about $0.14 (($10 million settlement for the 70 million victims of the Target breach = $0.14))
  • Generally speaking, most IT departments are bad and suffer from poor leadership, poorly-motivated staff, conflicting directions from the business, an inability to meet the business’ demands, or lack of C-level support. IT is Broken, in other words
  • All of this means it’s open season on you and your family’s identity & credit history, which we have to assume rest unencrypted on unpatched SQL servers behind an ASA with a list of unmitigated CVEs maintained by some guys in an IT department who hate their job
Don’t be like these people. Secure your online identity now

There it is. That’s the state of personal identity & credit security in 2015 in America, in my view.

And worst of all, it’s not going to get better as every company in America with your data has done the math from the Target settlement and the beancounters have realized one thing: it’s cheaper to settle than to secure your information.

Assume breach at home

If this is truly the state of play -and I think it is- then you as an interested father/mother, husband/wife need to take action. I suggest an approach in which you:

  1.  Own your Identity online by taking SMTP back: Your SMTP address is to the online world what your birth certificate and/or social security number is to the meatspace world: everything. Your SMTP address is the de facto unique identifier for you online ((By virtue of the fact that these two things are true of SMTP but are not true of rival identity systems, like Facebook or Google profiles: 1) Your SMTP address is required to transact business or utilize services online or is required at some point in a chain of identity systems and 2) SMTP is accepted by all systems and services as prima facie evidence of your identity because of its uniqueness & global acceptance and rival systems are not)) , which begs the question: why are you still using some hippy-dippy free email account you signed up for in college, and why are you letting disinterested third party companies host & mine something for free that is so vital to your identity? Own your identity and your personal security by owning and manipulating SMTP like corporations do: buy a domain, find a hosting service you like, and pay them to host your email. It doesn’t cost much, and besides, you should pay for that which you value. And owning your email has value in abundance: with your own domain, you can make alias SMTP addresses for each of the following things: social media, financial, shopping, food, bills, bulk and direct your accounts to them as appropriate. This works especially well in a family context, where you can point various monthly recurring accounts at a single SMTP address that you can redistribute via other methods and burn/kill as needed. ((Pretty soon, you and your loved ones will get the hang of it, and you and your family will be handing out food@domain.com to the grocery store checkout person, retail@domain.com for receipts, shopping@domain.com for the ‘etailers’ and apple@domain.com for the two iPhones & three other Apple devices you own.))
  2. Proxy your financial accounts wherever possible: Mask your finances behind a useful proxy, like PayPal, perhaps even Mint. The idea here is to put a buffer between your financial accounts and the services, people, and corporations that want access to them and probably don’t give two shits about protecting your identity or vetting their own IT systems properly. Whenever possible, I buy things online/pay people/services via PayPal or other tools so that use of my real accounts is minimized. PayPal even offers a business credit card backed by the Visa logo, which means you can use it in brick ‘n mortar stores like Target, where the infosec is as fast and loose as the sales and food quality.
  3. Filter the net at home and wherever else you can: Spyware, malware and viruses used to be an annoyance, the result of a global dick-measuring contest for geeks and nerds who liked to tinker and brag. But no more; today’s malware systems are weaponized and potent, and that puts you and your family at a huge disadvantage as it’s difficult to secure all the devices creeping into your life, let alone worry about the bad IT departments stewarding your sosh, DOB, mother’s maiden name and home address at RetailCo. I suggest a heavy filtering strategy by whatever means you can employ: employ whitelist JavaScript filtering on Windows PCs, use and pay for OpenDNS malware filtering, or buy something like ITUS Networks or even a ZyXel like the one I have. Get to know Privoxy well, as I think filtering ads from websites is even fair now, since the major ad agencies apparently can’t prevent malware from creeping into them. Finally, invest some time and study into certificates and periodically review their use, as there are Certificate Authorities out there that you should not trust.
  4. Use Burner Numbers: Similar to SMTP, your standard US 10-digit POTS/mobile phone number is a kind of unique identifier to companies, existing somewhere in an unsecured table no doubt. Use burners where you can, as your 10-digit mobile is important as a unique identifier and an off-net secondary notification/authentication channel. If Google Voice is to be killed off, as it appears to be, consider Ooma, where for $100/year, you can spawn burner numbers and use them in the same way you use SMTP. Else, use the app on your phone for quick burner numbers.
  5. Consider Power of Attorney or Incorporation: This is admittedly a little crazy, but words can’t describe how furious you’ll be when a family member’s identity has been stolen and some scummy organization that calls itself a bank is calling to verify that you’ve purchased $1000 in Old Navy gift certificates in Texas -something completely out-of-sync with your credit history- but they refuse to stop the theft because it’s happening to your wife, not you, and your wife can’t come to the phone right now.  The solution to this problem is beyond me, but probably involves a “You can’t beat ’em, join ’em” approach coupled with an attorney’s threatening letter.
  6. Learn to Love Sandboxing: Microsoft has a free and incredibly powerful tool called the Enhanced Mitigation Experience Toolkit, or EMET, which allows you to select applications and essentially sandbox them so that they can’t pwn your entire operating system. Learn to use and love it. But the idea here goes beyond Win32 to the heart of what we should be doing as IT Pros: standing-up and tearing-down instances of environments, whether those environments are Docker containers, Windows VMs, jails in BSD, or KVM virtual machines. Such techniques are useful beyond devops; they are also useful as operational security techniques at home in my view.
  7. Go with local rather than national financial institutions: Where possible, consider joining a local credit union, where infosec practices might not be state of the art, but your family’s finances have more influence and weight than they do at a Bank of America.

I am not a security expert, but that’s how I see it. If we IT pros are to assume breach at work, as many experts advise us to, we should assume breach at home too, where our identities and those of our loved ones are even more vulnerable and even more valuable.

How to Superfish Your Users : SSL Proxy in a Windows Network

When in the course of IT events it becomes necessary to inspect all traffic that hits your user’s PCs, there is but one thing you can do in 2015: get a proxy server or service, deploy a certificate to your trusted root store, and direct all traffic through the proxy.

Why would you do what amounts to a Man in the Middle Attack on your users as a responsible & honest IT Pro? Why Superfish your users? ((

IT Shakespeare put it like this:

To proxy SSL or not to proxy, that is the question

whether ’tis nobler in the mind to suffer

the breaches and theft of outrageous malware

or to take Arms against a sea of digital foes

and by opposing, only mitigate the threat.

To protect via decrypt ; Aye there’s the rub

Thus Conscience does make Cowards of us all

and lose the name of Action))

Numbers are hard to pin down, ((I am not a security expert, and though I checked sources I respect like the Norse IP Viking security blog, Malwarebytes Unpacked blog, SearchSecurity.com etc, I found very few sources that put a percentage on how much malware is encrypted and thus difficult to detect. This NSS Labs report from summer 2013 comparing Next Gen Firewall SSL Decryption performance, for instance, says that “the average proportion of SSL traffic within a typical enterprise is 25-35%” and that only ~1% of malware is encrypted. A GWU security researcher named Andre DiMino has a series of good blog posts on the topic, showing what SSL-encrypted malware looks like in Wireshark. Team Cymru’s TotalHash database is probably the most comprehensive open DB of malware samples, but I don’t feel qualified to search it, frankly)) but it seems an increasing amount of virulent & potent malware is arriving at your edge encrypted. Because those packets are encrypted, you essentially can’t scan the contents. All you get is source/destination IP address, some other IP header information, and that’s about it.

No bueno.

One option, really your only option at that point, is to crack open those packets and inspect them. Here’s how.

1. You need a proxy server or service that does security inspection.

I’ve seen ZScaler used at several firms. ZScaler dubs itself the premier cloud-based SaaS proxy service, and it’s quite a nifty service.

For a fee per user, ZScaler will proxy most if not all of your internet traffic from several datacenters around the globe, sort of like how CloudFlare protects your websites.

The service scans all that http and https traffic, filters out the bad and malicious stuff, blocks access to sites you tell it to, and sends inspected http/https content to your users, wherever they are, on-prem or connected to the unsecured Starbucks access point.

2. You need to bundle those proxy settings up into a .pac file

Getting the service is one thing; you still need to direct your users and computers through it. The easiest way is via Group Policy & what’s called a .pac file.

A .pac file is a settings file generated by ZScaler that contains your preferences, settings, and lists of sites you’d prefer to have bypass the filter. It looks like this:


function FindProxyForURL(url, host)
{
    var resolved_host_ip = dnsResolve(host);

    // If the ZScaler gateway can't be resolved, fail open and go direct
    if (!isResolvable("gateway.zscaler.net"))
        return "DIRECT";

    // FTP doesn't get proxied
    if (url.substring(0, 4) == "ftp:")
        return "DIRECT";

    // If the requested website is hosted within the internal network, send direct
    if (isPlainHostName(host) ||
        isInNet(resolved_host_ip, "1.1.1.1", "255.0.0.0"))
        return "DIRECT";

    // If the requested website is SSL and associated with Microsoft O365, send direct
    // (the hosts below are illustrative; your bypass list will differ)
    if (shExpMatch(host, "*.outlook.com") || shExpMatch(host, "*.sharepoint.com"))
        return "DIRECT";

    // Everything else flows through the ZScaler proxy (gateway/port illustrative)
    return "PROXY gateway.zscaler.net:80; DIRECT";
}

3. Deploy the .pac file via Group Policy to Users

Next, you need to pick your favorite deployment tool to push the .pac file out and set Windows via IE to proxy through ZScaler. We’ll use Group Policy because it’s fast and easy.

Under User Configuration > Policies > Windows Settings > Internet Explorer Maintenance > Connection / Automatic Browser Configuration, select Enable.

Then point the Auto-proxy URL to your Zscaler .pac file URL. It looks like this:

grouppolicy

Keep Group Policy open, because we’re not done quite yet.

4. Download the ZScaler Root CA certificates

You’ll find the certs in the administration control screen of ZScaler. There are two:

  • ZScaler Root Certificate -2048.crt
  • ZScaler Root Certificate -2048-SHA256.crt

The two certificates are scoped similarly; the only difference seems to be the SHA-1 vs SHA-256 signature hash.

Double-click the certificate you prefer to use, and notice that Windows alerts you to the fact that it’s not trusted. Good on ya Microsoft, you’re right.

To validate this setup, you’ll probably want to test before you deploy. So select Install Certificate, select your Computer (not user) and navigate to the Trusted Root CA Store:

rootca

or you can do it via PowerShell:


PS C:\daisettalabs.net> Import-Certificate -FilePath C:\users\jeff\Downloads\ZscalerRootCerts\ZscalerRootCertificate-2048-SHA256.crt -CertStoreLocation Cert:\LocalMachine\Root

   Directory: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root

Thumbprint                                Subject
----------                                -------
thumbprint                                E=support@company.com, CN=Zed, OU=Zed Inc, O=Zed's Head, L=The CPT, S=CaliforniaLove, C=USA

5. Verify that the .pac file is in use

Now that you’ve installed the .pac file and the certificate, ensure that IE (and thus Chrome, but not necessarily Firefox) has been set to proxy through Zscaler:

Your settings will differ no doubt from my screenshot
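
If you'd rather check from the shell than eyeball IE's dialog, the auto-config URL lands in the registry once the policy applies; a quick sanity check:

# Verify the .pac file took: AutoConfigURL should point at your ZScaler .pac
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings' |
    Select-Object AutoConfigURL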

6. SSL Proxy Achievement Unlocked:

Go to Google or any SSL/TLS encrypted site and check the Certificate in your browser.

You should see something like this:

[Screenshot: Google with the ZScaler certificate in the chain]

7. You can now deploy that same certificate via Group Policy to your Computers.

It’s trivial at this point to deploy the ZScaler certificates to end-user PCs via Group Policy. You’ll want to use Computer Preferences.

Once deployed, you’ll get comprehensive scanning, blocking and reporting on your users http/https use. You can of course exempt certain sites from being scanned ((Before you do this, make sure you get your Legal department or corporate controller’s sign-off on this. Your company needs to understand exactly what SSL Proxy means, and the Gordian Knot of encryption.

By making all SSL traffic visible to your proxy service,  you may gain some ability to prevent potent malware attacks, but at the cost of your user’s privacy. When a user transacts business with their bank, their session will be secured, but only between the ZScaler cloud and the bank’s webserver. The same is true of Facebook or personal email sites.

By doing this, you’re placing an immense amount of trust in the proxy server/service of your choice. You’re trusting that they know what they’re doing with Certificates, that they didn’t use a weak password. You’re trusting that they have their act together, and you’re doing this on behalf of all your users who trust you. This is not to be taken lightly, so run it up the legal/HR flagpole before you do this. ))

Microsoft’s commitment to open initiatives & the riddle of whitebox networking

On Tuesday Microsoft surprised me by announcing an open switching/networking plan in partnership with Mellanox and as part of the Open Compute initiative.

Wait, what?

Microsoft’s building a switch?

Not quite, but before we get into that, some background on Microsoft’s participation in what I call OpenMania: the cloud & enterprise technology vendor tendency to prefix any standards-ish cooperative work effort with the word Open.

Microsoft’s participating in several OpenMania efforts, but I only really care about these two because they highlight something neat about Microsoft and apply or will soon apply to me, the Converged IT Guy.

Open Compute, or OCP, is the Facebook-led initiative to build agnostic hardware platforms on x86 for the datacenter. I like to think of OCP as a ground-up re-imagining of hardware systems by guys who do software systems.

As part of their participation in OCP, Microsoft is devoting engineering resources and talent into building out specifications, blueprints and full hardware designs for things like this, a 12U converged chassis comprised of storage and compute resources.

[Photo: Microsoft’s Open CloudServer chassis]
Are those brown Zunes in the blades?

Then there’s Open Management Infrastructure (OMI), an initiative of the The Open Group (TOG). Microsoft joined OMI almost three years ago to align & position Windows to share common management frameworks across disparate hardware & software systems.

That’s a lot of words with little meaning, so let me break it down for the Windows guys and gals reading this. The promise of Microsoft’s OMI participation is this: you can configure other people’s hardware and software via the same frameworks your Windows Server runs on (CIM, the next-gen WMI), using the same techniques and tooling you manage other things with: PowerShell.

All your management constructs are belong to CIM

I’ve been keenly interested in Microsoft & their OMI push because it’s an awesome vision, and it’s real, or real-close at any rate: SMI-S, for instance, is gaining traction as a management play on other people’s hardware/software storage systems ((cf NIMBLE STORAGE NOW INTEGRATES WITH SCVMM)), and is already baked into Windows Server as a feature you can install and use to manage Windows Storage Spaces, which itself is a first-class citizen of CIMville.

All your CIM classes -running as part of Windows or not- manipulated & managed via PowerShell, the same ISE you and I use to deploy Hyper-V hosts, spin-up VMs, manage our tenants in Office 365, fiddle around in Azure, and make each day at work a little better and a little more automated than the last.
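
To make that concrete, here's what it looks like from the shell. A sketch only: the device name is mine, and the namespace/class are stand-ins for whatever a vendor's OMI provider actually exposes:

# Sketch: talking CIM to a non-Windows, OMI-enabled device over WSMan
$options = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck
$session = New-CimSession -ComputerName "switch1.daisettalabs.net" -Port 5986 `
    -Credential (Get-Credential) -Authentication Basic -SessionOption $options

# Namespace & class are illustrative -- what's exposed depends on the vendor's provider
Get-CimInstance -CimSession $session -Namespace root/cimv2 -ClassName CIM_EthernetPort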

That’s the promised land right there, ladies and gentlemen.

Except for networking, the last stubborn holdout in my fevered powershell dream.

Jeff Snover, the architect of the vision, teases me with Powershell Leaf Spine Tweets like this:

[embedded tweet]

but I have yet to replace PuTTY with PowerShell; I still have to do show int status rather than show-interface -status “connected” on my switch because I don’t have an Arista or N7K, and few other switch vendors seem to be getting the OMI religion.

All of which makes Microsoft’s Tuesday announcement that it is extending its commitment to OCP’s whitebox switching development really odd yet worthy of more consideration:

The Switch Abstraction Interface (SAI) team at Microsoft is excited to announce that we will showcase our first implementations of the specification at the Open Compute Project Summit, which kicks off today at the San Jose Convention Center. SAI is a specification by the OCP that provides a consistent programming interface for common networking functions implemented by network switch ASIC’s. In addition, SAI allows network switch vendors to continue to build innovative features through extensions.

The SAI v0.92 introduces numerous proposals including:

Access Control Lists (ACL)
Equal Cost Multi Path (ECMP)
Forwarding Data Base (FDB, MAC address table)
Host Interface
Neighbor database, Next hop and next hop groups
Port management
Quality of Service (QoS)
Route, router, and router interfaces

At first glance, I wouldn’t blame you if you thought that this thing, this SAI, means OMI is dead in networking, that managing route/switch via Powershell is gone.

But looking deeper, this development speaks to Microsoft’s unique position in the market (all markets, really!)

  1. SAI is probably more about low-level interaction with Broadcom’s Trident II ((At least that’s my read on the Github repo material)) and Microsoft’s participation in this is more about Azure and less about managing networking stuff w/Powershell
  2. But this is also perhaps Microsoft acknowledging that Linux-powered whitebox switching is really enjoying some momentum, and Microsoft needs to have something in this space

So, let’s review: Microsoft has embraced Open Compute & Open Management. It breaks down like this:

  • Microsoft + OCP = Contributions of hardware blueprints but also low-level software code for things like ASIC interaction
  • Microsoft + OMI = A long-term strategic push to manage x86 hardware & software systems that may run Windows, but just as likely run something Linuxy

In a perfect world, OCP and OMI would just join forces and be followed by all the web-scale players, the enterprise technology vendors, the storage guys & packet pushers. All would gather together under a banner singing kumbaya and praising agnostic open hardware managed via a common, well-defined framework named CIM that you can plug into any front-end GUI or CLI construct you like.

Alas, it’s not a perfect world and OCP & OMI are different things. In the real world, you still need a proprietary API to manage a storage system, or a costly license to utilize another switchport. And worst of all, in this world, Powershell is not my interface to everything, it is not yet the answer to all IT questions.

Yet Microsoft, by virtue of its position in so many different markets, is very close now to creating its own perfect world. If they find some traction with SAI, I’m certain it won’t be long before you can manage an open Microsoft-designed switch that’s a first-class OMI citizen and gets along famously with Powershell! ((Or buy one, as you can buy the Azure-in-a-box which is simply the OCP blueprint via Dell/Microsoft Cloud Platform System program))

The Value of Community Editions

I was excited to hear on the In Tech We Trust podcast this week that the godfather of all the hyperconverged things -Nutanix- may release a community edition of their infrastructure software this year.

That. Would. Be. Amazing.

I’ve crossed paths with Nutanix a few times in my career, but they’ve always remained just a bit out of reach in my various infrastructure projects. Getting some hands-on experience with the Google-inspired infrastructure system in my lab at home would be most excellent, not just for me, but for them, as I like to recommend product stacks I’ve touched above ones I haven’t.

Take Nexenta as an example. As Hans D. pointed out on the show, aside from downloading & running Oracle Solaris 12, Nexenta’s just about the only way one can experience a mature & enterprise-focused implementation of ZFS. I had a blast testing Nexenta out in my lab in 2014 and though I can’t say my posts on ZFS helped them move copies of NexentaStor, it surely didn’t hurt in my view.

Veeam is also big in the community space, and though I’ve not tested their various products, I have used their awesome stencil collection.

Lest you think storage & hyperconvergence vendors are the only ones thinking ‘community’, today my favorite yellow load balancer Kemp announced in effect a community edition of their L4/L7 LoadMaster vAppliance. Kemp holds a special place in the hearts of Hyper-V guys; as long as I can remember, yes even back in the dark days of 2008 R2, they’ve always released a LoadMaster that’s just about on-par with what they offer to VMware shops. In 2015 that support is paying off I think; Kemp’s best-in-class for Microsoft shops running Hyper-V or building out Azure, and with the announcement you can now stress a Kemp at home in your lab or in Azure with your MSDN sub. Excellent.

Speaking of Microsoft, I’d be remiss if I didn’t mention Visual Studio 2013, which got a community edition last fall.

I’d love to see more community editions, namely:

  • Nimble Storage: I’ve had a lot of success in the last 18 months racking/stacking Nimble arrays in environments with older, riskier storage. I must not be the only one; the company recently celebrated its 5,000th customer. Yet, Nimble’s rapid evolution from storage startup with potential to serious storage player is somewhat bittersweet for me, as I no longer work at the places I’ve installed Nimble arrays and can’t tinker with their rapidly-evolving features & support. Come on guys, just give me the CASL caching system in download form and let me evaluate your Fibre Channel support and test out your support for System Center.
  • NetApp: A community release of Clustered Data ONTAP 8.2x would accomplish something few NetApp products have accomplished in the last few years: create some genuine excitement about the big blocky blue N. I’m certain they’ve got a software-only release in-house, as they’ve already got an appliance for vSphere and I heard rumors about this from channel sources for years. So what are you waiting for, NetApp? Let us build out, support, and get excited about cDOT community-style, since it’s been too hard to see past the 7-Mode -> clustered-mode transition pain in production.

On his Greybeards on Storage podcast, Howard Marks once reminisced about his time testing real enterprise technology products in a magazine’s tech lab. His observations became a column, printed on paper in an old-school pulp magazine which was shipped to readers. This was a beneficial relationship for all.

Those days may be gone, but thanks to scalable software infrastructure systems, the agnostic properties of x86, bloggers & community edition software, perhaps they’re back!