I’ve not always had a bromance with Microsoft’s Office suite. I cut my word processing teeth on WordPerfect 5.1, did most of my undergrad papers in BeOS’ one productivity suite ((GoBe Productive, still the best Office suite name)), and touch-typed my way to graduating cum laude in grad school with countless Turabian-style Google Docs papers.
That was for corporate suits, man. Rich corporate suits.
But all that’s ancient history. Or maybe I’ve become a suit. Either way, I’m loving Office today.
In 2015, Office has transformed into the ultimate agnostic git ‘r done productivity package. It’s free to use in many cases, but if you want to ‘own’ it, you can subscribe to it, just like HBO ((For the IT Pro, this is a huge advantage, as a cheap E-class sub gives you access to your own Exchange instance, your own SharePoint server, and your own Office tenant. It’s awesome!)). It’s also available on just about any device or computing system you can think of, works just as well inside a browser as Google Docs does, and has an enormous install base.
Office has become so impressive and so ubiquitous that it’s truly a platform unto itself, consumed a la carte or as part of a well-balanced Microsoft meal. I’m bullish on Windows but if Office’s former partner ever sunsets, I’m convinced my kid and his kid will still grow up in an Office world.
All of that makes Office really important for IT, so important that you as an IT Guy should consider standing up some easy instrumentation around it.
Enter Office Telemetry, a super-simple package that flows your Office data to a SQL collector, mashes it up, and gives you important insight into how your users are using Office. It also surfaces problems in Office (or in Office documents) before your users do, and it’s free.
Oh, did I mention it’s called Office Telemetry? This thing makes you feel like an astronaut when you’re using it!
Here’s how you deploy it. Total time: about an hour.
Spin up a 2008 R2 or 2012 VM, or find a modestly-equipped physical box that at least has Windows Management Framework 3.0/PowerShell 3.0 on it. If it has a SQL 2012 instance on it that you can use, even better. If not, don’t stress and proceed to the next step.
Set aside a folder on a separate volume (ideally) for the telemetry data. If you’re going to flow data from hundreds of Office users, plan for 5-25 megabytes per user, at a minimum.
If your users are on the WAN, plan accordingly. Telemetry data is pretty lightweight (50 KB chunks for older Office clients, 64 KB chunks for Office 2013).
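Those per-user figures make capacity planning a quick back-of-the-envelope exercise. A minimal sketch; the 500-user fleet below is a hypothetical example, not from this post:

```python
# Rough sizing for the telemetry volume, using the figures above:
# 5-25 MB of telemetry data per user on disk, uploaded in 64 KB
# chunks by Office 2013 clients. User count is hypothetical.

def telemetry_disk_gb(users, mb_per_user=25):
    """GB to reserve on the telemetry volume, planning for the high end."""
    return users * mb_per_user / 1024

print(f"500 users -> reserve ~{telemetry_disk_gb(500):.1f} GB")
```

Even at the 25 MB high end, a few hundred users fit comfortably on a small dedicated volume.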
Install Office ProPlus 2013 or 365 on the VM. You do not need to use an Office 365 license for it to run.
Deployment is handled by a script, so you’ll need to temporarily change your server’s execution policy, self-sign the script, or configure Group Policy as appropriate in order to run it. TechNet has instructions.
Run the script; it will download SQL 2012 Express and install it for you if you don’t have SQL. It will also set proper SMB read/modify permissions on that folder you set up earlier.
As if that weren’t enough, the script will give you a single registry key file you can deploy to your users’ machines.
But I prefer the Group Policy/SCCM route. Remember the ADMX files you deployed? Flip the switches as appropriate under User Configuration > Administrative Templates > Microsoft Office 2013 > Telemetry Dashboard.
Sit back, watch the data flow in, and pat yourself on the back, because you’re being a proactive IT Pro!
As I’ve deployed this solution, I’ve found broken documents, expensive add-ons that delay Office, and multiple other issues that were easy to resolve but difficult to surface. It’s totally worth your time to install it.
It’s been a while since I posted about my home lab, Daisettalabs.net, but rest assured: though I’ve been largely radio silent on it, I’ve been busy.
If 2013 saw the birth of Daisetta Labs.net, 2014 was akin to the terrible twos, with some joy & victories mixed together with teething pains and bruising.
So what’s 2015 shaping up to be?
Well, if I had to characterize it, I’d say it’s #LabGlory, through and through. Honestly. Why?
I’ve assembled a home lab that’s capable of simulating just about anything I run into in the ‘wild’ as a professional. And that’s always been the goal with my lab: practicing technology at home so that I can excel at work.
Let’s have a look at the state of the lab, shall we?
Hardware & Software
Daisetta Labs.net 2015 is comprised of the following:
Five (5) physical servers
136 GB RAM
Sixteen (16) non-HT Cores
One (1) wireless access point
One (1) zone-based Firewall
Two (2) multilayer gigabit switches
One (1) Cable modem in bridge mode
Two (2) Public IPs (DHCP)
One (1) Silicon Dust HD
Ten (10) VLANs
Thirteen (13) VMs
Five (5) Port-Channels
One (1) Windows Media Center PC
That’s quite a bit of kit, as a former British colleague used to say. What’s it all do? Let’s dive in:
The bulk of my lab gear is in my garage on a wooden workbench.
Nodes 2-4, the core switch, my Zywall edge device, modem, TV tuner, Silicon Dust device and Ooma phone all reside in a secured 12U, two-post rack I picked up on eBay about two years ago for $40. One other server, core.daisettalabs.net, sits below the rack inside a mid-tower case stuffed with nine 2TB Hitachi HDDs and five 256GB SSDs.
Placing my lab in the garage has a few benefits, chief among them: I don’t hear (as many) complaints from the family cluster about noise. Also, because it’s largely in the garage, it’s isolated & out of reach of the Child Partition’s curious fingers, which, as every parent knows, are attracted to buttons of all types.
Power & Thermal
Of course you can’t build a lab at home without reliable power, so I’ve got one rack-mounted APC UPS, and one consumer-grade Cyberpower UPS for core.daisettalabs.net and all the internet gear.
On average, the lab gear in the garage consumes about 346 watts, or about 3 amps. That’s significant, no doubt, costing me about $38/month to power, or about 2/3rds the cost of a subscription to IT Pro TV or Pluralsight. 🙂
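For the curious, the math behind that power bill is simple. A quick sketch; the electricity rate is my assumption (roughly $0.15/kWh, chosen because it reproduces the ~$38/month figure; your utility will differ):

```python
# Sanity-check the power figures above: 346 W of continuous draw.
# The $/kWh rate is an assumed value, not from the original post.

watts = 346
hours = 24 * 30          # one month of continuous draw
rate = 0.15              # assumed $/kWh

kwh_per_month = watts * hours / 1000   # ~249 kWh
cost = kwh_per_month * rate            # ~$37
print(f"{kwh_per_month:.0f} kWh/month at ${rate}/kWh -> ${cost:.2f}/month")
```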
Thermals are a big challenge. My house was built in 1967, has decent insulation, and holds temperature fairly well in the habitable parts of the space. But none of that is true of the garage, where my USB lab thermometer has recorded temps as low as 3C last winter and as high as 39C in Summer 2014. That’s air temperature at the top of the rack, mind you, not at the CPU.
One of my goals for this year is to automate the shutdown/power-up of all node servers in the garage based on the temperature reading of the USB thermometer. The $25 thermometer is something I picked up on Amazon a while ago; it outputs to .csv, but I haven’t figured out how to automate its software interface with PowerShell... yet.
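Pending a proper scripted interface to the thermometer, the decision logic itself is trivial. Here’s a minimal sketch; the .csv layout (timestamp, degrees C) and the thresholds are my assumptions, and the actual shutdown call is left to the reader:

```python
# Sketch of the thermal-shutdown idea: read the newest reading from the
# thermometer's .csv output and decide whether the nodes should power down.
# Column layout and thresholds are assumptions, not from the post.
import csv

SHUTDOWN_ABOVE_C = 38.0   # near the 39C summer peak observed in the garage
SHUTDOWN_BELOW_C = 4.0    # near the 3C winter low

def last_reading(csv_path):
    """Return the most recent temperature from the thermometer's csv log."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    return float(rows[-1][1])   # assumes column 1 holds degrees C

def should_shut_down(temp_c):
    """True when the garage is too hot or too cold for the lab gear."""
    return temp_c >= SHUTDOWN_ABOVE_C or temp_c <= SHUTDOWN_BELOW_C
```

A scheduled task could run this every few minutes and kick off a graceful node shutdown when it returns True.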
Anyway, here’s my stack, all stickered up and ready for review:
Beyond the garage, the Daisetta Lab extends to my home’s main hallway, the living room, and of course, my home office.
Here’s the layout:
On the compute side of things, it’s almost all Haswell with the exception of core and node3:
[table]Server, Architecture, CPU, #Cores, RAM, Function, OS, Motherboard
Node4, Haswell, i5-4670, 4, 32GB, Cluster node/storage, 2012 R2 Core, Asus
I love Haswell for its speed, thermal properties and affordability, but damn! That’s a lot of boxes, isn’t it? Unfortunately, you just can’t get very VM-dense when 32GB is the max amount of RAM Haswell E3/i7 chipsets support. I love dynamic RAM on a VM as much as the next guy, but even with Windows Core, it’s been hard to squeeze more than 8-10 VMs onto a single host. With Hyper-V Containers coming, who knows, maybe that will change?
While I included it in the diagram, TVPC3 is not really a lab machine. It’s a cheap Ivy Bridge Pentium with 8GB of RAM and 3TB of local storage. Its sole function in life is to decrypt the HD stream it receives from the Silicon Dust tuner and display HGTV for my mother-in-law with as little friction as possible. Running Windows 8.1 with Media Center, it’s the only PC in the house without battery backup.
About 18 months ago, I poured gallons of sweat equity into cabling my house. I ran at least a dozen CAT-5e cables from the garage to my home office, bedrooms, living room and to some external parts of the house for video surveillance.
I don’t regret it in the least; nothing like having a reliable, physical backbone to connect up your home network/lab environment!
At the core of the physical network lies my venerable Cisco 2960S-48TS-L switch. Switch1 may be a humble access-layer switch, but in my lab, the 2960S bundles 17 ports into five port-channels, serves as my default gateway, routes with some rudimentary Layer 3 functions ((Up to 16 static routes; no dynamic routing features are available)), and segments 9 VLANs and one port-security VLAN, a feature that’s akin to PVLAN.
Switch2 is a 10-port Cisco Small Business SG-300 running at Layer 3 and connected to Switch1 via a 2-port port-channel. I use a few ports on Switch2 for the TV and an IP cam.
On the edge is redzed.daisettalabs.net, the Zyxel USG-50, which I wrote about last month.
Connecting this kit to the internet is my Motorola Surfboard router/modem/switch/AP, which I run in bridge mode. The great thing about this device and my cable service is that, for some reason, up to two LAN ports can be active at any given time. This means that CableCo gives me two public DHCP addresses simultaneously. One of these goes into a WAN port on the Zyxel, and the other goes into a downed switchport.
Lastly, there’s my Meraki MR-16, an access point a friend and Ubiquiti Networks fan gave me. Though it’s a bit underpowered for my tastes, I love this device. The MR-16 is trunked to Switch1 and connects via an 802.3af power injector. I announce two SSIDs off the Meraki, both secured with WPA2 Personal ((WPA2 Enterprise is on the agenda this year)). Depending on which SSID you connect to, you’ll end up on the Device or VM VLAN.
The virtual network was built entirely in System Center VMM 2012 R2. Nothing too fancy here, with multiple Gigabit adapters per physical host, one converged logical vSwitch and a separate NIC on each host fronting for the DMZ network:
Thanks to VMM, building this out is largely a breeze, once you’ve settled on an architecture. I like to run the cmdlets to build the virtual & logical networks myself, but there’s also a great script available that will build a converged network for you.
A physical host typically looks like this (I say typically because I don’t have an equal number of adapters in all hosts):
We’re already several levels deep in my personal abstraction cave, why stop here? Here’s the layout of VM Networks, which are distinguished from but related to logical networks in VMM:
I get a lot of questions on this blog about jumbo frames and Hyper-V switching, and I just want to reiterate that it’s not that hard to do, and look, here’s proof:
And last, and certainly most interestingly, we arrive at Daisetta Lab’s storage resources.
Well, I had so much fun -and importantly so few failures/pains- with Microsoft’s Tiered Storage Spaces that I’ve decided to deploy not one, or even two, but three Tiered Storage Spaces. Here’s the layout:
[table]Server, #HDD, #SSD, StoragePool Capacity, StoragePool Free, #vDisks, Function
Core, 9, 6, 16.7TB, 12.7TB, 6, SMB3/iSCSI target for the entire lab (so far)
Node1, 2, 2, 2.05TB, 1.15TB, 2, SMB3 target for Hyper-V replication
Node4, 3, 1, 2.86TB, 1.97TB, 2, SMB3 target for Hyper-V replication
I have to say, I continue to be very impressed with Tiered Storage Spaces. It’s super-flexible, the cmdlets are well-documented, and Microsoft is iterating on it rapidly. More on the performance of Tiered Storage Spaces in a subsequent post.
Like a lot of IT Pros, I’ve been studying up on security topics lately, both as a reaction to the increasing amount of breach news (Who got breached this week, Alex?) and because I felt weak in this area.
So, I went shopping for some books. My goals were simply to get a baseline understanding of crypto systems and best-practice guidance on setting up Microsoft Public Key Infrastructures, which I’ve done in the past but without much confidence in the end result.
The book I settled on was Brian Komar’s Windows Server 2008 PKI and Certificate Security. This 3.2lb, 800-page book has a 4.9 out of 5 star rating on Amazon, with reviewers calling it the best Microsoft PKI guide out there.
Great! I thought, as I prepared to shell out about $80 and One Click my way to PKI knowledge.
That’s when I noticed that the book is out of print. There are digital versions available from O’Reilly, but it appears most don’t know that.
For the physical book itself, the least expensive used one on Amazon is $749.99. You read that right. $750!
If you want a new copy, there’s one available on Amazon, and it’s $1000.
I immediately jumped over to Camelcamelcamel.com to check the history of this book, thinking there must have been a run on Mr. Komar’s tome as Target, Home Depot, JP Morgan, and Sony Pictures fell.
The price of this book has spiked recently, but Peak PKI was a full three years ago.
I looked up security breaches/events of early 2012. Now, correlation != causation, but it’s interesting nonetheless. Hopefully this means there are a lot of solid Microsoft PKI systems being built out there!
Rather than shell out $750 for the physical book, I decided to get Ivan Ristic’s fantastic Bulletproof SSL and TLS, which I highly recommend. It’s got a chapter on securing Windows infrastructure, but is mostly focused on crypto theory & practical OpenSSL. I’ll buy Komar’s as a digital version next, or wait for his forthcoming 2012 R2 revision.
Maybe it’s just my IT scars that bias me, but when I hear a vendor push a “monitoring” solution, I visualize an IT guy sitting in front of his screen, passively watching his monitors & counters, essentially waiting for that green thing over there to turn red.
He’s waiting for failure, Godot-style.
That’s not a recipe for success in my view. I don’t wait upon failure to visit, I seek it out, kick its ass, and prevent it from ever establishing a beachhead in my infrastructure. The problem is that I, just like that IT Guy waiting around for failure, am human, and I’m prone to failure myself.
Enter machine learning, or, as I like to think of it, Big Data for Server Guys.
Big Data for Server Guys is a bit like flow monitoring on your switch. The idea here is to actively flow all your server events into some sort of a collector, which crunches them, finds patterns, and surfaces the signal from the noise.
Big Data for Server Guys is all about letting the computer do what the computer’s good at doing: sifting data, finding patterns, and letting you do what you are good at doing: empowering your organization for tech success.
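To make the flow-collector analogy concrete, here’s a toy sketch of “surfacing signal from noise”: count event IDs across a batch of events and flag the ones that spike past a baseline. The sample events below are made up for illustration; a real collector obviously does far more than frequency counting.

```python
# Toy illustration of the "flow collector" idea: tally event IDs from a
# fleet and surface the ones that occur more often than a noise floor.
# The sample data is invented, not real OpsInsight/SCOM output.
from collections import Counter

def surface_signal(events, noise_floor=3):
    """Return {event_id: count} for IDs that exceed the noise floor."""
    counts = Counter(e["id"] for e in events)
    return {eid: n for eid, n in counts.items() if n > noise_floor}

sample = [{"id": 7036}] * 2 + [{"id": 6008}] * 5 + [{"id": 1074}]
print(surface_signal(sample))   # only the spiking event ID surfaces
```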
But we Windows guys have a lot of noise to deal with: Windows instruments just about everything imaginable in the Microsoft kingdom, and the Microsoft kingdom is vast.
So how do we borrow flow-monitoring techniques from the Cisco jockeys and apply them to Windows?
Splunk is one option, and it’s great: it’s agnostic and will hoover events from Windows, logs from your Cisco’s syslog, and can sift through your Apache/IIS logs too. It’s got a thriving community and loads of sexy, AJAX-licious dashboards, and you can issue powerful searches and queries that can help you find problems before problems find you.
It’s also pretty costly, and I’d argue not the best-in-class solution for Hoovering Windows infrastructure.
Fortunately, Microsoft’s been busy over the last few years. Microsoft shops have had SCOM, and MOM before that, but now there’s a new kid in town ((He’s been working out and looks nothing like the old kid, System Center Advisor)): Azure Operational Insights, and OpsInsight functions a lot like a good flow collector.
And I just put the finishing touches on my second Big Data for Server Guys/OpsInsight deployment. Here’s a mini-review:
It watches your events and finds useful data, which saves you time: OpsInsight is like a giant Hoover in the sky, sucking up on average about 36MB/day of Windows events from my fleet of nearly 150 VMs in a VMware infrastructure. Getting data on this fleet via PowerShell is trivial, but building logic that gives insight into that data is not. OpsInsight is wonderful in this regard; it saves you from spending time in SSRS or Excel, or from diving through the Event Viewer haystack in the MMC or via get-event, looking for a nugget of truth.
It has a decent config recommendation engine: If you’re an IT Generalist/Converged IT Guy like me, you touch every element in your infrastructure stack, from the app on down to the storage array’s rotating rust. And that’s hard work, because you can’t be an expert in everything. One great thing about OpsInsight is that it saves you from searching Bing/Google (at worst) or thumbing through your well-worn AD Cookbook (at best) and offers best-practice advice and KB articles in the same tab in your browser. Awesome!
Query your data rather than surfing the fail tree: Querying your data is infinitely better than walking the Fail Tree that is the Windows Event Viewer looking for errors. OpsInsight has a powerful query engine that’s not difficult to learn or manipulate, and for me, that’s a huge win over the old school method of Event Viewer Subscriptions.
Dashboards you can throw in front of an executive: I can’t overstate how great it is to have automagically configured dashboards via OpsInsight. As an IT Pro, the less time I spend in SSRS trying to build a pretty report, the better. OpsInsight delivers decent dashboards I’m proud to show off. SCOM 2012 R2’s dashboards are great, but SCOM’s fat client works better than its IIS pages. Though it’s Silverlight-powered, OpsInsight wins the award for friction-free dashboarding.
Flexible architecture: Do you like SCOM? Then OpsInsight is a natural fit for you. I really appreciate how the System Center team re-structured OpsInsight late last year: you can deploy it at the tail end of your SCOM build, or you can forego SCOM altogether and attach agents directly to your servers. The latter offers you speed in deployment; the former allows you to essentially proxy events from your fleet, through your Management Group, and thence on to Azure. I chose the former in both of my deployments: let OpsInsight gate through SCOM, and let both do what they are good at doing.
It’s secure: The architecture for OpsInsight is Azure, so if you’re comfortable doing work in Azure Storage blobs, you should be comfortable with this. That + encrypted uploads of events, SCOM data and other data means less friction with the security/compliance guy on your team.
It’s Silverlight, which makes me feel like I’m flowing my server events to Steve Ballmer: I’m sure this will be changed out at some point. I used to love Silverlight (and maybe there’s still room in my cold black heart for it) but it’s kind of an orphan media/web child at the moment.
There’s no app for iOS or Android... yet: I had to dig out my 2014 Lumia Icon just to try out the OpsInsight app for Windows Phone. It’s decent; just what I’d like to see on my 2015 Droid Turbo. Alas, there is no app for Android or iOS yet, but it’s the #1 and #2 most requested feature at the OpsInsight feedback page (add your vote, I did!).
It’s only Windows at the moment: I love what Microsoft is doing with Big Data crunching: Machine Learning, Stream Analytics and OpsInsight. But while you can point just about any flow or data at AzureML or Stream Analytics, OpsInsight only accepts Windows, IIS, SQL, SharePoint, and Exchange. Which is great, don’t get me wrong, but limited. SCOM at least can monitor SNMP traps and interface with Unix/Linux and such, but that is not available in OpsInsight. However, it’s still in Preview, so I’ll be patient.
It’s really only Windows/IIS/SQL/Exchange at the moment: Sadface for the lack of Office 365/Azure intelligence packs for OpsInsight, but SCOM will do for now.
Pricing forecast is definitely…cloudy: Every link I find takes me to the general Azure pricing page. On the plus side, you can strip this bad boy down to the bare essentials if you have cost pressures.
Where are my cmdlets? My interface of choice with the world of IT these days is PowerShell ISE. But when I typed get-help *opsinsight, only errors resulted. How’d this get past Snover’s desk? All kidding aside, SCOM cmdlets work well enough if you deploy OpsInsight behind SCOM, and I’m sure native ones are coming. I can wait.
All in all, this is shaping up to be a great service for your on-prem Windows infrastructure, which, let’s face it, is probably neglected.
Security has been on my mind lately. I think that in the spring of 2015, we’re in a new landscape regarding security, one that is much more sinister, serious and threatening than it was in years past. I used to think anonymity was enough, that there was safety in the herd. But the rules & landscape have changed; it’s different now than it was just 12 or 24 months ago. So let’s do an exercise. Let’s suppose, for the sake of this post, that the following are true:
Your credit history and your identity are objects in the marketplace that have value and thus are bought and sold between certain agents freely
These things are also true of your spouse or significant other’s credit history & identity, and even your child’s
Because these things are true, they are also true for malefactors (literally, bad actors) just like any other object that has value and can be traded
There is no legal structure in America aside from power of attorney that allows a single member of a family to protect the identity and credit history of another member of his/her family.
The same market forces that create innovation in enterprise technology are now increasing the potency of weaponized malware systems; that is to say, financial success attracts talent, which begets better results, which begets more financial success.
The engineers who build malware are probably better than you are at defending against them, and what’s more, they are largely beyond the reach of local, state, or national law enforcement agencies. ((Supposing that your local Sheriff’s Department even has the in-house know-how to handle security breaches, they lack jurisdiction in Ukraine))
The data breaches and mass identity theft of 2014 & 2015 are somewhat similar to a classic market failure, but no cure will be forthcoming from Washington, and the trial attorneys & courts who usually play a role in correcting market failures have determined your identity & credit history are worth about $0.14 (($10 million settlement for the 70 million victims of the Target breach = $0.14 each))
Generally speaking, most IT departments are bad, suffering from poor leadership, poorly-motivated staff, conflicting directions from the business, an inability to meet the business’ demands, or a lack of C-level support. IT is Broken, in other words.
All of this means it’s open season on you and your family’s identity & credit history, which we have to assume rest unencrypted on unpatched SQL servers, behind an ASA with a list of unmitigated CVEs, maintained by some guys in an IT department who hate their jobs.
There it is. That’s the state of personal identity & credit security in 2015 in America, in my view.
And worst of all, it’s not going to get better as every company in America with your data has done the math from the Target settlement and the beancounters have realized one thing: it’s cheaper to settle than to secure your information.
Assume breach at home
If this is truly the state of play (and I think it is) then you, as an interested father/mother or husband/wife, need to take action. I suggest an approach in which you:
Own your Identity online by taking SMTP back: Your SMTP address is to the online world what your birth certificate and/or social security number is to the meatspace world: everything. Your SMTP address is the de facto unique identifier for you online ((By virtue of the fact that these two things are true of SMTP but are not true of rival identity systems, like Facebook or Google profiles: 1) Your SMTP address is required to transact business or utilize services online or is required at some point in a chain of identity systems and 2) SMTP is accepted by all systems and services as prima facie evidence of your identity because of its uniqueness & global acceptance and rival systems are not)) , which begs the question: why are you still using some hippy-dippy free email account you signed up for in college, and why are you letting disinterested third party companies host & mine something for free that is so vital to your identity? Own your identity and your personal security by owning and manipulating SMTP like corporations do: buy a domain, find a hosting service you like, and pay them to host your email. It doesn’t cost much, and besides, you should pay for that which you value. And owning your email has value in abundance: with your own domain, you can make alias SMTP addresses for each of the following things: social media, financial, shopping, food, bills, bulk and direct your accounts to them as appropriate. This works especially well in a family context, where you can point various monthly recurring accounts at a single SMTP address that you can redistribute via other methods and burn/kill as needed. ((Pretty soon, you and your loved ones will get the hang of it, and you and your family will be handing out firstname.lastname@example.org to the grocery store checkout person, email@example.com for receipts, firstname.lastname@example.org for the ‘etailers’ and email@example.com for the two iPhones & three other Apple devices you own.))
Proxy your financial accounts wherever possible: Mask your finances behind a useful proxy, like Paypal, perhaps even Mint. The idea here is to put a buffer between your financial accounts and the services, people, and corporations that want access to them and probably don’t give two shits about protecting your identity or vetting their own IT systems properly. Whenever possible, I buy things online/pay people/services via Paypal or other tools so that use of my real accounts is minimized. Paypal even offers a business credit card backed by the Visa logo, which means you can use it in brick ‘n mortar stores like Target, where the infosec is as fast and loose as the sales and food quality.
Use Burner Numbers: Similar to SMTP, your standard US 10-digit POTS/mobile number is a kind of unique identifier to companies, existing somewhere in an unsecured table, no doubt. Use burners where you can, as your 10-digit mobile is important both as a unique identifier and as an off-net secondary notification/authentication channel. If Google Voice is to be killed off, as it appears to be, consider Ooma, where for $100/year you can spawn burner numbers and use them the same way you use SMTP aliases. Otherwise, use a burner app on your phone for quick disposable numbers.
Consider Power of Attorney or Incorporation: This is admittedly a little crazy, but words can’t describe how furious you’ll be when a family member’s identity has been stolen and some scummy organization that calls itself a bank is calling to verify that you’ve purchased $1000 in Old Navy gift certificates in Texas -something completely out-of-sync with your credit history- but they refuse to stop the theft because it’s happening to your wife, not you, and your wife can’t come to the phone right now. The solution to this problem is beyond me, but probably involves a “You can’t beat ’em, join ’em” approach coupled with an attorney’s threatening letter.
Learn to Love Sandboxing: Microsoft has a free and incredibly powerful tool called the Enhanced Mitigation Experience Toolkit, or EMET, which allows you to select applications and essentially sandbox them so that they can’t pwn your entire operating system. Learn to use and love it. But the idea here goes beyond Win32 to the heart of what we should be doing as IT Pros: standing up and tearing down instances of environments, whether those environments are Docker containers, Windows VMs, jails in BSD, or KVM virtual machines. Such techniques are useful beyond devops; they are also useful as operational security techniques at home, in my view.
Go with local rather than national financial institutions: Where possible, consider joining a local credit union, where infosec practices might not be state of the art, but your family’s finances have more influence and weight than they do at a Bank of America.
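To make the alias-per-category idea from the SMTP section concrete, here’s a trivial sketch; the domain and the category names are made up for illustration:

```python
# Sketch of the alias-per-category SMTP scheme: one burnable address per
# class of account, all landing in one mailbox you own. The domain below
# is a placeholder, not a real personal domain.
DOMAIN = "example.org"

CATEGORIES = ["social", "financial", "shopping", "food", "bills", "bulk"]

def alias_for(category, domain=DOMAIN):
    """Build the alias address handed out for one category of account."""
    return f"{category}@{domain}"

aliases = {c: alias_for(c) for c in CATEGORIES}
print(aliases["shopping"])   # shopping@example.org
```

If the shopping alias starts attracting spam or shows up in a breach dump, you kill that one alias and the rest of your identity is untouched.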
I am not a security expert, but that’s how I see it. If we IT pros are to assume breach at work, as many experts advise us to, we should assume breach at home too, where our identities and those of our loved ones are even more vulnerable and even more valuable.
When in the course of IT events it becomes necessary to inspect all traffic that hits your users’ PCs, there is but one thing you can do in 2015: get a proxy server or service, deploy a certificate to your trusted root store, and direct all traffic through the proxy.
Why would you do what amounts to a Man in the Middle Attack on your users as a responsible & honest IT Pro? Why Superfish your users? ((
IT Shakespeare put it like this:
To proxy SSL or not to proxy, that is the question
whether ’tis nobler in the mind to suffer
the breaches and theft of outrageous malware
or to take Arms against a sea of digital foes
and by opposing, only mitigate the threat.
To protect via decrypt; Aye, there’s the rub
Thus Conscience does make Cowards of us all
and lose the name of Action))
Numbers are hard to pin down, ((I am not a security expert, and though I checked sources I respect, like the Norse IPViking security blog, the Malwarebytes Unpacked blog, SearchSecurity.com, etc., I found very few sources that put a percentage on how much malware is encrypted and thus difficult to detect. This NSS Labs report from summer 2013 comparing Next Gen Firewall SSL decryption performance, for instance, says that “the average proportion of SSL traffic within a typical enterprise is 25-35%” and that only ~1% of malware is encrypted. A GWU security researcher named Andre DiMino has a series of good blog posts on the topic, showing what SSL-encrypted malware looks like in Wireshark. Team Cymru’s TotalHash database is probably the most comprehensive open DB of malware samples, but I don’t feel qualified to search it, frankly)) but it seems an increasing amount of virulent & potent malware is arriving at your edge encrypted. Because those packets are encrypted, you essentially can’t scan the contents. All you get is source/destination IP address, some other IP header information, and that’s about it.
One option, really your only option at that point, is to crack open those packets and inspect them. Here’s how.
1. You need a proxy server or service that does security inspection.
I’ve seen ZScaler used at several firms. ZScaler dubs itself the premier cloud-based SaaS proxy service, and it’s quite a nifty service.
For a fee per user, ZScaler will proxy most if not all of your internet traffic from several datacenters around the globe, sort of like how CloudFlare protects your websites.
The service scans all that http and https traffic, filters out the bad and malicious stuff, blocks access to sites you tell it to, and sends inspected http/https content to your users, wherever they are, on-prem or connected to the unsecured Starbucks access point.
2. You need to bundle those proxy settings up into a .pac file
Getting the service is one thing; you still need to direct your users and computers through it. The easiest way is via Group Policy and what’s called a .pac file.
A .pac file is a settings file generated by ZScaler that contains your preferences, settings, and lists of sites you’d prefer bypass the filter. It looks like this:
function FindProxyForURL(url, host) {
    var resolved_host_ip = dnsResolve(host);

    // Send FTP traffic direct
    if (url.substring(0, 4) == "ftp:")
        return "DIRECT";

    // If the requested website is hosted within the internal network, send direct
    if (isPlainHostName(host) ||
        isInNet(resolved_host_ip, "18.104.22.168", "255.0.0.0"))
        return "DIRECT";

    // If the requested website is SSL and associated with Microsoft O365, send direct
    // ...remainder of the generated file trimmed for brevity...
}
3. Deploy the .pac file via Group Policy to Users
Next, you need to pick your favorite deployment tool to push the .pac file out and set Windows via IE to proxy through ZScaler. We’ll use Group Policy because it’s fast and easy.
Under User Configuration > Policies > Windows Settings > Internet Explorer Maintenance > Connection / Automatic Browser Configuration, select Enable.
Then point the Auto-proxy URL to your Zscaler .pac file URL. It looks like this:
Keep Group Policy open, because we’re not done quite yet.
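Behind the scenes, that IE Maintenance policy just writes the AutoConfigURL value into each user’s registry hive, which makes it easy to verify on a test machine with regedit. The equivalent registry data looks roughly like this (the .pac URL below is a hypothetical placeholder, not your real ZScaler URL):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"AutoConfigURL"="http://pac.zscaler.example.net/yourcompany.pac"
```

If the value isn’t present after a gpupdate, the policy didn’t land and nothing will proxy.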
4. Download the ZScaler Root CA certificates
You’ll find the certs in the administration control screen of ZScaler. There are two:
ZScaler Root Certificate-2048.crt
ZScaler Root Certificate-2048-SHA256.crt
The two certificates are scoped similarly; the only difference seems to be the signature hash, SHA-1 versus SHA-256.
Double-click the certificate you prefer to use, and notice that Windows alerts you to the fact that it’s not trusted. Good on ya Microsoft, you’re right.
To validate this setup, you’ll probably want to test before you deploy. So select Install Certificate, select your Computer (not user) and navigate to the Trusted Root CA Store:
Now that you’ve installed the .pac file and the certificate, ensure that IE (and thus Chrome, but not necessarily Firefox) has been set to proxy through Zscaler:
5. SSL Proxy Achievement Unlocked:
Go to Google or any SSL/TLS encrypted site and check the Certificate in your browser.
You should see something like this:
6. You can now deploy that same certificate via Group Policy to your Computers.
It’s trivial at this point to deploy the ZScaler certificates to end-user PCs via Group Policy. You’ll want to use Computer Preferences.
Once deployed, you’ll get comprehensive scanning, blocking and reporting on your users http/https use. You can of course exempt certain sites from being scanned ((Before you do this, make sure you get your Legal department or corporate controller’s sign-off on this. Your company needs to understand exactly what SSL Proxy means, and the Gordian Knot of encryption.
By making all SSL traffic visible to your proxy service, you may gain some ability to prevent potent malware attacks, but at the cost of your users’ privacy. When a user transacts business with their bank, their session will still be encrypted on the wire, but ZScaler terminates and re-establishes the TLS session, so the proxy sees the plaintext passing between the user and the bank’s webserver. The same is true of Facebook or personal email sites.
By doing this, you’re placing an immense amount of trust in the proxy server/service of your choice. You’re trusting that they know what they’re doing with Certificates, that they didn’t use a weak password. You’re trusting that they have their act together, and you’re doing this on behalf of all your users who trust you. This is not to be taken lightly, so run it up the legal/HR flagpole before you do this. ))
On Tuesday Microsoft surprised me by announcing an open switching/networking plan in partnership with Mellanox and as part of the Open Compute initiative.
Microsoft’s building a switch?
Not quite, but before we get into that, some background on Microsoft’s participation in what I call OpenMania: the cloud & enterprise technology vendor tendency to prefix any standards-ish cooperative work effort with the word Open.
Microsoft’s participating in several OpenMania efforts, but I only really care about these two because they highlight something neat about Microsoft and apply or will soon apply to me, the Converged IT Guy.
Open Compute, or OCP, is the Facebook-led initiative to build agnostic hardware platforms on x86 for the datacenter. I like to think of OCP as a ground-up re-imagining of hardware systems by guys who do software systems.
As part of their participation in OCP, Microsoft is devoting engineering resources and talent into building out specifications, blueprints and full hardware designs for things like this, a 12U converged chassis comprised of storage and compute resources.
That’s a lot of words with little meaning, so let me break it down for the Windows guys and gals reading this. The second effort, the Open Management Infrastructure (OMI), is an open-source implementation of the CIM management standard, and the promise of Microsoft’s OMI participation is this: you can configure other people’s hardware and software via the same framework your Windows Server runs on (CIM, the next-gen WMI), using the same techniques and tooling you manage other things with: Powershell.
I’ve been keenly interested in Microsoft & their OMI push because it’s an awesome vision, and it’s real, or real-close at any rate: SMI-S, for instance, is gaining traction as a management play on other people’s hardware/software storage systems ((cf Nimble Storage now integrates with SCVMM)) , and is already baked into Windows Server as a feature you can install and use to manage Windows Storage Spaces, which itself is a first-class citizen of CIMville.
All your CIM classes -running as part of Windows or not- manipulated & managed via Powershell, the same ISE you and I use to deploy Hyper-V hosts, spin-up VMs, manage our tenants in Office 365, fiddle around in Azure, and make each day at work a little better and a little more automated than the last.
That’s the promised land right there, ladies and gentlemen.
Except for networking, the last stubborn holdout in my fevered powershell dream.
Jeff Snover, the architect of the vision, teases me with Powershell Leaf Spine Tweets like this:
but I have yet to replace Putty with Powershell. I still have to do show int status rather than show-interface -status “connected” on my switch because I don’t have an Arista or N7K, and few other switch vendors seem to be getting the OMI religion.
The Switch Abstraction Interface (SAI) team at Microsoft is excited to announce that we will showcase our first implementations of the specification at the Open Compute Project Summit, which kicks off today at the San Jose Convention Center. SAI is a specification by the OCP that provides a consistent programming interface for common networking functions implemented by network switch ASICs. In addition, SAI allows network switch vendors to continue to build innovative features through extensions.
The SAI v0.92 introduces numerous proposals including:
Access Control Lists (ACL)
Equal Cost Multi Path (ECMP)
Forwarding Data Base (FDB, MAC address table)
Neighbor database, Next hop and next hop groups
Quality of Service (QoS)
Route, router, and router interfaces
At first glance, I wouldn’t blame you if you thought that this thing, this SAI, means OMI is dead in networking, that managing route/switch via Powershell is gone.
But looking deeper, this development speaks to Microsoft’s unique position in the market (all markets, really!).
SAI is probably more about low-level interaction with ASICs like Broadcom’s Trident II ((At least that’s my read on the GitHub repo material)) , and Microsoft’s participation in this is probably more about Azure and less about managing networking stuff with Powershell.
So, let’s review: Microsoft has embraced Open Compute & Open Management. It breaks down like this:
Microsoft + OCP = Contributions of hardware blueprints but also low-level software code for things like ASIC interaction
Microsoft + OMI = A long-term strategic push to manage x86 hardware & software systems that may run Windows, but more likely run something Linuxy
In a perfect world, OCP and OMI would just join forces and be followed by all the web-scale players, the enterprise technology vendors, the storage guys & packet pushers. All would gather together under a banner singing kumbaya and praising agnostic open hardware managed via a common, well-defined framework named CIM that you can plug into any front-end GUI or CLI construct you like.
Alas, it’s not a perfect world and OCP & OMI are different things. In the real world, you still need a proprietary API to manage a storage system, or a costly license to utilize another switchport. And worst of all, in this world, Powershell is not my interface to everything, it is not yet the answer to all IT questions.
Yet Microsoft, by virtue of its position in so many different markets, is very close now to creating its own perfect world. If they find some traction with SAI, I’m certain it won’t be long before you can manage an open Microsoft-designed switch that’s a first-class OMI citizen and gets along famously with Powershell! ((Or buy one, as you can buy the Azure-in-a-box which is simply the OCP blueprint via Dell/Microsoft Cloud Platform System program))