It’s been a while since I posted about my home lab, Daisettalabs.net, but rest assured: though I’ve been largely radio silent on it, I’ve been busy.

If 2013 saw the birth of Daisetta Labs.net, 2014 was akin to the terrible twos, with some joy & victories mixed together with teething pains and bruising.

So what’s 2015 shaping up to be?

Well, if I had to characterize it, I’d say it’s #LabGlory, through and through. Honestly. Why?

I’ve assembled a home lab that’s capable of simulating just about anything I run into in the ‘wild’ as a professional. And that’s always been the goal with my lab: practicing technology at home so that I can excel at work.

Let’s have a look at the state of the lab, shall we?

Hardware & Software

Daisetta Labs.net 2015 comprises the following:

  • Five (5) physical servers
  • 136 GB RAM
  • Sixteen (16) non-HT Cores
  • One (1) wireless access point
  • One (1) zone-based Firewall
  • Two (2) multilayer gigabit switches
  • One (1) Cable modem in bridge mode
  • Two (2) Public IPs (DHCP)
  • One (1) Silicon Dust HD
  • Ten (10) VLANs
  • Thirteen (13) VMs
  • Five (5) Port-Channels
  • One (1) Windows Media Center PC

That’s quite a bit of kit, as a former British colleague used to say. What’s it all do? Let’s dive in:

Physical Layout

The bulk of my lab gear is in my garage on a wooden workbench.

Nodes 2-4, the core switch, my Zywall edge device, modem, TV tuner, Silicon Dust device and Ooma phone all reside in a secured 12U two-post rack I picked up on eBay about two years ago for $40. One other server, core.daisettalabs.net, sits below the rack inside a mid-tower case stuffed with nine 2TB Hitachi HDDs and five 256GB SSDs.

Placing my lab in the garage has a few benefits, chief among them: I don’t hear (as many) complaints from the family cluster about noise. Also, because it’s largely in the garage, it’s isolated & out of reach of the Child Partition’s curious fingers, which, as every parent knows, are attracted to buttons of all types.

Power & Thermal

Of course you can’t build a lab at home without reliable power, so I’ve got one rack-mounted APC UPS, and one consumer-grade Cyberpower UPS for core.daisettalabs.net and all the internet gear.

On average, the lab gear in the garage consumes about 346 watts, or about 3 amps. That’s significant, no doubt, costing me about $38/month to power, or about 2/3rds the cost of a subscription to IT Pro TV or Pluralsight. 🙂
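If you want to sanity-check that math yourself, it’s a one-screen PowerShell exercise (the per-kWh rate below is my rough assumption; your utility bill will vary):

```powershell
# Watts -> kWh/month -> dollars; $0.15/kWh is an assumed rate
$watts       = 346
$kwhPerMonth = $watts * 24 * 30 / 1000      # ~249 kWh
$ratePerKwh  = 0.15
'{0:C2} per month' -f ($kwhPerMonth * $ratePerKwh)   # ~$37.37
```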

Thermals are a big challenge. My house was built in 1967, has decent insulation and holds temperature fairly well in the habitable parts of the space. But none of that is true of the garage, where my USB lab thermometer has recorded temps as low as 3°C last winter and as high as 39°C in Summer 2014. That’s air temperature at the top of the rack, mind you, not at the CPU.

One of my goals for this year is to automate the shutdown/powerup of all node servers in the garage based on the temperature reading of the USB thermometer. The $25 thermometer is something I picked up on Amazon a while ago; it outputs to .csv, but I haven’t figured out how to automate its software interface with PowerShell… yet.
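For what it’s worth, the bones of that automation are straightforward once the CSV is parseable. Here’s a rough sketch; the log path and column name are pure guesses at what the thermometer software writes:

```powershell
# Read the newest reading from the thermometer's CSV log and shut the
# cluster nodes down if the garage is cooking. Path & column name are
# assumptions; Stop-Computer requires WinRM/PS remoting on the nodes.
$reading = Import-Csv 'C:\TempLogger\temps.csv' | Select-Object -Last 1
if ([double]$reading.TempC -ge 35) {
    Stop-Computer -ComputerName 'node2','node3','node4' -Force
}
```

Wire that to a scheduled task on core and the garage can more or less look after itself.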

Anyway, here’s my stack, all stickered up and ready for review:

[Photo: the stickered-up lab rack]

Beyond the garage, the Daisetta Lab extends to my home’s main hallway, the living room, and of course, my home office.

Here’s the layout:

[Diagram: Daisetta Lab 2015 home layout]

Compute

On the compute side of things, it’s almost all Haswell with the exception of core and node3:

[table]
Server, Architecture, CPU, Cores, RAM, Function, OS, Motherboard
Core, AMD A-series, A8-5500, 2, 8GB, Tiered Storage Spaces & DC/DHCP/DNS, Server 2012 R2, Gigabyte D4
Node1, Haswell, i7-4770k, 4, 32GB, Main PC/Office/VM host/storage, 2012 R2, Supermicro X10SAT
Node2, Haswell, Xeon E3-1241, 4, 32GB, Cluster node, 2012 R2 Core, Supermicro X10SAF
Node3, Sandy Bridge, i7-2600, 4, 32GB, Cluster node, 2012 R2 Core, Biostar
Node4, Haswell, i5-4670, 4, 32GB, Cluster node/storage, 2012 R2 Core, Asus
[/table]

I love Haswell for its speed, thermal properties and affordability, but damn! That’s a lot of boxes, isn’t it? Unfortunately, you just can’t get very VM dense when 32GB is the max amount of RAM Haswell E3/i7 chipsets support. I love dynamic RAM on a VM as much as the next guy, but even with Windows core, it’s been hard to squeeze more than 8-10 VMs on a single host. With Hyper-V Containers coming, who knows, maybe that will change?
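In the meantime, dynamic memory is what keeps density tolerable. For reference, a typical setting on one of my guests looks something like this (VM name and sizes are illustrative, not a prescription):

```powershell
# Let the guest idle around 512MB and balloon up to 4GB under load
Set-VMMemory -VMName 'DC02' -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB
```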

Node1, the pride of the fleet and my main productivity machine, boasting 2×850 Pro SSDs in RAID 0, an AMD FirePro, and Tiered Storage Spaces

While I included it in the diagram, TVPC3 is not really a lab machine. It’s a cheap Ivy Bridge Pentium with 8GB of RAM and 3TB of local storage. Its sole function in life is to decrypt the HD stream it receives from the Silicon Dust tuner and display HGTV for my mother-in-law with as little friction as possible. Running Windows 8.1 with Media Center, it’s the only PC in the house without battery backup.

Physical Network
About 18 months ago, I poured gallons of sweat equity into cabling my house. I ran at least a dozen CAT-5e cables from the garage to my home office, bedrooms, living room and to some external parts of the house for video surveillance.
I don’t regret it in the least; nothing like having a reliable, physical backbone to connect up your home network/lab environment!

Meet my underlay

At the core of the physical network lies my venerable Cisco 2960S-48TS-L switch. Switch1 may be a humble access-layer switch, but in my lab, the 2960S bundles 17 ports into five port channels, serves as my default gateway, routes with some rudimentary Layer 3 functions ((Up to 16 static routes; no dynamic routing features are available)) and segments 9 VLANs and one port-security VLAN, a feature that’s akin to PVLAN.

Switch2 is a 10-port Cisco Small Business SG-300 running at Layer 3 and connected to Switch1 via a 2-port port-channel. I use a few ports on Switch2 for the TV and an IP cam.

On the edge is redzed.daisettalabs.net, the Zyxel USG-50, which I wrote about last month.

Connecting this kit up to the internet is my Motorola Surfboard router/modem/switch/AP, which I run in bridge mode. The great thing about this device and my cable service is that, for some reason, up to two LAN ports can be active at any given time. This means CableCo gives me two public DHCP addresses simultaneously. One of these goes into a WAN port on the Zyxel, and the other goes into a downed switchport.

Love Meraki’s RF Spectrum chart!

Lastly, there’s my Meraki MR-16, an access point a friend and Ubiquiti Networks fan gave me. Though it’s a bit underpowered for my tastes, I love this device. The MR-16 is trunked to switch1 and connects via an 802.3af power injector. I announce two SSIDs off the Meraki, both secured with WPA2 Personal ((WPA2 Enterprise is on the agenda this year)). Depending on which SSID you connect to, you’ll end up on the Device or VM VLANs.

Virtual Network

The virtual network was built entirely in System Center VMM 2012 R2. Nothing too fancy here, with multiple Gigabit adapters per physical host, one converged logical vSwitch and a separate NIC on each host fronting for the DMZ network:

Nodes 1, 2 & 4 are all Haswell, and are clustered. Node3 is standalone.

Thanks to VMM, building this out is largely a breeze, once you’ve settled on an architecture. I like to run the cmdlets to build the virtual & logical networks myself, but there’s also a great script available that will build a converged network for you.
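To give you the flavor of the by-hand approach, here’s a heavily condensed sketch; the names, subnet and VLAN ID are stand-ins, not my production values:

```powershell
# Logical network -> site/network definition -> VM network, the VMM way
$ln = New-SCLogicalNetwork -Name 'Lab-Converged'
$sv = New-SCSubnetVLan -Subnet '172.16.10.0/24' -VLanID 10
New-SCLogicalNetworkDefinition -Name 'Lab-Converged-Garage' `
    -LogicalNetwork $ln -VMHostGroup (Get-SCVMHostGroup) -SubnetVLan $sv
New-SCVMNetwork -Name 'VM Network' -LogicalNetwork $ln -IsolationType 'NoIsolation'
```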

A physical host typically looks like this (I say typically because I don’t have an equal number of adapters in all hosts):

I trust VLANs and VMM’s segmentation abilities, but chose to build what is in effect an air-gapped vSwitch for the DMZ/DIA networks

We’re already several levels deep in my personal abstraction cave, why stop here? Here’s the layout of VM Networks, which are distinguished from but related to logical networks in VMM:

[Diagram: VM Networks layout]

I get a lot of questions on this blog about jumbo frames and Hyper-V switching, and I just want to reiterate that it’s not that hard to do, and look, here’s proof:

[Screenshot: jumbo frames at work]
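If you want to replicate that at home, the recipe amounts to two steps: set the jumbo packet property on the adapters end-to-end, then prove it with a don’t-fragment ping. A sketch (the adapter name and target IP are stand-ins from my lab, and the registry keyword can vary by NIC driver):

```powershell
# Enable ~9K frames on the converged vNIC; 9014 = 9000 MTU + 14-byte
# Ethernet header, a common driver convention
Set-NetAdapterAdvancedProperty -Name 'vEthernet (ConvergedNet)' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014
# Prove it end-to-end: 8972 payload + 20 IP + 8 ICMP = 9000, no fragmenting
ping.exe 172.16.10.5 -f -l 8972
```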

Good stuff!

Storage

And last, but certainly most interestingly, we arrive at Daisetta Lab’s storage resources.

My lab journey began with storage testing, in particular ZFS via NexentaCore (Illumos), NAS4Free and Solaris 11. But that’s ancient history; since last summer, I’ve been all Windows, all the time in my lab, starting with SAN.Daisettalabs.net ((cf #StorageGlory : 30 Days on a Windows SAN)).

Now?

Well, I had so much fun -and importantly so few failures/pains- with Microsoft’s Tiered Storage Spaces that I’ve decided to deploy not one, or even two, but three Tiered Storage Spaces. Here’s the layout:

[table]
Server, #HDD, #SSD, StoragePool Capacity, StoragePool Free, #vDisks, Function
Core, 9, 6, 16.7TB, 12.7TB, 6 so far, SMB3/iSCSI target for entire lab
Node1, 2, 2, 2.05TB, 1.15TB, 2, SMB3 target for Hyper-V replication
Node4, 3, 1, 2.86TB, 1.97TB, 2, SMB3 target for Hyper-V replication
[/table]

I have to say, I continue to be very impressed with Tiered Storage Spaces. It’s super-flexible, the cmdlets are well-documented, and Microsoft is iterating on it rapidly. More on the performance of Tiered Storage Spaces in a subsequent post.
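For the curious, the basic recipe for one of these tiered spaces is only a handful of cmdlets. A condensed sketch, with pool/tier/disk names and sizes as examples rather than my exact build:

```powershell
# Pool the eligible disks, define SSD & HDD tiers, carve a tiered vDisk
$disks = Get-PhysicalDisk -CanPool $true
$ss    = Get-StorageSubSystem | Where-Object FriendlyName -like '*Storage Spaces*'
New-StoragePool -FriendlyName 'LabPool' -StorageSubSystemUniqueId $ss.UniqueId -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'HDDTier' -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName 'LabPool' -FriendlyName 'vDisk01' `
    -StorageTiers $ssd,$hdd -StorageTierSizes 200GB,2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```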

Thanks for reading!

Big Data for Server Guys: Azure OpsInsight Review

Maybe it’s just my IT scars that bias me, but when I hear a vendor push a “monitoring” solution, I visualize an IT guy sitting in front of his screen, passively watching his monitors & counters, essentially waiting for that green thing over there to turn red.

He’s waiting for failure, Godot-style.

That’s not a recipe for success in my view. I don’t wait upon failure to visit, I seek it out, kick its ass, and prevent it from ever establishing a beachhead in my infrastructure. The problem is that I, just like that IT Guy waiting around for failure, am human, and I’m prone to failure myself.

Enter machine learning, or Big Data for Server Guys, as I like to think of it.

Big Data for Server Guys is a bit like flow monitoring on your switch. The idea here is to actively flow all your server events into some sort of a collector, which crunches them, finds patterns, and surfaces the signal from the noise.

Big Data for Server Guys is all about letting the computer do what the computer’s good at doing (sifting data and finding patterns) and letting you do what you are good at doing: empowering your organization for tech success.

But we Windows guys have a lot of noise to deal with: Windows instruments just about everything imaginable in the Microsoft kingdom, and the Microsoft kingdom is vast.

So how do we borrow flow-monitoring techniques from the Cisco jockeys and apply them to Windows?

Splunk is one option, and it’s great: it’s agnostic and will hoover events from Windows, logs from your Cisco’s syslog, and can sift through your Apache/IIS logs too. It’s got a thriving community and loads of sexy, AJAX-licious dashboards, and you can issue powerful searches and queries that can help you find problems before problems find you.

It’s also pretty costly, and I’d argue not the best-in-class solution for Hoovering Windows infrastructure.

Fortunately, Microsoft’s been busy in the last few years. Microsoft shops have had SCOM, and MOM before that, but now there’s a new kid in town ((He’s been working out and looks nothing like the old kid, System Center Advisor)): Azure Operational Insights, and OpsInsight functions a lot like a good flow collector.

[Screenshot: Azure Operational Insights]

And I just put the finishing touches on my second Big Data for Server Guys/OpsInsight deployment. Here’s a mini-review:

The Good:

  • It watches your events and finds useful data, which saves you time: OpsInsight is like a giant Hoover in the sky, sucking up on average about 36MB/day of Windows events from my fleet of ~150 VMs in a VMware infrastructure. Getting data on this fleet via PowerShell is trivial, but building logic that gives insight into that data is not. OpsInsight is wonderful in this regard; it saves you from spending time in SSRS, Excel, or the Event Viewer haystack (whether via the MMC or get-event) looking for a nugget of truth.
  • It has a decent config recommendation engine: If you’re an IT Generalist/Converged IT Guy like me, you touch every element in your infrastructure stack, from the app on down to the storage array’s rotating rust. And that’s hard work because you can’t be an expert in everything. One great thing about OpsInsight is that it saves you from searching Bing/Google (at worst) or thumbing through your well-worn AD Cookbook (at best) and offers best-practice advice and KB articles in the same tab in your browser. Awesome!
    [Screenshot of Windows Event Viewer: thanks, OpsInsight, for keeping me out of this thing]
  • Query your data rather than surfing the fail tree: Querying your data is infinitely better than walking the Fail Tree that is the Windows Event Viewer looking for errors. OpsInsight has a powerful query engine that’s not difficult to learn or manipulate, and for me, that’s a huge win over the old-school method of Event Viewer Subscriptions.

  • Dashboards you can throw in front of an executive: I can’t overstate how great it is to have automagically configured dashboards via OpsInsight. As an IT Pro, the less time I spend in SSRS trying to build a pretty report, the better. OpsInsight delivers decent dashboards I’m proud to show off. SCOM 2012 R2’s dashboards are great, but SCOM’s fat client works better than its IIS pages. Though it’s Silverlight-powered, OpsInsight wins the award for friction-free dashboarding.
  • Flexible Architecture: Do you like SCOM? Well then OpsInsight is a natural fit for you. I really appreciate how the System Center team re-structured OpsInsight late last year: you can deploy it at the tail end of your SCOM build, or you can forego SCOM altogether and attach agents directly to your servers. The latter offers you speed in deployment; the former allows you to essentially proxy events from your fleet, through your Management Group, and thence onto Azure. I chose the former in both of my deployments: let OpsInsight gate through SCOM, and let both do what they are good at doing.
  • It’s secure: The architecture for OpsInsight is Azure, so if you’re comfortable doing work in Azure Storage blobs, you should be comfortable with this. That + encrypted uploads of events, SCOM data and other data means less friction with the security/compliance guy on your team.

The Bad:

  • It’s Silverlight, which makes me feel like I’m flowing my server events to Steve Ballmer: I’m sure this will be changed out at some point. I used to love Silverlight -and maybe there’s still room in my cold black heart for it- but it’s kind of an orphan media/web child at the moment.
  • There’s no app for iOS or Android…yet: I had to dig out my 2014 Lumia Icon just to try out the OpsInsight app for Windows Phone. It’s decent, and just what I’d like to see on my 2015 Droid Turbo. Alas, there is no app for Android or iOS yet, but it’s the #1 and #2 most requested feature at the OpsInsight feedback page (add your vote, I did!).
  • It’s only Windows at the moment: I love what Microsoft is doing with Big Data crunching: Machine Learning, Stream Analytics and OpsInsight. But while you can point just about any flow or data at AzureML or Stream Analytics, OpsInsight only accepts Windows, IIS, SQL, SharePoint, and Exchange. Which is great, don’t get me wrong, but limited. SCOM at least can monitor SNMP traps, interface with Unix/Linux and such, but that is not available in OpsInsight. However, it’s still in Preview, so I’ll be patient.
  • It’s really only Windows/IIS/SQL/Exchange at the moment: Sadface for the lack of Office 365/Azure intelligence packs for OpsInsight, but SCOM will do for now.
  • Pricing forecast is definitely…cloudy: Every link I find takes me to the general Azure pricing page. On the plus side, you can strip this bad boy down to the bare essentials if you have cost pressures.

The Ugly:

  • Where are my cmdlets? My interface of choice with the world of IT these days is the PowerShell ISE. But when I typed get-help *opsinsight, only errors resulted. How’d this get past Snover’s desk? All kidding aside, SCOM cmdlets work well enough if you deploy OpsInsight following SCOM, and I’m sure native ones are coming. I can wait.

All in all, this is shaping up to be a great service for your on-prem Windows infrastructure, which, let’s face it, is probably neglected.

System Center MVP Stanislav Zhelyazkov has a great 9-part deep dive on OpsInsight if you want to learn more.

Microsoft is the original & ultimate hyperconverged play

The In Tech We Trust Podcast has quickly become my favorite enterprise technology podcast since it debuted late last year. If you haven’t tuned in yet, I advise you to get the RSS feed into your podcast player of choice ASAP.

The five gents ((Nigel Poulton, Linux trainer at Pluralsight; Hans De Leenheer, datacenter/storage guy and one of my secret crushes; Gabe Chapman; Marc Farley; and Rick Vanover)) putting on the podcast are among the sharpest guys in infrastructure technology, have great on-air chemistry with each other, and consistently deliver an organized & smart format that hits my player on time as expected every week. Oh, and they’ve equalized the Skype audio feeds too!

And yet….I can’t let the analysis in the two most recent shows slip by without comment. Indeed, it’s time for some tough love for my favorite podcast.

Guys, you totally missed the mark discussing hyperconvergence & Microsoft over the last two shows!

For my readers who haven’t listened, here’s the compressed & deduped rundown of 50+ minutes of good stimulating conversation on hyperconvergence:

  • There’s little doubt in 2015 that hyperconverged infrastructure (HCI) is a durable & real thing in enterprise technology, one that’s changing the industry. HCI is not a fad, and it’s being adopted by customers.
  • But if HCI is real, it’s also different things to different people; for Hans, it’s about scale-out node-based architecture, for others on the show, it’s more or less the industry definition: unified compute & storage with automation & management APIs and a GUI framework over the top.
  • But that loose definition is also evolving, as Rick Vanover sharply pointed out: EMC’s new offering, vSpex Blue, offers something more than what we’d traditionally (like two weeks ago) think of as hyperconvergence.

Good stuff and good discussion.

And then the conversation turned to Microsoft. And it all went downhill. A summary of the guys’ views:

  • Microsoft doesn’t have a hyperconverged pony in the race, except perhaps Storage Spaces, which few like/adopt/bet on/understand
  • MS has ceded this battlefield to VMware
  • None of the cool & popular hyperconverged kids, save for Nutanix and Gridstore, want to play with Microsoft
  • Microsoft has totally blown this opportunity to remain relevant and Hyper-V is too hard. Marc Farley in particular emphasized how badly Microsoft has blown hyperconvergence.

I was, you might say, frustrated as I listened to this sentiment on the drive into my office today. My two cents below:

The appeal of Hyperconvergence is a two-sided coin. On the one side are all the familiar technical & operational benefits that are making it a successful and interesting part of the market.

  • It’s an appliance: Technical complexity and (hopefully) dysfunction are ironed out by the vendor so that storage/compute/network just work
  • It’s Easy: Simple to deploy, maintain, manage
  • It’s software-based and it’s evolving to offer more: As the guys on the show noted, newer HCI systems are offering more than ones released 6 months or a year ago.

The other side of that coin is less talked about, but no less powerful. HCI systems are rational cost centers, and the success of HCI marks a subtle but important shift in IT & in the market.

  • It’s a predictable check cut to fewer vendors: Hyperconvergence is also about vendor consolidation in IT shops that are under pressure to make costs predictable and smoother (not just lower).
  • It’s something other than best-of-breed: The success of HCI systems also suggests that IT shops may be shying away from best-of-breed purchasing habits and warming up to a more strategic one-throat-to-choke approach ((EMC & VMware, for instance, are titans in the industry, with best-in-class products in storage & virtualization, yet I can’t help but feel there’s more going on than the chattering classes realize. Step back and think of all the new stuff in vSphere 6, and couple it with all the old stuff that’s been rebranded as new in the last year or so by VMware. Of all that ‘stuff’, how much is best of breed, and how much of it is decent enough that a VMware customer can plausibly buy it and offset spend elsewhere?))
  • It’s some hybrid of all of the above: HCI in this scenario allows IT to have its cake and eat it too, maybe through vendor consolidation, or cost-offsets. Hard to gauge, but the effect is real I think.

((As Vanover noted, EMC’s value-adds on the vSpex Blue architecture are potentially huge: if you buy vSpex Blue architecture, you get backup & replication, which means you don’t have to talk to or cut yearly checks to Commvault, Symantec or Veeam. I’ve scored touchdowns using that exact same play, embracing less-than-best Microsoft products that do the same thing as best-in-class SAN licenses))

And that’s where Microsoft enters the picture as the original -and ultimate- Hyperconverged play.

Like any solid HCI offering, Microsoft makes your hardware less important by abstracting it, but where Microsoft is different is that they scope supported solutions to x86 at large. VMware, in contrast, only hands out EVO:RAIL stickers to hardware vendors who dress x86 up and call it an appliance, which is more or less the Barracuda Networks model. ((I’m sorry. I know that was a cheap shot, but I couldn’t resist))

With your vanilla, Plain Jane whitebox x86 hardware, you can then use Microsoft’s Hyperconverged software system (or what I think of as Windows Server) to virtualize & abstract all the things from network (solid NFV & evolving overlay/SDN controller) to compute to storage, which features tiering, fault-tolerance, scale-out and other features usually found in traditional SAN systems.

But it doesn’t stop there. That same software powers services in an enormous IaaS/PaaS cloud, which works hand-in-hand with a federated productivity cloud that handles identity, messaging, data-mining, mail and more. The IaaS cloud, by the way, offers DR capabilities today, and you can connect to it via routing & ipsec, or you can extend your datacenter’s layer 2 broadcast domain to it if you like.

On the management/automation side, I understand and sympathize with the ignorance of non-‘softies. Microsoft fans enthuse about PowerShell so much because it is -today- a unified management system across a big chunk of the MS stack, either masked by GUI systems like System Center & Azure Pack or exposed as naked cmdlets. PowerShell alone isn’t cool though, but PowerShell & Windows Server aligned with truly open management frameworks like CIM, SMI-S and WBEM is very cool, especially in contrast to feature-packed but closed APIs.
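To make that concrete, here’s the kind of thing I mean: one verb-noun pair riding the standards-based CIM/WS-Man stack rather than some proprietary agent (the node name is from my lab):

```powershell
# Query a remote host over CIM/WBEM, no vendor agent required
Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName 'node2' |
    Select-Object CSName, LastBootUpTime, FreePhysicalMemory
```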

On the cost side, there’s even more to the MS hyperconverged story: customers can buy what is in effect a single SKU (the Enterprise Agreement) and get access to most if not all of the MS stack.

Usually, organizations pay for the EA in small, easier-to-digest bites over a three-year span, which the CFO likes because it’s predictable & smooth. ((Now, of course, I’m drastically simplifying Microsoft’s licensing regime and the process of buying an EA, as you can’t add an EA to your cart & checkout; it’s a friggin negotiation. And yes, I know everyone hates the true-up. And I grant that an EA just answers the software piece; organizations will still need the hardware, but I’d argue that de-coupling software from hardware makes purchasing the latter much, much easier, and how much hardware do you really need if you have Azure IaaS to fill in the gaps?))

Are all these Microsoft things you’ve bought best of breed? No, of course not. But you knew that ahead of time, if you did your homework.

Are they good enough in a lot of circumstances?

I’ll let you judge that for yourself, but, speaking from experience here, IT shops that go down the MS/EA route strategically do end up in the same magical, end-of-the-rainbow fairy-tale place that buyers of HCI systems are seeking.

That place is pretty great, let me tell you. It’s a place where the spend & costs are more predictable and bigger checks are cut to fewer vendors. It’s a place where there are fewer debutante hardware systems fighting each other and demanding special attention & annual maintenance/support renewals in the datacenter. It’s also a place where you can manage things by learning verb-noun pairs in PowerShell.

If that’s not the ultimate form of hyperconvergence, what is?

Containers! For Windows! Courtesy of Docker

[Diagram: Docker with Windows Server and Linux]

Big news yesterday for fans of agnostic cloud/on-prem computing.

Docker -the application virtualization stack that’s caught on like wildfire among the *nix set- is coming to Windows.

Yeah baby.

Mary Jo with the details:

Under the terms of the agreement announced today, the Docker Engine open source runtime for building, running and orchestrating containers will work with the next version of Windows Server. The Docker Engine for Windows Server will be developed as a Docker open source project, with Microsoft participating as an active community member. Docker Engine images for Windows Server will be available in the Docker Hub. The Docker Hub will also be integrated directly into Azure so that it is accessible through the Azure Management Portal and Azure Gallery. Microsoft also will be contributing to Docker’s open orchestration application programming interfaces (APIs).

When I first heard the news, my emotions were mixed.

On the one hand, I love it. Virtualization of all flavors -OS, storage, network, and application- is where I want to be, as a blogger, at home in my lab, and professionally.

Yet, as a Windows guy (I dabble, of course), Docker was just a bit out of reach for me, even with my lab, which is 100% Windows.

On the other hand, I also remembered how dreadful it used to be to run Linux applications on Windows. Installing GTK+ Libraries on Windows isn’t fun, and the end-result often isn’t very attractive. In my world, keeping the two separate on the application & OS side/uniting them via Kerberos and/or https/rest has always been my preference.

But that’s old world thinking, ladies and gentlemen.

Because you see, this announcement from Microsoft & Docker Inc sounds deep, rich, and functional. Microsoft’s going to contribute some of its Server code to the Docker folks, and the Docker crew will help build container tech into Windows Server and Azure. I’m hopeful Docker will just be another role in Server, and that Jeffrey Snover’s PowerShell cmdlets will hook deep into the Docker stuff.

This probably marks the death of App-V, which I wrote about in comparison to Docker just last month, but that’s fine with me.

Docker on Windows marks a giant step forward for Agnostic Computing…do we dare imagine a future in which our application stacks are portable? Today I’m running an application in a Docker Container on Azure, and tomorrow I move it to AWS?

Microsoft says that’s exactly the vision:

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Full announcement.

Tales from the Hot Lane

A few brief updates & random thoughts from the last few days on all the stuff I’ve been working on.

Refreshing the Core at work: Summer’s ending, but at work, a new season is advancing, one rack unit at a time. I am gradually racking up & configuring new compute, storage, and network as it arrives; It Is Not About the Hardware™, but since you were wondering: 64 Ivy Bridge cores and about 512GB RAM, 30TB of storage, and Nexus 3k switching.

Ahh, the Nexus line. I’d never had the privilege of working on such fine switching infrastructure. Long-time admirer, first-time NX-OS user. I have a pair of them plus a Layer 3 license, so the long-term thinking involves not just connecting my compute to my storage, but connecting this dense stack northbound & out via OSPF or static routes over a fault-tolerant HSRP or VRRP config.

To do that, I need to get familiar with some Nexus-flavored acronyms that aren’t familiar to me: virtual port channels (vPC), Control Plane Policing (CoPP), VRF, and oh-so-many-more. I’ll also be attempting to answer the question once and for all: what spanning tree mode does one use to connect a Nexus switch to a virtualization host running Hyper-V’s converged switching architecture? I’ve used portfast in the lab on my Catalyst, but the lab switch is five years old, whereas this Nexus is brand new. And portfast never struck me as the right answer, just the easy one.

To answer those questions and more, I have TAC and this excellent tome provided gratis by the awesome VAR who sold us much of the equipment.

Into the vCPU Blender goes Lync: Last Friday, I got a call from my former boss & friend who now heads up a fast-growing IT department on the coast. He’s been busy refreshing & rationalizing much of his infrastructure as well, but as is typical for him, he wants more. He wants total IT transformation, so as he’s built out his infrastructure, he laid the groundwork to go 100% Microsoft Lync 2013 for voice.

Yeah baby. Lync 2013 as your PBX, delivering dial tone to your endpoints, whether they are Bluetooth-connected PC headsets, desk phones, or apps on a mobile.

Forget software-defined networking. This is software-defined voice & video, with no special server hardware, cloud services, or any of the other typical expensive nonsense you’d see in a VoIP implementation.

If Lync 2013 as PBX is not on your IT Bucket List, it should be. It was something my former boss & I never managed to accomplish at our previous employer on Hyper-V.

Now he was doing it alone. On a fast VMware/Nexus/NetApp stack with distributed vSwitches. And he wanted to run something by me.

So you can imagine how pleased I was to have a chat with him about it.

He was facing one problem which threatened his Go Live date: Mean Opinion Score, or MOS, a simple 0-5 score Lync provides to its administrators that summarizes call quality. MOS is a subset of a hugely detailed Media Quality Summary Report, detailed here at TechNet.

My friend was scoring a 0.6 on his MOS. He wanted it at 4 or above prior to go-live.

So at first we suspected QoS tags were being stripped somewhere between his endpoint device and the Lync Mediation VM. Sure enough, Wireshark proved that out; a Distributed vSwitch (or was it a Nexus?) wasn’t respecting the tag, resulting in a sort of half-duplex QoS if you will.

He fixed that, ran the test again, and still: .6. Yikes! Two days to go live. He called again.

That’s when I remembered the last time we tried to tackle this together. You see, the Lync Mediation Server is sort of the real PBX component in a Lync Enterprise Voice architecture. It handles signalling to your endpoints, interfaces with the PSTN or a SIP trunk, and is the one server workload that, even in 2014, I’d hesitate to make virtual.

My boss had three of them. All VMs on three different VMware hosts across two sites.

I dug up a Microsoft whitepaper on virtualizing Lync, something we didn’t have the last time we tried this. While Redmond says Lync Enterprise Voice on top of VMs can work, it’s damned expensive from a virtualization host perspective. MS advises:

  • You should disable hyperthreading on all hosts.
  • Do not use processor oversubscription; maintain a 1:1 ratio of virtual CPU to physical CPU.
  • Make sure your host servers support nested page tables (NPT) and extended page tables (EPT).
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.

Talk about Harshing your vBuzz. Essentially, building Lync out virtually with Enterprise Voice forces you to go sparse on your hosts, which is akin to buying physical servers for Lync. If you don’t, into the vCPU blender goes Lync, and out comes poor voice quality, angry users, bitterness, regret and self-punishment.
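For the Hyper-V flavor of those reservations (his stack is VMware; this is roughly what I’d run on mine, with an illustrative VM name):

```powershell
# 1:1 vCPU:pCPU for the Mediation server, no sharing with noisy neighbors
Set-VMProcessor -VMName 'LyncMed01' -Count 4 -Reserve 100
# Keep guests NUMA-local, per the whitepaper's guidance
Set-VMHost -NumaSpanningEnabled $false
```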

Anyway, he did as advised, put some additional vCPU & memory reservations in place on his hosts, and yesterday, whilst I was toiling in the Hot Lane, he called me from Lync via his mobile.

He’s a married man just like me, but I must say his voice sounded damn sexy as it was sliced up into packets, sent over the wire, and converted back to analog on my mobile’s speaker. A virtual chest bump over the phone was next, then we said goodbye.

Another Go Live Victory (by proxy). Sweet.

Azure Outage: Yesterday’s bruising, hours-long global Azure outage affected Virtual Machines, storage blobs, web services, database services and HD Insight, Microsoft’s service for big data crunching. As it unfolded, I navel-gazed; I felt like helping, but there was literally nothing I could do. Had I some crucial IaaS or PaaS in the Azure stack, I’d be shit out of luck, just like the rest. I felt quite helpless; refreshing Mary Jo’s page and the Azure dashboard didn’t help. I wondered what the problem was; it’s been a difficult week for Microsofties, whether on-prem or in Azure. Had to be related to the update cycle, I thought.

On the plus side, Azure Active Directory services never went down, nor did several other services. Office 365 stayed up as well, though it is built atop separate-but-related infrastructure in my understanding.

Lastly, I pondered two thoughts: if you’re thinking of reducing your OpEx by replacing your DR strategy with an Azure Site Recovery strategy, does this change your mind? And if you’re building out Azure as your primary IaaS or PaaS, do you just accept such outages or do you plan a failback strategy?

Labworks : Towards a 100% Windows-defined Daisetta Lab: What’s next for the Daisetta Lab? Well, I have me an AMD Duron CPU, a suitable motherboard, a 1U enclosure with PSU, and three Keepin’ it RealTek NICs. Oh, I also have a case of the envies, envies for the VMware crowd and their VXLAN and NSX and of course VMworld next week. So I’m thinking of building a Network Virtualization Gateway appliance. For those keeping score at home, that would mean from Storage to Compute to Network Edge, I’d have a 100% Windows lab environment, infused with NVGRE which has more use cases than just multi-tenancy as I had thought.

Cloud Praxis lifehax: Disrupting the consumer cloud with my #Office365 E1 sub

E1, just like its big brothers E3 & E4, gives you real Microsoft Exchange 2013, just like the one at work. There’s all sorts of great things you can do with your own Exchange instance:

  • Practice your PowerShell remoting skills (a sketch of the standard dance follows this list)
  • Get familiar with how Office 365 measures and applies storage settings among the different products
  • Run some decent reporting against device, browser and fat client usage
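That remoting practice, by the way, starts with the classic circa-2015 Exchange Online session dance, which looks like this:

```powershell
# Build an implicit remoting session against your E1 tenant
$cred    = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session
Get-Mailbox    # the Exchange cmdlets now run against your tenant
```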

But the greatest of these is Exchange public-facing, closed-membership distribution groups.

Whazzat, you ask?

Well, it’s a distribution group. With you in it. And it’s public-facing, meaning you can create your own SMTP addresses that others can send to. And then you can create Exchange-based rules that drop those emails into a folder, delete them after a certain time, run scripts against them, all sorts of cool stuff before the mail ever hits your device or Outlook.
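Roughly, one of those groups gets built like so once you’re remoted in; the names are examples, and flipping RequireSenderAuthenticationEnabled to $false is the part that makes it public-facing:

```powershell
# Create the group, open it to external senders, and put yourself in it
New-DistributionGroup -Name 'general' -PrimarySmtpAddress 'general@mydomain.com'
Set-DistributionGroup -Identity 'general' -RequireSenderAuthenticationEnabled $false
Add-DistributionGroupMember -Identity 'general' -Member 'me@mydomain.com'
```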

All this for my Enterprise of One, Daisetta Labs.net. For $8/month.

You might think it’s overkill to have a mighty Exchange instance for yourself, but your ability to create a public-facing distribution group is a killer app that can help you rationalize some of your cloud hassles at home and take charge & ownership of your email, which I argue, is akin to your birth certificate in the online services world.

My public facing distribution groups, por ejemplo:

[Screenshot: my public-facing distribution groups]

There are others, like career@, blog@ and such.

The only free service that offers something akin to this powerful feature is Microsoft’s own Outlook.com. If the prefix you want is available @outlook.com, you can create aliases that are public-facing and use them in a similar way as I do.

But that’s a big if. @outlook.com names must be running low.

Another, perhaps even better use of these public-facing distribution groups: exploiting cloud offerings that aren’t dependent on a native email service like Gmail. You can use your public-facing distribution groups to register and rationalize the family cluster’s cloud stack!


It doesn’t solve everything, true, but it goes a long way. In my case, the problem was a tough one to crack. You see, ever since the child partition emerged out of dev, into the hands of a skilled QA technician, and thence, under extreme protest, into production, I’ve struggled to capture, save & properly preserve the amazing pictures & videos stored on the Supervisor Module’s iPhone 5.

Until recently, the Supe had the best camera phone in the cluster (my Lumia Icon outclasses it now). She, of course, uses Gmail, so her pics are backed up in G+, but 1) I can’t access or view them, 2) they’re downsized in the upload, and 3) AutoAwesome’s gone from cool & nifty to a bit creepy, while iCloud’s a joke (though they smartly announced family sharing yesterday, I understand).

She has the same problems accessing the pictures I take of Child Partition on the Icon. She wants them all, and I don’t share much to the social media sites.

And neither one of us wants to switch email providers.

So….

Consumer OneDrive via a Microsoft account registered with general@mydomain.com, with MFA. Checks all the boxes. I even got 100GB just for using Bing for a month.

Available on iPhone, Windows phone, desktop, etc? Check.

Easy to use, beautifully designed even? Check.

Can use a public-facing distribution group SMTP address for account creation? Check.

All tied into my E1 Exchange instance!

It works so well I’m using general@mydomain.com to sync Windows 8.1 between home, work & in the lab. Only thing left is to convince the Supe to use OneNote rather than Evernote.

I do the same thing with Amazon (caveat_emptor@), finance stuff, Pandora (general@), some Apple-focused accounts, basically anything that doesn’t require a native email account, I’ll re-register with an O365 public-facing distribution group.

Then I share the account credentials among the cluster, and put the service on the cluster’s devices. Now the Supe’s iPhone 5 uploads to OneDrive, which all of us can access.

So yeah. E1 & public-facing distribution groups can help soothe your personal cloud woes at home, while giving you the tools & exposure to Office 365 for #InfrastructureGlory at work.

Good stuff!