Cloud Praxis #4 : Syncing our Dir to Office 365

praxis4dirsync
The Apollo-Soyuz metaphor is too rich to resist. With apologies to NASA, astronauts & cosmonauts everywhere

Right. So if you’ve been following me through Cloud Praxis #1-3 and took my advice, you now have a simple Active Directory lab on your premises (wherever that may be), and perhaps you did the right thing and purchased a domain name, then bought an Office 365 Enterprise E1 subscription for yourself. Because reading about contoso.com isn’t enough.

What am I talking about, “if”? I know you did just what I recommended you do. I know because you’re with me here, working through the Cloud Praxis Program because you, like me, are an IT Infrastructurist who likes to win! You are a fellow seeker of #InfrastructureGlory, and you will pursue that ideal wherever it is: on-prem, hybrid, in the cloud, buried in a signed cmdlet, on your hybrid iSCSI array or deep inside an NVGRE-encapsulated packet, somewhere up in the Overlay.

Right. Right?

Someone tell me I’m not alone here.

You get there through this thing.

So DirSync. Or Directory Synchronization. In the grand Microsoft tradition of product names, DirSync has about the least sexy name possible. Imagine yourself as a poor Microsoft technology reseller; you’ve just done the elevator pitch for the Glories that are to be had in Office 365 Enterprise & Azure, and your mark is interested and so he asks:

Mark: “How do I get there?”

Sales guy: “DirSync”

Mark: “Pardon me?”

Sales Guy: “DirSync.”

Mark: “Are you ok? Your voice is spasming or something. Is there someone I can call?”

DirSync has been around for a long, long time. I hadn’t even heard of it or considered the possibility of using it until 2012 or 2013, but while prepping the Daisetta Lab, I realized this goes back to 2008 & Microsoft Online Services.

But today, in 2014, it’s officially called Windows Azure Active Directory Sync, and though I can’t wait to GifCam you some cool PowerShell cmdlets that show it in action, we’ve got some prep work to do first.

Lab Prep for DirSync

As I said in Cloud Praxis #3, to really simulate your workplace, I recommend you build your on-prem lab AD with a fully-routable domain name, then purchase that same name from a registrar on the internet. I said in Cloud Praxis #2 that you should have a lab computer with 16GB of RAM, and that you should expect to build at least two or three VMs using Client Hyper-V at the minimum.

Now’s the time to firm this all up and prep our lab. I know you’re itching to get deep into some O365, but hang on and do your due diligence, just like you would at work.

  • Lab DHCP: What do you have as your DHCP server? If it’s a consumer-level wifi router that won’t let you assign an FQDN to your devices, consider ditching it for DHCP and standing up a DHCP instance on your Lab Domain Controller. Your wife will never know the difference, and you can ensure 1) that your VMs (whether one, two, or several) get the proper FQDN suffix assigned, and 2) that NetBIOS is disabled via MS DHCP
  • Get your on-prem DNS in order: This is the time to really focus on your lab DNS. I want you to test everything: make some A records, ensure your PTRs are created automatically, create some CNAMEs and test forwarding. Download a tool like Steve Gibson’s DNS Benchmark to see which public name servers are the closest to you and answer the quickest. For me, it’s Level 3. Set your forwarders appropriately. Enable logging & automatic testing. (See the PowerShell sketch after this list.)
  • Build a second DC: Not strictly required, but best practice & wisdom dictate you do this ahead of DirSync. Do what I did: go with a Server Core VM for your second DC. That VM will only need 768MB of RAM or so, and a 15GB .vhdx. But with it, you will have a healthier domain on-prem
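Here’s that sketch: a few DnsServer-module one-liners (Server 2012+; the zone, host, and forwarder values are examples from my lab, not gospel) that script the record-making and forwarder work above:

```powershell
# Assumes the DnsServer module on a 2012+ lab DC; zone & IPs are examples
Import-Module DnsServer

# An A record (with auto-created PTR) and a CNAME pointing at it
Add-DnsServerResourceRecordA -ZoneName 'daisettalabs.net' -Name 'testbox' -IPv4Address '192.168.1.50' -CreatePtr
Add-DnsServerResourceRecordCName -ZoneName 'daisettalabs.net' -Name 'testalias' -HostNameAlias 'testbox.daisettalabs.net'

# Verify forward, reverse, and alias lookups all behave
Resolve-DnsName 'testbox.daisettalabs.net'
Resolve-DnsName '192.168.1.50'
Resolve-DnsName 'testalias.daisettalabs.net'

# Set forwarders to whatever DNS Benchmark says is fastest for you (Level 3, in my case)
Set-DnsServerForwarder -IPAddress '4.2.2.1','4.2.2.2'
```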

Now over to the O365 Enterprise portal. Read the official O365 Induction Process as I did, then take a look at the steps/suggestions below. I went through this in April; it’s easy, but the official guides leave out some color.

Office 365 Prep & Domain Port ahead of DirSync

  • Go to your registrar and verify to Microsoft that you own the domain via a TXT record: Process here
  • Pick from the following options for DNS and read this:
    • Easy but not realistic: Just hand over DNS to O365. I took the easy way, admittedly. Daisetta Labs.net DNS is hosted by O365. It’s decent as DNS hosting goes, but I wouldn’t have chosen this option for my workplace, as I use an Anycast DNS service that has fast CDN propagation globally
    • More realistic: Create the required A records, CNAMEs, TXT and SRV records at your registrar or DNS host and point them where Microsoft says to point them
    • Balls of Steel Option: Put your Lab VM in your DMZ, harden it up, point the registrar at it and host your own DNS via Windows baby. Probably not advisable from a residential internet connection.
  • Keep your .onmicrosoft.com account for a week or two: Whether you’re starting out in O365 at work or just learning the system like I did, you’ll need your first O365 account for a few days, as the domain porting process takes 24-36 hours. Don’t assign your E1 licenses to your @domain.com account just yet.
  • I wouldn’t engage MFA just yet…let things settle before you turn on multi-factor authentication. Also be sure your backup email account (the oh-shit account Microsoft wants you to use that’s not associated with O365) is accessible and secure.
  • Fresh start cause I couldn’t build out an Exchange lab :sadface:

    If you are simulating Exchange on-prem to hybrid for this exercise, you’ll have more steps than I did. Sadly, I had to take the easy way out and select “Fresh Start” in the process.

  • Proceed with the standard O365 wizard setups, but halt at OnRamp: I’m happy to see the Wizard configuration method is surviving in the cloud. Setting all this up won’t take long; the whole portal is pretty easy & obvious until you get to the SharePoint stuff.
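Most of that list is portal click-work, but you can sanity-check the domain port itself from PowerShell via the era-appropriate Windows Azure AD (MSOnline) module. A quick sketch:

```powershell
# Requires the Windows Azure AD Module for PowerShell (MSOnline)
Import-Module MSOnline
Connect-MsolService     # authenticate with your .onmicrosoft.com admin account

# After the TXT-record dance, Status should read 'Verified' for your vanity domain
Get-MsolDomain
```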

Total work here is a couple of hours. I can’t stress enough how important your lab DNS & AD health are. You need to be rock solid in replication between your DCs, your DNS should be fast & reliably return accurate results, and you should have a good handle on your lab replication topology, a proper Sites & Services setup, and a dialed-in Group Policy and OU structure.

Daisetta Labs.net looks like this:

daisettalabsad

 

and dcdiag /e & repadmin show no errors.
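If you want the same assurance from your own lab, both tools are in-box on a DC; here’s how I run them:

```powershell
# Run from an elevated prompt on a lab DC
dcdiag /e /q                  # test every DC in the enterprise, print errors only
repadmin /replsummary         # one-screen replication health summary
repadmin /showrepl * /csv | ConvertFrom-Csv | Out-GridView   # per-link detail, sortable
```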

Final Steps before DirSync Blastoff

  • With a healthy domain on-prem, you now need to create some A records, CNAMEs and TXT records so Lync, Outlook, and all your other fat clients dependent on Exchange, SharePoint and such know where to go. This is quite important; at work, you’ll run into this exact same situation. Getting this right is why we chose to use a routable domain; it’s a big chunk of the reason why we’re doing this whole Cloud Praxis thing in the first place. It’s so our users have an enjoyable and hassle-free transition to O365
  • Follow the directions here. Not as hard as it sounds. For me it went very smoothly. In fact, the O365 Enterprise portal gives you everything you need in the Domain panel, provided you’ve waited about 36 hours after porting your domain. Here’s what mine looks like on-prem after manually creating the records.

dns
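If you’d rather script those on-prem records than click through the DNS console, here’s a hedged sketch. The targets below are the generic O365 endpoints of this era; trust the values your own O365 Domain panel hands you over anything I type here:

```powershell
# Example only: O365 client-discovery records in on-prem Windows DNS.
# Zone & targets are illustrative; copy the authoritative values from your O365 Domain panel.
$zone = 'daisettalabs.net'
Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'autodiscover' -HostNameAlias 'autodiscover.outlook.com'
Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'lyncdiscover' -HostNameAlias 'webdir.online.lync.com'
Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'sip' -HostNameAlias 'sipdir.online.lync.com'

# SRV records for Lync sign-in & federation
Add-DnsServerResourceRecord -ZoneName $zone -Srv -Name '_sip._tls' -DomainName 'sipdir.online.lync.com' -Priority 100 -Weight 1 -Port 443
Add-DnsServerResourceRecord -ZoneName $zone -Srv -Name '_sipfederationtls._tcp' -DomainName 'sipfed.online.lync.com' -Priority 100 -Weight 1 -Port 5061
```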

And that’s it. We’re ready to sync our Dirs to O365’s Dirs, to get a little closer to #InfrastructureGlory. On one side: your on-prem AD stack, on the launch pad, in your lab, ready for liftoff.

Sure, it’s a little hare-brained, admittedly, but if you’re like me, this is how you learn. And I’m learning. Aren’t you?

On the other launch pad, Office 365. Superbly architected by some Microsoft engineers, no longer joke-worthy like it was in the BPOS days, a place your infrastructure is heading to whether you like it or not.

I want you to be there ahead of all the other guys, and that’s what Cloud Praxis is all about: staying sharp on this cloud stack so we can keep our jobs and find #InfrastructureGlory.

DirSync is the first step here, and I’ll show it to you in the next Cloud Praxis. Thanks for reading!

Cloud Praxis #3 : Office 365, Email and the best value in tech

Email.

What’s the first thought that comes to your mind when you read that word?

Exchange_2013-logo
I seek #ExchangeGlory !! said no IT blogger ever

If you’re in IT in the Microsoft space, maybe you think of huge mailbox stores, Exchange, Outlook, legal discovery requirements, spam headaches and the pressure & demand that stack places on your infrastructure. Terabytes and terabytes of the stuff, going back years. All up in your stack, DAG on your spindles, CAS on your edge, all load balanced at Layer 4/7 behind a physical or virtual device & wrapped up in a nice legitimate, widely-recognized CA-issued SSL cert. The stuff is everywhere.

I almost forgot. You have to back all that stuff up too. To tape in my case.

Oh, and perhaps you also recall the cold chills & instant sense of dread & fear you’ve felt just about every time an end user has asked (sometimes via email no less) “Is our email down?” I know the feeling.

Like a lot of Microsoft IT pros, I have my share of email war stories. I think email is one of those things in technology that lends itself to a sort of dualism, a sort of Devil on this shoulder, Angel on that shoulder. You can’t say something positive about email without adding a “but…” at the end, and that’s ok. Cognitive dissonance is allowed here; you can believe contrary ideas about email at the same time.

I know I do:

[table]

I love Email because, I hate email because

SMTP is the last great agnostic open communication protocol, SMTP is too open and prone to abuse

Email is democratic and foundational to the internet, Email is fundamentally broken

Email will be around in some form forever, There’s no Tread Left on this Tire

Email is your online identity, Messaging applications are all the rage and so much richer

It’s how businesses communicate and thrive, One man’s business communication is another man’s spam

It’s always there, It goes down sometimes

Spam fighters and blacklists, Spam fighters and blacklists

It justifies Infrastructure Spend, It uses so much of my stack

Exchange is awesome and flexible, I broke Exchange once and fear it

[/table]

Whatever your thoughts on email are, one thing is clear: for Microsoft Infrastructure guys pondering the Microsoft cloud, the path to #InfrastructureGlory clearly travels through Exchange Country. In fact, it’s the first step we’re supposed to take via Office 365.

daisettalabs large logo

I don’t know about you, but I worry about the bandits in Exchange Country. Bandits that may break mail flow, or allow the tidal wave of spam in, prompt my users excessively for passwords, engage in various SSL hijinks, or otherwise change any of the finely-tuned ingredients in the delicate recipe that is my Exchange 2010 stack.

And yet, I bet if you polled Microsoft IT guys like me, you would find that of all the things they want to stick up in the Microsoft Cloud, Exchange & the email stack is probably at the top of the list. Just take it off our plate, Microsoft. Exchange and email are in a sort of weird place in IT: it’s mission-critical and extremely important to have a durable Exchange infrastructure, yet raise your hand if you think Exchange Administration/Engineering is a good career path to take in 2014.

Didn’t think so.

So how do we get there?

I don’t have all the answers yet, but I at least have a good picture of the project, some hands-on experience, and some optimism, all of which means I’m one step closer to #InfrastructureGlory in the cloud.

Hard to build a realistic Exchange Lab 

First of all, recognize this: while it’s easy to build out a lab infrastructure (Cloud Praxis #2) for Active Directory, it’s quite another thing to build out an Exchange lab, as I found out. You can’t do SMTP from home anymore (the spammers ruined that), which means you need resources at work, which might or might not be available. They aren’t in my case, so I struggled for a while.

Maybe you have some resources at work (a few extra public IPs, a walled-off virtual network, some storage) with which you can build out an Exchange lab. If so, evaluate whether that’s going to benefit you and your organization. It might be a black hole of wasted time; it might pay off in a huge way as you wargame your way from on-prem to hybrid, then to cloud, and finally #InfrastructureGlory.

Office 365 Praxis with the E1 Plan

For me and Daisetta Labs.net, I decided I couldn’t adequately simulate my workplace Exchange. So I did the next best thing.

I bought an Office 365 Enterprise E1 subscription.

That’s right baby. Daisetta Labs.net is on the O365 Enterprise E1 plan. It’s an Enterprise of 1 (me!) but an Enterprise-scaled O365 account nonetheless.

And it’s fantastically cheap & easy to do, less than $100 a year for all this:

o365e1

For that measly amount, you can be an Enterprise of one in O365 and get all this:

  • A real Office 365 Enterprise account with Exchange 2013 and all of its incredibly rich features & options, including PowerShell remoting, which you’ll need in your real O365 migration (see the remoting sketch after this list)
  • That’s private email too...no ad bots gathering data against your profile. Up to you, but I moved my personal stack to O365 (more on that later)
  • Lync 2013. Forget Skype and all the other messengers. You get Lync service! Which interfaces with Skype and many others and makes you look like a real pro. Also useful if you have on-prem Lync, though I’m sad to report to you that, as of this month, Lync 2013 in O365 can’t kill your PBX off…yet.
  • SharePoint & OneDrive for Business: I’ll admit it, I’ve done my fair share of SharePoint hating, but IT Infrastructurists need to realize SharePoint is the gateway drug to many things businesses are interested in, like Business Intelligence & SQL, data visualizations and more. Besides, SharePoint 2013 is not your daddy’s SharePoint; it can do some neat stuff (not that I can show you, yet).
  • OneDrive for Business, again: If you’re in a Microsoft shop that’s still mostly on-prem, you probably experience Dropbox creep, where your users share documents via Dropbox or other personal online storage solutions. With E1, you can get familiar with OneDrive for Business within the context of SharePoint & O365 management, DirSync, and all the rest.
  • One Terabyte of OneDrive for Business Storage. Outstanding. This was a recent announcement. It tickles me to think that my data is being deduped by a Windows storage spaces VM somewhere, just like I do on my storage at work.
  • Office Online: full-on WAC server baby, with Excel in your Chrome or IE browser. Better, and better looking, than Google Docs.
  • With this plan, you can really test out Office for the iPad. You’ll get read and write to your O365 documents via an iPad, which can help you at work with that one C-level who loves his iPad as much as he loves Excel.
  • DirSync: The very directory synchronization tool you have stressed over at work is available to you with this simple, cheap E1 subscription. And it’s working. I’ve done it. Daisetta Labs.net is DirSynced to O365 from my home lab and I have SSO between my on-prem AD & Office 365. I had deliberately kept my passwords separate between the two, but now they are in sync.
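Here’s the remoting sketch promised in the first bullet: attaching a PowerShell console to your E1 tenant’s Exchange Online, circa-2014 style (the credential and tenant are yours, obviously):

```powershell
# Remote PowerShell into Exchange Online; nothing to install beyond WinRM basics
$cred = Get-Credential      # your O365 admin account
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

Get-Mailbox                 # the same cmdlets you'd run against on-prem Exchange
Remove-PSSession $session
```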

Any way you cut it, O365 E1 is an amazingly affordable and very effective way to confront your cloud angst and get comfortable with Office 365. Even if you can’t fully simulate your workplace Exchange stack, you should consider doing this; you will use these same tools (particularly PowerShell remoting, the wizards in O365 & DirSync) at some point; best to get familiar with them now.

I could have hosted my Daisetta Labs.net domain anywhere; but I have zero regrets putting it in O365 on the E1 plan and committing for 12 months. If you’re an IT pro like me trying to get your infrastructure to the Microsoft cloud, you’d be well-served by doing the same thing I did. You may even want to ditch your personal email account and just go full Office 365…to eat the same dog food we’re going to serve to our users soon.

More to come on this tomorrow; suffice it to say, DaisettaLabs.net is DirSyncing as I write this. I’ll have screenshots, wizard processes and more to show.
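A peek ahead at that: once the DirSync tool is installed on its VM, forcing a sync cycle instead of waiting out the default three-hour timer is a two-liner. A sketch from the DirSync-era module:

```powershell
# On the DirSync server
Import-Module DirSync
Start-OnlineCoexistenceSync    # kicks off a manual sync cycle
```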

Cloud Praxis #2 : Let’s build some on-prem Infrastructure

daisettalabs large logo

If you haven’t seen the opening post in this series, I encourage you to read it. The TL;DR version is this: I’m an IT guy in Infrastructure supporting the Microsoft stack, specializing mostly in virtualization & storage. I’ve been in IT for almost 15 years now and I love what I do, but what I do is being disrupted out of existence by the Microsoft cloud.

Cloud Praxis is my response. It’s a response addressed to my situation and to other IT guys like me. It’s a response & a method that’s repeatable, something you can practice, hone, and master. That’s how I learn: hands-on experimentation. Like the man Cicero said at the top of the mission patch above, “Constant practice devoted to one subject often outdoes both intelligence and skill.”

Above all, Cloud Praxis is a recognition that 1) the Microsoft cloud is real & it’s here to stay, 2) my skills are entirely based in the old on-prem model, 3) I had better adapt to the new regime, lest I find myself irrelevant, and 4) it’s urgent that I tackle this weakness in my portfolio; I can’t wait on my workplace to adopt the cloud, I need some puffy cloud stuff in my CV post-haste, not next year or in two years.

This is how I did it.

It may not be the Technet way, or the only way, but it was my way. And I’m sharing it with you because maybe you’re like me; a mid-career IT generalist with a child partition at home, perhaps a little nervous about this cloud thing yet determined to stay competitive, employable and sharp. Or maybe you are just a fellow seeker of #InfrastructureGlory.

If that’s the case, join me; I’ll walk you through the steps I took to get a handle on this thing.

Oh, it’s also a lot of fun. Join me!

[table caption=”PRAXIS #2 : BUILD THEE SOME INFRASTRUCTURE – Infrastructure Requirements”]

Item Type,Suggested Config, Cost,License?,Notes

Compute/Storage,A PC with at least 16GB RAM & ethernet, Depends, No, Needs to be virtualization capable

Compute/Hypervisor,Windows 8.1 Pro or Ent, $200 or free 90 day eval, Yes, 2012 R2 eval works too

VM OS,Windows 2008 R2-2012R2 Standard, $0 with 90 day eval, Yes, Timer starts the day you install

Network,Always-on high speed internet at home/work,$-,No,Obviously

[/table]

The very first step on our path to #InfrastructureGlory in the Microsoft Cloud is this: we need to build ourselves some on-prem infrastructure of sufficient size & scope to simulate our workplace infrastructure.

The good news is that the very same technology that revolutionized Infrastructure 4-5 years ago (virtualization) is now available downmarket, so downmarket in fact that you can build an inexpensive yet capable virtualization lab on a cheap consumer-level PC your family can use at home.

And you don’t even need a server OS on your parent partition to do this. As remarkable as it sounds, you can build a simple virtualization lab on consumer hardware running (at minimum) Windows 8.1 Pro, as long as your PC 1) is virtualization capable, 2) has sufficient memory (16GB of RAM, though I suppose you could get by with as little as 8GB), 3) has some storage resources to spare, and 4) has a NIC (the one in your motherboard will likely work fine; just connect it to your home router).

If you don’t have this installed on your Windows 8 machine, you’re missing out.

How’s this possible?

Client Hyper-V baby. It’s the first and only feature you need to build a modest virtualization lab at home on your road to #InfrastructureGlory in the cloud. Client Hyper-V has about 60% of the features server Hyper-V has and uses a common management snap-in and cmdlets. You can’t build a converged fabric switch in Client Hyper-V nor play around with LACP and Live Migration, but at this scale, you don’t need to. You just need a place to park two or three VMs, an ethernet adapter on top of which you’ll build a virtual switch, and a bit of storage space for your VMs.
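If you’d rather do all of that from an elevated PowerShell prompt than the GUI, here’s a minimal sketch (switch name, VM name, and paths are mine; adjust to taste):

```powershell
# Enable Client Hyper-V on Windows 8.1 Pro (reboots required)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# External vSwitch on the onboard NIC, then a first VM to hold the lab DC
New-VMSwitch -Name 'LabSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true
New-VM -Name 'DC01' -MemoryStartupBytes 2GB -NewVHDPath 'D:\VMs\DC01.vhdx' -NewVHDSizeBytes 40GB -SwitchName 'LabSwitch'
Set-VMDvdDrive -VMName 'DC01' -Path 'D:\ISO\Server2012R2.iso'
Start-VM -Name 'DC01'
```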

And some focus & intensity. I’m a virtualization guy and anything prefixed with a “v” gets me excited. It’s easy to get distracted in lab work, but my advice is to keep it simple and keep your focus where it belongs: Azure & Office 365. As much as I love virtualization, it’s just a bit player now.

Now What? 

Broadly outlined, here’s what you need to do once you’ve got your lab infrastructure ready.

  1. Make with the ISO downloads!: Check with your IT management and ask about your organization’s licensing relationship with Microsoft and/or the reseller your group works with. You might be surprised by what you find; though Microsoft has stopped selling Technet subscriptions to individuals, if your IT group has an Enterprise Agreement, Software Assurance, or MSDN subscriptions, you may be able to get access to those Server products under those licensing schemes. See how far your boss lets you take this; some licenses, for instance, give you $100 worth of credit in Azure, something I’m taking advantage of right now. I am not a licensing expert though, so read the fine print, get sign-off from your boss before you do anything with licensed products and understand the limitations.
  2. Consider your workplace Domain Functional Level: If you are at 2008 functional level at work, try to get the Server 2008 ISO. If you’re at (gasp!) 2003, get that ISO if you can and start reading up on Domain Functional Levels & DirSync requirements. I see some PowerShell in your future. At my work, we’re relatively clean & up to date in AD: Forest functional level is at 2012, limited only by Exchange 2010 at this point (haven’t done the latest roll-up that supports 2012 R2). The idea here is to simulate, to the greatest degree possible, your workplace-to-cloud path.
  3. Build at least two VMs: You can follow the process as outlined here on Technet. VM1 is going to be your domain controller, so if you’re at 2008 functional level at work, build a 2008 VM. Your second VM will host DirSync and other cloud utilities. Technet says it can run 2012 R2, so you can use that. In my lab, I stood up a 2008 R2 server for this purpose (a forest-promotion sketch follows this list)
  4. Decide on a domain name: Now for some fun. You need to think of a routable domain name for your Windows domain, unless your workplace is on a .local or other non-routable domain. My workplace’s domain is routable, so I built a routable domain in the lab, then took the optional next step: I purchased the domain name from a registrar ($15), as this most closely simulates my workplace (on-prem domain matching internet domain). You should do the same unless you’re really confident in yourself; this step is very important for the next stage as we start to think about User Principal Name attributes and synchronizing our directory with Office 365 via Windows Azure.
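As promised in step 3, here’s what promoting VM1 into a brand-new lab forest looks like on Server 2012/2012 R2 (on 2008/2008 R2 you’d run dcpromo instead; the domain name is just my example):

```powershell
# Promote VM1 into a new lab forest (Server 2012 / 2012 R2)
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName 'daisettalabs.net' -DomainNetbiosName 'DAISETTALABS' -InstallDns
```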

And that, friends, is how Daisetta Labs.net* was born. I needed a domain name. Agnostic Computing.com was taken by some jerk blogger and I wanted something fast.

In retrospect, it might have been better to use Agnostic Computing.com as my lab domain because that’s a more realistic scenario; in the real world, I gots me some internet infrastructure tied to my routable domain and a 3rd party DNS host. I also gots me some on-prem Windows domain infrastructure tied to a routable domain name tied to my Windows DNS infrastructure on-prem. If you’re rusty on DNS, this is your chance to get up to speed as it’s everything in Microsoft-land.

On the domain name itself: pick a domain name and have some fun with it, but maintain a veneer of professionalism and respectability. I want you to be able to put this on your resume, which means someone might ask you about it someday. If you take the next step and buy a domain from a registrar, you’ll want a domain name you’re not ashamed to have as an SMTP address.

In Cloud Praxis #3, we’re going to take some baby steps into Office 365. Hope you check back tomorrow!

*Some readers have asked me what a Daisetta is, and why it’s a lab. Not sure how to answer that. Maybe Daisetta is the name of my first love; or my first pet dog. Perhaps it’s a street name or a place in Texas, or maybe it’s the spirit of innovation & excitement that propels me forward, that compels me to build a crazy home lab. Or maybe it’s a fugazi, fogazzi, it’s a wazzi, it’s a woozie..it’s fairy dust.

fogazzi

 

Cloud Praxis #1: Advice for Microsoft IT Pros w/ Cloud angst

It’s been a tough year for those of us in IT who engineer, deploy, support & maintain Microsoft technology products.

First, Windows 8 happened, which, as I’ve written about before, sent me into a downward spiral of confusion and despair. Shortly after that but before Windows 8.1, Microsoft killed off Technet subscriptions in the summer of 2013, telling Technet fans they should get used to the idea of MSDN subscriptions. As the fall arrived, Windows 8.1 and 2012 R2 cured my Chrome fever just as Ballmer & Crew were heading out the door.

Next, Microsoft took Satya Nadella out of his office in the Azure-plex and sat him behind the big mahogany CEO desk at One Microsoft Way. I like Nadella, but his selection spelled more gloom for Microsoft Infrastructure IT guys; remember it was Nadella who told the New York Times that Microsoft’s on-prem infrastructure products are old & tired and don’t make money for Microsoft anymore.

And then, this spring…first at BUILD, then TechEd, Microsoft did the unthinkable. They invited the Linux & open source guys into the tent, sat them in the front row next to the developers and handed them drinks and party favors, while more or less making us on-prem Infrastructure guys feel like we were crashing the party.

No new products announced for us at BUILD or TechEd, ostensibly the event built for us. Instead, the TechEdders got Azured on until they were blue in the face, leading Ars’ @DrPizza to observe:

[Embedded tweet from @DrPizza]

We think it feels pretty shitty Dr. Pizza, that’s how. It feels like we’re about to be made obsolete, that we in the infrastructure side of the IT house are about to be disrupted out of existence by Jeffrey Snover’s cmdlets, Satya’s business sense and something menacingly named the Azure Pack.

And the guys who will replace us are all insufferable devs, Visual Studio jockeys who couldn’t tell you the difference between a spindle and a port-channel, even when threatened with a C#.

Which makes it hurt even more Dr. Pizza, if that is your real name.

But it also feels like a wake-up call and a challenge. A call to end the cynicism and embrace this cloud thing because it’s not going away. In fact, it’s only getting bigger, encroaching more and more each day into the DMZ and onto the LAN, forcing us to reckon with it.

daisettalabs large logo

The writing’s on the wall, fellow Microsofties. BPOS uptime jokes were funny in 2011 and Azure doesn’t go down anymore because of expired certs. The stack is mature, scalable, and actually pretty awesome (even if they’re still using .vhd for VMs, which is crazy). It’s time we step up, adopt the language & manners of the dev, embrace the cloud vision, and take charge & ownership of our own futures.

I’d argue that learning Microsoft’s cloud is so urgent you should be exploring it and getting experienced with it even if your employer is cloud-shy and can’t commit. Don’t wait on them if that’s the case; do it yourself!

Because, if you don’t, you’ll get left behind. Think of the cloud as an operating system or technology platform and now imagine your resume in two, five, or seven years without any Office 365 or Azure experience on it. Now think of yourself actually scoring an interview, sitting down before the guy you want to work for in 2017 or 2018, and awkwardly telling him you have zero or very little experience in the cloud.

Would you hire that guy? I wouldn’t.

That guy will end up where all failed IT Pros end up: at Geek Squad, repairing consumer laptops & wifi routers and up-selling anti-virus subscriptions until he dies, sad, lonely & wondering where he went wrong.

Don’t be that guy. Aim for #InfrastructureGlory on-prem, hybrid, or in the cloud.

Over the coming days, I’ll show you how I did this on my own in a series of posts titled Cloud Praxis.

[table]

Link, On-prem/Hybrid/Cloud?, Notes

Cloud Praxis #2, On Prem, General guidance on building an AD lab to get started

Cloud Praxis #3, Cloud, Wherein I think about on-prem email and purchase an O365 E1 sub

Cloud Praxis #4, -, Forthcoming; likely DirSync-focused

Cloud Praxis #5, Hybrid, Got 24 days & $100 in Azure credits + a wildcard SSL cert. Floor it!

[/table]

 

Been iterating so fast, log file can’t keep up

Sorry for the lack of content lately; between the hyperactive child partition redlining my CPU and hogging all my spare bandwidth for himself, and some interesting developments at work (why hello there, spare MSDN sub and $100/month in Azure credits! and testing out a certain OpsMan package I raved about in March), Agnostic blogging has virtually ground to a standstill. Fail.

I promise some good stuff tomorrow and in the days that follow, including:

  • How to Win the Cloud Wars @ Home and Join the Battle for them at Work
  • So long iSCSI & Block storage, I found a new love : SMB 3 Multicast

 

Azure RemoteApp Announced at #MSTechEd

Application delivery….at the end of the day, it’s really what we do in IT, isn’t it? It’s what all the complexity, all the cost, and all the headaches are for: delivering applications to our users.

Sure it’s cool to talk endlessly of hypervisors and spindles and hybrid storage arrays and network virtualization, but we’re not paid to just have fun with racks of gear. We’re paid to make sure the applications our users need are accessible when they need it, wherever they may be.

Yeah. Let’s talk about Layer 7 baby. User space. C:\Users\AppData, the registry hive, all that good stuff.

RXL004025

An executive once came to my IT department and said what he wants out of IT is for it to function like an electrical utility. When the user toggles the light switch, it should just work; the light should turn on, he said. Our job as infrastructure engineers, he continued, was to watch all the turbines and generators and power lines in the background to ensure the reliable delivery of just-in-time electrical capacity for the moment the user toggles that switch.

I wish I could talk with that executive again, because I think that analogy is useful, and what’s more, it’s kind of the model public cloud providers like Google Compute Engine, Amazon, and Azure are selling. They want to become your company’s computing utility provider.

I’m down with the “cloud” and excited by some of its potential, but to go back to that executive’s analogy, I don’t see the whole picture here. Sure, take my infrastructure, cloudify it, put some Azure or AWS way up into it…have at it. I get that.

But what’s the light switch look like? Does it operate the same as the one my users are familiar with on-prem? Does it toggle vertically, or does Cloud Provider.com require horizontally-toggled light switches for some obscure reason? Is it a radically different light switch, operating on Direct Current rather than the familiar but inefficient Alternating Current? What other surprises regarding the light switches are there?

Because I hate surprises. Especially on high-visibility & fundamental things like light switches that my users need to do their jobs.

  • On Google Compute Engine’s cloud, app delivery from what I can gather is some mix of HTML 5, ChromeOS or Android apps, or, perhaps VMware View + ChromeBooks. Lots of Linuxy stuff. For an on-prem Windows environment, with some in-house .net coded business applications, the path to the Google cloud is murky and probably involves quite a bit of dev work.
  • Amazon offers cloud VDI…they say they can virtualize your company’s desktop PC and park it in the cloud. Which is like taking my on-prem light switch and just putting it in the cloud. Cool! But the Windows tech they’re using, as Aidan Finn points out, is still Server 2008 R2. And my line-of-business applications are all on Windows 2012 & SQL 2012.

So Microsoft, you’re at bat: how do I deliver my apps to my users in Azure? What’s your light switch look like?

Well today they took a big step forward by announcing something I’m intimately familiar with: RemoteApp for Azure.

Virtualization bases

RemoteApp, if you’re not familiar with it, is old-school session virtualization, a sort of “first base” in the virtualization story. It’s how we got more out of our hardware before Hypervisors came along (Second Base, in my Virtualization is Like Baseball Bases theory, a diagram of which you can see to the left). It’s the bit of tech that made Citrix into an amazing software company and a valuable Microsoft partner.

RemoteApp is user session virtualization and it’s still around as part of Microsoft’s Remote Desktop Services suite. And it’s how many folks deliver rich Windows apps to their end users (XenApp is king in this space, of course) on an increasingly diverse array of platforms.

And now RemoteApp is in Azure, in preview form, but still. This means the light switch in Azure is the same light switch my users are used to. It’s a little less friction in my path to the cloud, both for me and my users.

That said, session virtualization can be a royal pain in the ass. So from an engineering standpoint, I’d love to see if Microsoft, acting as a computing utility provider, can fix the top three problems I have with session virtualization technologies:

  • The Group Policy Blender: Session virtualization is tricky at scale because a lot of the management aspects for RemoteApp are Group Policy based. This was really true in Server 2008 R2; 2012 offers better control, but still, much can go wrong. If you use RDS/RemoteApp at scale, with multiple child domains logging into an RDS farm, you have to spend considerable time researching & perfecting Group Policy because you’ll be blending User & Computer group policies from multiple sources (and multiple domains) into that session. Guess what? A lot can, and does at times, go wrong when you build a computer that is logged into by multiple people simultaneously; this alone makes session virtualization almost as tough a nut to crack as VDI. Azure has the scale to just build out VMs to address that complexity; I don’t. Hoping there’s some new logic in place that may trickle down to me or justify me offloading this to Azure completely.
  • Localization: Here’s hoping Azure RemoteApp has something more elegant and less hackish than what I have on-prem to localize sessions. My RDS server is in North America. My user is in Australia. Make the session reflect the Aussie user’s time & date format and the goofy way they use commas instead of periods in monetary units. Oh, and when the French user logs into the same RDS box, apply the…je ne sais quoi…qualities the French userbase demands. You know how I do this now? A simple .vbs script is triggered upon login; if an LDAP lookup of the user matches certain criteria, the French regional settings .reg file is applied to the registry hive. I want desperately to PowerShell this (a sketch of what that might look like follows this list); I wonder how Azure does it…maybe the fix is to park the session in the Azure datacenter closest to Paris & Sydney, something I can’t do. In that case, awesome!
  • Printing: Whole companies have been founded to optimize printing from session virtualization instances to the HP Laserjet on your desk. To say that printing can be a headache in session virtualization is a bit like saying a fire at a gas station can be a reason to call the fire department.
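Here’s the PowerShell-the-localization-hack sketch promised above. Everything in it is illustrative: the AD attribute I key off of and the culture mappings are assumptions, not my production script (and Set-Culture needs Windows 8/2012 or later on the session host):

```powershell
# Hypothetical logon-script replacement for the .vbs + .reg hack (illustrative names)
Import-Module ActiveDirectory

# 'c' is AD's ISO country-code attribute; your lookup criteria may differ
$user = Get-ADUser -Identity $env:USERNAME -Properties c

switch ($user.c) {
    'AU'    { Set-Culture 'en-AU' }   # Aussie date & number formats
    'FR'    { Set-Culture 'fr-FR' }   # ...je ne sais quoi
    default { Set-Culture 'en-US' }
}
```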

If Azure can solve these things, or at least make them operate reliably & securely and speedily (in the case of printing), they can really put themselves at the head of the pack when it comes to cloud adoption in organizations like mine*. As long as the cost for Azure RemoteApp is the same as or cheaper than on-prem RDS licenses, I don’t see why anyone would want to keep RemoteApp on prem.

Now, about App-V….

* I do not speak for my organization even though I just did

Folding Paper Experiment

Have you ever been in a position in IT where you’re asked to do what is, by rational standards, impossible?

As virtualization engineers, we operate under a kind of value-charter in my view. Our primary job is to continuously improve things with the same set of resources, thereby increasing the value of our gear & ourselves.

paper-fold-ppt-14

Looked at economically, our job isn’t so much different than what some people view as the great benefit of a free market economy: we are supposed to be efficiency multipliers, just like entrepreneurs are in the market. We take a set of raw resources, manipulate & reshape them, and extract more value out of them.

I hate to go all TechCrunch on you, but we disrupt. In our own way. And it’s something you should be proud of.

Maybe you never thought of yourself like that, but you should…and you should never sell yourself short.

For guys and gals like us, compute, storage & network are raw resources at our disposal. Anything capable of being virtualized or abstracted can, or at least should, potentially have some value, as there are so many variables we can fine-tune and manipulate.

That old Dell PowerEdge 2950 with some DDR2 RAM that shipped to you in 2007? Sure, it’s old and slow, but it’s got the virtualization bits in its guts that can, in the right hands, multiply & extend its value. Sure, it’s not ideal, but raise your hand if you’re an engineer who gets the Platonic Ideal all the time?

I sure don’t. Even when I think it’s inescapably rational & completely reasonable.

Old switches with limited backplane bandwidth & small amounts of buffers? It’s junk compared to a modern Arista 10GbE switch, but when push comes to shove, you, as a virtualization engineer, can make it perform in service to your employer.

This is what we do. Or I should say, it’s what some of us are forced to do.

We are, as a group, folding paper again and again, defying the rules & getting more & more value out of our gear.

It can be stressful and thankless. No one sees it or appreciates it, but we are engineers. Many have gone before us, and many will come after us. Resources are always going to be limited for people like us, and it’s our job to manage them well and extract as much as we can out of them.

This post written as much as a pep-talk for myself as for others!

Vintage Greg Ferro

declarativeprogramming

Greg Ferro, Philosopher King of networking and prolific tech blogger/personality, had me in stitches during the latest Coffee Break episode on the Packet Pushers podcast.

Coffee Breaks are relatively short podcasts focused on networking vendor & industry news, moves and initiatives. Ferro usually hosts these episodes and chats about the state of the industry with two other rotating experts in the “time it takes to have a coffee break.”

Some Coffee Breaks are great, some I skip, and then, some, are Vintage Greg Ferro, encapsulated IT wisdom with some .co.uk attitude.

Like April 25th’s, in which discussion centered around transitioning to public cloud services, Cisco’s new OpFlex platform, and other news.

During the public cloud services discussion, the conversation turned toward on-prem expertise in firewalls, which, somehow, touched Ferro’s IT Infrastructure Library (ITIL) nerve.

ITIL, if you’re not familiar with it, is sort of a set of standards & processes for IT organizations, or, as Ferro sees it:

ITIL is an emotional poison that sucks the inspiration and joy from technology and reduces us to grey people who can evaluate their lives in terms of “didn’t fail”. I have spent two decades of my professional living a grey zone of never winning and never failing.

Death to bloody ITIL. I want to win.

Classic.

Anyway on the podcast, Ferro got animated discussing a theoretical on-prem firewall guy operating under an ITIL framework:

“Oh give me a break. It’s all because of ITIL. Everybody’s in ITIL. So when you say you’re going to change your firewall, these people have a change management problem, a self change management problem, because ITIL prevents them from being clever enough. You’re not allowed to be a compute guy & a firewall guy [in an ITIL framework]. When you move to the public cloud, you throw away all those skills because you don’t need them.”

Ferro’s point (I think) was that ITIL serves as a kind of retardant for IT organizations looking to move parts of their infrastructure to the public cloud, but not just in the obvious ways you might think (i.e., it’d be an arduous process to redo the Change Management Database & Configuration Items involved in putting some of your stack in the cloud!)

It seems Ferro is saying that specialized knowledge (i.e., the firewall guy & his bespoke firewall config) is threatened by the ease of deploying public cloud infrastructure, and to get to the cloud, some organizations will have to break through ITIL orthodoxy, as it tends to elevate and protect complexity.

Good stuff.

But that wasn’t all. Ferro also helped me understand the real difference between declarative & imperative programming. Whereas before I just nodded my head and thought, “Hell yeah, Desired State Configuration & Declarative Programming. That’s where I want to be,” now I actually comprehend it.

It’s all about sausage rolls, you see:

Let’s say you want a sausage roll. And it’s a long way to the shop for a sausage roll. If you’re going to send a six year old down to the shop to get you a sausage roll, you’re going to say, Right. Here is $2, here’s the way to the shop. You go down the street, turn right, then left. You go into the shop you ask the man for a sausage roll. Then you carry the sausage roll home very carefully because you don’t want the sausage roll to get cold.

That’s imperative programming. Precise instructions for every step of the process. And you get a nice sausage roll at the end.

Declarative programming (or promise theory as Ferro called it), is more like:

You have a teenager of 13 or 14, old enough to know how to walk to the shop, but not intelligent enough to fetch a sausage roll without some instructions. Here’s $10, go and fetch me a sausage roll. The teenager can go to the shop, fetch you a sausage roll and return with change.

See the distinction? The teenager gets some loose instructions & rules within which to operate, yet you still get a sausage roll at the end.

Jeffrey Snover, Microsoft Senior Technical Fellow (or maybe he’s higher in the Knights of Columbus-like Microsoft order), likened declarative programming to Captain Picard simply saying, “Make it so!” I was happy with that framework, but I think sausage rolls & children work better.
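And since Snover’s Desired State Configuration is the Microsoft embodiment of the declarative idea, here’s a minimal PowerShell v4 DSC sketch of the sausage roll: you declare the end state and let the Local Configuration Manager work out the trip to the shop (the resource and paths are illustrative):

```powershell
# Declarative: describe the end state, not the steps
Configuration SausageRoll {
    Node 'localhost' {
        WindowsFeature HyperV {
            Name   = 'Hyper-V'      # "there shall be a sausage roll"
            Ensure = 'Present'      # the how is the LCM's problem
        }
    }
}

SausageRoll -OutputPath 'C:\DSC'                      # compiles to a MOF
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose  # teenager, go fetch
```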

By the way, for Americans, a Sausage Roll looks like the image above and appears to be what I would think of as a Pig in a Blanket, with my midwestern & west coast American roots. How awesome is it that you can buy Pigs in a Blanket in UK shops? 

Windows Phone? More like #WindowsFun with @Nokia Cinemagraph app

Some blatant fanboism here, but what’s not to like?

  • Child partition dancing
  • High quality optics
  • Fun camera app
  • Outputs to animated gif, the only truly agnostic video format out there

WP_20140501_19_43_28_Cinemagraph_export

Building something like this used to involve about 15 hours on the PC, a fast camera, a copy of Paint Shop Pro, and some serious patience and tolerance for folder sprawl.

Now I’m doing it on a phone in seconds, and I don’t even have to think about it. I can even do what were once difficult color modifications, holding the phone with one hand, and the dog’s leash with the other.

Of course, as a new dad, I’m a sucker for gimmicky pics, almost going out of my way to get them. Credit/blame to Google+ here; they sort of reintroduced the animated gif genre and burned probably thousands of compute hours just to try and impress me with Auto Awesome. Some have been gems, most have been duds. Thanks for the free compute G+!

In contrast, Nokia’s Cinemagraph is hands-on. It’s not hard to use, but you have to visualize your shot, hold your phone steady, and frame the action. Then you press the screen and a few seconds later, you can introduce some neat & fun little effects.

And then you hit the save button, and the little Snapdragon inside gets busy and builds what you see above.

Gimmicky? Sure. But this is my gimmick, damnit, not a G+ cron job.

Final note: when are Facebook & Twitter going to man up and allow us to post animated gifs? I’m tired of having to use that other old yet fully agnostic communications protocol (SMTP) to share my cheesy dad pics with the fam.

Google lets me post these to G+ (if only someone were there to see them). My rinky-dink blog sitting on a shared LAMP server in some God-forsaken GoDaddy datacenter has no problem hosting these inefficient but highly useful animated gifs.

Hell, I bet you can post animated gif shots to Yammer.

Yet Zuck & mighty Twitter can’t handle animated gifs. Sissies.

Labworks 1:4-7 – The Last Word in ZFS Labworks

Greetings to you Labworks readers, consumers, and conversationalists. Welcome to the last verse of Labworks Chapter 1, which has been all about building a durable and performance-oriented ZFS storage array for Hyper-V and/or VMware.

Let’s review where we’ve been:

[table]

Labworks Chapter, Verse, Subject, Title & URL

Labworks 1:, 1, Storage, Building a Durable and Performance-Oriented ZFS Box for Hyper-V & VMware

, 2-3, Storage, I Heart the ARC & Let’s Pull Some Drives!

[/table]

Today we’re going to circle back to the very end of Labworks 1:1, where I assigned myself some homework: find out why my writes suck so bad. We’re going to talk about a man named ZIL and his sidekick the SLOG and then we’re going to check out some Excel charts and finish by considering ZFS’ sync models.

But first, some housekeeping: SAN2, the ZFS box, has undergone minor modification. You can find the current array setup below. Also, I have a new switch in the Daisetta Lab, and as switching is intimately tied to storage networking & performance, it’s important I detail a little bit about it.

Labworks 1:4 – Small Business SG300 vs Catalyst 2960S

Cisco’s SG-300 & SG-500 series switches are getting some pretty good reviews, especially in a home lab context. I’ve got an SG-300 and really like it, as it offers a solid spectrum of switching options at Layer 2 as well as a nice Layer 3-lite mode, all for a tick under $200. It even has a real web interface if you’re CLI-shy, which

Small Business Cisco != Linksys

I’m not but some folks are.

Sadly for me & the Daisetta Lab, I need more ports than my little SG-300 has to offer. So I’ve removed it from my rack and swapped it for a 2960S-48TS-L from the office, but not just any 2960S.

No, I have spiritual & emotional ties to this 2960s, this exact one. It’s the same 2960s I used in my January storage bakeoff of a Nimble array, the same 2960s on which I broke my Hyper-V & VMware cherry in those painful early days of virtualization, yes, this five year old switch is now in my lab:

The pride of Cisco’s 2009 Desktop Switching series, the 2960s

Sure, it’s not a storage switch; in fact it’s meant for IDFs and end-users, and if the guys on that great storage networking podcast from a few weeks back knew I was using this as a storage switch, I’d be finished in this industry for good.

But I love this switch and I’m glad it’s at the top of my rack. I saved 1U, the energy costs of this switch vs two smaller ones are probably a wash, and though I lost Layer 3 Lite, I gained so much more: 48 x 1GbE ports and full LAN-licensed Cisco IOS 15.2, which, agnostic computing goals aside for a moment, just feels so right and so good.

And with the increased amount of full-featured switch ports available to me, I’ve now got LACP teams of three on agnostic_node_1 & 2, jumbo frames from end to end, and the same VLAN layout.

Here’s the updated Labworks schematic and the disk layout for SAN2:

Lab 1-4-5 - Daisetta Labs

[table]

Disk Type, Quantity, Size, Format, Speed, Function

WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Zpool Members

Samsung 840 EVO SSD, 1, 128GB, 512byte, SATA 3, L2ARC Read Cache

Samsung 830 SSD, 1, 128GB, 512byte, SATA 3, L2ARC Read Cache

Seagate 2.5″ Momentus, 1, 500GB, 512byte, 80MB/r/w, Boot/swap/system

[/table]

Labworks 1:5 – A Man named ZIL and his sidekick, the SLOG

Labworks 1:1 was all about building durable & performance-oriented storage for Hyper-V & VMware. And one of the unresolved questions I aimed to solve out of that post was my poor write performance.

Review the hardware table and you’ll feel like I felt. I got me some SSD and some RAM, I provisioned a ZIL, so write-cache that inbound IO already, ZFS, amiright? Show me the IOPS money, Jerry!

Well, about that. I mischaracterized the ZIL and I apologize to readers for the error. Let’s just get this out of the way: The ZFS Intent Log (ZIL) is not a write-cache device as I implied in Labworks 1:1.

ZFS storage layout in excellent Good/Better/Best format, courtesy of Nexenta, which has some outstanding documentation & guides

The ZIL, whether spread out among your rotational disks by ZFS design, or applied to a Separate Log Device (a SLOG), is simply a synchronous-write mechanism: a log designed to ensure data integrity and report (IO ACK) back to the application layer that its writes are safe on stable media. The ZIL & SLOG are also disaster-recovery mechanisms; in the event of power loss, the ZIL, or the ZIL functioning on a SLOG device, will ensure that the writes it logged prior to the event are written to your spinners when your disks are back online.

Now there seem to be some differences in how the various implementations of ZFS look at the ZIL/SLOG mechanism.

Nexenta Community Edition, based on Illumos (the open-source descendant of Sun’s Solaris), says your SLOG should just be a write-optimized SSD, but even that’s more best practice than hard & fast requirement. Nexenta touts the ZIL/SLOG as a performance multiplier, and their excellent documentation has helpful charts and graphics reinforcing that.

In contrast, the documentation for the most popular FreeBSD ZFS implementation paints the ZIL as likely more trouble than it’s worth. FreeNAS actively discourages you from provisioning a SLOG unless it’s enterprise-grade, accurately pointing out that the ZIL & a SLOG device aren’t write-cache and probably won’t make your writes faster anyway, unless you’re NFS-focused (which I’m proudly, defiantly even, not) or operating a large database at scale.

ZIL me

What’s to account for the difference in documentation & best-practice guides? I’m not sure; some of it’s probably related to *BSD vs Illumos implementations of ZFS, some of it’s probably related to different audiences & users of the free tier of these storage systems.

The question for us here is this: will you benefit from provisioning a SLOG device if you build a ZFS box for Hyper-V and VMware iSCSI storage?

I hate sounding like a waffling storage VAR here, but I will: it depends. I’ve run both Nexenta and NAS4Free; when I ran Nexenta, I saw my SLOG being used during random & synchronous write operations. In NAS4Free, the SSD I had dedicated as a SLOG never showed any activity in zfs-stats, gstat or any other IO disk tool I could find.

One could spend weeks of valuable lab time verifying under which conditions a dedicated SLOG device adds performance to your storage array, but I decided to cut bait. Check out some of the links at the bottom for more color on this, but in the meantime, let me leave you with this advice: if you have $80 to spend on your FreeBSD-based ZFS storage, buy an extra 8GB of RAM rather than a tiny, used SLC or MLC device to function as your SLOG. You will almost certainly get more performance out of a larger ARC than by dedicating a disk as your SLOG.
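If you do want to check whether a SLOG would earn its keep on your box before spending that $80, the pool-level view is one command away (shown here with my pool name; FreeBSD/NAS4Free syntax):

```sh
# Per-vdev IO every second; watch the "logs" section during a synchronous write test
zpool iostat -v Alpha-Pool 1

# And see how big (and how effective) your ARC is while you're at it
zfs-stats -A
```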

Labworks 1:6 – Great…so, again, why do my writes suck? 

Recall this SQLIO test from Labworks 1:1:

sqlio lab 1 short test

As you can see, read or write, I was hitting a wall at around 235-240 megabytes per second during much of “Short Test”, which is pretty close to the theoretical limit of an LACP team with two GigE NICs.

But as I said above, we don’t have that limit anymore. Whereas there were once 2x1GbE Teams, there are now 3x1GbE. Let’s see what the same test on the same 4KB block/4KB NTFS volume yields now.

SQLIO short test, take two, sort by Random vs Sequential writes & reads:

labworks147

By Jove, what’s going on here? This graph was built off the same SQLIO recipe, but looks completely different than Labworks 1. For one, the writes look much better, and reads look much worse. Yet step back and the patterns are largely the same.

It’s data like this that makes benchmarking, validating & ultimately purchasing storage so tricky. Some would argue with my reliance on SQLIO and those arguments have merit, but I feel SQLIO, which is easy to script/run and automate, can give you some valuable hints into the characteristics of an array you’re considering.

Let’s look at the writes question specifically.

Am I really writing 350MB/s to SAN2?

storagenetworkingforthewin

On the one hand, everything I’m looking at says YES: I am a Storage God and I have achieved #StorageGlory inside the humble Daisetta Lab HQ on consumer-level hardware:

  • SAN2 is showing about 115MB/s to each Broadcom interface during the 32KB & 64KB samples
  • Agnostic_Node_1 perfmon shows about the same amount of traffic egressing the three vEthernet adapters
  • The 2960S is reflecting all that traffic; I’m definitely pushing about 350 megabytes per second to SAN2; interface port-channel 3 shows TX load at 219 out of 255, nearly maxing out my LACP team

On the other hand, I am just an IT Mortal and something bothers me:

  • CPU is very high on SAN2 during the 32KB & 64KB runs…so busy it seems like the little AMD CPU is responsible for some of the good performance marks
  • While I’m a fan of the itsy-bitsy 2.5″ Western Digital RED 1TB drives in SAN2, under no theoretical IOPS model is it likely that six of them, in RAIDZ-2 (RAID 6 equivalent), can achieve 5,000-10,000 IOPS under traditional storage principles. Each drive by itself is capable of only 75-90 IOPS
  • If something is too good to be true, it probably is

49286241

Sr. Storage Engineer Neo feels really frustrated at this point; he can’t figure out why his writes suck, or even if they suck, and so he wanders up to the Oracle to get her take on the situation and comes across this strange Buddha Storage kid.

Labworks 1:7 – The Essence of ZFS & New Storage model

In effect, what we see here is just a sample of the technology & techniques that have been disrupting the storage market for several years now: compression & caching multiply the performance of storage systems beyond what they should be capable of, in certain scenarios.

As the chart above shows, the test2 volume is compressed by SAN2 using lzjb. On top of that, we’ve got the ZFS ARC, L2ARC, and the ZIL in the mix. And then, to make things even more complicated, we have some sync policies ZFS allows us to toggle. They look like this:

sync policy

The sync toggle documentation is out there and you should understand it; it is crucial to understanding ZFS. But I want to demonstrate the choices as well.

I’ve got three choices + the compression options. Which one of these combinations is going to give me the best performance & durability for my Hyper-V VMs?
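For reference, each of those knobs is a one-liner against the zvol (pool/volume names are from my lab):

```sh
# The three ZFS sync policies, set per dataset/zvol
zfs set sync=standard Alpha-Pool/Test2   # honor the app: sync when asked, async otherwise
zfs set sync=always   Alpha-Pool/Test2   # log every write to the ZIL before ACKing
zfs set sync=disabled Alpha-Pool/Test2   # ACK immediately; fast & dangerous

# And the compression schemes under test
zfs set compression=lzjb   Alpha-Pool/Test2
zfs set compression=lz4    Alpha-Pool/Test2
zfs set compression=gzip-9 Alpha-Pool/Test2
```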

SQLIO Short Test Runs 3-6, all PivotTabled up for your enjoyment and ease of digestion:

compressionsync

As is usually the case in storage, IT, and hell, life in general, there are no free lunches here people. This graph tells you what you already know in your heart: the safest storage policy in ZFS-land (Always Sync, that is to say, commit writes to the rotationals post haste as if it was the last day on earth) is also the slowest. Nearly 20 seconds of latency as I force ZFS to commit everything I send it immediately (vs flush it later), which it struggles to do at a measly average speed of 4.4 megabytes/second.

Compression-wise, I thought I’d see a big difference between the various compression schemes, but I don’t. Lzjb, lz4, and the ultra-space-saving/high-CPU-cost gzip-9 all turn in about equal results from an IOPS & performance perspective. It’s almost a wash, really, and that’s likely because of the predictable nature of the IO SQLIO is generating.

Labworks 1: Epilogue

Last point: ZFS, as Chris Wahl pointed out, is a sort of virtualization layer atop your storage. Now if you’re a virtualization guy like me or Wahl, that’s easy to grasp; Windows 2012 R2’s Storage Spaces concept is similar in function.

But sometimes in virtualization, it’s good to peel away the abstraction onion and watch what that looks like in practice. ZFS has a number of tools and monitors that look at your Zpool IO, but to really see how ZFS works, I advise you to run gstat. gstat shows what your disks are doing, and if you’ve carefully set up your environment, you ought to be able to see the effects of your settings on each individual spindle.
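On my NAS4Free box, that’s simply (the regex filters the view down to the spinners and the zvol):

```sh
# Refreshing per-device IO view, filtered to the disks & zvol of interest
gstat -f 'ada|zvol'
```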

In this Gifcam, watch ada0-5 (the Western Digitals) as they struggle under load with the “Always Sync” option enabled. Notice that the zvol/Alpha-Pool/Test2 volume (the logical volume construct) is at 100% busy and the ops/s are not very stellar.

Now look at this gstat sample. Under SQLIO load, the zvol is showing 10,000 IOPS, 300+MB/s. But ada0-5, the physical drives, aren’t doing squat for several seconds at a time as SAN2 absorbs & processes all the IO coming at it.

That, friends, is the essence of ZFS.

 Links/Knowledge/Required Reading Used in this Post:

[table]
Resource, Author, Summary

Nexenta’s awesome whitepapers and guides, Nexenta, Find ’em and collect ’em good stuff on MPIO config and ZFS performance

Comparing SSD vs NoSSD in Nexenta w/NFS, Larry Smith, A fellow ZFS fan with more focus on NFS & VMware

Get the Most out of ZFS SSD, Sebastian “vBagpipes” Laubscher, Sebastian finds a different way to provision the ZIL/SLOG

Nexenta & Scale, Hans DeLeenHeer, Fellow #TFD delegate looks at ZFS tiers in superhero context

SLOG/ZIL Insight, FreeNAS forum, Great forum-focused post on SLOG/ZIL in BSD ZFS

SLOG Blog, Oracle, 2007 post about the ZIL & SLOG heralding storage di

 Zpool and ZIL management, Magnus Strahlert, Excellent how-to guide for ZIL/L2ARC provisioning

[/table]