Big Data for Server Guys: Azure OpsInsight Review

Maybe it’s just my IT scars that bias me, but when I hear a vendor push a “monitoring” solution, I visualize an IT guy sitting in front of his screen, passively watching his monitors & counters, essentially waiting for that green thing over there to turn red.

He’s waiting for failure, Godot-style.

That’s not a recipe for success in my view. I don’t wait upon failure to visit, I seek it out, kick its ass, and prevent it from ever establishing a beachhead in my infrastructure. The problem is that I, just like that IT Guy waiting around for failure, am human, and I’m prone to failure myself.

Enter machine learning, or Big Data for Server Guys, as I like to think of it.

Big Data for Server Guys is a bit like flow monitoring on your switch. The idea here is to actively flow all your server events into some sort of a collector, which crunches them, finds patterns, and surfaces the signal from the noise.

Big Data for Server Guys is all about letting the computer do what the computer’s good at doing: sifting data, finding patterns, and letting you do what you are good at doing: empowering your organization for tech success.

But we Windows guys have a lot of noise to deal with: Windows instruments just about everything imaginable in the Microsoft kingdom, and the Microsoft kingdom is vast.

So how do we borrow flow-monitoring techniques from the Cisco jockeys and apply them to Windows?

Splunk is one option, and it’s great: it’s agnostic, it’ll hoover up events from Windows and syslog from your Cisco gear, and it can sift through your Apache/IIS logs too. It’s got a thriving community and loads of sexy, AJAX-licious dashboards, and you can issue powerful searches and queries that help you find problems before problems find you.

It’s also pretty costly, and I’d argue not the best-in-class solution for Hoovering Windows infrastructure.

Fortunately, Microsoft’s been busy in the last few years. Microsoft shops have had SCOM, and MOM before that, but now there’s a new kid in town ((He’s been working out and looks nothing like the old kid, System Center Advisor)): Azure Operational Insights, and OpsInsight functions a lot like a good flow collector.


And I just put the finishing touches on my second Big Data for Server Guys/OpsInsight deployment. Here’s a mini-review:

The Good:

  • It watches your events and finds useful data, which saves you time: OpsInsight is like a giant Hoover in the sky, sucking up on average about 36MB/day of Windows events from my fleet of ~150 VMs in a VMware infrastructure. Getting data on this fleet via Powershell is trivial, but building logic that gives insight into that data is not. OpsInsight is wonderful in this regard; it saves you from spending time in SSRS or Excel, or from diving through the Event Viewer haystack via the MMC or Get-WinEvent, looking for a nugget of truth (see the sketch after this list).
  • It has a decent config recommendation engine: If you’re an IT Generalist/Converged IT Guy like me, you touch every element in your infrastructure stack, from the app on down to the storage array’s rotating rust. And that’s hard work, because you can’t be an expert in everything. One great thing about OpsInsight is that it saves you from searching Bing/Google (at worst) or thumbing through your well-worn AD Cookbook (at best) by offering best-practice advice and KB articles in the same tab in your browser. Awesome!
  • Query your data rather than surfing the fail tree: Querying your data is infinitely better than walking the Fail Tree that is the Windows Event Viewer looking for errors. OpsInsight has a powerful query engine that’s not difficult to learn or manipulate, and for me, that’s a huge win over the old-school method of Event Viewer Subscriptions.
  • Dashboards you can throw in front of an executive: I can’t overstate how great it is to have automagically configured dashboards via OpsInsight. As an IT Pro, the less time I spend in SSRS trying to build a pretty report, the better. OpsInsight delivers decent dashboards I’m proud to show off. SCOM 2012 R2’s dashboards are great too, but SCOM’s fat client works better than its IIS pages. Even though it’s Silverlight-powered, OpsInsight wins the award for friction-free dashboarding.
  • Flexible Architecture: Do you like SCOM? Well then OpsInsight is a natural fit for you. I really appreciate how the System Center team re-structured OpsInsight late last year: you can deploy it at the tail end of your SCOM build, or you can forgo SCOM altogether and attach agents directly to your servers. The latter offers speed of deployment; the former lets you proxy events from your fleet through your Management Group and thence on to Azure. I chose the former in both of my deployments: let OpsInsight gate through SCOM, and let both do what they are good at doing.
  • It’s secure: The architecture for OpsInsight is Azure, so if you’re comfortable doing work in Azure Storage blobs, you should be comfortable with this. That, plus encrypted uploads of events, SCOM data, and other data, means less friction with the security/compliance guy on your team.
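
To put that haystack line in context, here’s the sort of fleet-wide event sifting I’d otherwise be scripting by hand. A minimal sketch, assuming vanilla remote event log access; the computer names are made up, and the OpsInsight query in the trailing comment is from my memory of the Preview’s search syntax, so treat it as an assumption:

    # The old-school way: trawl yesterday's error events across the fleet
    # (computer names are hypothetical)
    $servers = 'DC01', 'SQL01', 'WEB01'
    $errors = foreach ($s in $servers) {
        Get-WinEvent -ComputerName $s -FilterHashtable @{
            LogName   = 'System'
            Level     = 2                      # 2 = Error
            StartTime = (Get-Date).AddDays(-1)
        } -ErrorAction SilentlyContinue
    }
    $errors | Group-Object MachineName | Sort-Object Count -Descending |
        Select-Object Count, Name

    # The OpsInsight way, as I recall the Preview's search syntax:
    #   Type=Event EventLevelName=error | Measure count() by Computer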

The Bad:

  • It’s Silverlight, which makes me feel like I’m flowing my server events to Steve Ballmer: I’m sure this will be swapped out at some point. I used to love Silverlight -and maybe there’s still room in my cold black heart for it- but it’s kind of an orphaned media/web child at the moment.
  • There’s no app for iOS or Android…yet: I had to dig out my 2014 Lumia Icon just to try the OpsInsight app for Windows Phone. It’s decent, and just what I’d like to see on my 2015 Droid Turbo. Alas, there is no app for Android or iOS yet, but they’re the #1 and #2 most requested features on the OpsInsight feedback page (add your vote; I did!)
  • It’s only Windows at the moment: I love what Microsoft is doing with Big Data crunching; Machine Learning, Stream Analytics, and OpsInsight. But while you can point just about any flow or data at AzureML or Stream Analytics, OpsInsight only accepts Windows, IIS, SQL, SharePoint, and Exchange. Which is great, don’t get me wrong, but limited. SCOM at least can monitor SNMP traps, interface with Unix/Linux, and such, but that is not available in OpsInsight. However, it’s still in Preview, so I’ll be patient.
  • It’s really only Windows/IIS/SQL/Exchange at the moment: Sadface for the lack of Office 365/Azure intelligence packs for OpsInsight, but SCOM will do for now.
  • Pricing forecast is definitely…cloudy: Every link I find takes me to the general Azure pricing page. On the plus side, you can strip this bad boy down to the bare essentials if you have cost pressures.

The Ugly:

  • Where are my cmdlets? My interface of choice with the world of IT these days is Powershell ISE. But when I typed get-help *opsinsight, only errors resulted. How’d this get past Snover’s desk? All kidding aside, SCOM cmdlets work well enough if you deploy OpsInsight following SCOM, and I’m sure it’s coming. I can wait.
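
Until the native cmdlets arrive, if you gated OpsInsight through SCOM as I did, the OperationsManager module scratches most of the itch. A minimal sketch; the management server name is made up:

    # No OpsInsight cmdlets yet, but the SCOM module works today
    Import-Module OperationsManager

    # Connect to the management group (server name hypothetical)
    New-SCOMManagementGroupConnection -ComputerName 'SCOM01'

    # Which agents are reporting, and what's their health?
    Get-SCOMAgent | Select-Object DisplayName, HealthState

    # What new alerts fired in the last day?
    Get-SCOMAlert -ResolutionState 0 |
        Where-Object { $_.TimeRaised -gt (Get-Date).AddDays(-1) } |
        Select-Object Name, Severity, MonitoringObjectDisplayName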

All in all, this is shaping up to be a great service for your on-prem Windows infrastructure, which, let’s face it, is probably neglected.

System Center MVP Stanislav Zhelyazkov has a great 9-part deep dive on OpsInsight if you want to learn more.

Microsoft is the original & ultimate hyperconverged play

The In Tech We Trust Podcast has quickly become my favorite enterprise technology podcast since it debuted late last year. If you haven’t tuned into it yet, I advise you to get the RSS feed into your podcast player of choice ASAP.

The five gents ((Nigel Poulton, Linux trainer at Pluralsight; Hans De Leenheer, datacenter/storage guy and one of my secret crushes; Gabe Chapman; Marc Farley; and Rick Vanover)) putting on the podcast are among the sharpest guys in infrastructure technology, have great on-air chemistry with each other, and consistently deliver an organized & smart format that hits my player on time, as expected, every week. Oh, and they’ve equalized the Skype audio feeds too!

And yet… I can’t let the analysis in the two most recent shows slip by without comment. Indeed, it’s time for some tough love for my favorite podcast.

Guys, you totally missed the mark discussing hyperconvergence & Microsoft over the last two shows!

For my readers who haven’t listened, here’s the compressed & deduped rundown of 50+ minutes of good stimulating conversation on hyperconvergence:

  • There’s little doubt in 2015 that hyperconverged infrastructure (HCI) is a durable, real thing in enterprise technology, that it’s not a fad, that customers are adopting it, and that it’s changing the industry.
  • But if HCI is real, it’s also different things to different people; for Hans, it’s about scale-out, node-based architecture, while for others on the show, it’s more or less the industry definition: unified compute & storage with automation & management APIs and a GUI framework over the top.
  • But that loose definition is also evolving, as Rick Vanover sharply pointed out: EMC’s new offering, vSpex Blue, offers something more than what we’d traditionally (like, two weeks ago) think of as hyperconvergence.

Good stuff and good discussion.

And then the conversation turned to Microsoft. And it all went downhill. A summary of the guys’ views:

  • Microsoft doesn’t have a hyperconverged pony in the race, except perhaps Storage Spaces, which few like/adopt/bet on/understand
  • MS has ceded this battlefield to VMware
  • None of the cool & popular hyperconverged kids, save for Nutanix and Gridstore, want to play with Microsoft
  • Microsoft has totally blown this opportunity to remain relevant, and Hyper-V is too hard. Marc Farley in particular emphasized how badly Microsoft has blown hyperconvergence.

I was, you might say, frustrated as I listened to this sentiment on the drive into my office today. My two cents below:

The appeal of Hyperconvergence is a two-sided coin. On the one side are all the familiar technical & operational benefits that are making it a successful and interesting part of the market.

  • It’s an appliance: Technical complexity and (hopefully) dysfunction are ironed out by the vendor so that storage/compute/network just work
  • It’s Easy: Simple to deploy, maintain, manage
  • It’s software-based and it’s evolving to offer more: As the guys on the show noted, newer HCI systems are offering more than ones released 6 months or a year ago.

The other side of that coin is less talked about, but no less powerful. HCI systems are rational cost centers, and the success of HCI marks a subtle but important shift in IT & in the market.

  • It’s a predictable check cut to fewer vendors: Hyperconvergence is also about vendor consolidation in IT shops that are under pressure to make costs predictable and smoother (not just lower).
  • It’s something other than best-of-breed: The success of HCI systems also suggests that IT shops may be shying away from best-of-breed purchasing habits and warming up to a more strategic one-throat-to-choke approach ((EMC & VMware, for instance, are titans in the industry, with best-in-class products in storage & virtualization, yet I can’t help but feel there’s more going on than the chattering classes realize. Step back and think of all the new stuff in vSphere 6, and couple it with all the old stuff that’s been rebranded as new in the last year or so by VMware. Of all that ‘stuff’, how much is best of breed, and how much of it is decent enough that a VMware customer can plausibly buy it and offset spend elsewhere?))
  • It’s some hybrid of all of the above: HCI in this scenario allows IT to have its cake and eat it too, maybe through vendor consolidation, or cost-offsets. Hard to gauge, but the effect is real, I think.

((As Vanover noted, EMC’s value-adds on the vSpex Blue architecture are potentially huge: if you buy vSpex Blue architecture, you get backup & replication, which means you don’t have to talk to or cut yearly checks to Commvault, Symantec or Veeam. I’ve scored touchdowns using that exact same play, embracing less-than-best Microsoft products that do the same thing as best-in-class SAN licenses))

And that’s where Microsoft enters the picture as the original -and ultimate- Hyperconverged play.

Like any solid HCI offering, Microsoft makes your hardware less important by abstracting it, but where Microsoft differs is that it scopes supported solutions to all of x86. VMware, in contrast, only hands out EVO:RAIL stickers to hardware vendors who dress x86 up and call it an appliance, which is more or less the Barracuda Networks model. ((I’m sorry. I know that was a cheap shot, but I couldn’t resist))

With your vanilla, Plain Jane whitebox x86 hardware, you can then use Microsoft’s Hyperconverged software system (or what I think of as Windows Server) to virtualize & abstract all the things, from network (solid NFV & an evolving overlay/SDN controller) to compute to storage, which offers tiering, fault-tolerance, scale-out, and other features usually found in traditional SAN systems.
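
The storage piece of that claim is Storage Spaces, and the SAN impression really is just a handful of cmdlets. A minimal sketch of pooling whitebox disks into a tiered, mirrored space; the pool name, tier sizes, and disk counts here are illustrative:

    # Pool every shareable physical disk in the box (names illustrative)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName 'WhiteboxPool' `
        -StorageSubSystemFriendlyName 'Storage Spaces*' `
        -PhysicalDisks $disks

    # Define SSD & HDD tiers, then carve a mirrored, tiered space
    $ssd = New-StorageTier -StoragePoolFriendlyName 'WhiteboxPool' `
        -FriendlyName 'SSDTier' -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName 'WhiteboxPool' `
        -FriendlyName 'HDDTier' -MediaType HDD

    New-VirtualDisk -StoragePoolFriendlyName 'WhiteboxPool' `
        -FriendlyName 'VMStore' -ResiliencySettingName Mirror `
        -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB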

But it doesn’t stop there. That same software powers services in an enormous IaaS/PaaS cloud, which works hand-in-hand with a federated productivity cloud that handles identity, messaging, data-mining, mail, and more. The IaaS cloud, by the way, offers DR capabilities today, and you can connect to it via routing & IPsec, or extend your datacenter’s layer 2 broadcast domain to it if you like.

On the management/automation side, I understand and sympathize with the ignorance of non-’softies. Microsoft fans enthuse about Powershell so much because it is, today, a unified management system across a big chunk of the MS stack, whether masked by GUI systems like System Center & Azure Pack or exposed as naked cmdlets. Powershell alone isn’t the cool part, though: Powershell & Windows Server aligned with truly open management frameworks like CIM, SMI-S, and WBEM is very cool, especially in contrast to feature-packed but closed APIs.
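
And the open-frameworks bit isn’t hand-waving; the same verb-noun pair speaks standards-based CIM over WS-MAN to anything with a conforming provider. A quick sketch; the host name is made up:

    # CIM over WS-MAN: nothing proprietary about this query
    Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName 'HV01' |
        Select-Object CSName, Caption, LastBootUpTime, FreePhysicalMemory

    # Reuse one standards-based session for more queries
    $cim = New-CimSession -ComputerName 'HV01'
    Get-CimInstance -CimSession $cim -ClassName Win32_Service `
        -Filter "State='Stopped' AND StartMode='Auto'"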

On the cost side, there’s even more to the MS hyperconverged story: customers can buy what is in effect a single SKU (the Enterprise Agreement) and get access to most if not all of the MS stack.

Usually, organizations pay for the EA in small, easier-to-digest bites over a three-year span, which the CFO likes because it’s predictable & smooth. ((Now, of course, I’m drastically simplifying Microsoft’s licensing regime and the process of buying an EA, as you can’t add an EA to your cart & checkout; it’s a friggin negotiation. And yes, I know everyone hates the true-up. And I grant that an EA just answers the software piece; organizations will still need the hardware, but I’d argue that de-coupling software from hardware makes purchasing the latter much, much easier, and how much hardware do you really need if you have Azure IaaS to fill in the gaps?))

Are all these Microsoft things you’ve bought best of breed? No, of course not. But you knew that ahead of time, if you did your homework.

Are they good enough in a lot of circumstances?

I’ll let you judge that for yourself, but, speaking from experience here, IT shops that go down the MS/EA route strategically do end up in the same magical, end-of-the-rainbow fairy-tale place that buyers of HCI systems are seeking.

That place is pretty great, let me tell you. It’s a place where the spend & costs are more predictable and bigger checks are cut to fewer vendors.  It’s a place where there are fewer debutante hardware systems fighting each other and demanding special attention & annual maintenance/support renewals in the datacenter. It’s also a place where you can manage things by learning verb-noun pairs in Powershell.

If that’s not the ultimate form of hyperconvergence, what is?

Containers! For Windows! Courtesy of Docker


Big news yesterday for fans of agnostic cloud/on-prem computing.

Docker -the application virtualization stack that’s caught on like wildfire among the *nix set- is coming to Windows.

Yeah baby.

Mary Jo with the details:

Under the terms of the agreement announced today, the Docker Engine open source runtime for building, running and orchestrating containers will work with the next version of Windows Server. The Docker Engine for Windows Server will be developed as a Docker open source project, with Microsoft participating as an active community member. Docker Engine images for Windows Server will be available in the Docker Hub. The Docker Hub will also be integrated directly into Azure so that it is accessible through the Azure Management Portal and Azure Gallery. Microsoft also will be contributing to Docker’s open orchestration application programming interfaces (APIs).

When I first heard the news, my emotions were mixed.

On the one hand, I love it. Virtualization of all flavors -OS, storage, network, and application- is where I want to be, as a blogger, at home in my lab, and professionally.

Yet, as a Windows guy (I dabble, of course), I found Docker just a bit out of reach, even in my lab, which is 100% Windows.

On the other hand, I also remembered how dreadful it used to be to run Linux applications on Windows. Installing GTK+ libraries on Windows isn’t fun, and the end result often isn’t very attractive. In my world, keeping the two separate on the application & OS side and uniting them via Kerberos and/or HTTPS/REST has always been my preference.

But that’s old world thinking, ladies and gentlemen.

Because you see, this announcement from Microsoft & Docker Inc sounds deep, rich, functional. Microsoft’s going to contribute some of its Server code to the Docker folks, and the Docker crew will help build Container tech into Windows Server and Azure. I’m hopeful Docker will just be another Role in Server, and that Jeffrey Snover’s powershell cmdlets will hook deep into the Docker stuff.

This probably marks the death of App-V, which I wrote about in comparison to Docker just last month, but that’s fine with me.

Docker on Windows marks a giant step forward for Agnostic Computing…do we dare imagine a future in which our application stacks are portable? Today I’m running an application in a Docker Container on Azure, and tomorrow I move it to AWS?

Microsoft says that’s exactly the vision:

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Full announcement.
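
If the partnership delivers, the portability story could be as plain as today’s Docker CLI pointed at different clouds. A sketch under that assumption; the image name is hypothetical, and Windows-based images don’t exist yet:

    # Build once, push to the Hub; image/tag names are hypothetical
    docker build -t daisettalabs/myapp:1.0 .
    docker push daisettalabs/myapp:1.0

    # On an Azure host today...
    docker run -d -p 80:80 daisettalabs/myapp:1.0

    # ...and on an AWS (or on-prem) host tomorrow: same artifact, same command
    docker pull daisettalabs/myapp:1.0
    docker run -d -p 80:80 daisettalabs/myapp:1.0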

Tales from the Hot Lane

A few brief updates & random thoughts from the last few days on all the stuff I’ve been working on.

Refreshing the Core at work: Summer’s ending, but at work, a new season is advancing, one rack unit at a time. I am gradually racking up & configuring new compute, storage, and network as it arrives; It Is Not About the Hardware™, but since you were wondering: 64 Ivy Bridge cores and about 512GB RAM, 30TB of storage, and Nexus 3k switching.

Ahh, the Nexus line. Never had the privilege to work on such fine switching infrastructure. Long-time admirer, first-time NX-OS user. I have a pair of them plus a Layer 3 license, so the long-term thinking involves not just connecting my compute to my storage, but connecting this dense stack northbound & out via OSPF or static routes over a fault-tolerant HSRP or VRRP config.

To do that, I need to get familiar with some Nexus-flavored acronyms that are new to me: virtual port channels (vPC), Control Plane Policing (CoPP), VRF, and oh so many more. I’ll also be attempting to answer the question once and for all: what spanning tree mode does one use to connect a Nexus switch to a virtualization host running Hyper-V’s converged switching architecture? I’ve used portfast in the lab on my Catalyst, but the lab switch is five years old, whereas this Nexus is brand new. And portfast never struck me as the right answer, just the easy one.
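
My working theory, pending TAC and the tome: NX-OS retired the portfast keyword in favor of port types, so the host-facing trunk to a converged Hyper-V switch probably wants to be an edge trunk. A sketch I haven’t yet validated on the new gear; the interface and VLANs are placeholders:

    interface Ethernet1/10
      description Hyper-V host uplink - converged vSwitch
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30
      ! NX-OS's answer to portfast: an edge port that happens to be a trunk
      spanning-tree port type edge trunk
      no shutdown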

To answer those questions and more, I have TAC and this excellent tome provided gratis by the awesome VAR who sold us much of the equipment.

Into the vCPU Blender goes Lync: Last Friday, I got a call from my former boss & friend who now heads up a fast-growing IT department on the coast. He’s been busy refreshing & rationalizing much of his infrastructure as well, but as is typical for him, he wants more. He wants total IT transformation, so as he’s built out his infrastructure, he laid the groundwork to go 100% Microsoft Lync 2013 for voice.

Yeah baby. Lync 2013 as your PBX, delivering dial tone to your endpoints, whether they are Bluetooth-connected PC headsets, desk phones, or apps on a mobile.

Forget software-defined networking. This is software-defined voice & video, with no special server hardware, cloud services, or any of the other typical expensive nonsense you’d see in a VoIP implementation.

If Lync 2013 as PBX is not on your IT Bucket List, it should be. It was something my former boss & I never managed to accomplish at our previous employer on Hyper-V.

Now he was doing it alone. On a fast VMware/Nexus/NetApp stack with distributed vSwitches. And he wanted to run something by me.

So you can imagine how pleased I was to have a chat with him about it.

He was facing one problem which threatened his Go Live date: Mean Opinion Score, or MOS, a simple 0-5 score Lync provides to its administrators that summarizes call quality. MOS is a subset of a hugely detailed Media Quality Summary Report, detailed here at TechNet.

My friend was scoring a 0.6 on his MOS. He wanted it to be at 4 or above prior to go-live.

So at first we suspected QoS tags were being stripped somewhere between his endpoint device and the Lync Mediation VM. Sure enough, Wireshark proved that out; a Distributed vSwitch (or was it a Nexus?) wasn’t respecting the tag, resulting in a sort of half-duplex QoS if you will.

He fixed that, ran the test again, and still: 0.6. Yikes! Two days to go live. He called again.

That’s when I remembered the last time we tried to tackle this together. You see, the Lync Mediation Server is sort of the real PBX component in Lync Enterprise Voice architecture. It handles signalling to your endpoints, interfaces with the PSTN or a SIP trunk, and is the one server workload that, even in 2014, I’d hesitate making virtual.

My boss had three of them. All VMs on three different VMware hosts across two sites.

I dug up a Microsoft whitepaper on virtualizing Lync, something we didn’t have the last time we tried this. While Redmond says Lync Enterprise Voice on top of VMs can work, it’s damned expensive from a virtualization host perspective. MS advises:

  • You should disable hyperthreading on all hosts.
  • Do not use processor oversubscription; maintain a 1:1 ratio of virtual CPU to physical CPU.
  • Make sure your host servers support nested page tables (NPT) and extended page tables (EPT).
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.

Talk about Harshing your vBuzz. Essentially, building Lync out virtually with Enterprise Voice forces you to go sparse on your hosts, which is akin to buying physical servers for Lync. If you don’t, into the vCPU blender goes Lync, and out comes poor voice quality, angry users, bitterness, regret and self-punishment.
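
For the Hyper-V crowd (my friend’s stack is VMware, but this is my home turf), that guidance translates to something like the sketch below; the VM name and memory figure are illustrative, and hyperthreading is a BIOS/UEFI toggle you’ll have to flip yourself:

    # Disable NUMA spanning on the hypervisor
    Set-VMHost -NumaSpanningEnabled $false

    # Honor the 1:1 vCPU:pCPU guidance by reserving the Mediation
    # server's full share of host CPU (VM name hypothetical)
    Set-VMProcessor -VMName 'LYNC-MED-01' -Reserve 100

    # Pin its memory so dynamic memory can't take a bite out of call quality
    Set-VM -Name 'LYNC-MED-01' -StaticMemory -MemoryStartupBytes 16GB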

Anyway, he did as advised, put some additional vCPU & memory reservations in place on his hosts, and yesterday, whilst I was toiling in the Hot Lane, he called me from Lync via his mobile.

He’s a married man just like me, but I must say his voice sounded damn sexy as it was sliced up into packets, sent over the wire, and converted back to analog on my mobile’s speaker. A virtual chest bump over the phone was next, then we said goodbye.

Another Go Live Victory (by proxy). Sweet.

Azure Outage: Yesterday’s bruising, hours-long global Azure outage affected Virtual Machines, storage blobs, web services, database services, and HDInsight, Microsoft’s service for big data crunching. As it unfolded, I navel-gazed when what I wanted to do was help. There was literally nothing I could do. Had I some crucial IaaS or PaaS in the Azure stack, I’d be shit out of luck, just like the rest. I felt quite helpless; refreshing Mary Jo’s page and the Azure dashboard didn’t help. I wondered what the problem was; it’s been a difficult week for Microsofties, whether on-prem or in Azure. Had to be related to the update cycle, I thought.

On the plus side, Azure Active Directory services never went down, nor did several other services. Office 365 stayed up as well, though it is built atop separate-but-related infrastructure in my understanding.

Lastly, I pondered two thoughts: if you’re thinking of reducing your OpEx by replacing your DR strategy with an Azure Site Recovery strategy, does this change your mind? And if you’re building out Azure as your primary IaaS or PaaS, do you just accept such outages or do you plan a failback strategy?

Labworks: Towards a 100% Windows-defined Daisetta Lab: What’s next for the Daisetta Lab? Well, I have me an AMD Duron CPU, a suitable motherboard, a 1U enclosure with PSU, and three Keepin’ it RealTek NICs. Oh, I also have a case of the envies: envies for the VMware crowd and their VXLAN and NSX, and of course VMworld next week. So I’m thinking of building a Network Virtualization Gateway appliance. For those keeping score at home, that would mean that from Storage to Compute to Network Edge, I’d have a 100% Windows lab environment, infused with NVGRE, which, it turns out, has more use cases than just the multi-tenancy I had assumed.
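
For those wanting a preview of the plumbing, Server 2012 R2’s NetWNV cmdlets are where the NVGRE records live. A rough sketch of what the gateway box will be juggling; every address, MAC, and VSID below is invented for illustration:

    # Register the physical NIC's provider address (values invented)
    New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
        -ProviderAddress '192.168.4.10' -PrefixLength 24

    # Tell WNV where customer 10.0.0.5 on virtual subnet 5001 really lives
    New-NetVirtualizationLookupRecord -CustomerAddress '10.0.0.5' `
        -ProviderAddress '192.168.4.10' -VirtualSubnetID 5001 `
        -MACAddress '101010101105' -Rule 'TranslationMethodEncap'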

Cloud Praxis lifehax: Disrupting the consumer cloud with my #Office365 E1 sub

E1, just like its big brothers E3 & E4, gives you real Microsoft Exchange 2013, just like the one at work. There’s all sorts of great things you can do with your own Exchange instance:

  • Practice your Powershell remoting skills (see the sketch after this list)
  • Get familiar with how Office 365 measures and applies storage settings among the different products
  • Run some decent reporting against device, browser and fat client usage
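
On that remoting bullet: your $8 E1 tenant answers to the same implicit-remoting trick the big enterprise tenants use. A minimal sketch; swap in your own admin UPN:

    # Remote into your very own Exchange 2013 instance in Office 365
    $cred = Get-Credential 'admin@mydomain.com'   # your tenant admin UPN
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
        -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $session

    # Exchange cmdlets now behave as if they were local
    Get-Mailbox | Select-Object DisplayName, ProhibitSendQuota

    # Tidy up when you're done
    Remove-PSSession $session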

But the greatest of these is Exchange public-facing, closed-membership distribution groups.

Whazzat, you ask?

Well, it’s a distribution group. With you in it. And it’s public facing. Meaning you can create your own SMTP addresses that others can send to. And then you can create Exchange-based rules that drop those emails into a folder, delete them after a certain time, run scripts against them: all sorts of cool stuff before the mail hits your device or Outlook.
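
Creating one from that same remote session is a two-step affair. A sketch with my names swapped in for illustration; the magic switch is RequireSenderAuthenticationEnabled, which is what makes the group public-facing:

    # A closed-membership distribution group with its own SMTP address
    New-DistributionGroup -Name 'General' `
        -PrimarySmtpAddress 'general@mydomain.com' `
        -Members 'me@mydomain.com'

    # Accept mail from outside the org: the public-facing part
    Set-DistributionGroup -Identity 'General' `
        -RequireSenderAuthenticationEnabled $false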

All this for my Enterprise of One, Daisetta Labs.net. For $8/month.

You might think it’s overkill to have a mighty Exchange instance all to yourself, but the ability to create a public-facing distribution group is a killer app that can help you rationalize some of your cloud hassles at home and take charge & ownership of your email, which, I argue, is akin to your birth certificate in the online services world.

My public-facing distribution groups, por ejemplo: general@ and caveat_emptor@, which you’ll meet below.

There are others, like career@, blog@ and such.

The only free service that offers something akin to this powerful feature is Microsoft’s own Outlook.com. If the prefix you want is available @outlook.com, you can create aliases that are public-facing and use them in a similar way as I do.

But that’s a big if. @outlook.com names must be running low.

Another, perhaps even better use of these public-facing distribution groups: exploiting cloud offerings that aren’t dependent on a native email service like Gmail. You can use your public-facing distribution groups to register and rationalize the family cluster’s cloud stack!


It doesn’t solve everything, true, but it goes a long way. In my case, the problem was a tough one to crack. You see, ever since the child partition emerged out of dev into the hands of a skilled QA technician, and thence, under extreme protest, into production, I’ve struggled to capture, save & properly preserve the amazing pictures & videos stored on the Supervisor Module’s iPhone 5.

Until recently, the Supe had the best camera phone in the cluster (my Lumia Icon outclasses it now). She, of course, uses Gmail, so her pics are backed up in G+, but 1) I can’t access or view them, 2) they’re downsized in the upload, and 3) AutoAwesome’s gone from cool & nifty to a bit creepy, while iCloud’s a joke (though they smartly announced family sharing yesterday, I understand).

She has the same problems accessing the pictures I take of Child Partition on the Icon. She wants them all, and I don’t share much to the social media sites.

And neither one of us wants to switch email providers.

So….

Consumer OneDrive via a Microsoft account registered with general@mydomain.com, with MFA. Checks all the boxes. I even got 100GB just for using Bing for a month.

Available on iPhone, Windows Phone, desktop, etc.? Check.

Easy to use, beautifully designed even? Check.

Can use a public-facing distribution group SMTP address for account creation? Check.

All tied into my E1 Exchange instance!

It works so well I’m using general@mydomain.com to sync Windows 8.1 between home, work & in the lab. Only thing left is to convince the Supe to use OneNote rather than Evernote.

I do the same thing with Amazon (caveat_emptor@), finance stuff, Pandora (general@), and some Apple-focused accounts; basically, anything that doesn’t require a native email account gets re-registered with an O365 public-facing distribution group.

Then I share the account credentials among the cluster, and put the service on the cluster’s devices. Now the Supe’s iPhone 5 uploads to OneDrive, which all of us can access.

So yeah. E1 & public-facing distribution groups can help soothe your personal cloud woes at home, while giving you the tools & exposure to Office 365 for #InfrastructureGlory at work.

Good stuff!