Integrating Trendnet IP Cameras with my homelab

What do you get when you take an IT Systems Engineer with more time on his hands than usual and an unfinished home project list that isn’t getting any shorter?

You get this:

Daytime
My home automation/Internet of Things ‘play’

That’s right. I’ve stood-up some IP surveillance infrastructure at my home, not because I’m a creepy Big Brother type with a God-Complex, rather:

  1. Once my 2.5-year-old son figured out how to unlock the patio door and bolt outside, well, game over boys and girls. I needed some ‘insight’ and ‘visibility’ into the Child Partition’s whereabouts pronto; chasing him while he giggles is fun for only so long
  2. My home is exposed on three sides to suburban streets, and it’s nice to be able to see what’s going on outside
  3. I have creepy Big Brother tendencies and/or a God complex

I had rather simple rules for my home surveillance project:

  • IP cameras: ain’t no CCTV/600 lines of resolution here, I wanted IP so I could tie it into my enterprise home lab
  • Virtual DVR, not physical: I already have enough hardware at home: 16 cores, 128GB of RAM, and about 16TB of storage.
  • No Wifi, Ethernet only: Wifi from the camera itself was a non-starter for me because 1) while it makes getting video from the cameras easier, it limits where I can place them from both a power & signal-strength perspective, and 2) spectrum & bandwidth are limited & noisy at distance-friendly 2.4 GHz, and wide & open at 5 GHz, but 5 GHz has half the range of 2.4. For those reasons, I went old-school: Cat5e, the Reliable Choice of Professionals Everywhere
  • Active PoE: 802.3af as I already own about four PoE injectors and I’ve already run Cat5e all over the house
  • Endpoint agnostic: In the IP camera space, it’s tough to find an agnostic camera system that will work on any end-device with as little friction as possible. ONVIF is, I suppose, the closest “standard” to that, and I don’t even know what it entails. But I know what I have: a Samsung GS6, an iPhone 6, a Windows Tiered Storage box, four Hyper-V hosts, System Center, an Xbox One and a 100 megabit internet connection.
  • Directional, no omni-PTZ required: I could have saved money on at least one corner of my house by buying a domed, movable PTZ camera rather than two directionals, but this needed to work on any endpoint, and PTZ controls often don’t

And so, over the course of a few months, I picked up four of these babies:


Trendnet TV-IP310PI

Design

I liked these cameras from the start. They’re housed in a nice, heavyweight steel enclosure, have a hood to shade the lens and just feel solid and sturdy. Trendnet markets them as outdoor cameras, and I found no reason to dispute that.

My one complaint about these cameras is the rather finicky mount. The camera can rotate and pivot within the mount’s attachment system, but you need to be careful here as an ethernet cable (inside of a shroud) runs through the mount. Twist & rotate your camera too much, and you may tear your cable apart.

And while the mount itself is steel and needs only three screws to attach, the interior mechanism that allows you to move the camera once mounted is cheaper. It’s hard to describe, and I didn’t take any pictures as I was cursing up a storm when I realized I had almost snapped the cable, so just know this: be gentle with this thing as you mount it and then as you adjust it. You only have to do that once, so take your time.

Imaging and Performance

Nighttime

Trendnet says the camera’s sensor & processing can push out 1080p at 30 frames per second, but once you get into one of these systems, you’ll notice it can also do 2560×1440, or QHD, resolution. Most of the time, images and video off the camera are buttery smooth, and it’s great.

I’m not sharp enough on video and sensors to comment on color quality, whether F 1.2 on a camera like this means the same as it would on a still DSLR, or understand IR Lux, so let me just say this: these cameras produce really sharp, detailed and wide-enough (70 degrees) images for me, day or night. Color seems right too; my lawn is various hues of brown & green thanks to the heat and the California drought, and my son’s colorful playthings that are scattered all over do indeed remind me of a clown’s vomit. And at night, I can see far enough thanks to ambient light. Trendnet claims 100-foot IR-assisted viewing at night. I see no reason to dispute that.

Let the camera geeks geek out on the camera; this is an enterprise tech blog, and I’ve already talked about the hardware, so let’s dig into the software-defined & networking bits that make this expensive project worthwhile.

Power & Networking

These cameras couldn’t be easier to connect and configure once you’ve got the power & cabling sorted out. The camera features a 10/100 Ethernet port; on all four of my cameras, that connects to one of Trendnet’s own PoE injectors. All the PoE injectors are inside my home; I’d rather extend Ethernet with power than put a fragile PoE device outside. The longest cable run is approximately 75′, well within spec. Not much more to say here other than Trendnet claims the cameras will draw 5 watts maximum, and that’s probably at night when the IR illuminators are on.

From each injector, a data cable connects to a switch. In my lab, I’ve got two enterprise-level switches.

One camera, the garage/driveway camera, is plugged into a trunked port with native VLAN 410 on my 2960S in the garage.

The other switch is a small Cisco SG300-10P, and the three other cameras connect to it. The SG300 serves the role of access-layer switch and has a 3x1GbE port-channel back to the 2960S. This switch wasn’t getting used enough in my living room, so I moved it to my home office, where all ports are now in use. Here’s my home lab environment, updated with cameras:

The Homelab as it stands today

Like any other IP cam, the Trendnet will obtain an IP from your DHCP server. Trendnet includes software with the camera that will help you find/provision the camera on your network, but I saved a few minutes and just looked in my DHCP lease table. As expected, the cameras all received a routable IP, DNS, NTP and other values from my DHCP server.

Once I had the IP, it was off to the races:

  • Set a DHCP reservation
  • Verify an A record was created in DNS so I could refer to the cameras by name rather than IP (both steps are sketched in PowerShell after this list)
  • Login, configure new password, update firmware, rename camera, turn-off UPNP, turn-off telnet
  • Adjust camera views
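
Both of those steps script cleanly; here’s a minimal PowerShell sketch, run from a box with the DhcpServer and DnsClient modules. The scope, MACs, names and addresses below are made-up placeholders, not my real ones:

```powershell
# Hypothetical cameras -- substitute your own scope, MACs and FQDNs
$cams = @(
    @{ Name = 'cam-driveway'; MAC = '00-11-22-33-44-01'; IP = '10.0.41.21' },
    @{ Name = 'cam-patio';    MAC = '00-11-22-33-44-02'; IP = '10.0.41.22' }
)

foreach ($cam in $cams) {
    # Pin each camera to a fixed address via a DHCP reservation
    Add-DhcpServerv4Reservation -ScopeId 10.0.41.0 -IPAddress $cam.IP `
        -ClientId $cam.MAC -Name $cam.Name -Description 'Trendnet TV-IP310PI'

    # Confirm the dynamically registered A record resolves
    Resolve-DnsName -Name "$($cam.Name).daisettalabs.net" -Type A
}
```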

Software bits – Server Side

Trendnet is nice enough to include a fairly robust, rebadged version of Luxriot camera software, which has two primary components: Trendnet View Pro (fat client & server app) and VMS Broadcast Server, an HTTP server. Trendnet View Pro is a server-like application that you can install on your PC to view, control, and edit all your cameras. I say server-like because this is the free version of the software, and it has the following limits:

  • Cannot run as a Windows Service
  • A user account must stay logged in to ‘keep it running’
  • You can install View Pro on as many PCs as you like, but only one is licensed to receive streaming video at a time

Upgrading the free software to a version that supports more simultaneous viewers is steep: $315 to be exact.

Smoking the airwaves with my beater kiosk PC in the kitchen. This is the TrendNet View client, limited to one viewer at a time

Naturally, I went looking for an alternative, but after dicking around with ZoneMinder & VLC for a while (both of which work but aren’t viewable on the Xbox), I settled on VMS Broadcast Server, the HTTP component of the free software.

Just like View Pro, VMS Broadcast won’t run as a service, but, well, sysinternals!

So after deliberating a bit, I said screw it and stood up a Windows 8.1 Pro VM on a node in the garage. The VM is domain-joined, which the Trendnet software either ignored or didn’t flag, and I’ve provisioned 2 cores & 2GB of RAM to serve, compress, and redistribute the streams using the Trendnet fat-client server piece as well as the VMS web server.
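
For what it’s worth, standing up a similar VM is only a few lines of Hyper-V PowerShell. A rough sketch with placeholder names and paths, not my actual build:

```powershell
# Hypothetical VM name, VHDX path and switch name -- adjust for your host
New-VM -Name 'CAMSRV01' -MemoryStartupBytes 2GB -Generation 2 `
    -NewVHDPath 'D:\VMs\CAMSRV01\CAMSRV01.vhdx' -NewVHDSizeBytes 60GB `
    -SwitchName 'vSwitch'

# Two vCPUs, matching what I gave the real VM
Set-VMProcessor -VMName 'CAMSRV01' -Count 2
Start-VM -Name 'CAMSRV01'
```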

Client Side

On that same Windows 8.1 VM, I’ve enabled DLNA-sharing on VLAN 410, which is my trusted wireless & wired internal network. The thinking here was that I could redistribute via DLNA the four camera feeds into something the XBox One would be able to show on our family’s single 48″ LCD TV in the living room via the Media App. So far, no luck getting that to work, though IE on the XBox One will view and play all four feeds from the Trendnet web server, which for the purposes of this project, was good enough for me.

Additionally, I have a junker Lenovo laptop (an 11″ IdeaPad) that I’ve essentially built into a kiosk PC for the kitchen/dining area, the busiest part of the house. This PC automatically logs in, opens the fat client and loads the layout file to view the four live feeds. And it does all this over wifi, giving instant home intel to my wife, mother-in-law, and myself as we go about our day.
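
The kiosk behavior is nothing exotic: registry auto-logon plus a startup shortcut. A rough sketch of how I’d script it; the account name and client path are placeholders, and note the password lands in the registry in plain text, so use a low-privilege local account:

```powershell
# Hypothetical local kiosk account -- the password is stored in clear text, so keep this account unprivileged
$winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
Set-ItemProperty -Path $winlogon -Name AutoAdminLogon  -Value '1'
Set-ItemProperty -Path $winlogon -Name DefaultUserName -Value 'kiosk'
Set-ItemProperty -Path $winlogon -Name DefaultPassword -Value 'NotMyRealPassword'

# Launch the camera client at logon with a saved layout file (paths are placeholders)
$startup = [Environment]::GetFolderPath('CommonStartup')
$cmd = '"C:\Program Files\TRENDnet\TRENDnetVIEW Pro\Client.exe" "C:\Kiosk\cameras.layout"'
Set-Content -Path (Join-Path $startup 'cameras.cmd') -Value $cmd
```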

Finally, both the iOS & Android devices in my house can successfully view the camera streams, not from the server, but directly (and annoyingly) from the cameras themselves.

The Impact of RTSP 1080p/30fps x 4 on Home Lab 

I knew going into this that streaming live video from four quality cameras 24×7 would require some serious horsepower from my homelab, but I didn’t realize how much.

From the compute side of things, it was indeed a lot. The Windows 8.1 VM is currently on Node2, a Xeon E3-1241v3 with 32GB of RAM.

Typically Node2’s physical CPU hovers around 8% utilization as it hosts about six VMs in total.

With the 8.1 VM serving up the streams as well as compressing them with a variable bit rate, the tax for this DIY home surveillance project was steep: Node2’s CPU now averages 16% utilized, and I’ve seen it hit 30%. The VM itself is above 90% utilization.

More utilization = more worries about thermal as Node2 sits in the garage. In southern California. In the summertime.

Ambient air temperature in my garage over the last three weeks.

Node2’s average CPU temperature varies between 22C and 36C on any given warm day in the garage (ambient air is 21C–36C). But with the 8.1 VM, Node2 has hit as high as 48C. Good thing I used some primo thermal paste!
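
To keep half an eye on that remotely, WMI exposes a thermal zone class on some boards. Treat this as a best-effort sketch: plenty of motherboards don’t populate it, and the value comes back in tenths of a degree Kelvin:

```powershell
# Best-effort temperature read; not every motherboard exposes this WMI class
Get-WmiObject -Namespace root/wmi -Class MSAcpi_ThermalZoneTemperature |
    Select-Object InstanceName,
        @{ Name = 'TempC'; Expression = { ($_.CurrentTemperature / 10) - 273.15 } }
```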


All your Part 15 FCC Spectrum are belong to me, on channel 10 at least

From the network side, results have been interesting. First, my Meraki is a champ. The humble MR18 802.11n access point doesn’t break a sweat streaming the broadcast feed from the VM to the Lenovo kiosk laptop in the kitchen. Indeed, it sustains north of 21 Mb/s, as this graph shows, without interrupting my mother-in-law’s consumption of TV broadcasts over wifi (separate SSID & VLAN, from the SiliconDust TV tuner), nor my wife’s Facebooking & Instagramming needs, nor my own tests with the Trendnet application, which interfaces with the cameras directly.

Meraki’s analysis says this makes the 2.4 GHz spectrum in my area over 50% utilized, which probably frustrates my neighbors. Someday perhaps I’ll upgrade the laptop to a 5 GHz radio.

vSwitch, the name of my converged SCVMM switch, is showing anywhere from 2 to 20 megabits of Tx/Rx for the server VM. Pretty impressive performance for a software switch!

Storage-wise, I love that the Trendnets can mount an SMB share, and I’ve been saving snapshots of movement to one of the SMB shares on my Windows SAN box.
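
Carving out a share for the snapshots is a couple of lines of PowerShell on the storage box. A sketch, assuming a dedicated, low-privilege account for the cameras; the path and account name are illustrative, not gospel:

```powershell
# Hypothetical path and service account for the cameras
New-Item -Path 'D:\Shares\CamSnapshots' -ItemType Directory -Force

New-SmbShare -Name 'CamSnapshots' -Path 'D:\Shares\CamSnapshots' `
    -ChangeAccess 'DAISETTALABS\svc-cameras'
```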

I am also using Trendnet’s email alerting feature to take snapshots and email them to me whenever there’s motion in a given area. Which is happening a lot now that my 2-year-old walks up to the cameras, smiles and says “Say cheeeese!”

All in all, a tidy & fun sub-$1000 project!

More than good hygiene: applying a proper cert to my Nimble array

So one of my main complaints about implementing a cost-effective Nimble Storage array at my last job was this:

Who is Jetty Mortbay and why does he want inside my root CA store?

I remarked back in April about this unfortunate problem in a post about an otherwise-flawless & easy Nimble implementation:

The SSL cert situation is embarrassing and I’m glad my former boss hasn’t seen it. Namely that situation is this: you can’t replace the stock cert, which, frankly looks like something I would do while tooling around with OpenSSL in the lab.

I understand this is fixed in the new 2.x OS version but holy shit what a fail.

Well, fail-file no more, because my new Nimble array at my current job has been measured and validated by the CA Gods:

Green padlocks. I want green padlocks everywhere

Oh yeah baby. Validated in Chrome, Firefox and IE. And it only cost me market rates for a SAN certificate from a respected CA, a few hours back ‘n forth with Nimble, and a few IT MacGyver-style tricks to get this outcome.
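
If you want to double-check what the array actually presents on the wire, rather than trusting a browser padlock, a hedged PowerShell/.NET sketch like this pulls the cert straight off port 443. The array FQDN is a placeholder, and the validation callback deliberately accepts anything so you can inspect even an untrusted cert:

```powershell
# Hypothetical array FQDN -- substitute your own
$arrayHost = 'nimble01.corp.example.com'

$tcp = New-Object System.Net.Sockets.TcpClient($arrayHost, 443)

# Accept any certificate so we can inspect it even if it isn't trusted yet
$callback = [System.Net.Security.RemoteCertificateValidationCallback] { $true }
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, $callback)
$ssl.AuthenticateAsClient($arrayHost)

# Wrap the raw cert and show the fields we care about
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $ssl.RemoteCertificate
$cert | Format-List Subject, Issuer, NotBefore, NotAfter, Thumbprint

$ssl.Dispose(); $tcp.Close()
```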

Now look. I know some of my readers are probably seeing this and thinking…”But that proves nothing. A false sense of security you have.”

Maybe you’re right, but consider.

I take a sort of Broken Windows Theory approach to IT. The Broken Windows Theory, if you’re not familiar with it, states that:

Under the broken windows theory, an ordered and clean environment – one which is maintained – sends the signal that the area is monitored and that criminal behavior will not be tolerated. Conversely, a disordered environment – one which is not maintained (broken windows, graffiti, excessive litter) – sends the signal that the area is not monitored and that one can engage in criminal behavior with little risk of detection.

Now I’m not saying that adding a proper certificate to my behind-the-firewall Nimble array so that Chrome shows me Green Padlocks rather than scary warnings is akin to reducing violent crime in urban areas. But I am saying that little details, such as these, ought to be considered and fixed in your environment.

Why? Well, somehow fixing even little things like this amounts to something more than just good hygiene, something more than just ‘best practice.’

Ultimately, we infrastructurists are what we build, are we not? Even little ‘security theater’ elements like the one above are a reflection on our attention to detail, a validation of our ability to not only design a resilient infrastructure on paper at the macro level, but to execute on that design to perfection at the micro level.

It shows we’re not lazy as well, that we care to repair the ‘broken windows’ in our environment.

And besides: Google (and Microsoft & Mozilla & Apple) are right to call out untrusted certificates in increasingly disruptive & work-impairing ways.

*If you’re reading this and saying: Why don’t you just access the array via IP address, well, GoFQDNorGoHomeSon.com

Going full Windows 10 Server in the Lab, part 1

So many new goodies in Windows Server 10.

So little time to enjoy them.

Highlights so far:

  • Command line transparency is awesome. Want the same in my Powershell windows
  • Digging the flat look of my windows when they are piled atop one another. There’s a subtle 3D effect (really muted shadows, I think) that helps to highlight window positions and focus. Nice work, UI team
  • Server 10 without Desktop mode looks just about 100% like Server 2012 R2. So yeah, if you’re using your PC as a server, definitely install the Desktop mode

On the agenda for today:

  • Build what has to be one of the few Windows Server 10 Hyper-V clusters out there (see the sketch after this list)
  • Install the new VMM & System Center
  • Test out the new Network Controller role on a 1U AMD-powered server I’ve had powered off but ready for just this moment (I never got around to building a Server 2012 R2 Network Virtualization Gateway server)
  • Maybe, just maybe, upgrade the two Domain Controllers and raise the forest/domain functional level to “Technical Preview”, if that’s even possible.
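
For reference, the cluster-build steps I expect to run look just like they would on 2012 R2. A minimal sketch with placeholder node and cluster names, and no guarantee the Technical Preview behaves:

```powershell
# Hypothetical node/cluster names -- and this is a Technical Preview, so caveat emptor
$nodes = 'TP-NODE1', 'TP-NODE2'

# Add the roles on both nodes (reboots them)
Invoke-Command -ComputerName $nodes {
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart
}

# Validate, then form the cluster
Test-Cluster -Node $nodes
New-Cluster -Name 'TPCLUSTER' -Node $nodes -StaticAddress '10.0.10.50'
```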

What won’t be upgraded in the short term:

  • San.daisettalabs.net, the Tiered Storage box that hosts my SMB 3 shares as well as several iSCSI .vhdx drives
  • The VM hosting SQL 2012 SP2, IPAM, and other roles
  • The TV computer, which is running Windows 8.1 Professional with Media Center Edition. Yes, it’s a lab, but even in a lab environment, television access is considered mission critical

More later.

Tales from the Hot Lane

A few brief updates & random thoughts from the last few days on all the stuff I’ve been working on.

Refreshing the Core at work: Summer’s ending, but at work, a new season is advancing, one rack unit at a time. I am gradually racking up & configuring new compute, storage, and network as it arrives; It Is Not About the Hardware™, but since you were wondering: 64 Ivy Bridge cores and about 512GB RAM, 30TB of storage, and Nexus 3k switching.

Ahh, the Nexus line. Never had the privilege to work on such fine switching infrastructure. Long-time admirer, first-time NX-OS user. I have a pair of them plus a Layer 3 license, so the long-term thinking involves not just connecting my compute to my storage, but connecting this dense stack northbound & out via OSPF or static routes over a fault-tolerant HSRP or VRRP config.

To do that, I need to get familiar with some Nexus-flavored acronyms that aren’t familiar to me: virtual port channels (vPC), Control Plane Policing (CoPP), VRF, and oh-so-many-more. I’ll also be attempting to answer the question once and for all: what spanning tree mode does one use to connect a Nexus switch to a virtualization host running Hyper-V’s converged switching architecture? I’ve used portfast in the lab on my Catalyst, but the lab switch is five years old, whereas this Nexus is brand new. And portfast never struck me as the right answer, just the easy one.

To answer those questions and more, I have TAC and this excellent tome provided gratis by the awesome VAR who sold us much of the equipment.

Into the vCPU Blender goes Lync: Last Friday, I got a call from my former boss & friend who now heads up a fast-growing IT department on the coast. He’s been busy refreshing & rationalizing much of his infrastructure as well, but as is typical for him, he wants more. He wants total IT transformation, so as he’s built out his infrastructure, he laid the groundwork to go 100% Microsoft Lync 2013 for voice.

Yeah baby. Lync 2013 as your PBX, delivering dial tone to your endpoints, whether they are Bluetooth-connected PC headsets, desk phones, or apps on a mobile.

Forget software-defined networking. This is software-defined voice & video, with no special server hardware, cloud services, or any of the other typical expensive nonsense you’d see in a VoIP implementation.

If Lync 2013 as PBX is not on your IT Bucket List, it should be. It was something my former boss & I never managed to accomplish at our previous employer on Hyper-V.

Now he was doing it alone. On a fast VMware/Nexus/NetApp stack with distributed vSwitches. And he wanted to run something by me.

So you can imagine how pleased I was to have a chat with him about it.

He was facing one problem which threatened his Go Live date: Mean Opinion Score, or MOS, a simple 0-5 score Lync provides to its administrators that summarizes call quality. MOS is a subset of a hugely detailed Media Quality Summary Report, detailed here at TechNet.

My friend was scoring a .6 on his MOS. He wanted it to be at 4 or above prior to go-live.

So at first we suspected QoS tags were being stripped somewhere between his endpoint device and the Lync Mediation VM. Sure enough, Wireshark proved that out; a Distributed vSwitch (or was it a Nexus?) wasn’t respecting the tag, resulting in a sort of half-duplex QoS if you will.

He fixed that, ran the test again, and still: .6. Yikes! Two days to go live. He called again.

That’s when I remembered the last time we tried to tackle this together. You see, the Lync Mediation Server is sort of the real PBX component in Lync Enterprise Voice architecture. It handles signalling to your endpoints, interfaces with the PSTN or a SIP trunk, and is the one server workload that, even in 2014, I’d hesitate making virtual.

My boss had three of them. All VMs on three different VMware hosts across two sites.

I dug up a Microsoft whitepaper on virtualizing Lync, something we didn’t have the last time we tried this. While Redmond says Lync Enterprise Voice on top of VMs can work, it’s damned expensive from a virtualization host perspective. MS advises:

  • You should disable hyperthreading on all hosts.
  • Do not use processor oversubscription; maintain a 1:1 ratio of virtual CPU to physical CPU.
  • Make sure your host servers support nested page tables (NPT) and extended page tables (EPT).
  • Disable non-uniform memory access (NUMA) spanning on the hypervisor, as this can reduce guest performance.

Talk about Harshing your vBuzz. Essentially, building Lync out virtually with Enterprise Voice forces you to go sparse on your hosts, which is akin to buying physical servers for Lync. If you don’t, into the vCPU blender goes Lync, and out comes poor voice quality, angry users, bitterness, regret and self-punishment.
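
His stack is VMware, so his reservations were set in vSphere, but for the Hyper-V crowd the rough equivalent would look something like this. The VM name and sizes are illustrative only:

```powershell
# Hypothetical Mediation Server VM -- pin its vCPUs and memory so it never fights for resources
Set-VMProcessor -VMName 'LYNC-MED01' -Count 4 -Reserve 100
Set-VMMemory    -VMName 'LYNC-MED01' -DynamicMemoryEnabled $false -StartupBytes 16GB
```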

Anyway, he did as advised, put some additional vCPU & memory reservations in place on his hosts, and yesterday, whilst I was toiling in the Hot Lane, he called me from Lync via his mobile.

He’s a married man just like me, but I must say his voice sounded damn sexy as it was sliced up into packets, sent over the wire, and converted back to analog on my mobile’s speaker. A virtual chest bump over the phone was next, then we said goodbye.

Another Go Live Victory (by proxy). Sweet.

Azure Outage: Yesterday’s bruising, hours-long global Azure outage affected Virtual Machines, storage blobs, web services, database services and HDInsight, Microsoft’s service for big data crunching. As it unfolded, I navel-gazed, wishing I could help. There was literally nothing I could do. Had I some crucial IaaS or PaaS in the Azure stack, I’d be shit out of luck, just like the rest. I felt quite helpless; refreshing Mary Jo’s page and the Azure dashboard didn’t help. I wondered what the problem was; it’s been a difficult week for Microsofties, whether on-prem or in Azure. Had to be related to the update cycle, I thought.

On the plus side, Azure Active Directory services never went down, nor did several other services. Office 365 stayed up as well, though it is built atop separate-but-related infrastructure in my understanding.

Lastly, I pondered two thoughts: if you’re thinking of reducing your OpEx by replacing your DR strategy with an Azure Site Recovery strategy, does this change your mind? And if you’re building out Azure as your primary IaaS or PaaS, do you just accept such outages or do you plan a failback strategy?

Labworks : Towards a 100% Windows-defined Daisetta Lab: What’s next for the Daisetta Lab? Well, I have me an AMD Duron CPU, a suitable motherboard, a 1U enclosure with PSU, and three Keepin’ it RealTek NICs. Oh, I also have a case of the envies, envies for the VMware crowd and their VXLAN and NSX and of course VMworld next week. So I’m thinking of building a Network Virtualization Gateway appliance. For those keeping score at home, that would mean from Storage to Compute to Network Edge, I’d have a 100% Windows lab environment, infused with NVGRE which has more use cases than just multi-tenancy as I had thought.

Stack Builders ‘R Us

This is a really lame but (IMHO) effective drawing of what I think of as a modern small/medium business enterprise ‘stack’:

(diagram: the stack)

As you can see, just about every element of a modern IT is portrayed.

Down at the base of the pyramid, you got your storage. IOPS, RAID, rotational & ssd, snapshots, dedupes, inline compression, site to site storage replication, clones and oh me oh my…all the things we really really love are right here. It’s the Luntastic layer and always will be.

Above that, your compute & Memory. The denser the better, 2U Pizza Boxes don’t grow on trees and the business isn’t going to shell out more $$$ if you get it wrong.

Above that, we have what my networking friends would call the “Underlay network.” Right. Some cat 6, twinax, fiber, whatever. This is where we push some packets, whether to our storage from our compute, northbound out to the world, southbound & down the stack, or east/west across it. Leafs, spines, encapsulation, control & data planes, it’s all here.

And going higher -still in Infrastructure Land mind you- we have the virtualization layer. Yeah baby. This is what it’s all about, this is the layer that saved my career in IT and made things interesting again. This layer is designed to abstract all that is beneath it with two goals in mind: cost savings via efficiency gains & ease of provisioning/use.

And boy, has this layer changed the game, hasn’t it?

So if you’re a virtualization engineer like I am, maybe this is all you care about. I wouldn’t blame you. The infrastructure layer is, after all, the best part of the stack, the only part of the stack that can claim to be #Glorious.

But in my career, I always get roped in (willingly or not) into the upper layers of the stack. And so that is where I shall take you, if you let me.

Next up, the Platform layer. This is the layer where that special DBA in your life likes to live. He optimizes his query plans atop your Infrastructure layer, and though he is old-school in the ways of storage, he’s learned to trust you and your fancy QoS .vhdxs, or your incredibly awesome DRS fault-tolerant vCPUs.

Or maybe you don’t have a DBA in your Valentine’s card rotation. Maybe this is the layer at which the devs in your life, whether they are running Eclipse or Visual Studio, make your life hell. They’re always asking for more x (x= memory, storage, compute, IP), and though they’re highly-technical folks, their eyes kind of glaze over when you bring up NVGRE or VXLAN or Converged/Distributed Switching or whatever tech you heart at the layer below.

Then again, maybe you work in this layer. Maybe you’re responsible for building & maintaining session virtualization tech like RDS or XenApp, or maybe you maintain file shares, web farms, or something else.

Point is, the people at this layer are platform builders. To borrow from the automotive industry, platform guys build the car that travels on the road infrastructure guys build. It does no good for either of us if the road is bumpy or the car isn’t reliable, does it? The user doesn’t distinguish between ‘road’ and ‘car’, do they? They just blame IT.

Next up: software & service layer. Our users exist here, and so do we. Maybe for you this layer is about supporting & deploying Android & iPhone handsets and thinking about MDM. Or maybe you spend your day supporting old-school fat client applications, or pushing them out.

And finally, now we arrive to the top of the pyramid. User-space. The business.

This is where (and the metaphor really fits, doesn’t it?) the rubber meets the road, ladies and gentlemen. It’s where the business user drives the car (platform) on the road (infrastructure). This is where we sink or swim, where wins are tallied and heroes made, or careers are shattered and the cycle of failure>begets>blame>begets>fear>begets failure begins in earnest.

That’s the stack. And if you’re in IT, you’re in some part of that stack, whether you know it or not.

But the stack is changing. I made a silly graphic for that too. Maybe tomorrow.

Respect my Certificate Authoritah

Fellow #VFD3 Delegate and Chicago-area vExpert Eric Shanks has recently posted two great pieces on how to setup an Active Directory Certificate Authority in your home lab environment.

Say what? Why would you want the pain of standing up some certificate & security infrastructure in your home lab?

Eric explains:

Home Lab SSL Certificates aren’t exactly a high priority for most people, but they are something you might want to play with before you get into a production environment.

Exactly.

Security & Certificate infrastructure are a weak spot in my portfolio so I’ve been practicing/learning in the Daisetta Lab so that I don’t fail at work. Here’s how:

As I was building out my lab, I knew three things: I wanted a routable Fully Qualified Domain Name for my home lab, I was focused on virtualization but should also practice for the cloud and things like ADFS, and I wanted my lab to be as secure as possible (death to port 80 & NTLM!)

With those loose goals in mind, I decided I wanted Daisetta Labs.net to be legit. To have some Certificate Authority bonafides…to get some respect in the strangely federated yet authoritarian world of certificate authorities, browser and OS certificate revocations, and yellow Chrome browser warning screens.

Too legit, too legit to quit

So I purchased a real wildcard SSL certificate from a real Certificate Authority back in March. It cost about $96 for one year, and I don’t regret it at all because I’m using it now to secure all manner of things in Active Directory, and I’ll soon be using it as Daisetta Labs.net on-prem begins interfacing with DaisettaLabs.net in Azure (it already is, via Office 365 DirSync, but I need to get to the next level and the clock is ticking on the cert).

Building on Eric’s excellent posts, I suggest to any Microsoft-focused IT Pros that you consider doing what I did. I know it sucks to shell out money for an SSL certificate, but labwork is hard so that work-work isn’t so hard.

So, go follow Eric’s outline, buy a cert, wildcard or otherwise (I got mine at Comodo; there’s also an Israeli CA that gives out SSL certs for free, but it’s a drawn-out process), and stand up a subordinate CA (as opposed to an on-prem-only Root CA) and get your 443 on!

Man it sucks to get something so fundamentally wrong. Reader Chris pointed out a few inaccuracies and mistakes about my post in the comments below.

At first I was indignant, then thoughtful & reflective, and finally resigned. He’s right. I built an AD Root Certificate Authority in the lab, not a subordinate, as that would be absurd.

Admittedly, I’m not strong in this area. Thanks to Chris for his coaching, and I regret it if I misled anyone.
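
For anyone following along at home, the corrected version of what I built (an Enterprise Root CA on a domain-joined server) boils down to a couple of cmdlets. The CA name and validity period below are just my illustrative choices:

```powershell
# Add the AD CS role, then stand up an Enterprise Root CA
Install-WindowsFeature -Name ADCS-Cert-Authority -IncludeManagementTools

Install-AdcsCertificationAuthority -CAType EnterpriseRootCa `
    -CACommonName 'DaisettaLabs-Root-CA' `
    -ValidityPeriod Years -ValidityPeriodUnits 10
```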

Meet my new Storage Array

So the three of you who read this blog might be wondering why I haven’t been posting much lately.

Where’s Jeff, the cloud praxis guy & Hyper-V fanboy, who says IT pros should practice their cloud skills? you might have asked.

Well, I’ll tell you where I’ve been. One, I’ve been working my tail off at my new job where Cloud Praxis is Cloud Game Time, and two, the Child Partition, as adorable and fun as he is, is now 19 months old, and when he’s not gone down for a maintenance cycle in the crib, he’s running Parent Partition and Supervisor Module spouse ragged, consuming all CPU resources in the cluster. Wow that kid has some energy!

Yet despite that (or perhaps because of that), I found some time to re-think my storage strategy for the Daisetta Lab.

Recall that for months I’ve been running a ZFS array atop a simple NAS4Free instance, using the AMD-powered box as a multi-path iSCSI target for Cluster Shared Volumes. But continuing kernel-on-iSCSI-target-service homicides, a desire to combine all my spare drives & resources into a new array, and a vacation-time cash infusion following my exit from the last job led me to build this for only about $600 all-in:

Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. There’s some serious storage utopia up in the Daisetta Lab

Here are some superlatives and other interesting curios about this new box:

  • It was born on the 4th of July, just like ‘Merica, and is as big, loud, ostentatious and overbearing as ‘Merica itself
  • I would name it ‘Merica.daisettalabs.net if the OS would accept it
  • It’s a real server. With a real Supermicro X10SAT server/workstation board. No more hacking Intel .inf files to get server-quality drivers
  • It has a real server SAS card, an LSI 9218i something or other with SAS-SATA breakout cables
  • It doesn’t make me choose between file or block storage, and is object-storage curious. It can even do NFS or SMB 3…at the same time.
  • It does ex post facto dedupe -the old model- rather than the new hot model of inline dedupe and/or compression, which makes me resent it, but only a little bit
  • It’s combining three storage chipsets -the LSI card, the Supermicro’s Intel C226, and an ASMedia 1061- into one software-defined logical system. It’s abstracting all that hardware away using pools, similar to ZFS, but in a different, more sublime & elegant way (a rough sketch of the pattern follows this list).
  • It doesn’t have the ARC –ie RAM AS STORAGE– which makes me really resent it, but on the plus side, I’m only giving it 12GB of RAM and now have 16GB left for other uses.
  • It has 16 data disks: 12 rotational drives (6x1TB 5400 RPM & 6x2TB 7200 RPM) and four SSDs (3x256GB Samsung 840 EVO & 1x128GB Samsung 830), plus a boot drive (a 32GB SanDisk ReadyCache drive re-purposed as a general SSD)
  • Total RAW capacity: nearly 19TB. Usable? I’ll let you know. Asking “Do I need that much?” is like asking “Does ‘Merica need to stretch from Sea to Shining Sea?” No, I don’t, but yes, ‘Merica does. But I had these drives in stock, as it were, so why not?
  • It uses so much energy & power that it has, in just a few days, erased any greenhouse gas savings I’ve made driving a hybrid for one year. Sorry Mother Earth, looks like I’m in your debt again
  • But seriously, under load, it’s hitting about 310 watts. At idle, 150w. Not bad all things considered. Haswell + full C states & PCIe power management work.
  • It’s built as a veritable wind tunnel as it lives in the garage. In Southern California. And it’s summer. Under load, the CPU hits about 65C and the southbridge flirts with 80C, but it’s stable.
  • It has six, yes, six, 1GbE Intel NICs. Two are on the motherboard, and I’m using a 4 port PCIe 2 card. And of course, I’ve enabled Jumbo Frames. I mean do you have to even ask at this point?
  • It uses virtual disks. Into which you can put other virtual disks. And even more virtual disks inside those virtual disks. It’s like Christopher Nolan designed this storage archetype while he wrote Inception…virtual disk within virtual disk within virtual disk. Sounds dangerous, but in the Daisetta Lab, Who Dares Wins!
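
To make that pool-and-virtual-disk idea concrete, here’s a rough Windows Storage Spaces sketch of the pattern. Pool, tier names and sizes are placeholders, not my exact layout:

```powershell
# Gather every disk that can join a pool and find the Storage Spaces subsystem
$disks = Get-PhysicalDisk -CanPool $true
$sub   = Get-StorageSubSystem | Where-Object FriendlyName -like '*Storage Spaces*'

# One big pool abstracting the LSI, Intel and ASMedia-attached disks
New-StoragePool -FriendlyName 'MericaPool' -StorageSubSystemUniqueId $sub.UniqueId `
    -PhysicalDisks $disks

# Tier the SSDs and the spinners, then carve a mirrored, tiered virtual disk from both
$ssd = New-StorageTier -StoragePoolFriendlyName 'MericaPool' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'MericaPool' -FriendlyName 'HDDTier' -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'MericaPool' -FriendlyName 'CSV1' `
    -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd `
    -StorageTierSizes 200GB, 2TB
```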

So yeah. That’s what I’ve been up to. Geeking out a little bit like a gamer, but simultaneously taking the next step in my understanding, mastery & skilled manipulation of a critical next-gen storage technology I’ll be using at work soon.

Can you guess what that is?

Stay tuned. Full reveal & some benchmarks/thoughts tomorrow.

 

 

Cloud Praxis lifehax: Disrupting the consumer cloud with my #Office365 E1 sub

E1, just like its big brothers E3 & E4, gives you real Microsoft Exchange 2013, just like the one at work. There’s all sorts of great things you can do with your own Exchange instance:

  • Practice your Powershell remoting skills
  • Get familiar with how Office 365 measures and applies storage settings among the different products
  • Run some decent reporting against device, browser and fat client usage

But the greatest of these is Exchange public-facing, closed-membership distribution groups.

Whazzat, you ask?

Well, it’s a distribution group. With you in it. And it’s public-facing, meaning you can create your own SMTP addresses that others can send to. And then you can create Exchange-based rules that drop those emails into a folder, delete them after a certain time, run scripts against them, all sorts of cool stuff before they hit your device or Outlook.
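
Here’s roughly what that looks like from a remote PowerShell session against Exchange Online, circa 2014. The addresses and group name are illustrative; swap in your own:

```powershell
# Connect to Exchange Online remote PowerShell (the classic 2014-era method)
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# A public-facing distribution group: outsiders can send to it, only I receive it
New-DistributionGroup -Name 'General' -PrimarySmtpAddress 'general@mydomain.com' `
    -Members 'me@mydomain.com'
Set-DistributionGroup -Identity 'General' -RequireSenderAuthenticationEnabled $false
```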

All this for my Enterprise of One, Daisetta Labs.net. For $8/month.

You might think it’s overkill to have a mighty Exchange instance for yourself, but your ability to create a public-facing distribution group is a killer app that can help you rationalize some of your cloud hassles at home and take charge & ownership of your email, which I argue, is akin to your birth certificate in the online services world.

My public facing distribution groups, por ejemplo:

(screenshot: my public-facing distribution groups)

There are others, like career@, blog@ and such.

The only free service that offers something akin to this powerful feature is Microsoft’s own Outlook.com. If the prefixed email address is available @outlook.com, you can create aliases that are public-facing and use them in a similar way as I do.

But that’s a big if. @outlook.com names must be running low.

Another, perhaps even better use of these public-facing distribution groups: exploiting cloud offerings that aren’t dependent on a native email service like Gmail. You can use your public-facing distribution groups to register and rationalize the family cluster’s cloud stack!


It doesn’t solve everything, true, but it goes a long way. In my case, the problem was a tough one to crack. You see, ever since the child partition emerged out of dev, into the hands of a skilled QA technician, and thence, under extreme protest, into production, I’ve struggled to capture, save & properly preserve the amazing pictures & videos stored on the Supervisor Module’s iPhone 5.

Until recently, Supe had the best camera phone in the cluster (My Lumia Icon outclasses it now). She, of course, uses Gmail so her pics are backed up in G+, but 1) I can’t access them or view them, 2) they’re downsized in the upload and 3) AutoAwesome’s gone from being cool & nifty to a bit creepy while iCloud’s a joke (though they smartly announced family sharing yesterday, I understand).

She has the same problems accessing the pictures I take of Child Partition on the Icon. She wants them all, and I don’t share much to the social media sites.

And neither one of us want to switch email providers.

So….

Consumer OneDrive via Microsoft account registered with general@mydomain.com with MFA. Checks all the Boxes. I even got 100GB just for using Bing for a month

Available on iPhone, Windows phone, desktop, etc? Check.

Easy to use, beautifully designed even? Check

Can use a public-facing distribution group SMTP address for account creation? Check

All tied into my E1 Exchange instance!

It works so well I’m using general@mydomain.com to sync Windows 8.1 between home, work & in the lab. Only thing left is to convince the Supe to use OneNote rather than Evernote.

I do the same thing with Amazon (caveat_emptor@), finance stuff, Pandora (general@), some Apple-focused accounts, basically anything that doesn’t require a native email account, I’ll re-register with an O365 public-facing distribution group.

Then I share the account credentials among the cluster, and put the service on the cluster’s devices. Now the Supe’s iPhone 5 uploads to OneDrive, which all of us can access.

So yeah. E1 & public-facing distribution groups can help soothe your personal cloud woes at home, while giving you the tools & exposure to Office 365 for #InfrastructureGlory at work.

Good stuff!

Cloud Praxis #4 : Syncing our Dir to Office 365

The Apollo-Soyuz metaphor is too rich to resist. With apologies to NASA, astronauts & cosmonauts everywhere

Right. So if you’ve been following me through Cloud Praxis #1-3 and took my advice, you now have a simple Active Directory lab on your premises (Wherever that may be) and perhaps you did the right thing and purchased a domain name, then bought an Office 365 Enterprise E1 subscription for yourself. Because reading about contoso.com isn’t enough.

What am I talking about, “if”? I know you did just what I recommended you do. I know because you’re with me here, working through the Cloud Praxis program because you, like me, are an IT Infrastructurist who likes to win! You are a fellow seeker of #InfrastructureGlory, and you will pursue that ideal wherever it is: on-prem, hybrid, in the cloud, buried in a signed cmdlet, on your hybrid iSCSI array or deep inside an NVGRE-encapsulated packet, somewhere up in the Overlay.

Right. Right?

Someone tell me I’m not alone here.

You get there through this thing.

So DirSync. Or Directory Synchronization. In the grand Microsoft tradition of product names, DirSync has about the least sexy name possible. Imagine yourself as a poor Microsoft technology reseller; you’ve just done the elevator pitch for the Glories that are to be had in Office 365 Enterprise & Azure, and your mark is interested and so he asks:

Mark: “How do I get there?”

Sales guy: “DirSync”

Mark: “Pardon me?”

Sales Guy: “DirSync.”

Mark: “Are you ok? Your voice is spasming or something. Is there someone I can call?”

DirSync has been around for a long, long time. I hadn’t even heard of it or considered the possibility of using it until 2012 or 2013, but while prepping the Daisetta Lab, I realized this goes back to 2008 & Microsoft Online Services.

But today, in 2014, it’s officially called Windows Azure Active Directory Sync, and though I can’t wait to GifCam you some cool powershell cmdlets that show it in action, we’ve got some prep work to do first.

Lab Prep for DirSync

As I said in Cloud Praxis #3, to really simulate your workplace, I recommend you build your on prem lab AD with a fully-routable domain name, then purchase that same name from a registrar on the internet. I said in Cloud Praxis #2 that you should have a lab computer with 16GB of RAM and you should expect to build at least two or three VMs using Client Hyper-V at the minimum.

Now’s the time to firm this all up and prep our lab. I know you’re itching to get deep into some O365, but hang on and do your due diligence, just like you would at work.

  • Lab DHCP: What do you have as your DHCP server? If it’s a consumer-level wifi router that won’t let you assign an FQDN to your devices, consider ditching it for DHCP and stand up a DHCP instance on your lab Domain Controller. Your wife will never know the difference, and you can ensure 1) that your VMs (whether 1 or 2 or several) get the proper FQDN suffix assigned, and 2) that you can disable NetBIOS via MS DHCP
  • Get your on-prem DNS in order: This is the time to really focus on your lab DNS. I want you to test everything; make some A-records, ensure your PTRs are created automatically. Create some C-Names and test forwarding. Download a tool like Steve Gibson’s DNS Benchmark to see which public name servers are the closest to you and answer the quickest. For me, it’s Level 3. Set your forwarders appropriately. Enable logging & automatic testing
  • Build a second DC: Not strictly required, but best practice & wisdom dictate you do this ahead of DirSync. Do what I did: go with a Windows Core VM for your second DC. That VM will only need 768MB of RAM or so and a 15GB .vhdx, but with it you will have a healthier domain on-prem (promotion is sketched after this list)
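
Promoting that Core VM is only a couple of cmdlets once it can reach the domain. A sketch; the domain name matches my lab (swap in yours), and the cmdlet will prompt you for a DSRM password:

```powershell
# Run on the new Server Core VM -- promotes it to an additional DC with DNS
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

Install-ADDSDomainController -DomainName 'daisettalabs.net' -InstallDns `
    -Credential (Get-Credential 'DAISETTALABS\administrator')
```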

Now over to O365 Enterprise portal. Read the official O365 Induction Process as I did, then take a look at the steps/suggestions below. I went through this in April; it’s easy, but the official guides leave out some color.

Office 365 Prep & Domain Port ahead of DirSync

  • Go to your registrar and verify to Microsoft that you own the domain via a TXT record: process here
  • Pick from the following options for DNS and read this:
    • Easy but not realistic: Just handover DNS to O365. I took the easy way admittedly. Daisetta Labs.net DNS is hosted by O365. It’s decent as DNS hosting goes, but I wouldn’t have chosen this option for my workplace as I use an Anycast DNS service that has fast CDN Propagation globally
    • More realistic: Create the required A Records, C Names, TXT and SRV records at your registrar or DNS host and point them where Microsoft says to point them
    • Balls of Steel Option: Put your Lab VM in your DMZ, harden it up, point the registrar at it and host your own DNS via Windows baby. Probably not advisable from a residential internet connection.
  • Keep your .onmicrosoft.com account for a week or two: Whether you’re starting out in O365 at work or just to learn the system like I did, you’ll need your first O365 account for a few days as the domain name porting process is a 24-36 hour process. Don’t assign your E1 licenses to your @domain.com account just yet.
  • I wouldn’t engage MFA just yet…let things settle before you turn on Multifactor authentication. Also be sure your backup email account (The oh shit account Microsoft wants you to use that’s not associated with O365) is accessible and secure.
  • Exchange on-prem to hybrid: If you are simulating Exchange on-prem to hybrid for this exercise, you’ll have more steps than I did. Sadly, I had to give O365 the easy way out and selected “Fresh Start” in the process (fresh start because I couldn’t build out an Exchange lab :sadface:).

  • Proceed with the standard O365 wizard setups, but halt at OnRamp: I’m happy to see the Wizard configuration method is surviving in the cloud. Setting all this up won’t take long; the whole portal is pretty easy & obvious until you get to Sharepoint stuff.

Total work here is a couple of hours. I can’t stress enough how important your lab DNS & AD health are. You need to be rock solid in replication between your DCs, your DNS should be fast & reliably return accurate results, and you should have a good handle on your lab replication topology, a proper Sites & Services setup, and a dialed-in Group Policy and OU structure.

Daisetta Labs.net looks like this:

(screenshot: the Daisetta Labs.net Active Directory layout)

and dcdiag /e & repadmin show no errors.
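
For the record, these are the built-in checks I lean on before trusting the directory to DirSync:

```powershell
# AD health sanity checks before syncing anything to the cloud
dcdiag /e /q            # run all tests against every DC, show only errors
repadmin /replsummary   # replication summary across the forest
repadmin /showrepl      # per-partition replication status
```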

Final Steps before DirSync Blastoff

  • With a healthy domain on-prem, you now need to create some A records, CNAMEs and TXT records so Lync, Outlook, and all your other fat clients dependent on Exchange, Sharepoint and such know where to go. This is quite important; at work, you’ll run into this exact same situation. Getting this right is a big chunk of the reason why we chose a routable domain, and why we’re doing this whole Cloud Praxis thing in the first place: so our users have an enjoyable and hassle-free transition to O365
  • Follow the directions here. Not as hard as it sounds. For me it went very smoothly. In fact, the O365 Enterprise portal gives you everything you need in the Domain panel, provided you’ve waited about 36 hours after porting your domain. Here’s what mine looks like on-prem after manually creating the records.

(screenshot: the on-prem DNS records created for Office 365)
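
If your lab DNS lives on a Windows DNS server, the same records can be scripted with the DnsServer module. A partial sketch: the autodiscover/lyncdiscover/sip targets are the generic ones Microsoft published at the time, and your tenant’s exact values (plus the MX, SPF TXT and SRV records) come from the O365 domain panel:

```powershell
# Placeholder zone -- pull the exact targets from your own O365 domain setup panel
$zone = 'daisettalabs.net'

Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'autodiscover' `
    -HostNameAlias 'autodiscover.outlook.com'
Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'lyncdiscover' `
    -HostNameAlias 'webdir.online.lync.com'
Add-DnsServerResourceRecordCName -ZoneName $zone -Name 'sip' `
    -HostNameAlias 'sipdir.online.lync.com'

# The MX, SPF TXT and SRV records follow the values the O365 portal gives you
```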

And that’s it. We’re ready to sync our Dirs to O365’s Dirs, to get a little closer to #InfrastructureGlory. On one side: your on-prem AD stack, on the launch pad, in your lab, ready for liftoff.

Sure, it’s a little hare-brained, admittedly, but if you’re like me, this is how you learn. And I’m learning. Aren’t you?

On the other launch pad, Office 365. Superbly architected by some Microsoft engineers, no longer joke-worthy like it was in the BPOS days, a place your infrastructure is heading to whether you like it or not.

I want you to be there ahead of all the other guys, and that’s what Cloud Praxis is all about: staying sharp on this cloud stack so we can keep our jobs and find #InfrastructureGlory.

DirSync is the first step here, and I’ll show you it on the next Cloud Praxis. Thanks for reading!

Cloud Praxis #3 : Office 365, Email and the best value in tech

Email.

What’s the first thought that comes to your mind when you read that word?

I seek #ExchangeGlory !! said no IT blogger ever

If you’re in IT in the Microsoft space, maybe you think of huge mailbox stores, Exchange, Outlook, legal discovery requirements, spam headaches and the pressure & demand that stack places on your infrastructure. Terabytes and terabytes of the stuff, going back years. All up in your stack, DAG on your spindles, CAS on your edge, all load balanced at Layer 4/7 behind a physical or virtual device & wrapped up in a nice legitimate, widely-recognized CA-issued SSL cert. The stuff is everywhere.

I almost forgot. You have to back all that stuff up too. To tape in my case.

Oh, and perhaps you also recall the cold chills & instant sense of dread & fear you’ve felt just about every time an end user has asked (sometimes via email no less) “Is our email down?” I know the feeling.

Like a lot of Microsoft IT pros, I have my share of email war stories. I think email is one of those things in technology that lends itself to a sort of dualism, a sort of Devil on this shoulder, Angel on that shoulder. You can’t say something positive about email without adding a “but…” at the end, and that’s ok. Cognitive dissonance is allowed here; you can believe contrary ideas about email at the same time.

I know I do:

I love Email because | I hate email because
--- | ---
SMTP is the last great agnostic open communication protocol | SMTP is too open and prone to abuse
Email is democratic and foundational to the internet | Email is fundamentally broken
Email will be around in some form forever | There’s no Tread Left on this Tire
Email is your online identity | Messaging applications are all the rage and so much richer
It’s how businesses communicate and thrive | One man’s business communication is another man’s spam
It’s always there | It goes down sometimes
Spam fighters and blacklists | Spam fighters and blacklists
It justifies Infrastructure Spend | It uses so much of my stack
Exchange is awesome and flexible | I broke Exchange once and fear it

Whatever your thoughts on email are, one thing is clear: for Microsoft Infrastructure guys pondering the Microsoft cloud, the path to #InfrastructureGlory clearly travels through Exchange Country. In fact, it’s like the first step we’re supposed to take via Office 365.

I don’t know about you, but I worry about the bandits in Exchange Country. Bandits that may break mail flow, or allow the tidal wave of spam in, prompt my users excessively for passwords, engage in various SSL hijinks, or otherwise change any of the finely-tuned ingredients in the delicate recipe that is my Exchange 2010 stack.

And yet, I bet if you polled Microsoft IT guys like me, you would find that of all the things they want to stick up in the Microsoft cloud, Exchange & the email stack is probably at the top of the list. Just take it off our plate, Microsoft. Exchange and email are in a weird place in IT: it’s mission-critical and extremely important to have a durable Exchange infrastructure, yet raise your hand if you think Exchange administration/engineering is a good career path to take in 2014.

Didn’t think so.

So how do we get there?

I don’t have all the answers yet, but I at least have a good picture of the project, some hands-on experience, and some optimism, all of which means I’m one step closer to #InfrastructureGlory in the cloud.

Hard to build a realistic Exchange Lab 

First of all, recognize this: while it’s easy to build out a lab infrastructure (Cloud Praxis #2) for Active Directory, it’s quite another thing to build out an Exchange lab, as I found out. You can’t do SMTP from home anymore (the spammers ruined that), which means you need resources at work, which might or might not be available. They aren’t in my case, so I struggled for a while.

Maybe you have some resources at work (a few extra public IPs, a walled-off virtual network, some storage) with which you can build out an Exchange lab. If so, evaluate whether that’s going to benefit you and your organization. It might be a black hole of wasted time; it might pay off in a huge way as you wargame your way from on-prem to hybrid then to cloud and finally #InfrastructureGlory

Office 365 Praxis with the E1 Plan

For me and Daisetta Labs.net, I decided I couldn’t adequately simulate my workplace Exchange. So I did the next best thing.

I bought an Office 365 Enterprise E1 subscription.

That’s right baby. Daisetta Labs.net is on the O365 Enterprise E1 plan. It’s an Enterprise of 1 (me!) but an Enterprise-scaled O365 account nonetheless.

And it’s fantastically cheap & easy to do, less than $100 a year for all this:


For that measly amount, you can be an Enterprise of one in O365 and get all this:

  • A real Office 365 Enterprise account with Exchange 2013 and all of its incredibly rich features & options, including Powershell remoting, which you’ll need in your real O365 migration
  • That’s private email too...no ad bots gathering data against your profile. Up to you, but I moved my personal stack to O365 (more on that later)
  • Lync 2013. Forget Skype and all the other messengers. You get Lync service! Which interfaces with Skype and many others and makes you look like a real pro. Also useful if you have on-prem Lync, though I’m sad to report to you that, as of this month, Lync 2013 in O365 can’t kill your PBX off…yet.
  • Sharepoint & OneDrive for Business : I’ll admit it, I’ve done my fair share of Sharepoint hating but IT Infrastructurists need to realize Sharepoint is the gateway drug to many things businesses are interested in, like Business Intelligence & SQL, data visualizations and more. Besides, Sharepoint 2013 is not your daddy’s Sharepoint; it can do some neat stuff (not that I can show you, yet).
  • OneDrive for Business, again: If you’re in a Microsoft shop that’s still mostly on-prem, you probably experience Dropbox creep, where your users share documents via Dropbox or other personal online storage solutions. With E1, you can get familiar with OneDrive for Business, within the context of Sharepoint & O365 management, DirSync, and all the rest.
  • One Terabyte of OneDrive for Business Storage. Outstanding. This was a recent announcement. It tickles me to think that my data is being deduped by a Windows storage spaces VM somewhere, just like I do on my storage at work.
  • Office Online : full on WAC server baby, with Excel in your Chrome or IE browser. Better, and better looking, than Google Docs.
  • With this plan, you can really test out Office for the iPad. You’ll get read and write to your O365 documents via an iPad, which can help you at work with that one C-level who loves his iPad as much as he loves Excel.
  • DirSync: The very directory synchronization tool you have stressed over at work is available to you with this simple, cheap E1 subscription. And it’s working. I’ve done it. Daisetta Labs.net is dirsynced to O365 from my home lab and I have SSO between my on-prem AD & Office 365. I deliberately kept my passwords separate between the two at first, but now they are in sync (see the verification sketch after this list).
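
A quick way to verify that last bullet really is working, using the Azure AD (MSOnline) PowerShell module of the day:

```powershell
# Requires the MSOnline module and the sign-in assistant of that era
Connect-MsolService

# Tenant-level view: is sync enabled, and when did it last run?
Get-MsolCompanyInformation |
    Select-Object DirectorySynchronizationEnabled, LastDirSyncTime

# Per-user view: confirm on-prem accounts actually landed in the cloud
Get-MsolUser -MaxResults 10 | Select-Object UserPrincipalName, LastDirSyncTime
```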

Any way you cut it, O365 E1 is amazingly affordable and a very effective way to confront your cloud angst and get comfortable with Office 365. Even if you can’t fully simulate your workplace Exchange stack, you should consider doing this; you will use these same tools (particularly PowerShell remoting, the wizards in O365 & DirSync) at some point, so best to get familiar with them now.

I could have hosted my Daisetta Labs.net domain anywhere; but I have zero regrets putting it in O365 on the E1 plan and committing for 12 months. If you’re an IT pro like me trying to get your infrastructure to the Microsoft cloud, you’d be well-served by doing the same thing I did. You may even want to ditch your personal email account and just go full Office 365…to eat the same dog food we’re going to serve to our users soon.

More to come on this tomorrow; suffice it to say, DaisettaLabs.net is dirsyncing as I write this. I’ll have screenshots, wizard processes and more to show.