Integrating Trendnet IP Cameras with my homelab

What do you get when you take an IT Systems Engineer with more time on his hands than usual and an unfinished home project list that isn’t getting any shorter?

You get this:

Daytime view from one of the cameras: my home automation/Internet of Things ‘play’

That’s right. I’ve stood up some IP surveillance infrastructure at my home. Not because I’m a creepy Big Brother type with a God complex; rather:

  1. Once my 2.5-year-old son figured out how to unlock the patio door and bolt outside, well, game over, boys and girls. I needed some ‘insight’ and ‘visibility’ into the Child Partition’s whereabouts pronto, and chasing him while he giggles is fun for only so long
  2. My home is exposed on three sides to suburban streets, and it’s nice to be able to see what’s going on outside
  3. I have creepy Big Brother tendencies and/or a God complex

I had rather simple rules for my home surveillance project:

  • IP cameras: ain’t no CCTV/600 lines of resolution here; I wanted IP so I could tie the cameras into my enterprise home lab
  • Virtual DVR, not physical: I already have enough hardware at home, with 16 cores, 128GB of RAM, and about 16TB of storage.
  • No Wifi, Ethernet only: Wifi from the camera itself was a non-starter for me because 1) while it makes getting video from the cameras easier, it limits where I can place them, from both a power & signal-strength perspective, and 2) spectrum & bandwidth are limited & noisy at distance-friendly 2.4GHz, and wide & open at 5GHz, but 5GHz has half the range of 2.4GHz. For those reasons, I went old-school: Cat5e, the Reliable Choice of Professionals Everywhere
  • Active PoE: 802.3af, as I already own about four PoE injectors and have already run Cat5e all over the house
  • Endpoint agnostic: In the IP camera space, it’s tough to find an agnostic camera system that will work on any end-device with as little friction as possible. ONVIF is, I suppose, the closest “standard” to that, and I don’t even know what it entails. But I know what I have: a Samsung GS6, an iPhone 6, a Windows Tiered Storage box, four Hyper-V hosts, System Center, an XBox One, and a 100 megabit internet connection.
  • Directional, no omni-PTZ required: I could have saved money on at least one corner of my house by buying a domed, movable PTZ camera rather than two directionals, but this needed to work on any endpoint, and PTZ controls often don’t.

And so, over the course of a few months, I picked up four of these babies:

Trendnet TV-IP310PI

Design

I liked these cameras from the start. They’re housed in a nice, heavyweight steel enclosure, have a hood to shade the lens and just feel solid and sturdy. Trendnet markets them as outdoor cameras, and I found no reason to dispute that.

My one complaint about these cameras is the rather finicky mount. The camera can rotate and pivot within the mount’s attachment system, but you need to be careful here, as an ethernet cable (inside a shroud) runs through the mount. Twist & rotate your camera too much, and you may tear the cable apart.

And while the mount itself is steel and needs only three screws to attach, the interior mechanism that lets you move the camera once mounted is cheaper. It’s hard to describe, and I didn’t take any pictures as I was cursing up a storm when I realized I’d almost snapped the cable, so just know this: be gentle with the mount as you attach it and then as you adjust the camera. You only have to do it once, so take your time.

Imaging and Performance

Nighttime

Trendnet says the camera’s sensor & processing can push out 1080p at 30 frames per second, but once you get into one of these systems, you’ll notice it can also do 2560×1440, or QHD resolution. Most of the time, images and video off the camera are buttery smooth, and it’s great.

I’m not sharp enough on video and sensors to comment on color quality, whether F 1.2 on a camera like this means the same as it would on a still DSLR, or understand IR lux, so let me just say this: these cameras produce really sharp, detailed, and wide-enough (70 degrees) images for me, day or night. Color seems right too; my lawn is various hues of brown & green thanks to the heat and the California drought, and my son’s colorful playthings that are scattered all over do indeed remind me of a clown’s vomit. And at night, I can see far enough thanks to ambient light. Trendnet claims 100-foot IR-assisted viewing at night. I see no reason to dispute that.

Let the camera geeks geek out on the camera; this is an enterprise tech blog, and I’ve already talked about the hardware, so let’s dig into the software-defined & networking bits that make this expensive project worthwhile.

Power & Networking

These cameras couldn’t be easier to connect and configure once you’ve got the power & cabling sorted out. The camera features a 10/100 ethernet port, and each of my four cameras connects to one of Trendnet’s own PoE injectors. All the injectors are inside my home; I’d rather extend ethernet with power than put a fragile PoE device outside. The longest cable run is approximately 75′, well within spec. Not much more to say here, other than Trendnet claims the cameras will draw 5 watts maximum, probably at night when the IR LEDs are on; that leaves ample headroom under 802.3af’s 12.95-watt budget at the powered device.

From each injector, a data cable connects to a switch. In my lab, I’ve got two enterprise-level switches.

One camera, the garage/driveway camera, is plugged into a trunked port (native VLAN 410) on my 2960S in the garage.

The other switch is a small Cisco SG300-10P, and the three other cameras connect to it. The SG300 serves as an access-layer switch and has a 3x1GbE port-channel back to the 2960S. This switch wasn’t getting used enough in my living room, so I moved it to my home office, where all its ports are now in use. Here’s my home lab environment, updated with cameras:

The Homelab as it stands today

Like any other IP cam, the Trendnet will obtain an IP from your DHCP server. Trendnet includes software with the camera that will help you find and provision it on your network, but I saved a few minutes and just looked in my DHCP table. As expected, the cameras all received a routable IP, DNS, NTP, and other values from my DHCP server.

Once I had the IP, it was off to the races:

  • Set DHCP reservation
  • Verify an A record was created in DNS so I could refer to the cameras by name rather than IP (a quick PowerShell sketch follows this list)
  • Login, configure new password, update firmware, rename camera, turn-off UPNP, turn-off telnet
  • Adjust camera views
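
If you’d rather script those first two steps than click around in consoles, something like this works against a Windows DHCP/DNS server. A minimal sketch: the camera names, MACs, and scope below are hypothetical placeholders, not my actual values.

```powershell
# Pin each camera's DHCP lease to a reservation, then confirm its A record.
# Requires the DhcpServer RSAT module; names, MACs, and scope are hypothetical.
$cams = @(
    @{ Name = 'cam-garage'; MAC = '00-11-22-33-44-55'; IP = '192.168.110.21' },
    @{ Name = 'cam-patio';  MAC = '00-11-22-33-44-56'; IP = '192.168.110.22' }
)
foreach ($cam in $cams) {
    Add-DhcpServerv4Reservation -ComputerName 'dc1' -ScopeId 192.168.110.0 `
        -IPAddress $cam.IP -ClientId $cam.MAC -Name $cam.Name
    # Confirm the camera is reachable by name, not just IP
    Resolve-DnsName "$($cam.Name).daisettalabs.net" -Type A
}
```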

Software bits – Server Side

Trendnet is nice enough to include a fairly robust, rebadged version of Luxriot’s camera software, which has two primary components: Trendnet View Pro (a fat client & server app) and VMS Broadcast Server, an http server. Trendnet View Pro is a server-like application that you can install on your PC to view, control, and edit all your cameras. I say server-like because this is the free version of the software, and it has the following limits:

  • Cannot run as a Windows Service
  • An account must be logged in to ‘keep it running’
  • You can install View Pro on as many PCs as you like, but only one is licensed to receive streaming video at a time

Upgrading the free software to a version that supports more simultaneous viewers is steep: $315, to be exact.

Smoking the airwaves with my beater kiosk PC in the kitchen. This is the Trendnet View client, limited to one viewer at a time

Naturally, I went looking for an alternative, but after dicking around with ZoneMinder & VLC for a while (both of which work but aren’t viewable on the XBox), I settled on VMS Broadcast Server, the http component of the free software.

Just like View Pro, VMS Broadcast won’t run as a service, but, well, sysinternals!
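
The trick, roughly (and this is a reconstruction, not necessarily the exact Sysinternals tool in play): Sysinternals Autologon keeps an account signed in at boot, and a logon-triggered scheduled task launches the broadcast server. The install path below is a placeholder.

```powershell
# Launch VMS Broadcast Server at logon via Task Scheduler; pair with
# Sysinternals Autologon so the account signs itself in after a reboot.
# The exe path is a placeholder; check your actual install directory.
$action  = New-ScheduledTaskAction -Execute 'C:\Program Files\TRENDnet\VMS Broadcast Server\vmsbroadcast.exe'
$trigger = New-ScheduledTaskTrigger -AtLogOn
Register-ScheduledTask -TaskName 'VMS-Broadcast' -Action $action -Trigger $trigger
```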

So after deliberating a bit, I said screw it and stood up a Windows 8.1 Pro VM on a node in the garage. The VM is domain-joined, which the Trendnet software either ignored or didn’t flag, and I’ve provisioned 2 cores & 2GB of RAM to serve, compress, and redistribute the streams using both the Trendnet fat-client server piece and the VMS web server.

Client Side

On that same Windows 8.1 VM, I’ve enabled DLNA sharing on VLAN 410, which is my trusted wireless & wired internal network. The thinking here was that I could redistribute the four camera feeds via DLNA into something the XBox One could show on our family’s single 48″ LCD TV in the living room via the Media app. So far, no luck getting that to work, though IE on the XBox One will view and play all four feeds from the Trendnet web server, which, for the purposes of this project, was good enough for me.

Additionally, I have a junker Lenovo laptop (IdeaPad, 11″) that I’ve essentially built into a kiosk PC for the kitchen/dining area, the busiest part of the house. This PC automatically logs in, opens the fat client, and loads the file to view the four live feeds. And it does all this over wifi, giving instant home intel to my wife, my mother-in-law, and myself as we go about our day.

Finally, both the iOS & Android devices in my house can successfully view the camera streams, not from the server, but directly (and annoyingly) from the cameras themselves.

The Impact of RTSP 1080p/30fps x 4 on the Home Lab

I knew going into this that streaming live video from four quality cameras 24×7 would require some serious horsepower from my homelab, but I didn’t realize how much.

From the compute side of things, it was indeed a lot. The Windows 8.1 VM currently lives on Node2, a Xeon E3-1241v3 with 32GB of RAM.

Typically Node2’s physical CPU hovers around 8% utilization as it hosts about six VMs in total.

With the 8.1 VM serving up the streams as well as compressing them with a variable bit rate, the tax for this DIY home surveillance project was steep: Node2’s CPU now averages 16% utilization, and I’ve seen it hit 30%. The VM itself sits above 90% utilization.

More utilization = more thermal worries, as Node2 sits in the garage. In southern California. In the summertime.

Ambient air temperature in my garage over the last three weeks.

Node2’s average CPU temperature varies between 22C and 36C on any given warm day in the garage (ambient air is 21C – 36C). But with the 8.1 VM, Node2 has hit as high as 48C. Good thing I used some primo thermal paste!
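
If you want to keep an eye on that yourself, some boards will cough up the ACPI thermal zone over WMI. A sketch; hardware support for this class varies, and a Supermicro board may report better numbers via IPMI.

```powershell
# Read the ACPI thermal zone (reported in tenths of degrees Kelvin)
# and convert to Celsius. Not all motherboards populate this class.
Get-WmiObject -Namespace root/wmi -Class MSAcpi_ThermalZoneTemperature |
    ForEach-Object { [math]::Round(($_.CurrentTemperature / 10) - 273.15, 1) }
```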


All your Part 15 FCC Spectrum are belong to me, on channel 10 at least

From the network side, results have been interesting. First, my Meraki is a champ. The humble MR18 802.11n access point doesn’t break a sweat streaming the broadcast feed from the VM to the Lenovo kiosk laptop in the kitchen. Indeed, it sustains north of 21Mb/s, as this graph shows, without interrupting my mother-in-law’s consumption of TV broadcasts over wifi (separate SSID & VLAN, fed from the SiliconDust TV tuner), my wife’s Facebooking & Instagramming needs, or my own tests with the Trendnet application, which interfaces with the cameras directly.

Meraki’s analysis says this makes the 2.4GHz spectrum in my area over 50% utilized, which probably frustrates my neighbors. Someday, perhaps, I’ll upgrade the laptop to a 5GHz radio.

vSwitch, the name of my converged SCVMM switch, is showing anywhere from 2 megabits to 20 megabits of Tx/Rx for the server VM. Pretty impressive performance for a software switch!

Storage-wise, I love that the Trendnets can mount an SMB share, and I’ve been saving snapshots of movement to one of the SMB shares on my Windows SAN box.
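
Standing up a share for the cameras on the SAN is quick work. A sketch; the path and the service account the cameras authenticate as are placeholders, not my actual setup:

```powershell
# Create the snapshot share on the Windows SAN. The path and the
# 'DAISETTA\svc-cameras' account are hypothetical placeholders.
New-Item -ItemType Directory -Path 'D:\Shares\Cameras' -Force
New-SmbShare -Name 'Cameras' -Path 'D:\Shares\Cameras' -FullAccess 'DAISETTA\svc-cameras'
```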

I’m also using Trendnet’s email alerting feature to take snapshots and email them to me whenever there’s motion in a given area. Which is happening a lot now, as my 2-year-old walks up to the cameras, smiles, and says “Say cheeeese!”

All in all, a tidy & fun sub-$1000 project!

#StorageGlory Achieved: 30 Days on a Windows SAN

Behold, these three remain. File. Block. Object. And the greatest of these is block. – Sr. Systems Engineer St. Paul, in a letter to confused storage engineers in Thessalonika

Right. So a couple weeks back I teased the hardware specs of the new storage array I built for the Daisetta Lab at home.

Software-defined. x86. File and block. Multipath. Intel. And some Supermicro. Storage utopia up in the Daisetta Lab

My idea was to combine all types of disks (rotational 3.5″ & 2.5″ drives, SSDs, mSATAs, hell, I considered USB) into one tight, well-built storage box for my lab and home data needs. A sort of Storage Ark, if you will; all media types were welcome, but only if they came in twos (for mirroring & parity’s sake, of course), and only if they rotated at exactly 7200 RPM and/or leveled their wear evenly across the silica.

And onto this unholy motley crue of hard disks I slapped a software architecture that promised to abstract all the typical storage driver, interface, and controller nonsense away, far, far away in fact, to a land where the storage can be mixed, the controllers diverse, and by virtue of the software-definition bits, network & hypervisor agnostic. In short, I wanted to build an agnostic #StorageGlory box in the Daisetta Lab.

Right. So what did I use to achieve this? ZFS and Zpools?

Hell no, that’s so January.

VSAN? Ha! I’m no Chris Wahl.

I used Windows, naturally.

That’s right. Windows. Server 2012 R2, to be specific, running Core + the Infrastructure GUI, with 8GB of RAM and some 17TB of raw disk space available to it. And a little technique developed by the ace Microsoft server team called Tiered Storage Spaces.

Was a #StorageGlory Achievement Unlocked, or was it a dud?

Here’s my review after 30 days on my Windows SAN: san.daisettalabs.net.

The Good

It doesn’t make you pick a side in storage or storage-networking: Do you like abstracted pools of storage, managed entirely by software? Put another way, do you hate your RAID controller and crush on your old-school NetApp filer, which could seemingly do everything but object storage?

When I say block, do you instinctively say file? Or vice-versa?

Well then, my friend, have I got a storage system for your lab (and maybe production!) environment: Windows Storage Spaces (now with tiering!) offers just about everything guys like you or me need in a storage system for lab & home media environments. I love it not just because it’s Microsoft, but also because it doesn’t make me choose between storage & storage-networking paradigms. It’s perhaps the ultimate agnostic storage technology, and I say that as someone who thinks about agnosticism and storage.

A lot.

You know what I’m talking about. Maybe today you’ll need some block storage for this VM or that particular job. Maybe you’re in a *nix state of mind and want to fiddle with NFS. Or perhaps you’re feeling bold & courageous and decide to try out VMware again, building some datastores on both iSCSI LUNs and NFS shares. Then again, maybe you want to see what SMB 3.0 is all about; the MS fanboys sure seem to be talking it up.

The point is this: I don’t care what your storage fancy is, but for lab-work (which makes for excellence in work-work) you need a storage platform that’s flexible and supportive of as many technologies as possible and is, hopefully, software-defined.

And that storage system is -hard to believe I’ll grant you- Windows Server 2012 R2.

I love storage, and I can’t think of one other storage system (save for maybe NetApp) that lets me do crazy things like store .vmdks inside of .vhdxs (oh the vIrony!), use SMB 3 multichannel over the same NICs I’m using for iSCSI traffic, and create snapshots & clones just like the big filers, all while giving me the performance-multiplier benefits of SSDs and caching and a reasonable level of resiliency.

File this one under WackWackStorageGloryAchievedWindows boys and girls.

I can do it all with Storage Spaces in 2012 R2.

As I was thinking about how to write about Storage Spaces, I decided to make a chart, if only to help me keep it straight. It’s rough but maybe you’ll find it useful as you think about storage abstraction/virtualization tech:

Storage-Compared
And yes, ex post facto dedupe is a made-up term. By me. It’s Latin for “after the fact, dedupe,” because I always scheduled my dedupes for Saturday night, when the IO load on the filer was low. Ex post facto dedupe is in contrast to some newer storage companies that offer inline compression & dedupe; sadly, none of the products above offer that.

It’s easy to build and supports your disks & controllers: This is a Microsoft product, which means it’s easy to deploy & build for your average server guy. Mine’s running on a very skinny, re-re-purposed SanDisk ReadyCache SSD. With Windows Server 2012 R2 running the Infrastructure Management GUI (no explorer.exe, just Server Manager + your favorite snap-ins), it’s using about 6GB of space on the boot drive.

And drivers for the Intel C226 SATA controller, the LSI 9218si SAS card, and the extra ASMedia 1061 controller were all installed automagically by Windows during the build.

The only other system that came close to being this easy to install -as a server product- was Oracle Solaris 11.2 Beta. It found, installed drivers for, and exposed all controllers & disks, so I was well on my way to going the ZFS route again, but figured I’d give Windows a chance this time around.

Nexenta 4, in contrast, never loaded past the Install Community Edition screen.

It’s improved a lot over 2012: Storage Spaces debuted almost two years ago now, and I remember playing with it at work a bit. I found it to be a mind-f*** as it was a radically different approach to storage within the Windows server context.

I also found it to be slow, dreadfully slow even, and not very survivable. Though it did accept any disk I gave it, it didn’t exactly like it when I removed a USB drive during an extended write test. And it didn’t take the disk back at the conclusion of the test, either.

Like everything else in Microsoft’s current generation, Storage Spaces in 2012 R2 is much better, more configurable, easier to monitor, and more tolerant of disk failures.

It also has something for the IOPS speedfreak inside all of us.

Storage Spaces, abstract this away

Tiered Storage Spaces & adjustable write cache: Coming from ZFS and the Adaptive Replacement Cache, the ZFS Intent Log, the SLOG, and L2ARC, I was kind of hooked on the idea of using massive amounts of my ECC RAM as a sort of poor man’s NVRAM.

Windows can’t do that, but with Tiered Storage Spaces, you can at least drop a few SSDs in your array (in my case, three 256GB 840 EVOs & one 128GB Samsung 830), mix them into your disk pool, and voila! Fast read cache, with a Microsoft-flavored MRU/LFU algorithm of some type keeping your hottest data on the fastest disks and your old data on the cheep ‘n deep rotationals.

What’s more, going with Tiered Storage Spaces gives you a modest 1GB write cache by default, but as I found out, you can increase that up to 10GB.

Which I naturally did while building this guy out. I mean, who wouldn’t want more write cache?
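
For reference, the whole build boils down to a handful of cmdlets. A minimal sketch; the FriendlyNames and tier sizes are placeholders, not my exact layout:

```powershell
# Pool every poolable disk, carve SSD & HDD tiers, then cut a tiered,
# two-way-mirrored virtual disk with the write cache bumped to 10GB.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'DaisettaPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

$ssd = New-StorageTier -StoragePoolFriendlyName 'DaisettaPool' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'DaisettaPool' -FriendlyName 'HDDTier' -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'DaisettaPool' -FriendlyName 'TieredVD' `
    -StorageTiers $ssd, $hdd -StorageTierSizes 400GB, 5TB `
    -ResiliencySettingName Mirror -WriteCacheSize 10GB
```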

But there’s a huge gotcha buried in the TechNet articles and blog posts I found about this. I wanted to pool all my disks together into as large a single virtual disk as possible, then pack iSCSI-connected .vhdxs, SMB 3 shares, and more inside that single, durable & tiered virtual disk. What I didn’t want was several virtual disks (it helped me to think of virtual disks as a sort of Aggregate) with SMB 3 shares and .vhdx files stored haphazardly between them.

Which is what you get when you adjust the write-cache size. Recall that I have about 17TB raw capacity among all my disks. Building a storage pool, then a virtual disk with a 10GB write cache, gave me a tiered virtual disk with a maximum size of about 965GB. More on that below.

It can be wicked fast, but so is RAID 0: Check out my standard SQLIO benchmark routine, which I run against all storage technologies that come my way. The 1.5-hour test is by no means comprehensive, and I’m not saying the IOPS counter is accurate at all (it shows max values across all tests, by the way), but I like this test because it lets me kick the tires on my array, take her out for a spin, and see how she handles.
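
The routine itself is plain old sqlio.exe runs along these lines; a representative invocation, not my exact parameter file:

```powershell
# 8 threads, 8KB random writes, 8 outstanding IOs, unbuffered, 120 seconds.
# Repeat across -kR/-kW, random/sequential, and block sizes to build the chart.
.\sqlio.exe -kW -t8 -s120 -o8 -frandom -b8 -BN D:\testfile.dat
```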

And with a “Simple” layout (no redundancy, probably equivalent to RAID 0), she handles pretty damn well, but even I’m not crazy enough to run tiered storage spaces in a simple layout config:

These three tests (1.5 hours each, identical setup against multiple configs) were done locally on the array, not over my home network

What’s odd is how poorly the array performed with 10GB of write cache. Not sure what happened here, but as you can see, latency spiked higher during the 10GB write-cache write phase of the test than during just about every other test segment.

Something to do with parity no doubt.

For my lab & home storage needs, I settled on a two-way mirror setup that gives me moderate performance with durability in mind, though not as much durability as you might think, as you’ll see below.

Making the most of my lab/home network and my NICs: Recall that I have six GbE NICs on this box. Two are built into the Supermicro board itself (Intel), and the other four come by way of a quad-port Intel I350-T4 server NIC.

Anytime you’re planning to do a Microsoft cluster in the 1GbE world, you need lots of NICs. It’s a bit of a crutch in some respects, especially with iSCSI. Typically you VLAN off each iSCSI NIC for your Hyper-V hosts, and those NICs do one thing and one thing only: iSCSI, or Live Migration, or CSV, etc. Feels wasteful.

But on my new storage box at home, I can use them for double-duty: iSCSI (or LM/CSV) as well as SMB 3. Yes!

Usually I turn off Client for Microsoft Networks (the SMB file-sharing toggle in NIC properties) on each dedicated NIC (or vEthernet), but since I want my file cake & my block cake at the same time, I decided to turn SMB on for all iSCSI vEthernet adapters (on the physical & virtual hosts) and leave SMB on for the iSCSI NICs on san.daisettalabs.net as well.
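
That toggle is scriptable, too; ms_msclient is “Client for Microsoft Networks” and ms_server is its file-serving twin. The adapter names below are placeholders:

```powershell
# Re-enable SMB on the dedicated iSCSI adapters (run the ms_server
# equivalent on the SAN side). Adapter names are placeholders.
'iSCSI-10', 'iSCSI-11', 'iSCSI-12' | ForEach-Object {
    Set-NetAdapterBinding -Name $_ -ComponentID ms_msclient -Enabled $true
}
```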

The end result? This:

[table caption=”Storage Networking-All of the Above Approach” width=”500″ colwidth=”20|100|50″ colalign=”left|left|center|left|right”]
nic,Name,VLAN,IP,Function
1,MGMT,100,192.168.100.15,MGMT & SMB3
2,CLNT,102,192.168.102.15,Home net & SMB3
3,iSCSI-10,10,172.16.10.x,iSCSI & SMB3
4,iSCSI-11,11,172.16.11.x,iSCSI & SMB3
5,iSCSI-12,12,172.16.12.x,iSCSI & SMB3
[/table]

That’s five, count ’em, five NICs (or discrete channels, more specifically) I can use to fully soak in the goodness that is SMB 3 multichannel, at the cost of only a slightly unsettling epistemological question about whether iSCSI NICs are truly iSCSI if they’re doing file storage protocols.

Now, SMB 3 is so transparent (on by default) that you almost forget you can configure it, but there are quite a few ways to adjust file-share performance. Aidan Finn argues for constraining SMB 3 to certain NICs, while Jose Barreto details how multichannel works on standalone physical NICs, a pair in a team, and multiple teams of NICs.
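
Watching multichannel do its thing, or pinning it down Aidan-style, is a cmdlet away; the interface aliases here are placeholders:

```powershell
# See which NICs SMB multichannel is actually spreading across...
Get-SmbMultichannelConnection

# ...or constrain SMB traffic to specific interfaces for a given server.
New-SmbMultichannelConstraint -ServerName 'san.daisettalabs.net' `
    -InterfaceAlias 'iSCSI-10', 'iSCSI-11', 'iSCSI-12'
```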

I haven’t decided which model to follow (though on san.daisettalabs.net, I’m not going to change anything or use converged switching…it’s just storage), but SMB 3 is really exciting, and it’s great that with Storage Spaces, you can have high-performance file & block storage. I’ve hit 420MB/sec on synchronous file copies from san to host and back again. Outstanding!

I finally got iSNS to work and it’s…meh: One nice thing about san.daisettalabs.net is that the FQDN is all you need to know; it’s now the resident iSCSI Name Server, meaning it’s all I need to set on an MS iSCSI Initiator. It’s a nice feature to have, but probably wasn’t worth the 30 minutes I spent getting it to work (hint: run Set-WmiInstance before you run iSNS cmdlets in PowerShell!), as iSNS isn’t so great when you have…
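
For posterity, the fix expanded: the iSCSI Target’s iSNS registration is just a WMI instance, so, as best I recall, the incantation on the target box looks like this:

```powershell
# Register the Microsoft iSCSI Target with the iSNS server; run on the
# target box before touching any other iSNS configuration.
Set-WmiInstance -Namespace root\wmi -Class WT_iSNSServer `
    -Arguments @{ ServerName = 'san.daisettalabs.net' }
```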

SMI-S, which is awesome for Virtual Machine Manager fans: SMI-S, you’re thinking, what the hell is that? Well, it’s a standardized framework for communicating block storage information between your storage array and whatever interface you use to manage & deploy resources on that array. Developed by no less an august body than the Storage Networking Industry Association (SNIA), it’s one of those “standards” that seem like a good idea, but you can’t find much of it in the wild, as it were. I’ve used SMI-S against a NetApp filer (in the classic DoT days; not sure if it works against cDoT), but your Nimbles, your Pures, and the other new players in the market get the same funny look on their faces when you ask them if they support SMI-S.

“Is that a vCenter thing?” they ask.

Sigh.

Microsoft, to its credit, does. Right on Windows Server. It’s a simple feature you install, and two or three PowerShell commands later, you can point Virtual Machine Manager at it and voila! Provision, delete, resize, and classify iSCSI LUNs on your Windows SAN, just like the big boys do (probably) in Azure, only here, we’re totally enjoying the use of our corpulent .vhdx drives, whereas in Azure, for some reason, they’re still stuck on .vhds like rookies. Haha!

Single Pane o’ glass in VMM with SMI-S, GUIDs galore and more for the Hyper-V set
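
From memory, and with the caveat that the exact VMM parameter set may differ from what’s below, the flow is: install the iSCSI Target bits on the SAN, then register the box in VMM as a WMI-based SMI-S provider:

```powershell
# On san.daisettalabs.net: the iSCSI Target Server role plus its
# VDS/VSS storage providers.
Add-WindowsFeature FS-iSCSITarget-Server, iSCSITarget-VSS-VDS

# From the VMM server (the Run As account name is a placeholder, and
# this parameter set is from memory; check Get-Help Add-SCStorageProvider).
$ra = Get-SCRunAsAccount -Name 'LabAdmin'
Add-SCStorageProvider -AddSmisWmiProvider -Name 'san.daisettalabs.net' `
    -RunAsAccount $ra -ComputerName 'san.daisettalabs.net'
```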

It’s a very stable storage platform for Microsoft clustering: I’ve built a lot of Microsoft Hyper-V clusters. A lot. More than half a dozen in production, and probably three times that in dev or lab environments, so it’s like second nature to me. Stable storage & networking are not just important factors in Microsoft clusters; they are the only factors.

So how is it building out a Hyper-V cluster atop a Windows SAN? The same, and different, at the same time; but unlike so many other cluster builds, I passed the validation test on the first attempt, with green check marks everywhere. And weeks have gone by without a single error in the Failover Clustering snap-in; it’s great.

The Bad

It’s expensive and seemingly not as redundant as other storage tech: When you build your storage pool out of offlined disks, your first choice is going to involve (just like on other storage abstraction platforms) disk redundancy. Microsoft makes it simple, but doesn’t really tell you the cost of that redundancy until later in the process.

Recall that I have 17TB of raw storage on san.daisettalabs.net, organized as follows:

[table]
Disk Type, Quantity, Size, Format, Speed, Function
WD Red 2.5″ with NASWARE, 6, 1TB, 4KB AF, SATA 3 5400RPM, Cheep ‘n deep
Samsung 840 EVO SSD, 3, 256GB, 512byte, 250MB/read, Tiers not fears
Samsung 830 SSD, 1, 128GB, 512byte, 250MB/read, Tiers not fears
HGST 3.5″ Momentus, 6, 2TB, 512byte, 105MB/r/w, Cheep ‘n deep
[/table]

Now, according to my trusty IOPS Excel calculator, if I were to use traditional RAID 5 or RAID 6 on that set of spinners, I’d get about 16.5TB usable in the former and 15TB usable in the latter (assuming RAID penalties of 5 & 6, respectively).

For much of the last year, I’ve been using ZFS & RAIDZ2 on the set of six WD Red 2.5″ drives. Those have a raw capacity of 6TB. In RAIDZ2 (roughly analogous to RAID 6), I recall getting about 4.2TB usable.

All in all, traditional RAID & ZFS’ RAIDZ cost me between 12% and 35% of my capacity, respectively.

So how much does Windows Storage Spaces’ resiliency model (a two-way mirror) cost me? A lot. We’re in RAID-DP territory here, people:

Ack! With 17TB of raw storage, I get about 5.7TB usable, a cost of about 66%!

And for that, what kind of resiliency do I get?

I sure as hell can’t pull two disks simultaneously, as I did live, in production, on my ZFS box. I can suffer the loss of only a single disk. And even then, other Windows bloggers point to some pain as the array tries to adjust.

Now, I’m not the brightest on RAID & parity and such, so perhaps there’s a more resilient, less costly way to use Storage Spaces with Tiering, but wow…this strikes me as a lot of wasted disk.

Not as easy to de-abstract the storage: When a disk array is under load, one of my favorite things to do is watch how the IO hits the physical elements in the array. Modern disk arrays make what your disks are doing abstract, almost invisible, but to truly understand how these things work, sometimes you just want the modern equivalent of lun stats.

In ZFS, I loved just letting gstat run, which showed me the load my IO was placing on the ARC, the L2ARC and finally, the disks. Awesome stuff:

In this Gifcam, watch ada0-6 as they struggle under load with the “Always Sync” option enabled.

As best I can tell, there’s no live PowerShell equivalent to gstat for Storage Spaces. There are teases, though; you can query your disks, get their SMART vitals, and more, but peeling away the onion layers and actually watching how Windows handles your IO would make Storage Spaces the total package.
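
The teases, for reference; reliability counters only show what the drives actually report, and perf counters are as close to a live view as I’ve found:

```powershell
# SMART-ish vitals per physical disk (populated fields vary by drive)...
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object DeviceId, Temperature, ReadErrorsTotal, WriteErrorsTotal

# ...and a rolling, gstat-adjacent view of raw disk throughput.
Get-Counter -Counter '\PhysicalDisk(*)\Disk Bytes/sec' -Continuous
```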

Bottom line

So that’s about it: this is the best storage box I’ve built in the Daisetta Lab. No regrets going with Windows. The platform is mature and stable, and it offers very good performance and decent resiliency, if at a high disk cost.

I’m so impressed that I’ve checked my Windows SAN skepticism at the door and would run this in a production environment at a small/medium business (clustered, in the Scale-Out File Server role). Cost-wise, it’s a bargain. Check out this array: it’s the exact same hardware a certain upstart storage vendor I like (one that rhymes with Gymbal Porridge) sells, but for a lot less!

#StorageGlory achieved. At home. In my garage.