Storage Live Migration is something we Hyper-V guys only got in Server 2012, and it was one of the features I wanted most after watching jealously as the VMware guys storage vMotioned their .vmdks since George Bush was in office (or was it Clinton?).
I use Live Migration all the time during maintenance cycles and such, but pushing .vhdx hard drives around is more of a rare event for me.
Until now. See, I’ve got a new, moderately-performing array, a Nimble CS260 + an EL-125 add-on SAS shelf. It’s the same Nimble I abused in my popular January bakeoff blog post, and I’m thrilled to finally have some decent hybrid storage up in my datacenter.

However, before I can push the button or press Enter at the end of a cmdlet and begin the .vhdx parade from the land of slow to the promised land of speed, I've gotta worry about my switch.
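The cmdlet in question, for the record, is Move-VMStorage; a minimal sketch looks something like this (the VM name and destination path here are placeholders, not my real environment):

    # shift everything belonging to one VM onto the new Nimble CSV
    # placeholder names below; point it at whatever VM & path you actually mean
    Move-VMStorage -VMName "TestVM01" -DestinationStoragePath "C:\ClusterStorage\NimbleCSV01\TestVM01"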
You see, I've got another dinosaur in the rack just below the Nimble: a Cisco 6509e with three awful desktop-class blades, two Sup-720 supervisors running layer 3 with HSRP, and then the crown jewels of the fleet: two WS-X6748-GE-TX blades, where all my Hyper-V hosts & two SAN arrays are plugged in, each blade offering two port-groups with 20Gb/s of fabric capacity apiece.
Ahhh, the 6509: love it or hate it, you can’t help but respect it. I love it because it’s everything the fancy-pants switches of today are not: huge, heavy, with shitty cable management, extremely expensive to maintain (TAC…why do you hurt me even as I love you?), and hungry for your datacenter’s amperage.
I mean, look at this thing. It gobbles amps like my filer gobbles spindles for RAID-DP:
325 watts per 6748, or just about 12 watts less than my entire home lab consumes with four PCs, two switches, and a pfSense box. The X6748s are like a big block V8 belching out smoke & dripping oil in an age of Teslas & Priuses: just getting these blades into the chassis forced me to buy a 220v circuit, and achieving PSU redundancy required heavy & loud 3,000 watt supplies.
The efficiency nerd in me despises this switch for its cost & its Rush Limbaugh attitude toward the environment, yet I love it because even though it’s seven or eight years old, it’s only just now (perhaps) hitting the downward slope on my cost/performance bell curve. Even with those spendy power supplies and with increasing TAC costs, it still gives me enough performance thanks to this design & Hyper-V’s converged switching model:

Now Mike Laverick, all-star VMware blogger & employee, has had a great series of posts lately on switching and virtualization. I suggest you download them to your brain stat if you're a VMware shop, especially the ones on enabling NetFlow on your vSwitch & installing the Scrutinizer vApp, on the new distributed switch features offered in ESXi 5.5, and on migrating from standard to distributed switches. Great stuff there.
But if you're at all interested in Hyper-V and/or haven't gone to 10/40 Gig yet and want to wring some more out of your old Cat 5e patch cables, Hyper-V's converged switching model is a damned fine option. Essentially, a Hyper-V converged switch is an L2/L3 virtual switch fabricated on top of a Microsoft multiplexor driver that does GigE NIC teaming in the parent partition of your Hyper-V host.
This is something of a cross between a physical and logical diagram and it’s a bit silly and cartoonish, but a fair representation of the setup:

So this is the setup I've adopted in all my Hyper-V instances. It's the setup that changed the way we do things in Hyper-V 3.0, the setup that lets you add/subtract physical NIC adapters or shut down Cisco interfaces on the fly without any effect on the vNICs on the host or the vNICs on the guests. It's one of the chief drivers of my continuing love affair with LACP, but you don't need an LACP-capable switch to use this model; that's what's great about the multiplexor driver.
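For reference, here's roughly what standing up that converged switch looks like in PowerShell. Treat it as a sketch, not gospel: the adapter names, vNIC names, and VLAN IDs below are placeholders rather than my production values, and SwitchIndependent teaming works just fine if your switch can't do LACP.

    # team the host-facing gigabit ports via the multiplexor driver
    # (adapter names are placeholders; use -TeamingMode SwitchIndependent if you lack LACP)
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort

    # build the converged virtual switch on top of the team, with weight-based QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight

    # carve out host vNICs for management, Live Migration & cluster traffic
    Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedSwitch"

    # tag each vNIC into its VLAN (example VLAN IDs)
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt" -Access -VlanId 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 30

    # and yes, you can grow or shrink the team on the fly
    Add-NetLbfoTeamMember -Name "NIC5" -Team "ConvergedTeam"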
It’s awesome and durable and scalable and, oh yeah, you can make it run like a Supercharged V-6. This setup is tunable!
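By tunable I mean weight-based QoS on those host vNICs: since the virtual switch above was built with -MinimumBandwidthMode Weight, you can promise Live Migration a bigger slice of the team ahead of a big .vhdx push and dial it back when you're done. The weights below are an illustration, not a recommendation:

    # give Live Migration a fat guaranteed share of the converged team for the weekend
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 50
    Set-VMNetworkAdapter -ManagementOS -Name "Mgmt" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 40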
Distributed Switching & Enhanced LACP got nothing on converged Hyper-V switching, and that is all the smack I shall talk.
Now sharp readers will notice two things: #1, I've oversubscribed the 6748 blades (the white spaces on the switch diagram are standard virtual switches, iSCSI HBAs for hosts & guests, and those switches function just like the unsexy standard switch in ESXi), and #2, just because you team it doesn't mean you can magically turn eight 1GbE ports into a single 8Gb interface; any single flow still rides just one physical NIC in the team.
Correct on both counts, which is why I have to at least give the beastly old 6509 some consideration: it's only got 20Gb/s of fabric bandwidth per 24-port port-group. Best to test before I move my .vhdxs.
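For the curious, my quick-and-dirty way of watching throughput from the Windows side during a test move is plain old perfmon counters via PowerShell, while NetFlow on the 6509 tells the switch-side story:

    # sample per-NIC throughput every 2 seconds for a minute while a test Move-VMStorage runs
    Get-Counter -Counter "\Network Interface(*)\Bytes Total/sec" -SampleInterval 2 -MaxSamples 30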
In part 2, I'll show you some of those tests in detail. In the meantime, here are some of my NetFlow captures & results from tests I'm running ahead of the moves this weekend.


