Moving vSphere to Cisco UCS


Moving vSphere from a rack-mounted server solution to Cisco UCS? That’s smart. It won’t take long to find out why so many organizations have embraced the technology. Log into UCS Manager and poke around a bit. Once you get over the mental hurdle of managing Service Profiles instead of managing servers, you’ll be ready to go. You’ll create Service Profiles that are nearly identical to your rack-mounted servers: a pair of vHBAs, and between 6 and 10 vNICs. Then you’ll install ESXi, add the new hosts to the dvSwitch of your choice (you are using a dvSwitch, aren’t you?), and you’re in business.

Well, not quite. That approach would certainly work, and if you’re using UCS blades with an M81KR or newer VIC, you can create over 100 vNICs per blade, so there’s no hard limit standing in your way. But the vNIC ceiling isn’t the point here. The point is the opportunity to simplify your vSphere host networking a bit.

UCS blades benefit from 10Gbit Ethernet connectivity, provided by a combination of IOMs and Fabric Interconnects. In an HA configuration (two Fabric Interconnects, and two IOMs in each 5108 chassis), you’ll have between 20Gbit and 160Gbit of bandwidth available to each chassis, depending on the IOM model and how many uplinks you cable. And that connectivity will be fully fault tolerant and redundant (with some careful planning and design, of course).
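The arithmetic behind that range is simple. Here’s a quick sketch, assuming 10Gbit links, two IOMs per 5108 chassis, and anywhere from one to eight uplinks per IOM (the function name and uplink counts are illustrative):

```python
# Per-chassis bandwidth from the IOM uplinks to the Fabric Interconnects.
# Assumes 10Gbit links and two IOMs per 5108 chassis; 1-8 uplinks per IOM
# spans the range from a minimally cabled IOM up to a fully cabled 2208XP.
LINK_GBIT = 10
IOMS_PER_CHASSIS = 2

def chassis_bandwidth_gbit(uplinks_per_iom):
    return uplinks_per_iom * IOMS_PER_CHASSIS * LINK_GBIT

for uplinks in (1, 2, 4, 8):
    print(f"{uplinks} uplink(s) per IOM -> {chassis_bandwidth_gbit(uplinks)}Gbit per chassis")
```

One uplink per IOM gives you the 20Gbit floor; eight per IOM gives you the 160Gbit ceiling.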

So, about that vSphere host network config. On your rack-mounted servers, you probably had two NICs for vSwitch0, carrying your management and vMotion traffic. Maybe you had some IP storage, so another two NICs for vSwitch1 to serve your storage vmkernel ports. And then probably two to four NICs for your VM traffic. Sound about right? UCS can consolidate that configuration without sacrificing redundancy, and give you more bandwidth at the same time. Why not create just two vNICs per Service Profile, and make them both uplinks to your dvSwitch?
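To put rough numbers on that consolidation, here’s an illustrative comparison. The 1Gbit speed for the rack-server NICs is an assumption; your counts and speeds may differ:

```python
# Illustrative comparison: legacy rack-mount NIC layout vs. two 10Gbit UCS
# vNICs serving as dvSwitch uplinks. 1Gbit rack NICs are assumed.
rack_nics = {
    "vSwitch0 (management + vMotion)": 2,
    "vSwitch1 (IP storage vmkernel)": 2,
    "VM traffic": 4,
}
rack_nic_count = sum(rack_nics.values())
rack_total_gbit = rack_nic_count * 1   # eight 1Gbit NICs
ucs_total_gbit = 2 * 10                # two 10Gbit vNICs

print(f"Rack-mount: {rack_nic_count} NICs, {rack_total_gbit}Gbit aggregate")
print(f"UCS:        2 vNICs,  {ucs_total_gbit}Gbit aggregate")
```

Fewer uplinks to manage, and more than double the aggregate bandwidth under these assumptions.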

You’ll want to look at using QoS to mitigate any congestion that may arise during bursty network operations, like vMotion. And you’ll want to monitor your links to make sure neither is regularly hitting the 50% utilization mark (if it is, you don’t have sufficient failover capacity). But for many, if not most, vSphere deployments, two vNICs per UCS host will do the trick.
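The 50% rule follows directly from active/active failover: if one vNIC’s path fails, the surviving link has to carry both loads. A quick sanity check sketched in Python (the function name and utilization figures are hypothetical):

```python
def failover_headroom_ok(util_a, util_b):
    """With both vNICs active, a failure shifts one link's entire load onto
    the other. Keeping each link at or below 50% utilization guarantees the
    survivor can absorb the combined load without oversubscription."""
    return max(util_a, util_b) <= 0.5

print(failover_headroom_ok(0.35, 0.40))  # True: survivor would carry 75%
print(failover_headroom_ok(0.60, 0.20))  # False: one link is already past 50%
```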

Chime in via the Comments if you’ve got a question for me!
