Why vPC UCS to Nexus?

Author
Peter Welcher
Architect, Operations Technical Advisor

For a change of pace, this blog poses a thought exercise (teaser?), a small item I ran into while considering whether to use a vPC to connect a UCS chassis to a pair of Nexus switches. I’ll give you the diagram and scenario, and then my answer. I’m hoping this will give your Layer 2 and Layer 3 skills a little exercise along the way. For me, it confirms that Cisco Validated Designs usually have a reason for how they do things!

The Setting

The consulting customer had two UCS chassis and two NetApp devices to connect to two Nexus switches. Each of the four systems to connect up had 4 x 10 Gbps interfaces. The UCS chassis would be doing NFS to the NetApp devices for storage, since NFS has worked well for the site historically. 

Testing showed that single-homing the NetApp boxes worked well, with a NetApp rapid failover mechanism protecting against loss of a single Nexus. However, there was some desire to align the design with the Cisco FlexPod validated design. So the design evolved to dual-homing the 2 UCS and the 2 NetApp chassis via two links to each Nexus switch. Small detail: the design was a collapsed core, and Nexus 7Ks were playing the role of the Nexus 5Ks in the standard FlexPod design.

The UCS devices and the NetApp boxes will be in different VLANs (hardly necessary, but planned). 

The Diagram

The following diagram illustrates the proposed design (using vPC for the uplinks).

The Question

The question came up of whether it was necessary / appropriate / useful to vPC the 4 links from each chassis. Using end host mode or MAC pinning mode could more or less imitate the traffic patterns that would occur with vPC. And not doing vPC would be a bit less technically complex, avoiding using the new / recent vPC functionality. Yet the FlexPod approach calls for using vPC.
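For reference, the vPC option would look roughly like the following on each Nexus. This is a hedged sketch, not the customer's actual config: the domain number, interface numbers, keepalive addresses, and port-channel / vPC IDs are all invented for illustration.

```
! Sketch of the vPC config on one Nexus (the peer mirrors it)
feature vpc
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
!
interface port-channel 101
  switchport mode trunk
  vpc 101                        ! same vPC number on both Nexus peers
!
interface ethernet 1/1 - 2       ! two of the chassis's 4 x 10 Gbps links
  switchport mode trunk
  channel-group 101 mode active  ! LACP toward the UCS / NetApp side
```

The non-vPC alternative is simply two independent trunks (or port-channels) per Nexus, with end host mode or MAC pinning spreading the end-system MACs across the uplinks.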

Aside from high level “design philosophy”, is there a technical advantage to using vPC in this setting? What does using vPC add? 

Peeking

No peeking! My answer is below. 


The Answer

What’s your FHRP going to be? Whether HSRP, VRRP, or GLBP, your traffic will be sent to a virtual MAC address associated with the FHRP default gateway for the end system.
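To make the virtual MAC concrete, here's a minimal HSRP sketch for an SVI. The VLAN number, addresses, and group number are invented for illustration; the virtual MAC shown is the standard HSRP version 1 derivation (0000.0c07.acXX, where XX is the group number in hex).

```
! Hedged sketch: HSRP on an SVI (VLAN and addresses invented)
interface vlan 100
  ip address 10.1.100.2/24
  hsrp 100
    ip 10.1.100.1          ! end systems' default gateway
!
! HSRPv1 derives the virtual MAC from the group number:
! group 100 (0x64) -> virtual MAC 0000.0c07.ac64
! End systems ARP for 10.1.100.1 and send traffic to that virtual MAC.
```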

In a non-vPC setting, the hashing randomness of end host mode (or MAC pinning) is likely to send about 50% of your traffic up the “wrong” link to reach the virtual MAC address. GLBP just does the same thing with more virtual MAC addresses involved. Such misguided traffic then has to cross the peer link (which might be just a trunk if no vPC is being used).

When you do vPC, FHRP “spoofing” means that whichever uplink gets used, the receiving Nexus will forward the traffic locally. Less / no cross-link traffic. That’s a win.
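In config terms, this behavior is mostly automatic: with vPC, the HSRP standby peer also forwards frames addressed to the virtual MAC when they arrive on a vPC member port, regardless of which peer is HSRP active. The related peer-gateway knob (sketched below, with the same invented domain number as above) extends that to frames sent to the peer's own router MAC, which some NAS stacks have been known to do.

```
! Hedged sketch: vPC domain knob related to local forwarding
vpc domain 10
  peer-gateway    ! also route frames addressed to the peer's router MAC
!
! With vPC, both peers forward traffic sent to the FHRP virtual MAC
! received on vPC member ports -- no detour across the peer link.
```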

Bonus Round

What changes if we put the UCS and the NetApp devices into the same VLAN? Is vPC useful? 

Answer: I’m coming up with “maybe not for UCS-to-NetApp traffic, and back again.” But what about traffic to users, who will be in a different VLAN? There the same argument applies.

Hope you enjoyed this teaser!

If I missed something, I’m sure you’ll be delighted to tell the world all about it in a comment. Be polite or I won’t let it see the light of day! 🙂
