Being Clear about Virtual Switching

Author
Peter Welcher
Architect, Operations Technical Advisor

Virtual switching refers to the several ways in which switching has been and is evolving to adapt to and provide suitable functionality for virtual machines and blade server chassis. Virtual switching is also potentially confusing, even for those who have been working in the space for a while. So I’d like to describe some terms, while passing along some good links that I’ve come across. (Thanks to those who shared some of them with me.)

Our starting point is a very lucid blog by Joe Onisick, titled “Access Layer Network Virtualization: VN-Tag and VEPA”, at http://www.definethecloud.net/access-layer-network-virtualization-vn-tag-and-vepa.

Joe explains Virtual Ethernet Port Aggregator (VEPA), part of 802.1Qbg, as allowing a Virtual Ethernet Bridge (VEB, e.g. the VMware vswitch) to hand off internal switching between VMs to an external physical switch. Multi-channel (also part of the 802.1Qbg work) uses QinQ tagging to provide multiple separate VEB and VEPA uplinks on a single physical link.
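If you haven’t worked with QinQ: the idea is to nest a second 802.1Q tag, so the outer tag identifies the channel while the inner tag remains the server’s own VLAN tag. As a minimal Cisco IOS-style sketch (the interface name and VLAN number are hypothetical, and this illustrates generic QinQ tunneling rather than the exact 802.1Qbg multichannel encoding):

```
! Hypothetical example: outer tag (VLAN 100) identifies one virtual channel;
! inner 802.1Q tags from the attached device pass through unchanged.
interface GigabitEthernet0/1
 switchport access vlan 100
 switchport mode dot1q-tunnel
```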

VN-Tag is Cisco’s approach to this, and the basis of the 802.1Qbh Bridge Port Extension standards work. The Cisco VIC in UCS and the Nexus 2000 use VN-Tag technology. The key difference is that VN-Tag adds a tag to the frame, requiring software and hardware support for encapsulation and de-encapsulation.

Thanks, Joe, I’ve been looking for a lucid summary for quite a while! Be sure to see his article for diagrams and more words of explanation. 

On the server front, I seem to end up working with HP blade chassis admins a lot, and their pet tool is HP VirtualConnect. So I’ve had to read up on HP VC and reassure network admins it isn’t going to cause an STP loop. I read HP VC as similar to the VMware vswitch (but for physical blades), so perhaps more like the Cisco UCS Fabric Interconnect: it provides an interconnect between the mezzanine interface (NIC) cards on the blade servers and the external physical ports. HP VC is in effect an internal patch panel for Ethernet and Fibre Channel that can apply tags and forms of teaming externally. It also switches internally. And if you work with the HP admins, they might appreciate your configuring LACP on the relevant Cisco ports, since HP VC will try to negotiate LACP and will fall back to active/passive links if it does not find it.
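For reference, here’s a sketch of what that looks like on the Cisco side (interface and port-channel numbers are hypothetical; adjust to your uplinks toward the VC module):

```
! Hypothetical example: two uplinks toward HP VirtualConnect bundled with LACP.
! "mode active" makes the Cisco side initiate LACP negotiation, which is what
! HP VC is listening for.
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
```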

There’s now a fairly lucid document titled “HP VirtualConnect for the Cisco Network Administrator” from HP. It can be found at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01386629/c01386629.pdf. Yes, it is a bit long, but it is worth reading (skip the server material you don’t care about). It’s a must-read for the datacenter network person in an HP server shop. I do see a couple of consulting customers running HP VC plus the Cisco Nexus 1000V, since HP VC is more about physical interconnect and “plumbing”, and the 1000V is a large distributed switch handling the logical VM plumbing.

Last but not least, Ivan Pepelnjak has a blog post characterizing the vendor strategies for the datacenter. See http://blog.ioshints.info/2011/03/data-center-fabric-architectures.html. He describes the strategies as Business as Usual (Multi-Chassis EtherChannel or Link Aggregation, TRILL, Service Provider Bridging, FabricPath), the Borg (forms of stacking, usually of two switches), and Big Brother. Juniper’s QFabric sounds like it might be a Big Brother strategy to me (a central controller doing MAC over MPLS or whatever) for their fabric / TRILL-style clustering approach.
All good stuff! Happy reading!
