At the April CMUG session, I presented “Data Center Virtualization.” We looked at the many aspects of virtualization relevant to the data center, the ways in which the data center is changing, and some design approaches you may find useful in your own environment. Major portions of the talk covered the following topics:
- Server virtualization and its impacts and opportunities (some VMware topics, Nexus 1000v virtual switch)
- Network virtualization (Cisco 6500 VSS, Cisco Nexus 7000 vDC, Nexus 7000 and 5000 vPC, etc.)
- SAN virtualization techniques and topics (including NPV and NPIV, why SAN virtualization matters)
I also presented an overview of data center services modules and data center interconnect (“DCI”) techniques, including the powerful new Cisco Overlay Transport Virtualization (OTV) technology. (We hope to give a full presentation on OTV details early in the Fall of 2010.)
For a PDF of the presentation, please download Data Center Virtualization. (13.1 MB — big! All those graphics!)
2 responses to “CMUG: Data Center Virtualization”
I found a VSS Best Practices document which may have been the one you described.
The VSS link is good if you’re doing VSS, but with Nexus vPC there’s a totally different set of issues. VSS works pretty much as you’d (well, I’d) expect. With vPC, there are failure or design situations where traffic cannot go across the vPC peer link and then back out, so you’d best have a non-vPC trunk between the vPC peer switches.
vPC basics: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572835-00_NX-OS_vPC_DG.pdf
Hmm, I have a copy of a great Cisco presentation by Roberto Mori that I had hoped to find online, but I can’t find it on cisco.com. Suffice it to say (for now) that you want to carefully read the Nexus 7000 or 5000 configuration guide chapter on vPC, especially the part about dual-attaching everything that participates in vPC or carries a VLAN across the vPC peer link. Otherwise you may see connectivity loss in failure situations, despite redundant links.
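To make the dual-attach point concrete, here is a minimal NX-OS configuration sketch for one of the two vPC peer switches. This is illustrative only, not from the presentation: the domain ID, keepalive addresses, and port-channel numbers are made up, and a real design needs matching configuration on the other peer.

```
! Illustrative values only: domain ID, IPs, and interface numbers are assumptions.
feature vpc
feature lacp

! vPC domain; peer-keepalive runs over a separate path (here, the mgmt VRF)
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management

! vPC peer link between the two Nexus peer switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Dual-attached downstream device; use the same "vpc 20" number on both peers
interface port-channel 20
  switchport mode trunk
  vpc 20

! Separate non-vPC trunk between the peers, for traffic from any
! single-attached ("orphan") devices or VLANs not carried on vPCs
interface port-channel 30
  switchport mode trunk
```

The idea is that anything participating in vPC is dual-attached to both peers, while orphan traffic has a path that does not depend on the vPC peer link’s loop-prevention rules.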