CMUG: Designing for VPC, FEX, and Datacenter Virtualization

Author
Peter Welcher
Architect, Operations Technical Advisor

In this CMUG session I primarily discussed datacenter access-layer virtualization. The content drew slides from several CiscoLive 2011 presentations, accompanied by some slides of my own. The intent was to look at datacenter virtualization from a slightly different perspective.

One goal of the talk was to provide fairly solid coverage of VPC and FEX (Nexus 2000) designs and best practices. Another theme was that the VN-Tag (a former Cisco term) and VN-Link technology present in the Nexus 2000 FEX is logically present in the Nexus 1000v, and literally present in the Cisco NIC adapter technologies named Adapter-FEX and VM-FEX. These provide per-host and per-VM virtualization of a NIC, allowing the resulting logical interfaces to be configured on the attached Nexus 5500 switch. In effect, the NIC (or “VIC”) behaves somewhat like a hardware-based version of the Nexus 1000v, with the Nexus 5K doing the switching in hardware.
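To make the Adapter-FEX model a bit more concrete, here is a rough configuration sketch for a Nexus 5500 with an attached Cisco VIC. The interface numbers, VLAN, and port-profile name are made-up examples, and exact commands vary by NX-OS release; treat this as an illustration of the model, not a tested config.

```
! Enable the virtualization feature set on the Nexus 5500 (sketch)
install feature-set virtualization
feature-set virtualization

! Let the switch create vEthernet interfaces as the VIC presents its vNICs
vethernet auto-create

! The physical port facing the server's VIC runs VN-Tag
interface Ethernet1/10
  switchport mode vntag

! A port-profile the server's vNIC can inherit; the resulting vEthernet
! interface is then configured and monitored on the 5500 itself
port-profile type vethernet WebServers
  switchport mode access
  switchport access vlan 100
  state enabled
```

The point of the example: the per-vNIC switching policy lives on the Nexus 5500, which is exactly the “hardware-based Nexus 1000v” behavior described above.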

The talk also briefly touched on somewhat related topics such as OpenFlow and VXLAN, mostly via discussion and whiteboarding not reflected in the slides. Active questions from the audience covered a lot of ground as well. (Great turnout — I’m honored that so many chose to spend their morning listening!)

For a PDF of the presentation, please download Designing for VPC, FEX and Datacenter Virtualization. (14.1 MB, a BIG download due to all the graphics!)

One response to “CMUG: Designing for VPC, FEX, and Datacenter Virtualization”

  1. [b]Here’s a comment received by email:[/b]

    If it does not take too much time for you, I have another question.
    I have done some reading about the vPC peer-keepalive, in terms of design.
    Can you tell me which is best between the two possibilities below:

    – use one physical interface as an L3 peer-keepalive link between two N5Ks?
    or
    – create a new dedicated VLAN just for a point-to-point L3 peer-keepalive that uses an SVI?

    Personally I prefer the first scenario, since I do not have to configure a new VLAN just for that and
    make sure that VLAN is excluded from the vPC peer link.

    PROS/CONS? I do not see the added value of the second solution, and the various forums do not help.

    Many thanks

    [b]MY REPLY:[/b]

    If you have a separate port channel for non-vPC VLANs, I’d use approach #2, since that would be robust. Such a link is also useful for routing between the peers.

    If you don’t have that, then I’d use a dedicated 1 Gbps routed link for the keepalives.
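    To make the two options concrete, here is a rough NX-OS sketch. The VLAN, VRF name, interface, and IP values are invented for illustration. Note that on a Nexus 5500, option #1 as a routed port requires the Layer 3 module; without it, mgmt0 is the usual dedicated keepalive path.

```
! Option 1: dedicated routed link for the peer-keepalive
vrf context VPC-KEEPALIVE
interface Ethernet1/48
  no switchport
  vrf member VPC-KEEPALIVE
  ip address 10.255.255.1/30

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf VPC-KEEPALIVE

! Option 2: SVI on a dedicated VLAN carried over a separate (non-vPC)
! port channel; make sure this VLAN is NOT allowed on the vPC peer link
feature interface-vlan
vlan 999
interface Vlan999
  ip address 10.255.255.1/30

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf default
```

    Either way, the goal is the same: the keepalive must take a path that cannot be affected by a vPC peer-link failure.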
