I got in some lab time working with vPC+ last week. I was teaching the week-long Nexus class DCUFI in Indianapolis. I teach it about once a month via FireFly (enjoy teaching, change of scene, different activity from consulting, like the sound of own voice, get good evals, etc.). The class was sharp and a couple of folks were very interested in FabricPath and the vPC+ feature.
vPC+ is the feature whereby a pair of vPC switches at the edge of FabricPath are virtualized as a single switch in terms of the FabricPath topology, meaning that you get load-balanced behavior to the pair. There are configuration samples at the following URLs:
- Configuring a vPC+ Switch ID
- Nexus 7000 FabricPath
- Cisco Nexus 5000 Series NX-OS FabricPath Configuration Guide
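As a minimal sketch of what those guides cover (the domain number, switch-ID, and interface numbers here are illustrative, not from any real deployment), the vPC+-specific pieces look roughly like this:

```
! On both vPC peers: enable the feature sets first
feature-set fabricpath
feature vpc

vpc domain 10
  ! vPC+: assign the virtual switch-id shared by the pair.
  ! This is what makes the two peers appear as one FabricPath switch.
  fabricpath switch-id 100

! For vPC+, the vPC peer-link runs as a FabricPath core port,
! not as a classic Ethernet trunk
interface port-channel 1
  switchport mode fabricpath
  vpc peer-link
```

The key contrast with plain vPC is the fabricpath switch-id line in the vpc domain block and the peer-link in fabricpath mode rather than trunk mode.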
I also liked the Nexus 5000 FabricPath Operations Guide but it seems to have vanished from Cisco’s site since last week! I hope it is just temporarily AWOL.
Due to an upcoming migration, the site wants to run vPC on core/aggregation Nexus 7000s supporting the old pairs of 6500 switches in the datacenters, and be ready to run FabricPath to Nexus 5500s as future pairs of Nexus 5500s are rolled out. Put differently, the idea was to start with FabricPath solely between the spine N7Ks, then grow the FabricPath cloud "down" to N5Ks as they are deployed, using the newly added FabricPath feature for connectivity. Thus the initial topology would run FabricPath solely on the vPC peer-link between the N7Ks. Doing that at the time of the initial N7K rollout would minimize subsequent disruptive changes (or so we conjectured).
We tried this; it does not seem to work. Watching debugs, you see a lot of churn on the vPC peer-link, but it ends up with the peer-link down. The root of the problem is that IS-IS cannot come up with a link ID (LID) for the peer-link. My conjecture is that this topology is outside the design range Cisco contemplated or tested, i.e. a case where no other FabricPath network is present. The next step I would have suggested was to restructure with a separate FabricPath link other than the vPC peer-link, but we ran out of time and energy (and had no flexibility to recable the remote lab). We didn't find (or perhaps didn't recognize) any comment one way or the other about this in the documentation.
We did learn some things along the way, one of which I'd like to pass along since it might save you some frustration. We had some real problems getting the vPC peer-link to stay up initially, and kept getting an error message about the switch ID: for vPC+, you put the virtual switch-ID into the vpc domain block of the configuration. The somewhat cryptic comment in TFM (The Fine Manual) was that vPC and vPC+ cannot co-exist in the same VDC. What we saw was that some FabricPath state was persisting somewhere, even though we had removed all the fabricpath commands.
Eventually we figured out that it lurked in the vPC domain block. So if you set up a working vPC and then want to convert it to vPC+, you have to configure "no vpc domain" and then re-create the domain. That gets rid of vPC mode so you can configure vPC+. What's in the examples and configuration guides works fine as long as you do it from scratch, just not if your peer-link is already up and running. This is something the manual could have been a lot clearer about. Anyway, I hope this helps when the time comes for you to run vPC+.
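In CLI terms, the brute-force conversion we ended up with looks roughly like this (the domain number, switch-ID, keepalive address, and port-channel number are all illustrative; and note that removing the vPC domain is disruptive):

```
! Starting point: a working classic vPC in 'vpc domain 10'
no vpc domain 10            ! wipe the lingering vPC-mode state

vpc domain 10               ! re-create the domain from scratch
  fabricpath switch-id 100  ! now accepted: the domain comes up in vPC+ mode
  peer-keepalive destination 192.0.2.2  ! re-add keepalive and any other
                                        ! domain settings you had before

interface port-channel 1
  switchport mode fabricpath
  vpc peer-link
```

Everything previously configured under the old vpc domain block has to be re-entered, so capture the running config before you start.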
7 responses to “Working with FabricPath and vPC+”
Cisco FabricPath Design Guide:
1. I have 4 pairs of Nexus 7Ks.
2. 2 pairs of N7Ks in core (C-N7K1 & C-N7K2, C-N7K3 & C-N7K4)
3. 2 pairs of N7Ks in aggregation (A-N7K1 & A-N7K2, A-N7K3 & A-N7K4)
4. (C-N7K1 & C-N7K2, C-N7K3 & C-N7K4): all four boxes are full mesh with L3 links & L2 links. L3 links (M1 module ports) are in the default VDC & L2 links (F1 module ports) are in the FabricPath VDC.
5. (A-N7K1 & A-N7K2) are full mesh with (C-N7K1 & C-N7K2). All four boxes are full mesh with L3 links & L2 links. L3 links (M1 module ports) are in the Classical Ethernet VDC & L2 links (F1 module ports) are in the FabricPath VDC.
6. (A-N7K3 & A-N7K4) are full mesh with (C-N7K3 & C-N7K4). All four boxes are full mesh with L3 links & L2 links. L3 links (M1 module ports) are in the Classical Ethernet VDC & L2 links (F1 module ports) are in the FabricPath VDC.
7. (A-N7K1 & A-N7K2): we have vPC+ from Classical Ethernet VDC ports to the FabricPath VDC.
8. (A-N7K3 & A-N7K4): we have vPC+ from Classical Ethernet VDC ports to the FabricPath VDC.
9. 1 pair of Nexus 5Ks (N5K1, N5K2) with FEX are connected to (A-N7K1 & A-N7K2) on Classical Ethernet VDC ports with CE VLAN 35.
10. 1 pair of Nexus 5Ks (N5K3, N5K4) with FEX are connected to (A-N7K3 & A-N7K4) on Classical Ethernet VDC ports with CE VLAN 35.
11. The SVI for CE VLAN 35 on A-N7K1 & A-N7K2 uses 184.108.40.206/25.
12. The SVI for CE VLAN 35 on A-N7K3 & A-N7K4 uses 220.127.116.11/25.
13. However, the FP VLANs are in the range 2000-3000, which is in the FabricPath VDC.
A. In this scenario, as seen above, the VLAN is the same, i.e. VLAN 35, but the subnets are 18.104.22.168/25 & 22.214.171.124/25. Will there be Layer 2 adjacency and conversational MAC learning between the servers, i.e. 126.96.36.199/25 & 188.8.131.52/25?
B. How do we verify (what show command do we use) that both 184.108.40.206/25 & 220.127.116.11/25 are performing conversational MAC learning?
Aron was kind enough to provide the following diagram upon request, to clarify the above comment/questions:
The diagram certainly helps.
My first question is: what are you trying to do and why? This looks like either a lab scenario or an early stage in a migration to FabricPath, although I'm not sure I'd migrate this way. One consideration: generally, with inter-VDC links, I worry about bandwidth bottlenecks compared to the internal fabric.
Assumption: you don’t state it, but I assume the links labelled vPC+ are trunks carrying all VLANs? Technically I would quibble that the ones on the right are not vPC+ since they are classic Ethernet ports, hence doing vPC.
To answer your questions:
(A) My understanding is that VLAN 35 re-uses the same VLAN number for each N5K access pair (pod). Since the VLAN is not contiguous at L2, each instance has its own SVI and subnet. Since you tell me the FP VLANs are in the range 2000-3000, the FP side does not indirectly interconnect the instances of VLAN 35. So as far as I can see, there is no L2 connection between the two VLAN 35 instances nor their SVIs.
(B) Try "show mac address-table dynamic vlan 35". Look for entries of the form #.#.# (a dotted triple of numbers: switch ID.subswitch ID.LID). Locally switched L2 entries will show ports (interfaces) as usual; FabricPath-reachable MACs will show the other form of entry.
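Purely for illustration (MAC addresses, the port, and the switch/LID values are invented, and some output columns are abbreviated to "..."), the two kinds of entries look something like this:

```
switch# show mac address-table dynamic vlan 35
* 35   0000.1111.aaaa   dynamic   ...   Eth1/5     <- local entry: normal interface
* 35   0000.2222.bbbb   dynamic   ...   100.0.64   <- FabricPath entry:
                                                      switch-id 100, subswitch 0, LID 64
```

If you see only interface-style entries, that VLAN's MACs are all being learned locally, not across FabricPath.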
(C) Anticipating a further question: if you make VLAN 35 a FabricPath VLAN, then you've merged the instances of VLAN 35. If the two SVIs are re-addressed to be in the same subnet (so ARP will work), then I'd think they'd have an L2 path to each other via the FP side, and in that case MAC learning would occur.
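A rough sketch of that merge (the shared subnet 10.35.0.0/25 and the SVI addresses are illustrative assumptions, chosen only to show the idea):

```
! On the FabricPath side: make VLAN 35 a FabricPath VLAN,
! which merges the two formerly separate VLAN 35 instances
vlan 35
  mode fabricpath

! On each aggregation pair: re-address the SVIs into one shared subnet
! e.g. .2/.3 on one pair, .4/.5 on the other, same /25 everywhere
interface vlan 35
  ip address 10.35.0.2/25
```

Once both SVIs sit in one subnet on a single L2 domain, ARP resolves across the FP cloud and conversational MAC learning kicks in for the remote hosts.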
A comment about the topology. For FP, I like having all the access switches connected to all the core/spine switches. Otherwise, you need a lot of connectivity (full mesh plus?) between the spine switches to make sure you don’t have a bottleneck. Putting the connectivity between the access and core layer makes it more useful. In the above diagram, the top LAN link looks to me like a bottleneck between the left and right halves.
The little issue remaining with FP is gateways out of the FP cloud. The obvious solution is to extend the FHRP protocol(s) to allow, say, up to 4 to 16 active forwarders for a given FP VLAN. Cisco rarely misses the obvious. 🙂
I was just reviewing [url]http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/fabricpath/configuration/guide/fp_interfaces.html#wp1674221[/url]. I came across something relevant to this blog article: a perhaps gentler way to migrate vPC to vPC+. I have not lab-tested it, but it is printed in a Cisco document so it must be true 🙂
[quote]If you are changing an existing vPC configuration to a vPC+ on an F Series module, follow these steps:
1. On each vPC peer link port channel, enter the shutdown command.
2. In the vPC domain configuration mode, enter the fabricpath switch-id switch-id command.
3. On each of the vPC+ peer link interfaces in interface configuration mode, enter the switchport mode fabricpath command.
4. On each vPC+ peer link port channel, enter the no shutdown command.[/quote]
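Those four steps would look something like the following at the CLI. I have not lab-tested this either, and the port-channel number, member interfaces, domain number, and switch-ID are all illustrative:

```
! Step 1: shut the peer-link port channel on both peers
interface port-channel 1
  shutdown

! Step 2: add the virtual switch-id in the vPC domain
vpc domain 10
  fabricpath switch-id 100

! Step 3: put the peer-link member interfaces into fabricpath mode
interface ethernet 1/1 - 2
  switchport mode fabricpath

! Step 4: bring the peer-link back up
interface port-channel 1
  no shutdown
```

The appeal is that the vpc domain block (keepalive, timers, etc.) survives intact, at the cost of a brief planned outage of the peer-link.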
Take the FabricPath out and draw your picture just with VLANs and trunks, wherever they extend to. If you can do it with VLANs, you can do it with FabricPath. And vice versa.
See also my new posting [b]Designing FabricPath[/b], at [url]http://www.netcraftsmen.net/resources/blogs/designing-fabricpath.html[/url]
From one of the Cisco Networkers talks, I get the impression you can convert vPC to vPC+ a little more easily than indicated in this article. One can shut down the vPC peer link ports, which brings down the vPC pairing (and likely some member links). Then go into vpc domain mode and add the virtual switch switch ID. Then bring the vPC peer link back up. That’s a little bit faster than deleting the vpc domain and re-creating it.
Caution: I have not tested this.