Some Comments on Network Virtualization

Author
Peter Welcher
Architect, Operations Technical Advisor

I have some comments about a recent article on SDN Central, specifically the article Using Network Virtualization in Campus Networks. I appreciate the author’s attempt at a comparison, but the article omitted some relevant points. Or perhaps the author and I are thinking along similar lines and would simply express ourselves differently. Either way, I’d like to add a couple of thoughts to what the author, Suresh Katukam, wrote.

In reading the article, I see the idea that SDN (the OpenFlow variant, that is) is allowed to configure flow information in switches to build a flow-based version of VRF Lite, but somehow the comparison doesn’t also allow SDN (in the configuration-automation sense) to configure VRF Lite the same way. So let’s see: I could configure VRF Lite in automated fashion by pushing a modest amount of CLI configuration to devices, whereas managing flows entails pushing several orders of magnitude more information out. How is that preferable? Which is more feasible right now? Three years from now?
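To make the contrast concrete, here is a minimal sketch of the configuration-automation approach, assuming Python with the Netmiko library and a Cisco IOS switch; the device details, VRF name, and addressing are all made up for illustration. The flow-based alternative would push an entry per flow rather than a handful of config lines per device.

    # Push a small VRF Lite configuration to one switch (hypothetical details).
    from netmiko import ConnectHandler

    VRF_CONFIG = [
        "ip vrf medstaff",
        " rd 65000:10",
        "interface Vlan110",
        " ip vrf forwarding medstaff",
        " ip address 10.10.110.1 255.255.255.0",
    ]

    device = {
        "device_type": "cisco_ios",
        "host": "10.0.0.1",       # hypothetical management address
        "username": "admin",
        "password": "secret",
    }

    conn = ConnectHandler(**device)
    print(conn.send_config_set(VRF_CONFIG))   # enters config mode and pushes the lines
    conn.disconnect()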

The second aspect I’d like to touch upon is that I keep seeing a lot of “we’ll just tunnel”, and not just in this article. I’ve wanted to “just tunnel” for years. Unfortunately, it usually isn’t viable at the performance levels the design requires. In particular, right now GRE / L3 tunneling is a major performance hit on almost every platform I know of. Ok, fine, future platforms will likely be smarter and do tunneling in hardware, or have faster CPUs. (Seen any good specs on LISP Gbps of performance lately?) Vendors get pretty coy about revealing actual tunneling performance specs for their hardware. The article contains a pretty good list of the other objections to tunneling, with opaqueness for troubleshooting, security, and QoS purposes being high among them. L2 tunnels per port are not a big deal for current hardware supporting QinQ. Per-flow assignment of VLAN tags is also something I’d want hardware support for, however.
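As a side note on what tunneling costs even before the CPU hit, here is a quick back-of-envelope look at basic GRE-over-IPv4 encapsulation overhead (20-byte outer IP header plus a 4-byte GRE header, ignoring optional key/sequence fields); the payload sizes are just examples.

    # Basic GRE-over-IPv4 overhead: 20-byte outer IP + 4-byte GRE header.
    GRE_OVERHEAD = 20 + 4

    for payload in (64, 512, 1476):
        on_wire = payload + GRE_OVERHEAD
        print(f"{payload}-byte payload -> {on_wire} bytes on the wire "
              f"({GRE_OVERHEAD / on_wire:.1%} overhead)")

    # A 1500-byte interface MTU leaves 1500 - 24 = 1476 bytes for the inner packet.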

The hop-by-hop discussion is also fairly bang on, and still makes me want to say “use MPLS labels except at the edge”. Ok, others better known than I have also said that. And repeating it is probably pointless until and unless the core players in OpenFlow start liking the idea. I just think OpenFlow-ready chipsets ought to be able to match on the standard stuff (the “12-tuple”), and/or an MPLS label.

How hard would that be to engineer? It would perhaps let the central controller leverage Forwarding Equivalence Classes to reduce the amount of flow state in any given switch. The article mentions MPLS, but as a separate domain. If we could shift the whole classification problem to the NIC and have it apply an MPLS label (with switches just switching on labels), wouldn’t the whole system scale better from a hardware perspective? Of course we’d then need OpenFlow control of NICs. We’d then truly have a complex edge (in the NIC, not the switches) with a simpler core network. That’s a topic for a future blog.

Update. I was skimming the OpenFlow 1.1 spec and guess what? MPLS label matching is in there. Great! I still haven’t seen much discussion about using it to date, so I went off and searched. Summary of what I found = a future blog. I’ll also note that Martin Casado and others have written about an MPLS-like Edge/Core approach, in Fabric: A Retrospective on Evolving SDN.
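To illustrate what matching on a label rather than the full tuple looks like from a controller’s point of view, here is a minimal sketch using the Ryu framework and OpenFlow 1.3 (which carries forward the MPLS matching introduced in 1.1); the label value, output port, and priority are made up.

    # Install a flow that matches only on the MPLS label and forwards it on.
    def install_label_flow(datapath, label=100, out_port=2, priority=10):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match the MPLS unicast ethertype (0x8847) plus the label itself;
        # this hop never has to look at the inner 12-tuple at all.
        match = parser.OFPMatch(eth_type=0x8847, mpls_label=label)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]

        datapath.send_msg(parser.OFPFlowMod(datapath=datapath,
                                            priority=priority,
                                            match=match,
                                            instructions=inst))

In a real Ryu application this would be called from a switch-features or packet-in handler, using the datapath handle the controller already holds.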

The discussion in the article around segmentation raises one interesting point. Yes, VRF Lite is per edge port, not per flow. I’ve been noticing for a while that with 802.1x, then NAC, and now ISE, you can get some context into a campus network fairly easily. All of them can “slam” a port into a VLAN which is tied to a specific VRF. So in a hospital, if you log in and belong to the group “doctor”, you get put into the “medstaff” VRF, as opposed to the med student, visitor/patient, or clinical VRFs. Similarly for other settings. The Cisco Security Group Tag concept in effect gets us out of the source-IP-address business as far as security groupings go, which is a creative twist. In principle, that really simplifies datacenter ACLs.
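A toy sketch of the mapping being described, with the group names, VLAN IDs, and VRF names invented purely for illustration:

    # User group -> VLAN -> VRF, the way 802.1x/NAC/ISE port assignment plus
    # VRF Lite effectively chains things together (all values hypothetical).
    GROUP_TO_VLAN = {"doctor": 110, "med-student": 120, "visitor": 130}
    VLAN_TO_VRF = {110: "medstaff", 120: "students", 130: "guest"}

    def segment_for(user_group):
        vlan = GROUP_TO_VLAN[user_group]
        return vlan, VLAN_TO_VRF[vlan]

    print(segment_for("doctor"))   # (110, 'medstaff')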

Where this user-to-VRF mapping approach gets limited is precisely around flexibility, as Suresh’s article notes: you can’t have packets from a single device segmented differently.

My question: do we even want or need this? Having PCI or HIPAA data on the same device as non-segmented traffic certainly makes it easier for the data to escape DLP boundaries. Isn’t the purpose of segmentation to isolate devices requiring a higher security standard, so that (a) devices with less stringent security can’t form the basis for an attack on the more secure devices, and (b) you can do heavier traffic monitoring and security for the more-secure devices? So if you suddenly start doing things per application rather than per device, you’ve really created a lot of security boundary issues for the security team.

Another way of getting at this: why should traffic from two different applications be segmented differently? Can’t the traffic travel along a common path? It is not going to interact in any fashion. The real goal would be to control which applications (internal or on the Internet) the device can reach, both of which are things a firewall can handle at the Internet edge or somewhere in the datacenter. Some sites now wish to segment visitor/guest traffic, either logically or physically. I call it “escorting guest packets to the exit”. The extreme (which I’ve seen) allows different VLANs in the same switch but requires physically different cables upstream. That has always rather puzzled me. Neither of these approaches makes any sense to me. IP traffic can’t “wander” in the network the way people might wander in hallways. If you’re worried about machine-to-machine attacks, well, give the guests a different flavor of addressing and use ACLs to block traffic from the visitor block to internal IP addresses. Done!
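For what it’s worth, here is a tiny sketch of that one deny rule expressed with Python’s ipaddress module rather than ACL syntax; the guest and internal address blocks are made up.

    # Deny guest-block sources reaching internal blocks; permit everything else.
    from ipaddress import ip_address, ip_network

    GUEST_BLOCK = ip_network("172.30.0.0/16")
    INTERNAL_BLOCKS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

    def permitted(src, dst):
        s, d = ip_address(src), ip_address(dst)
        if s in GUEST_BLOCK and any(d in net for net in INTERNAL_BLOCKS):
            return False   # the one deny rule: guest -> internal
        return True        # guests still reach the Internet edge

    print(permitted("172.30.4.7", "10.1.1.5"))        # False
    print(permitted("172.30.4.7", "198.51.100.10"))   # True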

I also wish to note the complexity price here. With Cisco ISE, they’ve gotten us out of the IP-source-address business, so we can write rules for which user groups can send data on which ports to which servers (or object groups). If we start mixing in which device type, where the user is, the time of day, and other factors, we start adding complexity back in. Add in which application is sending the data and you’ve added one to two orders of magnitude on top of that. At some point, the rulesets become too complex to handle. There’s a human factor here: I keep seeing security groups biting off too much, or grappling with unsustainable object-oriented ACLs they inherited. I’m trying to translate that very real present experience into what the flow-related considerations are.
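A rough, entirely made-up illustration of how those extra dimensions multiply rule counts:

    # Every added policy dimension multiplies the ruleset (invented numbers).
    user_groups, ports_per_server_group, server_groups = 20, 5, 30
    base_rules = user_groups * ports_per_server_group * server_groups        # 3,000

    device_types, locations, time_windows = 4, 3, 2
    contextual_rules = base_rules * device_types * locations * time_windows  # 72,000

    applications = 25
    per_app_rules = contextual_rules * applications                          # 1,800,000

    print(base_rules, contextual_rules, per_app_rules)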

I’d like to see a good discussion (complete with some numeric estimates) of how security granularity like this translates into the number of flows that need to be programmed into the OpenFlow network, and what the implications are as far as cost and timeline to availability / engineering feasibility. After all, part of engineering isn’t just “can we do it” but “at what cost”. I keep looking for that discussion. I’m not holding my breath, since it’s probably highly proprietary engineering-team data for the various vendors. And it’s a rapidly moving target too, no doubt!
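As a placeholder for the discussion I’d like to see, here is the sort of back-of-envelope estimate I mean, with every input invented:

    # Rough per-edge-switch flow-table demand under per-application granularity.
    hosts_per_edge_switch = 48
    concurrent_apps_per_host = 10   # browser, mail, EMR, VoIP, ...
    flows_per_app = 4               # a few concurrent sessions each

    flow_entries = hosts_per_edge_switch * concurrent_apps_per_host * flows_per_app
    print(f"~{flow_entries} active flow entries per edge switch")   # ~1920

    # The interesting engineering question is how that compares to the
    # hardware flow-table (TCAM) capacity and the controller's update rate.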

Life Log

I’m on vacation this week. Rainy day, just had a couple things to say in reaction, so a quick blog seemed like fun. Hope you enjoy it too!

Disclosure

The vendors for Network Field Day 5 (#NFD5) paid for my travel expenses and perhaps small items, so I wish to disclose that in my blogs now. The vendors in question are: Cisco, Brocade, Juniper, Plexxi, Ruckus, and SolarWinds. I’d like to think that my blogs aren’t influenced by that. Yes, the time spent in presentations and discussion gets me and the other attendees looking at and thinking about the various vendors’ products, marketing spin, and their points of view. I intend to try to remain as objective as possible in my blogs. I’ll concede that cool technology gets my attention!

Stay tuned!

Twitter: @pjwelcher
