I’ve been trying to blog about NSX, DFA, and ACI for a while. I’ll blame Cisco: they’re now coming out with additional info, and I kept coming up with more Things I Don’t Know. Hence, deferred blogging, to avoid putting an ignorant foot in my mouth. I’m now equipped to place a smart foot in my mouth instead! That is, this will be my best effort to describe the new technology. This particular blog will necessarily be a brief overview. In subsequent blogs I hope to describe and contrast aspects of the three technologies in the title: NSX, DFA, and ACI. If you’re not familiar with those three acronyms, read on; providing an overview of them is one goal of this particular blog.
The timing seems perfect. I can now react to Greg Ferro’s (@etherealmind, PacketPushers.net) 12/12/2013 Network Computing blog Cisco ACI: Proceed At Your Peril, at http://www.networkcomputing.com/data-center/cisco-aci-proceed-at-your-peril/240164699, and Joe Onisick’s long response. (And no, we are not now calling Greg “Mr. Grouchy”; he has some valid concerns, although I have a somewhat different perspective. I’m generally positive and excited about ACI. I can’t wait to see how this plays out and how well Cisco executes on it.)
There’s a recent Cisco blog, triggered by Greg (or not), claiming lots of early interest: Cisco ACI: In the market, building momentum, at http://blogs.cisco.com/datacenter/cisco-aci-in-the-market-building-momentum-full-suite-on-schedule-for-q2-2014/. I’ve heard anecdotal information about one large firm I was a bit surprised to find exhibiting strong early interest.
For those not tracking my history with this, a Tech Field Day (#TFD) event brought me to the Cisco ACI launch, at which we podcasted raw initial opinions. Since then I (and many others, I suspect) have been searching for crumbs of information about ACI. I’ve also been looking for serious details about DFA since CiscoLive in June. Gideon Tam (@mfmahler) was kind enough to tweet links to relevant Cancun CiscoLive presentations, which provided some good initial information.
Last week I had the good fortune to get a full day of ACI info, plus some DFA info, via Cisco datacenter PVT webexes for partners. A *LOT* of info. No, I can’t condense about 6 hours of presentations into this blog. And there are still areas where I think Cisco has yet to provide details. Lack of info tends to make all of us nervous. I’m extending some trust here, because I see some very bright people and a lot of R&D energy pointed in one direction.
In this case, we seem to have an orchestrated, increasing flow of information as ACI delivery gets closer. And it sounds like a lot of related items will be falling into place over the first half of 2014. Exciting! Cisco may be doing this to avoid overloading the media with details, or it may be because there are some surprises left to be announced.
My guess: Cisco is gradually laying its cards on the table, saving some hole cards. (But I’m terrible at poker.) I do get the sense that there is a long-orchestrated set of steps coming to fruition here, with some pieces from inside Cisco as well as from the former Insieme. OnePK may be a general play by Cisco that is also an enabler for what the ACI team needs to do. One way of perceiving DFA is as an evolutionary step in thinking about spine and leaf forwarding, well on the path towards ACI. It might also be a hedge that can serve a different market than ACI (see below).
One of Greg’s concerns was that several parts all have to work together: software and hardware integration. Well, yes, there’s still a chance of problems there. And there will inevitably be some early bugs, since software is involved. But well-defined APIs can certainly help with system integration. Any chip-related delays Insieme might (or might not) have had could also have allowed more testing and refinement of the GUI and everything else but the chips and OS code that have yet to appear. The Cisco blog above mentions an APIC simulator becoming available in 2014. It sounds similar to the UCS emulator Cisco has made available to developers. Getting your GUI into a lot of hands is one way to (a) test it thoroughly, and (b) rapidly build people’s skills at using the GUI.
I do agree with Greg that few will jump right in and flip their datacenter to ACI. There will be pilots and incremental migration. However, some of my initial questions vanished when it became clear that ACI supports doing traditional networking, or all ACI policy-based networking, or a hybrid model.
And yes, there will need to be some cultural changes. Some of the biggest power of ACI might tie to things like meaningful DNS names for servers and VMs. So datacenters will need to do some planning to take full advantage of ACI going forward. Joe Onisick had a good description of the UCS equivalent: people start by naming their profiles for specific server instances, then begin to abstract and generalize (my words, not his).
One of the gaps, for me, is that I still haven’t gotten a good description of how configuration details get instantiated in devices, although a mechanism has been described for third-party policy-to-configuration template add-ons, something that sounded a bit Puppet-like. The point may be to keep the conversation focused on policy rather than on the details of how devices actually get configured. So we’ll quietly defer that topic to a later blog.
A Useful Picture
To describe the three virtualization schemes, we have to recognize they have different goals and perspectives. My analogy for this is a well-known New Yorker cartoon/poster, the View of the World from 9th Avenue. A low-res version is at http://upload.wikimedia.org/wikipedia/en/4/4d/Steinberg_New_Yorker_Cover.png (link only: potential copyright concerns).
We’ll see how this is relevant in a moment — bear with me.
What is NSX?
For those who haven’t been following along, NSX is VMware / Nicira’s virtualization play. It offers the ability to instantiate and connect built-in virtual router / firewall / NAT functionality, as well as virtual machines (VMs). NSX uses VXLAN overlays between hypervisors to connect virtual devices at L2, overlaying an L2 or L3 physical network. It also provides routing to the external physical world.
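To make the overlay mechanics a bit more concrete, here’s a minimal Python sketch of the VXLAN encapsulation a hypervisor VTEP performs on each inner Ethernet frame. The header layout follows the VXLAN spec (RFC 7348); the function names are my own, and this shows only the VXLAN header itself, not the outer Ethernet/IP/UDP wrapping.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 1 flags byte (I bit set),
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    # '!B3xI' = flags byte, 3 pad bytes, then VNI in the top 24 bits
    return struct.pack("!B3xI", flags, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame; a real
    VTEP would then wrap this in outer UDP/IP/Ethernet headers."""
    return vxlan_header(vni) + inner_frame
```

The 24-bit VNI is the point: it gives roughly 16 million virtual segments, versus 4096 VLANs, which is why overlays like this appeal to multi-tenant datacenters.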
For more info, see my blog Good Links, 10/14/2013 at https://netcraftsmen.com/blogs/entry/good-links-10-14-2013.html. Ivan Pepelnjak, Brad Hedlund, and Scott Lowe’s presentation on ipspace.net is one of the best resources I’ve seen so far. It is at http://demo.ipspace.net/get/NSX%20Architecture.pdf. The NSX Virtualization Design Guide is also interesting reading, at http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf.
NSX seems to me to be a very VM-centric approach. It makes it fairly easy to deploy virtual devices and connect them. That’s one of the aspects I don’t like as much; see the VMware Product Walkthroughs at http://www.vmwarewalkthroughs.com/. Great idea, by the way. But I came away with the reaction: cool, I can stand up a router, but then I have to create a virtual switch to connect it up. Oh: I’m doing the virtual equivalent of mounting a router in a rack, then cabling it to a switch and creating VLANs. That’s faster than physical, but not automated (yet?). And arguably a lot more clicking and detail than I think we ought to have to do. Maybe templates will be coming to expedite things? It’s early days yet, and integration with cloud management tools is another piece of the puzzle.
The flavor of NSX came across to me as a bit like: NSX can connect all your VMs and virtual appliances, with the physical network relegated to just providing L2 or L3 connectivity. And oh, by the way, if you have physical appliances, NSX can connect them up too. (And I hear a little “but why would you want to?” in the back of my head.)
And here’s an (attempted) picture of that:
What is DFA?
DFA stands for Dynamic Fabric Automation. It appears to be a “semi-classic Nexus”-centric approach, perhaps intended as a hedge should ACI have problems, perhaps intended for those who don’t buy into ACI, or whose hardware doesn’t support it. DFA appears to be a bit more of a software integration play, leveraging DCNM, UCS Director, and the 1000v if available. However, it also leverages some new code / protocols in the Nexus 7700 hardware to offload some of the work: specifically, FabricPath as tunneling (more or less), plus BGP with an address family for host routes, informing leaf switches how to do L3 forwarding on a per-host and per-context basis. More about this in later blogs.
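To illustrate the per-host, per-context forwarding idea, here’s a toy Python model of a leaf switch’s host-route table as it might be populated by those BGP advertisements. All names and structure here are mine, for illustration only, not Cisco’s implementation.

```python
class LeafHostRouteTable:
    """Toy model of a DFA leaf's host-route table: BGP distributes
    (VRF context, host /32, egress leaf) tuples, so any leaf can do
    an exact-match L3 lookup per host, per tenant context."""

    def __init__(self):
        # (vrf, host_ip) -> egress leaf identifier
        self.routes = {}

    def learn(self, vrf: str, host_ip: str, egress_leaf: str) -> None:
        """Install a host route, as if received via a BGP update."""
        self.routes[(vrf, host_ip)] = egress_leaf

    def withdraw(self, vrf: str, host_ip: str) -> None:
        """Remove a host route, e.g. after a VM moves or is deleted."""
        self.routes.pop((vrf, host_ip), None)

    def forward(self, vrf: str, host_ip: str):
        """Exact-match lookup; a real leaf would fall back to prefix
        routes, or glean/ARP, on a miss."""
        return self.routes.get((vrf, host_ip))
```

Note that the same host IP can appear in two VRFs without conflict, since the context is part of the lookup key; that is what makes overlapping tenant address space workable.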
I currently view DFA as a mostly Cisco datacenter hardware-centric approach, with some potential for automation and integration with VMware. And by the way, it may remove the need for the overlays NSX uses, simplifying the potential issues of managing overlays and underlays and of tying problems in the overlay back to problems in the underlay.
This sort of looks like:
What is ACI?
ACI is the big launch from Cisco and Insieme as of 11/6/2013. ACI stands for Application Centric Infrastructure.
ACI is a policy-oriented approach, described as trying to bridge the language gap between developers and networking / datacenter technical staff, with automated provisioning per policy and templates. I see this as a way for Cisco to escape from “MQC hell”, where every time you wanted to apply security, QoS, or other service functions, you had to describe essentially the same traffic flows for each function. Instead, ACI will let you define an application relationship and all the services it needs in one place. With RBAC (Role-Based Access Control), different teams can configure their parts of the policy, if that is desired. And (eventually), policy based on DNS name or VM name awareness. (Will we need structured naming conventions? Stay tuned!)
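Here’s a rough sketch of that define-once idea in Python. All names here are hypothetical and this is emphatically not the APIC object model; the point is only that the traffic relationship is described a single time, and every service function refers back to that one definition instead of each re-describing the flow.

```python
# One contract between two endpoint groups: the flow is stated once,
# and security, QoS, and logging all hang off that single definition.
contract = {
    "name": "web-to-app",
    "filter": {"proto": "tcp", "dport": 8080},   # the flow, stated once
    "services": ["permit", "qos:gold", "log"],   # functions reusing it
}

# Policy: which (consumer, provider) group pairs are bound by which contract.
policy = {("web-epg", "app-epg"): contract}

def services_for(src_epg: str, dst_epg: str) -> list:
    """Return the service chain for src -> dst traffic, or [] if no
    contract exists (i.e., the traffic is implicitly not permitted)."""
    c = policy.get((src_epg, dst_epg))
    return c["services"] if c else []
```

Contrast this with the “MQC hell” alternative, where the same tcp/8080 flow description would be repeated in an ACL, a class-map, and a logging rule, each of which could drift out of sync with the others.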
ACI also has some aspects intended to provide very fine-grained manageability, including the idea of activating counters and being able to see which hop is losing packets (or duplicating them). ACI also appears to be intended to work with orchestration software that drives the vSphere side of things. (Note to self: fill in some of the details on that!)
ACI is going to require writing policy rules so as to leverage pattern recognition and abstraction. Those with microscopic CLI and ACL vision are going to hate it. Those who want to lighten the configuration burden may come to appreciate the approach.
The hardware in ACI can terminate VLANs and VXLAN at the edge switches, although ACI will apparently use a variant form of VXLAN internally in its switching fabric. The impact? It looks to me like this means ACI can play well with VMware NSX as well as vSphere, since every edge switch can be a hardware-based, high-performance VXLAN gateway (VTEP).
Maybe ACI provides more balance? And via APIs, a management product teaming approach to managing the datacenter?
You can probably see why I have not considered a career in graphic arts.