If there’s one thing a networking person should probably know about containers, it is of course container networking. I’ve been self-educating on the topic, and I’m finding container networking a complex topic, with variations depending on the tools in use (I’d like a scorecard summary!). For what it’s worth, some blogs I’ve seen indicate that as use of containers grows and scales up, knowledge of container networking is becoming increasingly critical.
Manageability and performance may be two related factors. DevOps teams may or may not coordinate well. In the best case, they may ask for the networking team’s design advice. Will you be ready should that happen?
As a result, I’ll try to give the flavor of container networking, and provide lots of reading references. There are entire books dedicated to this topic. I’m not prepared to write one, and if you’re reading this, you’re probably not looking for a blog that long. For more info, please use the References listed at the end of this blog.
Having said that much, and in the interest of keeping the length reasonable, this blog will focus on Docker Networking.
Need to Know Checklist
For me, networking, especially network troubleshooting, is all about the flows. I’ve often enough been in situations where server and hypervisor admins couldn’t tell me what was going on in sufficient detail to troubleshoot a problem. Most recently: NIC teaming settings in VMware mismatched with a Nexus vPC.
I (and the reader!) have no reason to expect working with containers to be any different.
Here’s my current list of things one might want to know:
- What sort of addressing design is used with the various container mechanisms? Are containers reached via the node’s address and a TCP port, or do individual entities running on a node get their own addresses?
- Containers may be long-lived but in principle they can come and go, particularly when workload monitoring is being used. So, one must expect addresses to change.
- Because of that, service discovery and/or load balancing is likely in use. How does user traffic reach containers? How do the various containerized micro-services reach each other?
- Where and how is NAT used?
- Which addresses are or are not externally reachable?
- What sort of overlay, if any, does the service mesh use between hosts / nodes?
- How does traffic get routed to the Kubernetes or other cluster in the first place?
- How is load balancing being done? Is there a physical / classic load balancer (if you insist: “application delivery controller”) front-ending a software load balancer?
This blog will try to address some of those items. Subsequent blogs may tackle more of them.
Note: we won’t worry about the actual implementation mechanism (iptables, etc.). That’s TMI for getting started; we need the big picture first.
Docker Networking Modes

For more details, see the O’Reilly book and other references below.

Docker has several “network drivers” or modes: bridge, host, overlay, and macvlan (plus IPVLAN and the option of no networking at all).
Terminology: in the following, “ports” will refer to TCP or UDP ports.
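As a quick orientation, the built-in drivers are visible on any Docker host. A minimal sketch, assuming a working Docker CLI on a standalone Linux host:

```shell
# List Docker's networks; the DRIVER column shows bridge, host,
# and none out of the box on a standalone host.
docker network ls

# Inspect the default bridge: its subnet, gateway, and which
# containers are attached.
docker network inspect bridge
```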
Bridge Mode

Bridge mode acts like a virtual switch (a single VLAN) for the containers connected to it. The bridge is associated with a private IP subnet, and attached containers are automatically assigned addresses in that subnet. The host the containers run on is also connected to the bridge.
By default, containers on a host all connect to the default bridge docker0 and can reach each other freely by IP address. Outbound traffic is NATed through the host; a container port is only reachable from the outside world if you explicitly publish it.
User-defined bridge networks are essentially private subnets, private to the containers connected to them (and the host). On a user-defined bridge, containers can resolve each other by container name (via Docker’s embedded DNS) as well as by IP address. All container ports on a user-defined bridge are accessible to the other attached containers. By default, no container ports are exposed to the outside world.
Think private back-end network. Personally, I somewhat prefer servers to have exactly one network connection. I can live with directly-connected back-end networks. By extension, the same for containers. The concern is that servers should not be doing routing, as troubleshooting that is a nightmare. Another concern is that back-end networks are implicit security, not explicit, so hard to audit.
Containers can be connected to and disconnected from user-defined bridges on the fly by the administrator / owner.
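A minimal sketch of a user-defined bridge (network and container names here are made up for illustration):

```shell
# Create a user-defined bridge network.
docker network create --driver bridge backend

# Two containers attached to it can reach each other by name,
# courtesy of Docker's embedded DNS.
docker run -d --name web --network backend nginx
docker run --rm --network backend busybox ping -c 1 web

# Attach or detach a running container on the fly.
docker network connect backend some-other-container
docker network disconnect backend some-other-container
```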
Host Mode

In host mode, a container shares the network namespace of the host. That means it is publicly exposed: same IP as the host, with its processes binding ports directly on the host. You need to manage the ports (and port conflicts) yourself.
Host mode is only available on Linux hosts, not Docker for Mac.
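A quick host-mode sketch (Linux only, container name illustrative):

```shell
# Run a container directly in the host's network namespace.
# nginx now binds port 80 on the host itself -- no NAT, no port mapping.
docker run -d --name web-host --network host nginx

# Note: -p/--publish flags are ignored in host mode; watch for port clashes.
```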
Overlay Mode

Overlay networks interconnect containers across multiple Docker hosts, using VXLAN encapsulation between the hosts (with optional encryption).
A Docker swarm is a cluster of Docker engines (and presumably nodes). A swarm provides declarative services, with scaling, state reconciliation, service discovery, load balancing, and other features.
Overlays are also used for multiple applications interacting via swarm services.
There are differences in behavior for standalone containers and Docker swarms. See the link below for more information. Avoiding overlay encapsulation may help performance and appears to be one differentiating factor for service mesh tools (subject for a later blog).
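A sketch of an overlay network in a (single-node) swarm, with illustrative names:

```shell
# Initialize a swarm; this enables the overlay driver.
docker swarm init

# Create an overlay network. --opt encrypted adds encryption of the
# VXLAN traffic between nodes (at some performance cost).
docker network create -d overlay --opt encrypted app-net

# A replicated service on that network gets swarm's built-in
# service discovery and load balancing.
docker service create --name web --network app-net --replicas 3 nginx
```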
Macvlan Mode

Macvlan mode lets you assign a MAC address to a container’s virtual network interface, so that it appears to be a separate device directly on the physical network. You have to configure which host physical interface to use, as well as the subnet and default gateway.
The documentation notes that too many MAC addresses might make the network unhappy. (I’ll second that!) It also notes that the physical port needs to support promiscuous mode, i.e. accepting frames for multiple MAC addresses. I’ll note that this likely increases the host CPU workload.
You can use 802.1q VLAN tags with macvlan mode, if you wish: “trunk bridge mode”.
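A macvlan sketch, including the trunked variant. The subnet, gateway, VLAN number, and parent interface are illustrative and must match your physical network:

```shell
# Containers on this network appear as distinct MACs directly on eth0.
docker network create -d macvlan \
  --subnet=192.168.50.0/24 --gateway=192.168.50.1 \
  -o parent=eth0 pub-net

# "Trunk bridge mode": bind the network to an 802.1q subinterface
# so container traffic carries VLAN tag 50.
docker network create -d macvlan \
  --subnet=192.168.60.0/24 --gateway=192.168.60.1 \
  -o parent=eth0.50 vlan50-net
```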
IPVLAN Mode

IPVLAN mode is a variant where multiple IP addresses share the same MAC address. This mode may be useful, for example, when switch port security is deployed, imposing a limit on how many different MAC addresses may be seen on the physical switch port.
IPVLAN has L2 and L3 modes, see the Container Networking: A Breakdown, Explanation, and Analysis link (References, below) for more details.
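An IPVLAN L2-mode sketch (addresses and parent interface illustrative):

```shell
# IPVLAN L2 mode: containers get their own IPs but share eth0's MAC,
# which keeps switch port security (MAC limits) happy.
docker network create -d ipvlan \
  --subnet=192.168.70.0/24 --gateway=192.168.70.1 \
  -o parent=eth0 -o ipvlan_mode=l2 ipv-net

# For L3 mode, use -o ipvlan_mode=l3: traffic is routed rather than
# bridged, with no broadcast/ARP on the segment.
```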
None and Custom Drivers

You can disable networking on a container, for no outside access, or add your own custom network driver (an advanced topic).
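Disabling networking is a one-liner (image choice illustrative):

```shell
# "None": the container gets only a loopback interface.
docker run --rm --network none busybox ip addr
```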
Container-Mapped Mode

This is an older approach that may still be useful, e.g. for troubleshooting. Container-mapped mode has a container sharing the IP, MAC, and networking stack of another container.
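A container-mapped sketch, assuming an existing container named "web":

```shell
# Share the network stack of the container named "web":
# same IP, same MAC, same interfaces -- handy for troubleshooting.
docker run --rm --network container:web busybox ip addr
```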
Exposing and Publishing Ports

You can expose ports to make them available to other containers on the same container network; exposing alone does not make them reachable from outside.
In Docker, you have to publish a port for it to be externally accessible. The externally published port can differ from the container’s internal port, with the re-mapping done for you.
That’s important to remember. External-facing ports are not necessarily the same as the mapped internal ports.
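A publishing sketch (container name illustrative):

```shell
# Publish container port 80 on host port 8080; Docker sets up the
# NAT / port mapping for you.
docker run -d --name web-pub -p 8080:80 nginx

# Show the external-to-internal port mappings, then test.
docker port web-pub
curl http://localhost:8080/
```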
References

- O’Reilly book, Container Networking: From Docker to Kubernetes (free download via NGINX) by Michael Hausenblas
- Docker Networking Cookbook by Jon Langemak
- Docker Networking and Service Discovery by Michael Hausenblas
- What is Docker Networking?, also by Michael Hausenblas
- Docker Networking Overview
- Docker Bridge Networks
- User Defined Bridges in Docker
- Host Mode in Docker
- Overlay Mode in Docker
- Macvlan Mode in Docker
- Understanding Container Networking
- Container Networking: A Breakdown, Explanation, and Analysis
There is a very helpful series of 12 blogs plus some good hands-on labs provided by Cisco. They cover Docker, Kubernetes, and various steps along the way to effectively using them and Istio service mesh. Highly recommended!
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay
Did you know that NetCraftsmen does network /datacenter / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at email@example.com.