At the closing session of Tech Field Day 12 (#TFD12), Docker presented on the significant new features in Docker networking. Some of the #NFD13 delegates and I attended, since Docker’s was the closing presentation the day before NFD13 began. Hey, they had me at the word “networking!”
I’m not going to claim to have wrapped my brain around Docker networking — yet. But learning about it is in my reading queue, and probably should be in yours. I’m writing this quick blog to connect you with some good resources.
Docker started with the houses-versus-apartments analogy. A house has its own heating/cooling, water heater, and other infrastructure. (And mine has a yard with flower beds that need mulch, so spring means a sore back once again!) Apartments share infrastructure. A virtual machine (VM) is like a house; a Docker container is like an apartment. Works for me!
One goal of Docker networking was to decouple network instantiation from the container internals, providing modularity between DevOps development and the eventual Ops deployment. Here’s the point: a clean division of labor between teams, recognizing that communication across boundaries is hard, and a separation of implementation details from app design and architecture.
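To make the decoupling concrete, here’s a minimal sketch using the Python Docker SDK (docker-py), which follows the same model as the CLI. This is my own illustration, not Docker’s demo; the network name (app_net) and image (nginx) are placeholders.

```python
# Minimal sketch of decoupled network instantiation, using docker-py.
# "app_net" and "nginx" are placeholder names for illustration.
import docker

client = docker.from_env()

# Ops (or the network team) creates the network out-of-band,
# independent of any application code...
net = client.networks.create("app_net", driver="bridge")

# ...and the app team attaches containers to it at deploy time.
# The application itself never hard-codes network details.
web = client.containers.run(
    "nginx",
    detach=True,
    name="web",
    network="app_net",
)
```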
I’m all over that. I’ve seen way too many recent ad hoc app deployments (“ad hoc” being a polite word for a “DevOps”-derived mess?), where changing the networking or addressing would be very helpful (e.g., summarizable routes, consistent addressing blocks for security zones, etc.).
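As one example of nailing down addressing, a Docker network can be pinned to a specific address block when it’s created. A minimal sketch with docker-py follows; the subnet and network name are arbitrary choices of mine, not anything Docker presented.

```python
# Sketch: pin a Docker network to a specific, summarizable address
# block, so containers land in a predictable security zone.
# The subnet and names below are arbitrary examples.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[
        docker.types.IPAMPool(subnet="10.20.30.0/24", gateway="10.20.30.1")
    ]
)
zone_net = client.networks.create("zone_dmz", driver="bridge", ipam=ipam)
```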
Docker also talked about containers for Microsoft operating systems, although note this is for server and non-GUI applications only (cf. “Microsoft Nano Server”).
Some Thoughts
My big question lately, whether for bare-metal servers, VMware “application pods” (the group of VMs providing a service/application), or containers, is: how do I manage it? In particular, I don’t just want user experience data; I want micro-service or container-to-container performance data as well. That seems particularly important when containers with micro-services are being spun up and shut down in very short timeframes. How do I detect that containers on host A talking to those on host B are slower than usual, and correlate that with a high error rate or dirty optics on one of the links in between?
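Docker didn’t show tooling for this, so what follows is purely my own sketch of the kind of probe I have in mind: time TCP connects between container pairs and tag the results by source and destination, so a slow host-A-to-host-B path stands out. The service names and port are hypothetical.

```python
# Sketch of a container-to-container latency probe. Times a TCP
# connect to each peer and reports it per source/destination pair.
# Peer names and port are hypothetical placeholders.
import socket
import time

def probe_rtt(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the TCP connect time to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    for peer in ("svc-on-host-a", "svc-on-host-b"):  # hypothetical names
        try:
            rtt = probe_rtt(peer, 8080)
            print(f"{socket.gethostname()} -> {peer}: {rtt:.1f} ms")
        except OSError as err:
            print(f"{socket.gethostname()} -> {peer}: FAILED ({err})")
```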
Aside from managing things, I have the feeling there ought to be a balance between decomposing a program into components and the network impact of doing so, including latency. It’s good programming to build single-purpose container-based app components, simplifying coding and bug fixing. I get that. My concern comes from watching some SOA apps and getting the feeling there was a lot more “passing the buck” to another server than actual work getting done. I can imagine even greater chattiness with container-based micro-services. I’d like data on that (e.g., the time to get a response, and how much of it is the network versus the containerized service). I’m not sure what the right answer is. Human coding is costly; the network, not so much. Having data (actual facts!) has to be useful.
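Again, nothing Docker presented; this is just a rough sketch of how one might split a response time into network and service portions, using the TCP handshake as a crude proxy for one network round trip. The host, port, and /health path are hypothetical.

```python
# Sketch: split total response time into a network portion and a
# service portion. TCP connect time is a crude proxy for one network
# round trip; the remainder is attributed to the service.
# Host, port, and URL path are hypothetical.
import socket
import time
import urllib.request

HOST, PORT = "svc-on-host-b", 8080  # hypothetical service

# Network share: one TCP handshake is roughly one round trip.
t0 = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=2.0):
    pass
network_ms = (time.perf_counter() - t0) * 1000.0

# Total share: a full request/response against the same service.
t0 = time.perf_counter()
urllib.request.urlopen(f"http://{HOST}:{PORT}/health", timeout=2.0).read()
total_ms = (time.perf_counter() - t0) * 1000.0

print(f"total {total_ms:.1f} ms ~= network {network_ms:.1f} ms "
      f"+ service {total_ms - network_ms:.1f} ms (crude estimate)")
```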
Links
If you want to learn more, or see the demos (Tech Field Day is big on demos), be sure to check out the streaming videos and #TFD12 Docker blogs.
Books:
- Docker Networking Cookbook by Jon Langemak of the Tech Field Day delegate community
- Learning Docker Networking from Packt
Some related blogs:
- Tech Field Day 12 Primer: Docker
- InfoWorld How to Get Started with Docker
- Docker is the New Twitter
- A VMware Guy’s Perspective on Containers
- Container Hardening with Docker Bench for Security
Comments
Comments are welcome, whether in agreement or constructive disagreement with the above. I enjoy hearing from readers and carrying on deeper discussions via comments. Thanks in advance!