DevOps · June 12, 2014

Docker 1.0+: Deep Dive into Overlay Networking and VXLAN (2014)

Aunimeda

It's 2014, and Docker has officially reached 1.0. We've mastered running containers on a single laptop, but now we're facing the "Multi-Host Problem." How do we get a container on Host A to talk to a container on Host B as if they were on the same switch, without messy port mapping? The answer is Overlay Networking.

The VXLAN Encapsulation

At the heart of Docker's multi-host strategy (and tools like Flannel or Weave) is VXLAN (Virtual Extensible LAN). VXLAN is an encapsulation protocol that wraps Layer 2 Ethernet frames inside Layer 4 UDP packets.

When Container A (10.0.0.1) tries to send a packet to Container B (10.0.0.2), the Docker engine on Host A intercepts it. It knows that 10.0.0.2 lives on Host B (IP: 192.168.1.50). It wraps the internal packet in a UDP header destined for 192.168.1.50:4789.
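The tunnel Docker builds is the same one you can construct by hand with iproute2, which makes the mechanism easy to see. A rough sketch of Host A's side (the interface name, VNI 42, and addresses here are illustrative, not what Docker actually names things, and this needs root):

```shell
# Create a VXLAN tunnel endpoint on Host A, pointing at Host B
ip link add vxlan42 type vxlan id 42 \
    remote 192.168.1.50 \
    dstport 4789 \
    dev eth0

# Give it the container-side address and bring it up
ip addr add 10.0.0.1/24 dev vxlan42
ip link set vxlan42 up

# Frames addressed to 10.0.0.2 now leave eth0 wrapped in
# UDP packets destined for 192.168.1.50:4789
```

With a mirror-image interface on Host B, the two 10.0.0.x endpoints can reach each other even though no physical switch ever sees those addresses.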

Configuring an Overlay (The Early Way)

Before Docker Swarm was fully integrated, we often used etcd or consul as a key-value store to keep track of where every container was located.

# Starting the docker daemon with a KV store for networking
docker daemon --cluster-store=etcd://127.0.0.1:2379 \
              --cluster-advertise=eth0:2375

# Creating the overlay network
docker network create --driver overlay my-multi-host-net

# Running a container on this network
docker run -itd --name=web --net=my-multi-host-net nginx

The Beauty of the Virtual Bridge

Each host has a virtual bridge (usually docker_gwbridge) and an endpoint in the overlay network. From the perspective of the application inside the container, it just has a standard eth0 interface with a private IP. It doesn't know—and doesn't care—that its packets are being tunneled across a physical network.
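You can see this plumbing on any host that has joined the overlay. A quick inspection pass (bridge and network names may differ by setup; `my-multi-host-net` and `web` are from the example above):

```shell
# List the Linux bridges Docker has created on this host
brctl show

# Inspect the overlay; the "Containers" section maps each endpoint to its IP
docker network inspect my-multi-host-net

# Inside the container, only a plain eth0 with a private IP is visible
docker exec web ip addr show eth0
```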

Performance Overhead

Of course, there's no free lunch. Encapsulation adds roughly 50 bytes of headers to every packet, which can lead to MTU (Maximum Transmission Unit) issues. If your physical network has an MTU of 1500, your internal container network must be set to 1450 to account for the VXLAN overhead. Furthermore, there's a slight CPU cost for encapsulating and decapsulating every packet, but in 2014, the flexibility of a programmable network is well worth the 5-10% performance hit.
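The arithmetic behind the 1450 figure, assuming an IPv4 outer header:

```shell
# Where 1450 comes from: the outer 1500-byte MTU must also carry
# outer IPv4 (20) + UDP (8) + VXLAN header (8) + inner Ethernet header (14)
OUTER_IP=20; OUTER_UDP=8; VXLAN_HDR=8; INNER_ETH=14
OVERHEAD=$((OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH))
echo "VXLAN overhead: $OVERHEAD bytes"   # 50
echo "Inner MTU: $((1500 - OVERHEAD))"   # 1450
```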
