InfraRunBook

    Docker Networking Bridge Overlay Host Explained

    Docker
    Published: Apr 7, 2026
    Updated: Apr 7, 2026

    A deep-dive into Docker's three core network drivers — bridge, overlay, and host — covering how each works at the kernel level, when to use them, and how to avoid the mistakes that cause production outages.


    What Docker Networking Actually Is

    Docker networking trips up a lot of engineers the first time they seriously dig into it. The docs give you the names — bridge, overlay, host, macvlan, none — but they don't always tell you why you'd pick one over the other or what actually happens at the kernel level when a container starts. I've seen misconfigured networks cause production outages, latency spikes in microservice clusters, and containers that mysteriously can't talk to each other despite being on the same host. Let me break down the three drivers you'll encounter most: bridge, overlay, and host.

    When you run a container, Docker creates a separate network namespace for it — an isolated view of the network stack with its own interfaces, routing table, and iptables rules. Docker's job is to connect that namespace to the outside world, or to other namespaces, in a controlled way. The network driver you choose determines the mechanism. That choice has real consequences for performance, isolation, and operational complexity.
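    You can inspect that namespace directly from the host. A sketch, not official tooling: Docker does not register its namespaces under /var/run/netns, so the symlink trick below makes the standard ip netns commands work; "web" is a hypothetical running container name.

```shell
# Expose a container's network namespace to the ip-netns tooling.
# "web" is a hypothetical container name; substitute a running container.
pid=$(docker inspect --format '{{.State.Pid}}' web)
sudo mkdir -p /var/run/netns
sudo ln -sf "/proc/$pid/ns/net" /var/run/netns/web

# The container's isolated view: its own interfaces, routes, and rules
sudo ip netns exec web ip addr show
sudo ip netns exec web ip route show
sudo ip netns exec web iptables -S
```

    Everything the article describes below — veth pairs, bridges, VXLAN — is visible from this vantage point.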


    Bridge Networking: The Default You Should Understand Before Relying On

    Bridge networking is what Docker uses when you don't specify a network. It's the workhorse of single-host container networking and it works well — but only if you understand its two distinct flavors.

    How It Works at the Kernel Level

    When Docker starts, it creates a virtual Linux bridge interface called docker0 on the host. Think of it as a virtual switch. Every container connected to the default bridge plugs into that switch via a veth pair — a virtual Ethernet cable with one end inside the container's network namespace and the other end attached to docker0. The bridge gets an RFC 1918 subnet, typically 172.17.0.0/16, and acts as the default gateway for all containers connected to it.

    # Inspect the default bridge network
    docker network inspect bridge
    
    # See the veth pairs Docker created on the host
    ip link show type veth
    
    # Watch traffic on the bridge
    tcpdump -i docker0 -n

    That covers the default bridge. But here's the part that burns engineers: there's a critical difference between the default bridge (docker0) and a user-defined bridge network. On the default bridge, containers can communicate by IP address but not by name. Docker's embedded DNS does not serve the default bridge. The moment you need container A to resolve container B by hostname, the default bridge fails you silently — the DNS query just never resolves.
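    A quick way to see the failure for yourself. This is a throwaway sketch with hypothetical container names, alpha and beta:

```shell
# Two containers on the default bridge (no --network flag)
docker run -d --name alpha alpine:3 sleep 600
docker run -d --name beta  alpine:3 sleep 600

# By IP, traffic flows fine
BETA_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' beta)
docker exec alpha ping -c 1 "$BETA_IP"

# By name, the lookup simply never resolves on the default bridge
docker exec alpha ping -c 1 beta

# Clean up
docker rm -f alpha beta
```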

    User-defined bridges fix this. When you create your own bridge network with docker network create, Docker runs an embedded DNS resolver at 127.0.0.11 inside every container on that network. Containers find each other by name automatically. I've seen development environments where engineers were hardcoding container IPs in config files because they were on the default bridge and didn't know why DNS wasn't working. Those IPs change every restart. User-defined bridges should be the default for anything beyond a one-off test run.

    # Create a user-defined bridge with a deliberate subnet
    docker network create \
      --driver bridge \
      --subnet 10.10.0.0/24 \
      --gateway 10.10.0.1 \
      app-net
    
    # Containers on app-net can resolve each other by name
    docker run -d --name web --network app-net nginx:alpine
    docker run -d --name api --network app-net myapi:latest
    
    # Verify DNS resolution from inside web container
    docker exec web ping -c 2 api

    Outbound Traffic and iptables

    Containers on a bridge network reach external IPs through NAT. Docker adds iptables MASQUERADE rules so outbound traffic from 10.10.0.0/24 appears to come from the host's external IP. Inbound traffic requires port mapping (-p 8080:80), which creates a DNAT rule in the DOCKER chain redirecting host port 8080 into the container.

    # View Docker's NAT rules on sw-infrarunbook-01
    iptables -t nat -L DOCKER -n --line-numbers
    
    # View the FORWARD chain rules Docker manages
    iptables -L DOCKER-ISOLATION-STAGE-1 -n -v

    In my experience, Docker's iptables management is a double-edged sword. It works perfectly on a clean host. The moment you have ufw, firewalld, or custom chains already in place, Docker's rules can interact badly. I've seen containers that could reach the internet but couldn't be reached via port mapping — almost always a chain ordering issue or a restrictive FORWARD policy on the host. Docker inserts rules into a DOCKER-USER chain specifically so you can add rules that take precedence without Docker overwriting them. Use it.

    # Rules in DOCKER-USER persist across Docker restarts
    # Allow only your management subnet to reach containers.
    # Note: -I prepends, so insert in reverse order -- the DROP goes in
    # first and the ACCEPT rules land above it. Accept established flows
    # so replies to containers' own outbound traffic aren't dropped.
    iptables -I DOCKER-USER -j DROP
    iptables -I DOCKER-USER -s 10.0.1.0/24 -j ACCEPT
    iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
    # Persist the rules
    iptables-save > /etc/iptables/rules.v4

    Overlay Networking: Layer 2 Across Physical Hosts

    Once you scale beyond a single host, bridge networking hits a hard wall. Containers on different hosts can't see each other at layer 2 — there's no bridge connecting them. Overlay networks solve this, and they're the backbone of Docker Swarm multi-host deployments.

    VXLAN Encapsulation Under the Hood

    An overlay network creates a virtual layer-2 segment that spans multiple physical hosts using VXLAN encapsulation. When container A on host-1 sends a packet to container B on host-2, the Docker overlay driver wraps that Ethernet frame in a UDP datagram — the VXLAN header — and sends it to host-2's physical IP on UDP port 4789. Host-2 unwraps it and delivers the inner frame to container B's network namespace. From inside the containers, it looks like they're on the same LAN. From the physical network's perspective, it's just UDP traffic between two host IPs.
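    You can watch this happen on the wire. A sketch for a Swarm node; eth0 is an assumption, substitute the host's actual physical interface:

```shell
# Overlay traffic between hosts is just UDP on port 4789.
# eth0 is an assumption -- substitute your host's physical interface.
sudo tcpdump -ni eth0 udp port 4789

# Recent tcpdump versions decode the VXLAN header and the inner
# Ethernet frame, so -e and -vv show both layers of addressing at once
sudo tcpdump -eni eth0 -vv udp port 4789
```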

    Docker maintains a distributed mapping of which container IPs live on which hosts. In Swarm mode, this state lives in the Raft-based distributed store that Swarm managers maintain. Each Docker daemon uses this to populate the VXLAN forwarding database so it knows where to send encapsulated frames.

    # Initialize Swarm on sw-infrarunbook-01 (10.0.1.10)
    docker swarm init --advertise-addr 10.0.1.10
    
    # Create an overlay network
    docker network create \
      --driver overlay \
      --subnet 10.20.0.0/24 \
      --attachable \
      overlay-prod
    
    # Deploy a service across the cluster
    docker service create \
      --name frontend \
      --network overlay-prod \
      --replicas 3 \
      nginx:alpine
    
    # Check service placement across hosts
    docker service ps frontend

    The --attachable flag is worth understanding. Without it, only Swarm-managed service containers can attach to the overlay — standalone docker run containers are excluded. Adding --attachable lets you connect individual containers for debugging purposes, which is invaluable when you need to run a network probe or test tool from inside the overlay without wrapping it in a service definition.

    Prerequisites That Will Ruin Your Day If Ignored

    For overlay to work, every host in the cluster must be able to reach every other host on UDP port 4789 (VXLAN data plane) and TCP and UDP port 7946 (gossip protocol for endpoint discovery). I've spent hours on clusters where a host firewall was silently dropping UDP 4789. Containers would deploy and report healthy, but cross-host communication would be completely dead — packets vanishing with no error. Always validate these ports before blaming Docker or the overlay driver.

    # Test from sw-infrarunbook-01 to the second node
    nc -vzu 10.0.1.11 4789
    nc -vz 10.0.1.11 7946
    nc -vzu 10.0.1.11 7946
    
    # Check VXLAN interfaces Docker created
    ip -d link show type vxlan
    
    # Inspect the overlay's VXLAN forwarding database
    bridge fdb show dev vxlan0

    Overlay also introduces measurable latency overhead. The VXLAN encapsulation and decapsulation adds CPU cycles on both ends, and you're growing every frame by the size of the VXLAN and UDP headers. For most web application workloads this is completely irrelevant. For services doing millions of small RPCs per second — message brokers, service meshes, high-frequency trading infrastructure — it's worth profiling. I once moved a latency-sensitive internal messaging component off overlay and onto a dedicated host network, and P99 latency dropped by roughly 30%. Know your workload before deciding the overhead doesn't matter.
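    A related knob worth knowing: because VXLAN grows every frame by roughly 50 bytes, the MTU inside the overlay must be smaller than the physical MTU, or large packets get fragmented or silently dropped. A sketch using Docker's generic MTU option; the network name is hypothetical:

```shell
# Physical MTU 1500 minus ~50 bytes of VXLAN/UDP/IP overhead -> 1450
docker network create \
  --driver overlay \
  --opt com.docker.network.driver.mtu=1450 \
  --attachable \
  overlay-tuned

# Verify what a container on that network actually sees
docker run --rm --network overlay-tuned alpine:3 ip link show eth0
```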


    Host Networking: Bypass Everything

    Host networking is conceptually the simplest of the three. The container shares the host's network namespace entirely. No virtual interfaces, no NAT, no bridge, no encapsulation. The container binds directly to the host's physical NICs and sees the full routing table as-is.

    When Host Networking Is the Right Call

    Host networking makes sense when performance is the top priority and you're willing to accept reduced isolation. High-throughput applications benefit most: network monitoring agents, packet capture tools, certain databases that saturate NICs, and infrastructure exporters like Prometheus node exporter are all natural fits. Removing the veth and NAT layers eliminates overhead that simply doesn't need to exist for these workloads.

    # Run node exporter with host networking on sw-infrarunbook-01
    docker run -d \
      --name node-exporter \
      --network host \
      --pid host \
      -v /:/host:ro,rslave \
      prom/node-exporter:latest \
      --path.rootfs=/host
    
    # The container binds directly to host port 9100
    # No -p flag needed, and -p flags are silently ignored if supplied
    ss -tlnp | grep 9100

    That last point surprises engineers the first time they hit it. Port mapping flags (-p) are completely ignored in host mode — there's no mapping layer to configure. If something else is already bound to the port your container wants, the container starts but the bind fails at the application level. Docker won't warn you. Port management is entirely your responsibility.
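    A defensive habit follows from this: check the port yourself before starting, and look in the application's logs rather than Docker's when something seems wrong. Port 9100 and the container name come from the example above:

```shell
# Is anything already listening on the port the container wants?
ss -tlnp | grep ':9100 ' && echo "port 9100 is already taken"

# If the bind failed, the evidence is in the application's own logs,
# not in docker's exit status or events
docker logs node-exporter 2>&1 | grep -i 'bind\|address already in use'
```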

    The Linux-Only Limitation

    Host networking only works as expected on Linux. On Docker Desktop for Mac or Windows, containers run inside a lightweight Linux VM. The --network host flag connects the container to that VM's network stack, not your actual laptop's interfaces. This means behavior on your Mac dev machine will differ significantly from production Linux behavior. I've seen this catch teams off-guard when they test locally with host networking and then wonder why the same image behaves differently on their CI runners and production nodes.


    A Real-World Network Segmentation Example

    Consider a three-tier application: an Nginx frontend, a Python API, and a PostgreSQL database, all deployed as Swarm services on a two-node cluster — sw-infrarunbook-01 at 10.0.1.10 and a second node at 10.0.1.11. The goal is to ensure the database is never reachable from the frontend tier directly.

    # Two separate overlay networks — one per tier boundary
    docker network create --driver overlay --subnet 10.20.1.0/24 frontend-net
    docker network create --driver overlay --subnet 10.20.2.0/24 backend-net
    
    # Nginx: only on frontend-net, publishing with host-mode for performance
    docker service create \
      --name nginx-frontend \
      --network frontend-net \
      --publish published=443,target=443,mode=host \
      --replicas 2 \
      nginx:alpine
    
    # API: straddles both networks — the only bridge between tiers
    docker service create \
      --name python-api \
      --network frontend-net \
      --network backend-net \
      --replicas 4 \
      registry.solvethenetwork.com/myapi:v2.1
    
    # PostgreSQL: backend-net only, completely invisible to frontend
    docker service create \
      --name postgres \
      --network backend-net \
      --replicas 1 \
      postgres:15

    This is a pattern I reach for consistently. The database has no interface on frontend-net whatsoever — it's not reachable from Nginx even if you know its IP. The API service is attached to both networks and acts as the only conduit between tiers. If an attacker compromises the Nginx container, they're on frontend-net and can only reach the API. They still can't touch Postgres directly.
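    You can verify the isolation from a live task. A sketch, assuming the services above are running and a frontend task happens to be scheduled on the node you're logged into:

```shell
# Grab a local task of the frontend service
NGINX=$(docker ps -qf name=nginx-frontend | head -n 1)

# Shared network (frontend-net): the API's service name resolves
docker exec "$NGINX" ping -c 1 python-api

# No shared network: postgres neither resolves nor routes from here
docker exec "$NGINX" ping -c 1 postgres

# The API task sits on both networks, so it reaches the database
API=$(docker ps -qf name=python-api | head -n 1)
docker exec "$API" ping -c 1 postgres
```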

    Notice the mode=host publish on the frontend service. This binds port 443 directly to each host's NIC rather than routing through Docker's ingress routing mesh. The routing mesh is convenient for load balancing, but it adds an extra network hop. For TLS termination where you want the lowest possible latency, host-mode publishing removes that hop entirely.


    Common Misconceptions Worth Addressing Directly

    Containers on the same host can always talk to each other. They can't. Two containers on different user-defined bridge networks on the same physical host are completely isolated from each other. Co-location does not imply connectivity. You need a shared network for containers to communicate, regardless of where they're physically running.
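    A minimal sketch, with hypothetical names, demonstrating that co-location buys nothing:

```shell
# Two user-defined bridges on the same host
docker network create net-a
docker network create net-b

docker run -d --name svc-a --network net-a alpine:3 sleep 600
docker run -d --name svc-b --network net-b alpine:3 sleep 600

# Same host, different networks: no DNS, no route, no traffic
docker exec svc-a ping -c 1 svc-b

# Attach svc-b to net-a as well and they can suddenly talk
docker network connect net-a svc-b
docker exec svc-a ping -c 1 svc-b

# Clean up
docker rm -f svc-a svc-b && docker network rm net-a net-b
```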

    Overlay is just bridge stretched across hosts. It isn't. Bridge operates at layer 2 inside a single kernel using the Linux bridge subsystem. Overlay operates by encapsulating layer-2 frames inside UDP at layer 3, using VXLAN. The abstraction presented to containers looks similar, but the failure modes, performance characteristics, and troubleshooting tools are completely different. You debug a bridge problem with bridge and ip link; you debug an overlay problem by checking VXLAN reachability, inspecting the Raft store, and tracing UDP on the physical interface.

    Host networking is inherently dangerous. It's a trade-off, not a mistake. For infrastructure tooling — monitoring agents, log collectors, network analyzers — host networking is often the correct choice. The risk is real but manageable: you're accepting that a compromised container can interact with the host network stack directly. For dedicated infrastructure tools running trusted images, that risk is well within acceptable bounds.

    Docker handles iptables so you don't need to think about it. Docker manages its own chains. It doesn't manage yours. If you're running ufw or firewalld alongside Docker, the interaction between their rules and Docker's is a known source of pain. Understand how Docker's chains fit into the overall iptables structure — particularly DOCKER-USER, which exists precisely so you can add rules that Docker won't overwrite on restart.

    Getting comfortable with the kernel primitives behind these drivers — ip netns, tcpdump on veth interfaces, VXLAN forwarding databases, iptables chain ordering — is what separates engineers who can diagnose production network problems from those who restart containers and wait. These tools aren't exotic. They're standard Linux networking, and Docker is built on top of them.

    Frequently Asked Questions

    What is the difference between Docker bridge and overlay networking?

    Bridge networking operates at layer 2 within a single host using Linux kernel bridging and veth pairs. Overlay networking spans multiple hosts by encapsulating Ethernet frames inside UDP using VXLAN. Bridge is for single-host container communication; overlay is for multi-host deployments, typically in Docker Swarm clusters.

    When should I use Docker host networking?

    Use host networking when performance is the priority and you can accept reduced container isolation. Infrastructure tooling like monitoring agents, packet capture tools, and log shippers are natural fits. Avoid host networking for application workloads where isolation matters, and note that it behaves differently on Docker Desktop for Mac and Windows versus Linux.

    Why can't my containers find each other by name on the default bridge?

    The default bridge network (docker0) does not support Docker's embedded DNS. Containers on the default bridge can only communicate by IP address. Create a user-defined bridge network with docker network create and attach your containers to it — Docker's DNS resolver then lets containers resolve each other by container name automatically.

    What ports need to be open for Docker overlay networking to work?

    Docker overlay networking requires UDP port 4789 (VXLAN data plane) and TCP and UDP port 7946 (container network discovery gossip) to be open between all hosts in the cluster. Blocking any of these, even silently via a host firewall, will cause cross-host container communication to fail without obvious error messages.

    Does Docker overlay networking add significant latency?

    For most web application workloads, the overhead is negligible — sub-millisecond differences. For high-throughput, low-latency services doing millions of small RPCs per second (message brokers, high-frequency internal APIs), the VXLAN encapsulation overhead can become measurable. Profile your specific workload before assuming overhead doesn't matter.
