docker-notebook

My Docker notebook


Docker Networking

Container Network Model (CNM)

REF: https://success.docker.com/article/networking

(Image: Container Network Model diagram; source: docs.docker.com)

  1. A Sandbox contains the configuration of a container’s network stack. This includes management of the container’s interfaces, routing table, and DNS settings. An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail, or other similar concept. A Sandbox may contain many endpoints from multiple networks.

  2. An Endpoint joins a Sandbox to a Network. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability so that a service can use different types of network drivers without being concerned with how it’s connected to that network.

  3. The CNM does not specify a Network in terms of the OSI model. An implementation of a Network could be a Linux bridge, a VLAN, etc. A Network is a collection of endpoints that have connectivity between them. Endpoints that are not connected to a network do not have connectivity on a network.

CNM provides the following contract between networks and containers

  1. All containers on the same network can communicate freely with each other.
  2. An endpoint is added to a network sandbox to provide it with network connectivity.
  3. Multiple endpoints per container are the way to join a container to multiple networks.
  4. Multiple networks are the way to segment traffic between containers and should be supported by all drivers.
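
A minimal sketch of contract points 2 and 3, assuming illustrative network, container, and image names (frontend, backend, app1, nginx): attaching a container to a second network adds a second endpoint to its sandbox.

# Names and image are illustrative assumptions for this sketch.
docker network create frontend
docker network create backend
docker run -d --name app1 --network frontend nginx
# Add a second endpoint so app1 is joined to both networks
docker network connect backend app1
# Shows one endpoint per connected network
docker inspect --format '{{json .NetworkSettings.Networks}}' app1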

CNM Driver Interfaces

  1. Network Drivers

    Docker Network Drivers provide the actual implementation that makes networks work. They are pluggable so that different drivers can be used and interchanged easily to support different use cases.

    Multiple network drivers can be used on a given Docker Engine or Cluster concurrently, but each Docker network is only instantiated through a single network driver.

    1. Native Network Drivers
      1. bridge: The bridge driver creates a Linux bridge on the host that is managed by Docker. By default containers on a bridge can communicate with each other. External access to containers can also be configured through the bridge driver. User-defined bridge network is the best network driver type when you need multiple containers to communicate on the same Docker host.

      2. host: With the host driver, a container uses the networking stack of the host. There is no namespace separation, and all interfaces on the host can be used directly by the container. Host network is the best network driver type when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.

      3. overlay: The overlay driver creates an overlay network that supports multi-host networks out of the box. It uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over physical network infrastructure. Overlay network is the best network driver type when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.

      4. macvlan: The macvlan driver uses the MACVLAN bridge mode to establish a connection between container interfaces and a parent host interface (or sub-interfaces). It can be used to provide IP addresses to containers that are routable on the physical network. Additionally, VLANs can be trunked to the macvlan driver to enforce Layer 2 container segmentation. Macvlan network is the best network driver type when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address. (A creation sketch follows this list.)

      5. none: The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container. Without additional configuration, the container is completely isolated from the host networking stack.

    2. Remote Network Drivers
      1. contiv: An open source network plugin led by Cisco Systems to provide infrastructure and security policies for multi-tenant microservices deployments. Contiv also provides integration for non-container workloads and with physical networks, such as ACI. Contiv implements remote network and IPAM drivers.
      2. weave: A network plugin that creates a virtual network that connects Docker containers across multiple hosts or clouds. Weave provides automatic discovery of applications, can operate on partially connected networks, does not require an external cluster store, and is operations friendly.
      3. calico: An open source solution for virtual networking in cloud datacenters. It targets datacenters where most of the workloads (VMs, containers, or bare metal servers) only require IP connectivity. Calico provides this connectivity using standard IP routing. Isolation between workloads — whether according to tenant ownership or any finer grained policy — is achieved via iptables programming on the servers hosting the source and destination workloads.
      4. kuryr: A network plugin developed as part of the OpenStack Kuryr project. It implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. Kuryr includes an IPAM driver as well.
  2. IPAM Drivers (IP Address Management Drivers)

    Docker has a native IP Address Management Driver that provides default subnets or IP addresses for networks and endpoints if they are not specified.

    1. Native IPAM Drivers

    2. Remote IPAM Drivers

      1. infoblox: An open source IPAM plugin that provides integration with existing Infoblox tools.
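
A hedged sketch of creating a macvlan network, as referenced in the macvlan driver description above. The parent interface name (eth0), subnet, and gateway are assumptions and must match your physical network.

# Assumptions: eth0 is the host's physical interface and 192.168.1.0/24 is the physical LAN.
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  pub_net

# Containers on pub_net receive addresses routable on the physical network.
docker run -d --name web --network pub_net nginx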

Default bridge network

  1. On any host running Docker Engine, there is, by default, a local Docker network named bridge. This network is created using a bridge network driver which instantiates a Linux bridge called docker0; i.e.

    1. bridge is the name of the Docker network.
    2. bridge is the network driver, or template, from which this network is created.
    3. docker0 is the name of the Linux bridge that is the kernel building block used to implement this network.
  2. docker0 is the network interface that functions as both:

    1. the ‘gateway’ to the private network on the host, which is used for Docker container communication, and
    2. the interface that defines the network range available for container IP assignments.
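
To see how the bridge network maps to docker0, inspect the network and look at the host interface. The subnet and gateway shown are typical defaults and may differ on your host.

# Show the subnet and gateway of the default bridge network
docker network inspect bridge --format '{{json .IPAM.Config}}'
> [{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]

# The Linux bridge backing it is visible on the host
ip addr show docker0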

User-defined bridges vs. default bridges

  1. Each user-defined network creates a configurable bridge.
  2. Containers can be attached and detached from user-defined networks on the fly.
  3. User-defined bridges provide automatic DNS resolution between containers.
  4. User-defined bridges provide better isolation and interoperability between containerised applications.
  5. Linked containers on the default bridge network share environment variables.
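
A quick sketch of point 3 (automatic DNS resolution on a user-defined bridge); the network name, container names, and images are illustrative assumptions.

# Containers on a user-defined bridge can resolve each other by name.
docker network create app-net
docker run -d --name db --network app-net redis
docker run --rm --network app-net busybox ping -c 1 db
# On the default bridge, 'ping db' would fail unless legacy --link is used.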

Overlay networks

REF: https://docs.docker.com/network/overlay/

  1. The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container.

  2. When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:

    1. an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
    2. a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
  3. Creating an overlay network outside of swarm mode requires some pre-existing conditions. These conditions are:
    1. Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
    2. A cluster of hosts with connectivity to the key-value store.
    3. A properly configured Engine daemon on each host in the cluster.
  4. The dockerd options that support the overlay network are:
    1. --cluster-store
    2. --cluster-store-opt
    3. --cluster-advertise
  5. Overlay networks allow Docker Trusted Registry (DTR) components running on different nodes to communicate and replicate Docker Trusted Registry data. DTR creates a dedicated overlay network named dtr-ol for this purpose.
    
  6. Which of the built-in network types has ‘swarm’ level scope?

    The overlay network handles routing of services for the swarm and thus has swarm level scope across all nodes.
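
A hedged sketch tying points 1, 2 and 6 together: after initializing a swarm, the ingress overlay and docker_gwbridge appear, and a user-defined overlay with swarm scope can be created for services. The network and service names are illustrative.

docker swarm init
docker network ls
> ...                 ingress             overlay             swarm
> ...                 docker_gwbridge     bridge              local

# Create a user-defined overlay and attach a swarm service to it
docker network create -d overlay my-overlay
docker service create --name web --network my-overlay --replicas 2 nginx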

Host networks

  1. Which of the built-in network drivers is often referred to as the ‘Host Only’ network driver?

    The host network driver is referred to as the ‘host only’ network driver because the host is the only entity that will have network connectivity to the resources on it.
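
A minimal example of the host driver, assuming the nginx image; the container binds directly to port 80 on the host, so no -p/-P mapping is involved.

# The container shares the host's network namespace; nginx listens on the host's port 80 directly.
docker run -d --name web --network host nginx
# Prints nothing, because no ports are published for host-networked containers
docker port web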

IP address and hostname

  1. When the container starts, it can only be connected to a single network, using --network.

  2. However, you can connect a running container to multiple networks using docker network connect.

  3. When you start a container using the --network flag, you can specify the IP address assigned to the container on that network using the --ip or --ip6 flags.

  4. When you connect an existing container to a different network using docker network connect, you can use the --ip or --ip6 flags on that command to specify the container’s IP address on the additional network.

  5. In the same way, a container’s hostname defaults to the container’s ID in Docker. You can override the hostname using --hostname.

  6. When connecting to an existing network using docker network connect, you can use the --alias flag to specify an additional network alias for the container on that network.
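
A sketch combining --ip, --hostname, and --alias. It assumes a user-defined network created with an explicit --subnet (needed for a static --ip); all names and addresses are illustrative.

docker network create --subnet 172.25.0.0/16 app-net
docker run -d --name web1 --hostname web1.local \
  --network app-net --ip 172.25.0.10 nginx

# Connect the same container to a second network with an extra alias
docker network create other-net
docker network connect --alias web-alias other-net web1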

docker network

#############################################################################################################
# Create a network
docker network create [OPTIONS] NETWORK

# OPTIONS:
# --attachable		Enable manual container attachment
# --aux-address		Auxiliary IPv4 or IPv6 addresses used by Network driver
# --config-from	The network from which to copy the configuration
# --config-only		Create a configuration only network
# --driver|-d     Driver to manage the Network; default is `bridge`
# --gateway		  IPv4 or IPv6 Gateway for the master subnet
# --ingress		  Create swarm routing-mesh network
# --internal		Restrict external access to the network
# --ip-range		Allocate container IPs from a sub-range
# --ipam-driver	IP Address Management Driver
# --ipam-opt		Set IPAM driver specific options
# --ipv6		    Enable IPv6 networking
# --label		    Set metadata on a network
# --opt , -o		Set driver specific options
# --scope		    Control the network’s scope
# --subnet		  Subnet in CIDR format that represents a network segment

# The 'docker network create' command can take a network, subnet and gateway as arguments for either bridge
# or overlay drivers.
# --driver|-d:  accepts `bridge` or `overlay` (built-in network drivers); `bridge` if not specified

# Create a bridge network "my-bridge-network"
docker network create -d bridge my-bridge-network
# or
docker network create my-bridge-network

# Create a new overlay network "dev_overlay" in the cluster with a particular network range and gateway.
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway 192.168.1.250 dev_overlay

# One way to guarantee that the IP address is available is to specify an --ip-range when creating the
# network, and choose the static IP address(es) from outside that range. This ensures that the IP address
# is not given to another container while this container is not on the network.
docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network

# Encrypt traffic on an overlay network (--opt|-o)
docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network


#############################################################################################################
# Connect a container to a network
# options: --alias, --ip, --ip6, --link, --link-local-ip 
docker network connect [OPTIONS] NETWORK CONTAINER

# To connect a running container "my-nginx" to an existing user-defined bridge "my-net"
docker network connect my-net my-nginx
# OR create the container already attached to that network
docker create --name my-nginx --network my-net --publish 8080:80 nginx:latest

# You can specify the IP address you want to be assigned to the container’s interface.
docker network connect --ip 10.10.36.122 multi-host-network container2

# --alias option can be used to resolve the container by another name in the network being connected to.
docker network connect --alias db --alias mysql multi-host-network container2

# You can use --link option to link another container with a preferred alias
docker network connect --link container1:c1 multi-host-network container2

#############################################################################################################
# Disconnect a container from a network; options: --force|-f
docker network disconnect [OPTIONS] NETWORK CONTAINER

# To disconnect a running container "my-nginx" from an existing user-defined bridge "my-net"
docker network disconnect my-net my-nginx

#############################################################################################################
# Display detailed information on one or more networks; options: --format|-f, --verbose|-v 
docker network inspect [OPTIONS] NETWORK [NETWORK...]

#############################################################################################################
# List networks; options: --filter|-f, --format, --no-trunc, --quiet|-q
docker network ls [OPTIONS]   

# The 'ls' command for the 'docker network' object will list all Docker networks and their drivers installed.
docker network ls
> NETWORK ID          NAME                DRIVER              SCOPE
> aa075c363cae        bridge              bridge              local
> 84bba7e0b175        host                host                local
> 926c02ac0dc5        none                null                local

#############################################################################################################
# Remove all unused networks; options: --filter, --force|-f
docker network prune [OPTIONS]

#############################################################################################################
# Remove one or more networks
docker network rm NETWORK [NETWORK...]

Port mapping and publishing

  1. The following docker commands can be used to find out which ports are mapped:
    1. docker inspect
    2. docker port
    3. docker ps
  2. docker run --publish publishes a container port to the host so that the application is accessible externally.

  3. When publishing a container/service’s service ports (like HTTP port 80) to the underlying host(s) with the -P option, Docker maps the container ports to host ports in the ephemeral range, starting at 32768.

    The -P option maps the ports EXPOSEd during the image build to host ports numbered 32768 or higher.
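
A brief example contrasting -p and -P; the image, container names, and port values are illustrative assumptions.

# Publish container port 80 to a specific host port
docker run -d --name web1 -p 8080:80 nginx

# Publish all EXPOSEd ports to ephemeral host ports (32768 and above)
docker run -d --name web2 -P nginx

# Show the resulting mappings (the host port shown is illustrative)
docker port web2
> 80/tcp -> 0.0.0.0:32768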

Routing Mesh

  1. Publishing a service’s port using the Routing Mesh makes the service accessible at the published port on every swarm node.

  2. The Routing Mesh makes every node participating in a Swarm aware of, and able to respond to, requests on a service’s published port, even if the node does not have a replica of that service running on it.

  3. The ability for any node in a cluster to answer for an exposed service port even if there is no replica for that service running on it, is handled by Routing Mesh.
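
A hedged sketch of the routing mesh, assuming it is run on a swarm manager; the service name and ports are illustrative. The published port answers on every swarm node, whether or not that node runs a replica.

docker service create --name web --replicas 1 --publish published=8080,target=80 nginx

# Any node in the swarm now answers on port 8080, even nodes without a 'web' replica
curl http://<any-node-ip>:8080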

DNS Services

  1. By default, a container inherits the DNS settings of the Docker daemon, including the /etc/hosts and /etc/resolv.conf.

  2. Set the DNS server for all Docker containers.

     # To set the DNS server for all Docker containers, use:
     $ sudo dockerd --dns 8.8.8.8
        
     # To set the DNS search domain for all Docker containers, use:
     $ sudo dockerd --dns-search example.com
    
  3. You can override these settings on a per-container basis.

     # Use the --dns option to override the default DNS server when creating a container.
     docker container create --dns=IP_ADDRESS ...
        
     # The 'docker run' command uses the --dns option to override the default DNS servers for a container.
     docker run -d --dns=8.8.8.8 IMAGE_NAME
        
     --dns IP_ADDRESS
         The IP address of a DNS server. To specify multiple DNS servers, use multiple --dns flags. If the
         container cannot reach any of the IP addresses you specify, Google’s public DNS server 8.8.8.8 is
         added, so that your container can resolve internet domains.
        
     --dns-search
         A DNS search domain to search non-fully-qualified hostnames. To specify multiple DNS search
         prefixes, use multiple --dns-search flags.
        
     --dns-opt
         A key-value pair representing a DNS option and its value. See your operating system’s documentation
         for resolv.conf for valid options.
        
     --hostname
         The hostname a container uses for itself. Defaults to the container’s ID if not specified.