My Docker notebook
REF: https://success.docker.com/article/networking
(Image source: docs.docker.com)
A Sandbox contains the configuration of a container’s network stack. This includes management of the container’s interfaces, routing table, and DNS settings. An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail, or other similar concept. A Sandbox may contain many endpoints from multiple networks.
An Endpoint joins a Sandbox to a Network. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability so that a service can use different types of network drivers without being concerned with how it’s connected to that network.
The CNM does not specify a Network in terms of the OSI model. An implementation of a Network could be a Linux bridge, a VLAN, etc. A Network is a collection of endpoints that have connectivity between them. Endpoints that are not connected to a network do not have connectivity on a network.
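These CNM constructs are visible from the CLI. As a rough illustration (assuming a running container named `web`, which is a placeholder): the Sandbox corresponds to the container's network namespace, and each network the container joins contributes an Endpoint.

```shell
# The Sandbox is the container's network namespace; inspect exposes its path
# (assumes a running container named "web"):
docker inspect --format '{{ .NetworkSettings.SandboxKey }}' web

# Each network the container is attached to adds an Endpoint to the Sandbox:
docker inspect --format '{{ json .NetworkSettings.Networks }}' web

# From the Network side, the Containers map lists every Endpoint on it:
docker network inspect --format '{{ json .Containers }}' bridge
```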
Network Drivers
Docker Network Drivers provide the actual implementation that makes networks work. They are pluggable so that different drivers can be used and interchanged easily to support different use cases.
Multiple network drivers can be used on a given Docker Engine or Cluster concurrently, but each Docker network is only instantiated through a single network driver.
bridge
: The bridge driver creates a Linux bridge on the host that is managed by Docker. By default, containers on a bridge can communicate with each other. External access to containers can also be configured through the bridge driver. A user-defined bridge network is the best network driver type when you need multiple containers to communicate on the same Docker host.
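A minimal sketch of container-to-container communication on a user-defined bridge (the names `app-net` and `web` are illustrative): containers on the same user-defined bridge can resolve each other by name via Docker's embedded DNS.

```shell
# Create a user-defined bridge and attach two containers to it:
docker network create app-net
docker run -d --name web --network app-net nginx
# The second container reaches the first by its container name:
docker run --rm --network app-net alpine ping -c 1 web
```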
host
: With the host driver, a container uses the networking stack of the host. There is no namespace separation, and all interfaces on the host can be used directly by the container. The host network is the best network driver type when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
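For example (Linux only; `host-nginx` is an illustrative name), a container on the host network binds directly to the host's interfaces, so no port mapping exists:

```shell
# nginx binds directly to the host's port 80; no -p/-P mapping is involved:
docker run -d --name host-nginx --network host nginx
# There is no port mapping to report:
docker port host-nginx
```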
overlay
: The overlay driver creates an overlay network that supports multi-host networks out of the box. It uses a combination of local Linux bridges and VXLAN to overlay container-to-container communications over physical network infrastructure. An overlay network is the best network driver type when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
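A sketch, assuming a swarm is (or has been) initialized on this node; the network and service names are placeholders:

```shell
# Overlay networks require swarm mode:
docker swarm init
# Create an attachable overlay network and a service spanning it:
docker network create -d overlay --attachable my-overlay
docker service create --name web --network my-overlay --replicas 2 nginx
```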
macvlan
: The macvlan driver uses the MACVLAN bridge mode to establish a connection between container interfaces and a parent host interface (or sub-interfaces). It can be used to provide IP addresses to containers that are routable on the physical network. Additionally, VLANs can be trunked to the macvlan driver to enforce Layer 2 container segmentation. A macvlan network is the best network driver type when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
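A sketch of creating a macvlan network; the parent interface `eth0` and the subnet/gateway values below are assumptions and must match your physical network:

```shell
# Bind the macvlan network to the host's physical interface (assumed eth0):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan
# Containers now receive addresses routable on the physical LAN:
docker run --rm --network my-macvlan alpine ip addr show eth0
```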
none
: The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container. Without additional configuration, the container is completely isolated from the host networking stack.
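This is easy to verify: a container on the none network is given only a loopback interface.

```shell
# Only "lo" appears; no eth0 is configured inside the container:
docker run --rm --network none alpine ip addr
```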
contiv
: An open source network plugin led by Cisco Systems to provide infrastructure and security policies for multi-tenant microservices deployments. Contiv also provides integration for non-container workloads and with physical networks, such as ACI. Contiv implements remote network and IPAM drivers.
weave
: A network plugin that creates a virtual network that connects Docker containers across multiple hosts or clouds. Weave provides automatic discovery of applications, can operate on partially connected networks, does not require an external cluster store, and is operations friendly.
calico
: An open source solution for virtual networking in cloud datacenters. It targets datacenters where most of the workloads (VMs, containers, or bare metal servers) only require IP connectivity. Calico provides this connectivity using standard IP routing. Isolation between workloads — whether according to tenant ownership or any finer-grained policy — is achieved via iptables programming on the servers hosting the source and destination workloads.
kuryr
: A network plugin developed as part of the OpenStack Kuryr project. It implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. Kuryr includes an IPAM driver as well.
IPAM Drivers (IP Address Management Drivers)
Docker has a native IP Address Management Driver that provides default subnets or IP addresses for networks and endpoints if they are not specified.
Native IPAM Drivers
Remote IPAM Drivers
infoblox
: An open source IPAM plugin that provides integration with existing Infoblox tools.
On any host running Docker Engine, there is, by default, a local Docker network named bridge. This network is created using the bridge network driver, which instantiates a Linux bridge called docker0. That is:
bridge is the name of the Docker network.
bridge is also the network driver, or template, from which this network is created.
docker0 is the name of the Linux bridge that is the kernel building block used to implement this network.
docker0 is the network interface that functions as both the default gateway for containers on the bridge network and a network interface on the host.
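The network-to-kernel-bridge relationship can be observed directly on the host:

```shell
# The bridge network's options record the kernel bridge name (docker0):
docker network inspect bridge --format '{{ .Options }}'
# On the host, docker0 is an ordinary Linux interface...
ip addr show docker0
# ...and a bridge device:
bridge link
```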
REF: https://docs.docker.com/network/overlay/
The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container.
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
dockerd options that support the overlay network are:
--cluster-store
--cluster-store-opt
--cluster-advertise
dtr-ol: the overlay network created by Docker Trusted Registry (DTR) for communication between DTR components across nodes.
Which of the built-in network types has ‘swarm’ level scope?
The overlay network handles routing of services for the swarm and thus has swarm level scope across all nodes.
Which of the built-in network drivers is often referred to as the ‘Host Only’ network driver?
The host network driver is referred to as the ‘host only’ network driver because the host is the only entity that will have network connectivity to the resources on it.
When the container starts, it can only be connected to a single network, using --network. However, you can connect a running container to multiple networks using docker network connect.
When you start a container using the --network flag, you can specify the IP address assigned to the container on that network using the --ip or --ip6 flags.
When you connect an existing container to a different network using docker network connect, you can use the --ip or --ip6 flags on that command to specify the container’s IP address on the additional network.
In the same way, a container’s hostname defaults to the container’s name in Docker. You can override the hostname using --hostname.
When connecting to an existing network using docker network connect, you can use the --alias flag to specify an additional network alias for the container on that network.
docker network
#############################################################################################################
# Create a network
docker network create [OPTIONS] NETWORK
# OPTIONS:
# --attachable Enable manual container attachment
# --aux-address Auxiliary IPv4 or IPv6 addresses used by Network driver
# --config-from      The network from which to copy the configuration
# --config-only Create a configuration only network
# --driver|-d Driver to manage the Network; default is `bridge`
# --gateway IPv4 or IPv6 Gateway for the master subnet
# --ingress Create swarm routing-mesh network
# --internal Restrict external access to the network
# --ip-range         Allocate container IP addresses from a sub-range
# --ipam-driver IP Address Management Driver
# --ipam-opt Set IPAM driver specific options
# --ipv6 Enable IPv6 networking
# --label Set metadata on a network
# --opt , -o Set driver specific options
# --scope Control the network’s scope
# --subnet Subnet in CIDR format that represents a network segment
# The 'docker network create' command can take a network, subnet and gateway as arguments for either bridge
# or overlay drivers.
# --driver|-d: accepts `bridge` or `overlay` (built-in network drivers); `bridge` if not specified
# Create a bridge network "my-bridge-network"
docker network create -d bridge my-bridge-network
# or
docker network create my-bridge-network
# Create a new overlay network "dev_overlay" to the cluster with a particular network range and gateway.
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway 192.168.1.250 dev_overlay
# One way to guarantee that the IP address is available is to specify an --ip-range when creating the
# network, and choose the static IP address(es) from outside that range. This ensures that the IP address
# is not given to another container while this container is not on the network.
docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network
# Encrypt traffic on an overlay network (--opt|-o)
docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network
#############################################################################################################
# Connect a container to a network
# options: --alias, --ip, --ip6, --link, --link-local-ip
docker network connect [OPTIONS] NETWORK CONTAINER
# To connect a running container "my-nginx" to an existing user-defined bridge "my-net"
docker network connect my-net my-nginx
# OR
docker create --name my-nginx --network my-net --publish 8080:80 nginx:latest
# You can specify the IP address you want to be assigned to the container’s interface.
docker network connect --ip 10.10.36.122 multi-host-network container2
# --alias option can be used to resolve the container by another name in the network being connected to.
docker network connect --alias db --alias mysql multi-host-network container2
# You can use --link option to link another container with a preferred alias
docker network connect --link container1:c1 multi-host-network container2
#############################################################################################################
# Disconnect a container from a network; options: --force|-f
docker network disconnect [OPTIONS] NETWORK CONTAINER
# To disconnect a running container "my-nginx" from an existing user-defined bridge "my-net"
docker network disconnect my-net my-nginx
#############################################################################################################
# Display detailed information on one or more networks; options: --format|-f, --verbose|-v
docker network inspect [OPTIONS] NETWORK [NETWORK...]
#############################################################################################################
# List networks; options: --filter|-f, --format, --no-trunc, --quiet|-q
docker network ls [OPTIONS]
# The 'ls' command for the 'docker network' object will list all Docker networks and their drivers installed.
docker network ls
> NETWORK ID     NAME     DRIVER   SCOPE
> aa075c363cae   bridge   bridge   local
> 84bba7e0b175   host     host     local
> 926c02ac0dc5   none     null     local
#############################################################################################################
# Remove all unused networks; options: --filter, --force|-f
docker network prune [OPTIONS]
#############################################################################################################
# Remove one or more networks
docker network rm NETWORK [NETWORK...]
docker inspect
docker port
docker ps
docker run --publish is used to publish a port so that an application is accessible externally.
When publishing a container/service’s service ports (like HTTP port 80) to the underlying host(s) with the -P option, Docker will map the container ports to port numbers above port 32768 on the host.
The -P option will map the ports in a container that are EXPOSEd during its build to ports on a host with a port number higher than 32768.
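As a sketch (the name `auto-pub` is illustrative): nginx's image EXPOSEs port 80, so -P maps it to an ephemeral host port, which docker port reveals.

```shell
# -P publishes every EXPOSEd port to an ephemeral host port (32768+):
docker run -d --name auto-pub -P nginx
# Show where container port 80 landed on the host (exact port varies):
docker port auto-pub 80
```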
Publishing a service’s port using the Routing Mesh makes the service accessible at the published port on every swarm node.
The Routing Mesh allows all nodes that participate in a Swarm for a given service to be aware of and capable of responding to any published service port request even if a node does not have a replica for said service running on it.
The ability for any node in a cluster to answer for an exposed service port even if there is no replica for that service running on it, is handled by Routing Mesh.
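The routing mesh behavior above can be sketched as follows, assuming an initialized swarm (service name and NODE_IP are placeholders):

```shell
# The routing mesh publishes port 8080 on EVERY node, regardless of
# which nodes actually run a replica:
docker service create --name web --replicas 2 \
  --publish published=8080,target=80 nginx
# A request against any node's IP is routed to a healthy replica:
curl http://NODE_IP:8080/
```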
By default, a container inherits the DNS settings of the Docker daemon, including the /etc/hosts and /etc/resolv.conf.
Set the DNS server for all Docker containers.
# To set the DNS server for all Docker containers, use:
$ sudo dockerd --dns 8.8.8.8
# To set the DNS search domain for all Docker containers, use:
$ sudo dockerd --dns-search example.com
You can override these settings on a per-container basis.
# Use the --dns option to override the default DNS server when creating a container.
docker container create --dns=IP_ADDRESS ...
# The 'docker run' command uses the --dns option to override the default DNS servers for a container.
docker run -d --dns=8.8.8.8 IMAGE_NAME
--dns IP_ADDRESS
The IP address of a DNS server. To specify multiple DNS servers, use multiple --dns flags. If the
container cannot reach any of the IP addresses you specify, Google’s public DNS server 8.8.8.8 is
added, so that your container can resolve internet domains.
--dns-search
A DNS search domain to search non-fully-qualified hostnames. To specify multiple DNS search
prefixes, use multiple --dns-search flags.
--dns-opt
A key-value pair representing a DNS option and its value. See your operating system’s documentation
for resolv.conf for valid options.
--hostname
The hostname a container uses for itself. Defaults to the container’s name if not specified.