Docker on EOS
In this article we will cover what a container is, how containers apply to Arista EOS switches, and how to pull containers from a public or private repository to run on an Arista physical or virtual device. A Docker container is simply a way to abstract and decouple an application from a Linux (and now Windows) operating system so that it runs as a process on a host machine with the bare minimum of requirements.
Docker makes creating cloud-portable applications extremely easy. An application can be written on a Mac laptop, packaged into an Ubuntu container, and then run on an EOS switch or any Linux device that runs the Docker runtime engine. The same applications that run as microservices on a server can run on a switch with Arista EOS. Since the Arista Extensible Operating System is based on Fedora Linux, we are able to integrate the Docker runtime engine into the operating system. This also allows for complete isolation of software libraries: a container can, for example, run a different version of Golang than the one installed on the switch.
The diagram above depicts an example of how Docker works. Docker takes native Linux containers and packages them in a simple fashion. It will take an Ubuntu container, for example, place that container in the correct namespaces and network, and apply netfilter rules.
Docker comes with three ways to network containers by default. There are also other popular container networking methods, for example Macvlan, IPvlan and VXLAN.
Bridge mode - This is the default mode of Docker networking. When a container comes online it is bridged to the docker0 interface and NATed to the outside world.
Host mode - This method allows a container to see the host's native networking stack, so it can bridge to an interface or add kernel routes.
None - Exactly what it sounds like: the container is not connected to any interface.
Macvlan - Macvlan allows a container to be bridged to a physical interface or subinterface. This differs from host mode in that the container does not talk directly to the host's networking stack. Each container receives a unique MAC address.
IPvlan - IPvlan is similar to Macvlan; however, each container can receive the same MAC address with a different IP address. IPvlan also allows for routing on the host, similar to router-on-a-stick.
VXLAN - VXLAN is the default networking mode for host overlays in Docker Swarm mode.
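The built-in modes above are selected with the --net flag at run time, while driver-based modes such as Macvlan are created explicitly with docker network create. A minimal sketch, assuming a Docker daemon is running (interface names and subnets here are illustrative, not from this article):

```shell
# Default bridge mode: no networking flags needed
docker run -dit ubuntu

# Host mode: the container shares the host's network stack
docker run --net=host -dit ubuntu

# No networking: container has only a loopback interface
docker run --net=none -dit ubuntu

# Macvlan: create a network bridged to a parent interface, then attach a container
docker network create -d macvlan \
  --subnet=192.0.2.0/24 --gateway=192.0.2.1 \
  -o parent=eth0 macvlan-example
docker run --net=macvlan-example -dit ubuntu
```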
Docker runtime examples
Let's start off by dropping into bash and starting the Docker runtime engine.
leaf1b#bash
Arista Networks EOS shell
bash-4.3# sudo su
bash-4.3# service docker start
Starting docker (via systemctl):  [  OK  ]
We can now check whether the Docker runtime engine is running on the switch by running the service docker status command.
It looks like the Docker runtime engine is running on the switch. Now let's pull down a container image from Dockerhub, a public repository of Docker images. The particular image we are going to pull is one of the most common: Ubuntu. There is a wide variety of images, from Ubuntu and Fedora to NGINX, that are all very simple to pull from Dockerhub, and with Arista EOS any of these container images can be run on the switch.
So let's go ahead and run the Ubuntu image on the switch.
bash-4.3# docker run -dit ubuntu
docker run – runs the container.
-d – detached; we are telling the container to run in the background without attaching to it.
-it – interactive terminal; we are telling the container to start an interactive shell.
ubuntu – the name of the container image we are using.
If the container started successfully we should be able to issue docker ps and see it running. By issuing the docker exec command with the container ID we can pass commands directly into the container. Let's check the release of the running Ubuntu container.
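As a sketch, the check might look like this (the container ID shown is illustrative; substitute the ID reported by docker ps):

```shell
# List running containers and note the CONTAINER ID of the Ubuntu container
docker ps

# Pass a command through to the container to confirm its release
docker exec 1a2b3c4d5e6f cat /etc/lsb-release
```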
When running this Ubuntu container we did not pass many arguments other than detached and interactive, so Docker placed the container in its default bridge/NAT mode. We can see that by running docker inspect with the container ID.
We can see that the container received the address 172.17.0.2/24 and uses 172.17.0.1 as its default gateway. The switch has a docker0 interface which holds this gateway address. Netfilter/iptables will then NAT all traffic to an outgoing interface. This is the default way of doing networking within Docker.
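Rather than reading the full docker inspect JSON, the address and gateway can be pulled out with the --format Go-template filter; a sketch, assuming the default bridge network (the container name is illustrative):

```shell
# Print just the bridge-mode IP address and default gateway of a container
docker inspect --format '{{ .NetworkSettings.IPAddress }}' my_container
docker inspect --format '{{ .NetworkSettings.Gateway }}' my_container
```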
Let's take that exact same Ubuntu container, bridge it to an SVI on the switch, and run a container using that VLAN-backed IP space.
bash-4.3# docker network create -d macvlan --subnet=10.0.5.0/24 --ip-range=10.0.5.128/25 --gateway=10.0.5.1 -o parent=vlan5 vlan5-10.0.5.0 && ip link add mac0 link vlan5 type macvlan mode bridge
The docker network create command creates a new network with the given driver. In this case we are using the Macvlan driver to bridge any container on this network to VLAN 5, which is on the 10.0.5.0/24 subnet. The ip link add command that follows allows containers to talk to the switch SVI, since a macvlan interface needs to exist in the default namespace.
docker run --net=vlan5-10.0.5.0 -dit ubuntu
If everything is successful we should now see this container bridged to the same network and interface as VLAN 5. By running the docker inspect command we should see the container on the 10.0.5.0/24 network.
The container now has bidirectional communication: it can be reached from the outside world and can source traffic directly.
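A quick reachability check can be run from the switch's bash, assuming the container received an address from the 10.0.5.128/25 range defined earlier (the address and container ID below are illustrative):

```shell
# From the switch, ping the container's macvlan address
ping -c 2 10.0.5.128

# From inside the container, ping the VLAN 5 SVI gateway
docker exec 1a2b3c4d5e6f ping -c 2 10.0.5.1
```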
Let's now run a web app on the switch through a container from Dockerhub. We are going to use a popular Flask Python app on Dockerhub to test that we can run a web app on the switch and have users connect on port 5000, the default for Flask. This container will use --net=host mode, which allows the container to use the host networking stack on the switch, and --privileged mode, which allows the container to talk directly to the Linux kernel.
docker run --privileged --net=host -dit jcdemo/flaskapp
The container is now running. If we perform a curl against localhost:5000, or against any interface address on the switch, we should see the following.
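A sketch of the check from the switch's bash, assuming the container is up and listening:

```shell
# Query the Flask app on its default port; -f makes curl fail on HTTP errors
curl -sf http://localhost:5000 && echo "flask app is serving"
```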
Running a load balancer on the switch with HAproxy
The last example we are going to look at is a GitHub project that runs a load balancer on the switch. Following the guide in the GitHub repo will allow a user to run HAproxy on an EOS device and load balance to two Linux servers.
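The repo's guide has the exact steps; as a generic sketch of the idea only (not the repo's method — the image, mount path, and config location are assumptions based on the official HAProxy image), the pattern looks like:

```shell
# Run the official HAProxy image in host networking mode, mounting a local
# directory that contains haproxy.cfg with the two backend servers defined
# (path is illustrative; see the GitHub repo's guide for the exact steps)
docker run -d --net=host \
  -v /mnt/flash/haproxy:/usr/local/etc/haproxy:ro \
  haproxy
```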