As network engineers, we have seen the shift from physical hardware to virtual machines. Docker takes the concept of virtual machines a step further and improves on it. Before we get there, let's revisit what a virtual machine is.
A hypervisor runs on top of the base infrastructure, most often a physical server, and allows separate instances of guest operating systems to run, with each guest operating system, known as a virtual machine, acting as an independent unit. While this allows better utilization of hardware resources, provisioning a guest operating system consumes a lot of resources from the host machine. Each VM includes a full copy of an operating system, the application, and its necessary binaries and libraries, taking up tens of GBs. VMs can also be slow to boot.
Now let's see what a Docker container is.
Instead of resources being consumed by multiple operating systems, as in the VM case, a Docker setup has one host operating system running the Docker engine, which serves as the abstraction layer between the host operating system and the application containers. Multiple containers can run on the same machine and share the host OS kernel, each running as an isolated process in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), so a single host can run more applications while requiring fewer VMs and operating system instances.
What is a container?
Notice that a container does not include a full operating system image. It contains only the minimal set of system libraries required to run the environment. Ubuntu, for example, is available from its website as a roughly 2.6 GB download, but that includes the GUI and a lot of applications that developers don't need. A typical Ubuntu Docker image is around 60-70 MB, and there are other Linux flavors as small as 20 MB or less.
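As a sketch of how small a container environment can be, here is a minimal Dockerfile built on a small Linux base image (the base image tag and the installed package are illustrative assumptions, not from the article):

```dockerfile
# Start from a small base image instead of a full OS install;
# alpine-based images are only a few MB in size
FROM alpine:3.19

# Install only the single runtime the environment needs,
# without caching the package index inside the image
RUN apk add --no-cache python3

# Default command when the container starts
CMD ["python3", "--version"]
```

Everything the container needs beyond the shared host kernel is declared here explicitly, which is why the resulting image stays tiny compared to a full OS install.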
To put this in context for network engineers, consider the following scenario.
Imagine you wrote a device status monitoring web application on your local machine (perhaps macOS). The application uses libraries such as Flask, pyATS, MySQL, and a few more. Now you want to give that application to other users. For them to run it, you would need to pass them instructions to install all the dependencies and then run the code. The application will probably work for a user on a Linux-based OS after they pip install the requirements, but it won't work on Windows, because pyATS is not supported on Windows at the moment. How do you get around this problem?
If you develop your application inside a container, the container will hold not only your application code but also all the libraries and the bare-minimum base system libraries required to run it. With Docker installed on their machine, users just run the container you gave them, and the application will run for them too, because they are running the exact same environment you used to develop it.
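A sketch of what packaging that app could look like, assuming the code lives in app.py with its dependencies listed in a requirements.txt (both file names are hypothetical): the Linux base image supplies the userspace that pyATS needs, regardless of what OS the end user runs.

```dockerfile
# A slim Linux base image provides the environment pyATS requires,
# whether the user's host is Windows, macOS, or Linux
FROM python:3.10-slim

# Directory inside the container where the app will live
WORKDIR /app

# Install the app's dependencies first (Flask, pyATS, MySQL driver, ...)
# so this layer is cached until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# Start the monitoring web app when the container runs
CMD ["python", "app.py"]
```

The user would then build and run it with something like `docker build -t device-monitor .` followed by `docker run -p 5000:5000 device-monitor` (the image name and port are illustrative).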
In Part II of this series, we will get some CLI action and see how to work with Docker.