For years, virtual machines (VMs) have been a common approach for deploying middleware and the applications that run on it, whether in Test, Staging, or Production environments. However, VMs often take time to boot, duplicate OS services, and provide limited protection against crashes.
Figure 1: Virtual Machines vs. Docker Containers
Container technology has existed for over 30 years, but it has recently gained popularity due to the simplicity of the containers provided by Docker. Docker is an open-source platform that has become the de facto standard for many developers and system administrators to develop, ship, and run applications using containers.
Containers work by isolating the application inside the container so that everything outside the container can be standardized. Containers provide developers with a way to package up applications and their dependencies in a lightweight manner.
When you package your application into a container, you are guaranteed that the same code you ran in development will run in QA, staging, and production. This works best when you incorporate containers into your entire development cycle.
While containers are a bit like VMs, they require far fewer resources to run on the host computer and are much faster to create. Unlike VMs, containers share the host's OS kernel while remaining isolated from one another and from the rest of the system.
If you’re having trouble deciding between virtual machines and containers, check out the article by Jack Wallen entitled “Containers vs Virtual Machines: A simplified answer to a complex question,” where he boils the choice down to two simple bullet points:
- Do you need a full platform that can house multiple services? Go with a virtual machine.
- Do you need a single service that can be clustered and deployed at scale? Go with a container.
What are Docker’s characteristics?
The main characteristics of Docker are portability, agility and self-sufficiency:
- Portability. One of Docker’s greatest benefits is that it enables cross-platform deployment. We can deploy Docker containers on Windows, Linux, or Mac. Docker is also available on many cloud platforms, such as Amazon EC2, Microsoft Azure, and Google Cloud.
- Agility. As discussed earlier, containers are often compared to virtualization. While a virtual machine contains a full OS installation, containers use and share the OS and device drivers of the host. Containers are therefore smaller than VMs, start up much faster, and have better performance.
- Self-sufficiency. A Docker container has only the libraries, files, and configuration needed to deploy specific functionality. Docker handles the management of the container and the applications it contains.
What are Docker’s components?
To understand Docker’s internals, you need to know about three components:
- Docker images. They are the build component of Docker: they contain all the dependencies of your application. You specify how to build an image using a file called a Dockerfile, which is simply a series of build instructions.
- Docker containers. They are the run component of Docker, and they are basically an instance of an image. They hold everything that is needed for an application to run. Each container can be run, started, stopped, moved and deleted.
- Docker registries. They are the distribution component of Docker. They are repositories that hold base images, and they can be public (Docker Hub) or private.
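To make the image component concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web application. The base image, file names, and port are illustrative assumptions, not part of the original article:

```dockerfile
# Start from an official base image pulled from a registry (Docker Hub)
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency list first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 8000
CMD ["python", "app.py"]
```

Each instruction produces a layer of the image; ordering the rarely changing steps (like dependency installation) before the frequently changing ones keeps rebuilds fast.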
So let’s say you have your application and you want to deploy it to a Docker host. How can you do this? There are several ways, but this is the most common:
- Create a Dockerfile that specifies what your app needs to run (i.e., define an image).
- Connect to the Docker Host.
- Using the Dockerfile, create an image in the Host.
- Create a new container using this image.
- Start the container. Your app is now “Dockerized”!
- Optionally, you can take a snapshot of this container. This will create a new image, which you can then push to a registry for later use.
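Assuming a working Docker installation, the steps above map onto CLI commands roughly like this. The image, container, and registry names are placeholders:

```shell
# 1. Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# 2. Create and start a container from that image, mapping a port
docker run -d --name myapp-container -p 8000:8000 myapp:1.0

# 3. Optionally snapshot the container's current state as a new image
docker commit myapp-container myapp:snapshot

# 4. Tag the new image for a registry and push it for later use
docker tag myapp:snapshot myregistry.example.com/myapp:snapshot
docker push myregistry.example.com/myapp:snapshot
```

Note that `docker run` combines creating and starting the container in one step; `docker create` followed by `docker start` achieves the same result in two.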