Docker is a tool designed to make it easier to create, deploy, and run applications using containers. Containers allow a developer to package an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as a single package. By doing so, thanks to the container, the developer can be sure that the application will run on any other Linux machine, regardless of any customized settings that machine might have that differ from the machine used to write and test the code.
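
As a minimal sketch of that packaging, a Dockerfile lists a base image, the application's dependencies, and the command that starts the application, so everything travels together as one image (the Python base image and the file names below are illustrative placeholders, not taken from the original text):

    # Dockerfile: bundles the application and its dependencies into a single image
    # (python:3.12-slim, requirements.txt, and app.py are illustrative names)
    FROM python:3.12-slim
    WORKDIR /app
    # install dependencies so they are baked into the image
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]

Building this file with "docker build -t myapp ." produces an image that "docker run myapp" can start on any Linux host with Docker installed.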

In a way, Docker is a bit like a virtual machine. Unlike a virtual machine, however, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they are running on, and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
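
One quick way to see the shared kernel in practice (a small check added here for illustration, not part of the original text) is to compare the kernel release reported on the host with the one reported inside a container; because the container reuses the host kernel rather than booting its own, the two match:

    uname -r                         # kernel release reported by the host
    docker run --rm alpine uname -r  # the same kernel release, reported from inside a container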

More importantly, Docker is open source. This means that anyone can contribute to Docker and extend it to meet their own needs if they need additional features that are not available out of the box.

 

Who is Docker for?

Docker is a tool designed to benefit both developers and system administrators, making it a part of many DevOps (developers + operations) toolchains. For developers, it means they can focus on writing code without worrying about the system it will eventually run on. It also allows them to get a head start by using one of the thousands of programs already designed to run in a Docker container as part of their application. For operations staff, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and low overhead.
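
For example, pulling and running a pre-built image from a public registry such as Docker Hub takes a single command (the nginx image, container name, and port numbers here are only an illustration):

    # Pull the official nginx image from Docker Hub and run it in the background,
    # mapping port 8080 on the host to port 80 inside the container
    docker run -d -p 8080:80 --name web nginx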

 

Getting Started

Here are some resources to help you get started using Docker in your workflow. Docker provides a web-based tutorial with a command-line simulator where you can try out basic Docker commands and begin to understand how it works. There is also a getting-started guide for Docker that introduces some basic commands and container terminology.
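
The commands those materials walk through look roughly like this (a hedged sample of typical first steps, not a transcript of the tutorial itself; the angle-bracket values are placeholders):

    docker run hello-world    # pull a tiny test image and run it
    docker ps -a              # list containers, including stopped ones
    docker images             # list the images stored locally
    docker rm <container-id>  # remove a container
    docker rmi <image-id>     # remove an image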

 

Docker and Security

Docker brings security to applications running in a shared environment, but containers by themselves are not an alternative to taking proper security measures.
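
For example, it remains your responsibility to harden each container: Docker's standard run options let you use a non-root user, drop Linux capabilities, and mount the filesystem read-only (the user ID and image name below are only placeholders):

    # Run as a non-root user, drop all Linux capabilities,
    # and mount the container's filesystem read-only
    docker run --user 1000:1000 --cap-drop ALL --read-only myapp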

Dan Walsh, a computer security expert known for his work on SELinux, offers some insight into the importance of making sure Docker containers are secure. There are also detailed breakdowns of the security features currently in Docker and how they function.

 

Understanding Containers

Containers can be thought of as requiring three categories of software (a concrete command for each category is sketched after the list):

Builder: technology used to build a container.

Engine: technology used to run a container.

Orchestration: technology used to manage many containers.
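
Mapped onto concrete tools, the three categories might look like this (the image and deployment names are placeholders, and Kubernetes is only one of several orchestration choices):

    docker build -t myapp .                          # builder: create a container image
    docker run -d myapp                              # engine: run a container from that image
    kubectl create deployment myapp --image=myapp    # orchestration: hand many containers to a manager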

Part of the appeal of containers is their ability to die gracefully and respawn on demand. Whether a container comes down because of a crash or simply because it is no longer needed when server traffic is low, containers are cheap to start and are designed to appear and disappear seamlessly. Because containers are meant to be short-lived and to spawn new instances as often as required, monitoring and managing them is not expected to be done by a person in real time, but to be automated instead.
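
Restart policies are a small example of that automation: the engine itself, rather than a person, brings a failed container back up (the policy shown is one of Docker's standard restart options, and myapp is a placeholder image):

    # Restart the container automatically if it exits, unless it was stopped on purpose
    docker run -d --restart unless-stopped myapp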

Linux containers have facilitated a massive shift in high-availability computing, and there are many toolsets to help you run services (or even your entire operating system) in containers. Docker is just one option among many, as defined by the Open Container Initiative (OCI), an industry standards organization meant to encourage innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, you have a choice of container toolchains, including Docker, OKD, Podman, rkt, OpenShift, and others.

If you decide to run services in containers, you will probably need software designed to host and manage those containers. This is broadly known as container orchestration. Kubernetes provides container orchestration for a variety of container runtimes.
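
With Kubernetes, for example, you declare how many copies of a container should be running and the orchestrator keeps that many alive, replacing any that fail (the deployment name and image here are illustrative):

    kubectl create deployment web --image=nginx   # define a deployment running the nginx image
    kubectl scale deployment web --replicas=3     # keep three copies running at all times
    kubectl get pods                              # list the containers Kubernetes is managing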