Tutorial on Docker Container Lifecycle Management
Managing an application's dependencies and tech stack across numerous cloud and development environments is a recurring challenge for DevOps teams. As part of their regular duties, they must keep the application stable and functional regardless of the underlying platform. One possible solution to this problem is to create an OS image that already contains the required libraries and configurations needed to run the application. This approach lets software deployers deploy their applications on the cloud without the tedious task of setting up an OS environment.
One way to create such an image is to use a virtual machine (VM). With a VM, you can install all the necessary libraries and configure the OS, then take an image of the VM. When it's time to deploy the application, you can simply start the machine with that image. However, VMs can be slower due to the operational overhead they incur.
Alternatively, container technology provides a more lightweight and efficient approach to packaging and deploying applications. With containers, each application and its dependencies can be packaged as a container image that can be easily deployed on any infrastructure that supports containerization. Containers are isolated from the host system and other containers, providing security and preventing conflicts with other software running on the same machine. Additionally, containerization allows for more efficient use of system resources, making it possible to run multiple containers on a single host.
To work effectively with Docker containers, or with containerization in general, it's imperative to understand the Docker container lifecycle.
In this blog, we will learn about Docker container lifecycle management. Before we take a closer look at the topic, let's look at some basic jargon.
Docker Application
A Docker application is a collection of Docker containers that work together to provide a complete software solution. Each container in the application can perform a specific function, such as running a web server, a database, or a message broker. Docker applications are typically managed using Docker Compose, which is a tool for defining and running multi-container Docker applications.
Docker applications offer several benefits over traditional monolithic applications. They are modular, allowing developers to update and scale individual components without affecting the entire application. They are also portable, meaning that they can be deployed on any Docker-compatible infrastructure, from a developer's laptop to a public cloud.
Another advantage of Docker applications is that they can be easily versioned and rolled back. Each container in the application can have its own version, and the entire application can be rolled back to a previous version if needed.
Docker also provides tools for managing Docker applications, such as Docker Swarm, which is a native clustering and orchestration solution for Docker. With Docker Swarm, developers can manage a cluster of Docker hosts and deploy and scale applications across them.
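To make this concrete, here is a minimal sketch of managing an application with Docker Swarm. The service name "web" and the nginx image are placeholders for illustration, not taken from this article:

# Initialise a single-node swarm on the current host
docker swarm init
# Deploy a service with three replicas of an example image
docker service create --name web --replicas 3 nginx
# Scale the service up in response to demand
docker service scale web=5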
Docker Image
A Docker image is a read-only template that includes the program code, libraries, dependencies, and other configuration files required to run a piece of software. Docker containers, the lightweight, portable, and self-contained environments that run the program and its dependencies, are built from Docker images.
A Docker image is produced from a Dockerfile, a script containing instructions for generating the image. The Dockerfile typically defines a base image to use, such as an operating system or a ready-made application image, and then adds layers of configuration and dependencies on top of that base image. Each instruction in the Dockerfile creates a new layer in the image, and only the changes in each layer need to be preserved, which makes Docker images efficient to build and store.
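For illustration, here is a minimal Dockerfile of the kind described above. The Python base image and file names are assumptions for the example; each instruction produces a new layer:

# Base image layer: a ready-made OS/runtime image
FROM python:3.11-slim
WORKDIR /app
# Dependency layers: copy the dependency list and install libraries
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application layer: add the code and define the start command
COPY app.py .
CMD ["python", "app.py"]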
A Docker registry is a centralised location for storing and sharing Docker images. The most well-known Docker registries are Docker Hub, Google Container Registry, and Amazon Elastic Container Registry. Docker images can be pushed to and pulled from a registry across several environments, making it simple to deploy the same image in development, testing, and production.
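For example, the same image can be promoted between environments through a registry. The registry hostname below is a placeholder; the image name reuses the example used later in this post:

# Tag a local image for a registry repository
docker tag dev/docker:v2 myregistry.example.com/dev/docker:v2
# Push it to the registry
docker push myregistry.example.com/dev/docker:v2
# Pull the identical image on another machine, e.g. a production host
docker pull myregistry.example.com/dev/docker:v2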
Docker Container
A Docker container is a runtime instance of a Docker image. It is a small, portable, and independent environment that runs an application and all of its dependencies in isolation. Every Docker container starts from a Docker image, which contains the application code, libraries, and configuration files required to run the application.
Docker containers offer a consistent runtime environment for the application, regardless of the host system or infrastructure on which they are deployed. Because containers are isolated from the host system and from other containers, they provide security and prevent conflicts with other software running on the same machine.
Docker containers can be managed and orchestrated with Docker Compose, Kubernetes, and other container orchestration solutions. Because containers are quick to start, stop, and restart, it is easy to scale the application up or down in response to demand.
Docker containers can be readily deployed across development, testing, and production environments, making it simple to maintain consistency throughout the various phases of the application development lifecycle.
Why do we use Docker Container?
In contrast to the virtual machine approach, Docker virtualizes at the operating system level, with numerous containers running directly on the OS kernel.
This simply means that, compared to launching a whole OS, containers are much lighter, start up much faster, and consume much less RAM. Additional benefits of using containers with Docker include:
Docker containers make it easier and faster than virtual machines to deploy, replicate, relocate, or back up an entire workload. This saves us a tonne of time and complexity.
With containers, we have cloud-like flexibility for any architecture that utilises containers.
Docker containers, a more sophisticated evolution of Linux Containers (LXC), let us create image libraries, build applications from those images, and deploy both the applications and the containers on local and remote infrastructure.
The issue of transporting and running software from one computing environment to another, such as from the development environment to the testing, staging, or production environment, is also resolved by Docker containers.
Applications and images can be transferred from a physical system to a virtual machine in a private or public cloud with the help of Docker containers.
We can simply abstract the differences in OS distributions and their underlying infrastructure and focus solely on software because a container comprises the full runtime environment, including an application, dependencies, binaries, libraries, configuration files, etc., in one package.
Docker Container Vs Virtual Machines
Virtual machines and Docker containers are two separate approaches to packaging and deploying programs. Both technologies offer portability and isolation, but they take different approaches to virtualization and resource usage.
A virtual machine (VM) runs a complete guest operating system, including its own kernel, on emulated hardware on top of a hypervisor. Each VM runs its own operating system and has access to its own set of resources, such as CPU, memory, and storage. VMs offer total separation between the host system and the guest, but they can be slow to start and consume a lot of resources.
Docker containers, on the other hand, are portable, lightweight, and self-contained environments that share the kernel and resources of the host system. This sharing makes containers more resource-efficient than virtual machines (VMs) and faster to start. Containers employ namespaces and cgroups to establish separation between the container and the host system.
One of the main advantages of adopting Docker containers over VMs is that they offer a consistent runtime environment for the application, regardless of the host system or infrastructure on which it is deployed. Docker containers are also simple to deploy and manage, which makes it easy to scale the application up or down in response to demand.
In conclusion, while virtual machines and Docker containers both offer isolation and portability, they differ in how they approach virtualization and how they use resources. Virtual machines offer total separation between the host system and the guest, whereas Docker containers are compact, efficient, and simple to deploy and administer. The decision between the two depends on the particular requirements and constraints of the application and the deployment environment.
Overview of the Docker Container Lifecycle
A Docker container goes through several stages in its lifecycle: created, running, paused, stopped, and deleted.
Let me describe each phase of a container's lifecycle, followed by a quick end-to-end walkthrough of the commands involved.
Create: The first stage is creating the container. An image is built from a Dockerfile or an existing image is pulled, and the container is then constructed from it with the "docker create" command; at this point it is not yet running.
Run: Once the container has been created, use the "docker start" command to launch it. The container is now active, and the application inside is ready to accept requests.
Pause: The "docker pause" command can be used to pause the container. This freezes the processes in the container and preserves its state. It is helpful if you need to temporarily free up resources but still want to keep the container's state.
Stop: The "docker stop" command can be used to stop a container. This instructs the container to shut down gracefully. The container's state is preserved, and it can be restarted at a later time with the "docker start" command.
Delete: The "docker rm" command can then be used to remove the container. This eliminates the container from the Docker environment and releases any resources it was using. Keep in mind that a container must be stopped before it can be deleted.
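Putting these phases together, a quick end-to-end walkthrough looks like this (the container and image names reuse the examples from later sections):

docker create --name container01 dev/docker:v2   # created, not yet running
docker start container01                         # running
docker pause container01                         # paused, processes frozen
docker unpause container01                       # running again
docker stop container01                          # exited, state preserved
docker rm container01                            # removed from the host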
Overview of POSIX signals
Signals are standardised messages sent to a running program to trigger specific behaviour, such as quitting or handling errors. They are a restricted form of inter-process communication (IPC), frequently used in POSIX-compliant operating systems such as Unix and Unix-like systems.
A signal is an asynchronous notification sent to a process, or to a particular thread within a process, to inform it of an event. Signals are frequently used to halt, suspend, resume, or terminate a process.
Briefly stated, signals are the accepted means by which an operating system (OS) instructs a running process on how to behave.
There are numerous signals, and each has a unique function. However, we'll limit our attention to these three:
SIGCONT - "Signal continue"
The SIGCONT signal commands the operating system to resume (continue) a process that was previously halted by the SIGSTOP or SIGTSTP signal. The Unix shell's job control is one significant application of this signal.
SIGKILL - "Signal kill"
When a process receives the SIGKILL signal, it is instantly terminated (killed). This signal, unlike SIGTERM and SIGINT, cannot be intercepted or ignored, and the receiving process cannot undertake any cleanup after receiving it.
SIGTERM - "Signal terminate"
To end a process, the SIGTERM signal is sent to it. Unlike the SIGKILL signal, it can be caught and interpreted by the process, or ignored entirely. This enables a graceful termination of the process, freeing resources and, if necessary, saving state. SIGINT is nearly identical to SIGTERM.
Because it gives the process an opportunity to terminate gracefully and safely, SIGTERM is typically favoured over SIGKILL.
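To see why this matters for containers, here is a sketch of an entrypoint script that traps SIGTERM so the container can shut down gracefully when "docker stop" is run. The script is illustrative, not from the original article:

#!/bin/sh
# Run cleanup when Docker delivers SIGTERM (e.g. during 'docker stop')
cleanup() {
    echo "SIGTERM received, saving state and exiting gracefully"
    exit 0
}
trap cleanup TERM

# Main loop standing in for the real application; sleep runs in the
# background and we wait on it so the trap can fire promptly
while true; do
    sleep 1 &
    wait $!
done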
Docker Container Lifecycle Management
Create
This is the initial state of the container lifecycle: the container has been constructed but is not running. The 'docker create' command is used to accomplish this.
docker container create --name container01 dev/docker:v2
When a Docker container is created, a read-write (R/W) layer is added on top of the read-only (R/O) layers of the selected image. This prepares the container to run the program: the image is retrieved, environment variables are set up, entry points are configured, and so on.
It's crucial to remember that the application does not start the moment you create a container. However, the CPU and memory limits, container image, and capabilities can all be set while the configuration is being created, and the 'docker update' command can be used to change the container's configuration while it is in this state. This means that we can create the container once with all the necessary parameters and start it at a later time without having to specify them again.
Another important point is that runtime resources are not yet allocated in this state.
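As a sketch, a container can be created with resource limits and have them adjusted before it is ever started. The container name and limit values below are arbitrary examples:

# Create the container with an initial memory limit
docker create --name container03 --memory 512m dev/docker:v2
# Adjust the limits while the container is still in the created state
docker update --memory 1g --memory-swap 2g container03
# Start it later with the updated configuration
docker start container03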
Run
In this state, the container is running and executing the command defined in the image.
docker container start container01
When we start a container, Docker prepares the resources it requires, including network, memory, and CPU, and then sets up the environment the container operates in. Once this is finished, the container is operational and starts carrying out the tasks assigned to it.
The "docker run" command can do the same purpose as the two instructions mentioned above. This command immediately starts the container after creating it.
docker run -d --name container02 dev/docker:v2
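To confirm that the container is up, we can list running containers, filtering on the name used above:

docker ps --filter "name=container02"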
Stop
A container that has finished running is said to be in the "exited" state. A container may enter the exited state for several reasons, such as:
The process that was executed inside the container finished its job and shut down.
A user or an external signal terminated the process that was running inside the container.
The process inside the container encountered a problem.
The exited state includes the killed state. A container is said to have been killed when Docker forcibly terminates the process running inside it. This may occur if a user issues the "docker kill" command, or if a container does not respond to a SIGTERM signal, in which case Docker sends a SIGKILL signal to force the container to terminate.
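For reference, "docker kill" sends SIGKILL by default, but it can also deliver a different signal:

# Forcibly terminate the container's main process with SIGKILL (the default)
docker kill container01
# Or send a specific signal instead, e.g. SIGTERM
docker kill --signal=SIGTERM container01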
A command to stop a Docker container:
docker container stop container01
The 'docker stop' command performs the following actions when it is run:
A SIGTERM signal is sent by Docker to the container's primary process (PID 1). This signal asks the process to terminate gracefully.
Docker sends a SIGKILL signal to forcibly end the process if it does not respond to the SIGTERM signal within a predetermined period (10 seconds by default; use the '-t' switch to override).
Docker switches the container to the exited state once the process has been stopped.
Docker will delete the container and its filesystem from the system if the --rm flag was used to start the container.
Overall, the docker stop command enables us to gracefully terminate a container and the application or process running inside it.
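Both behaviours can be seen in the examples below; the 30-second timeout and the container04 name are arbitrary illustrations:

# Give the process up to 30 seconds to shut down before SIGKILL is sent
docker stop -t 30 container01
# Start a container that is deleted automatically as soon as it exits
docker run --rm --name container04 dev/docker:v2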
Pause
A container that has been momentarily suspended is in the "paused" state. A paused container still exists and retains its state, but all of its processes are frozen, and no new processes can be launched until the container is unpaused.
Pausing a container can be helpful when we need to temporarily free up resources on the host system or when we need to diagnose an issue with the container. While paused, a container continues to hold resources such as memory, but its CPU usage drops to nearly nothing.
When the container is paused, the state of its execution is kept in memory, and it will continue from the same place when it is unpaused. For instance, if I pause my Docker container while it is counting from 1 to 100 and then resume it at any time, it will continue from that point. While it is paused, CPU utilisation is close to nothing, but memory usage is not.
To pause a running container, use the docker pause command with the container ID or name. To unpause a paused container, use the docker unpause command with the container ID or name.
docker pause container01
docker unpause container01
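You can verify the effect of pausing with "docker inspect" and "docker stats" (a quick sketch; the exact output will vary):

# Prints "paused" while the container is suspended
docker inspect -f '{{.State.Status}}' container01
# One-off resource snapshot: CPU close to 0%, memory still allocated
docker stats --no-stream container01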
It's crucial to remember that not all containers can be paused. Containers running in privileged mode or with specific system capabilities cannot be paused. It's also necessary to exercise caution when pausing containers in production environments, since pausing a container that is performing a critical task could have unforeseen repercussions.
Delete
Strictly speaking, there is no observable "deleted" state, since the container no longer exists once it has been removed. A deleted container is one that has been eliminated from the Docker host system: all of its resources, including its filesystem and configuration, are permanently wiped from the host.
To delete a container, use the docker rm command with the container ID or name. A deleted container cannot be started or resumed again, and any changes or data written to the container are lost.
docker container rm container01
It's crucial to remember that deleting a container does not also remove the image that was used to create it. The image remains usable for producing new containers, and any updates made to the image will be reflected in new containers created from it.
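Two related cleanup commands are worth knowing (shown here as a sketch):

# Force-remove a container even if it is still running (SIGKILL, then delete)
docker rm -f container01
# Remove all stopped containers in one go (asks for confirmation)
docker container prune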
Dead
A Docker container is switched to the dead state when Docker attempts to remove it but fails, typically because some of its resources are still being used by an external process. In this state the container can no longer function and cannot be restarted; the only remaining option is to remove it. A dead container is only partially deleted, but it does not use any CPU or memory.
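Dead containers can be located and then removed like this:

# List containers stuck in the dead state
docker ps -a --filter "status=dead"
# Removing the container is the only remaining option
docker rm container01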
Conclusion
In this blog, we walked through the Docker container lifecycle: a container is created from an image, started, optionally paused and resumed, stopped gracefully with SIGTERM (or forcibly with SIGKILL), and finally removed. Understanding these states, along with the commands and signals that move a container between them, is the foundation of reliable Docker container lifecycle management.