Containerization is a software deployment technology that lets developers package an application, along with all the files, configurations, libraries, and binaries it needs to run, into an immutable executable image that runs in an isolated compute environment. As a result, a running container depends only on the host OS kernel, thanks to containerization engines like Docker Engine that serve as the interface between the application runtime and the OS kernel.
This article is an overview of the role of application containerization in modern software deployments, including how containerization differs from virtualization at the OS level, the benefits of using containerization, and how to containerize an application.
The Benefits of Containerizing Your Application
Developing software in containerized environments has several advantages over the more traditional paradigm of packaging only application code and running it directly on the host system. Just as virtual machines give cloud providers and data centers flexibility and elastic scaling, containerizing applications brings similar advantages, and more.
- Portability: Because a containerized application has minimal dependencies outside the container, it can run reliably in different environments. One example where this is advantageous is the sidecar model in a microservice architecture, where a generalized function such as metrics collection or service registration runs as a process alongside a variety of different services. Containerization encapsulates the sidecar's dependencies, eliminating the need for the host to have them preinstalled. Portability also brings consistency across development, staging, and production environments.
- Containerization Is Declarative: Declaratively defining application dependencies in code gives the application more control over its runtime environment. Applications that rely on host-installed packages can run into compatibility issues, unexpected runtime errors, and other problems, especially when the application needs to run in different environments or the host environment is otherwise unstable.
- Developer Speed: Running applications in isolated containers that encapsulate all necessary dependencies eliminates a class of problems for developers by introducing a run-anywhere paradigm. Software packaged this way is no longer coupled to the host OS, simplifying dependency management. The consistency of a containerized runtime across environments also benefits the development lifecycle, enabling software to run reliably in development, staging, and production.
- Improved Security: Container isolation limits the blast radius of a breach by shielding the host system and other containers from a compromised application. Furthermore, developers can assign permissions to control access to specific containers.
- Resource and Server Efficiency: Application containers are more lightweight than virtual machines, since they share the host OS kernel and the resulting images package only what the application needs. This smaller footprint makes it practical to run multiple containers in a single compute environment, leading to more efficient resource utilization.
- Scalability: Because containerized applications are easier, faster, and safer to deploy, scaling deployments requires less operational overhead. This is one reason containers have become the norm for microservices and cloud-based applications.
- Fault Isolation: Because containerized applications are isolated at the process level, a fatal crash in one container will not affect other running containers. Containers can also integrate with host-level security policies, and virtualized resources isolate containers from physical host resources, helping block malicious code from reaching the host system.
Container-specific configurations can also add security controls on top, limiting access to resources and enacting additional security policies. Still, containers in and of themselves are not a holistic defense against security threats, and composing a container image from multiple base images increases the security surface area when using images from third parties.
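As a sketch of such container-specific controls, the following `docker run` invocation applies resource limits and a hardened security profile. The flags are standard Docker CLI options; the image name and specific values are illustrative, and running this requires a Docker daemon:

```shell
# Cap memory and CPU, mount the root filesystem read-only,
# drop all Linux capabilities, and prevent privilege escalation.
docker run \
  --memory 256m \
  --cpus 0.5 \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  my-registry.example.com/my-app:1.0.2
```

Limits like these keep a misbehaving or compromised container from monopolizing host resources or escalating its privileges.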
Containerization vs. Virtualization (VMs)
Virtual machines are another form of system isolation, but there are some major differences between virtual machines and application containers. Virtual machines are much more heavyweight than application containers, but because they're designed to emulate entire systems, not just single applications, they can take on workloads with higher resource requirements.
VMs emulate, or virtualize, an entire computer system, including the OS kernel, allowing for a single host machine to run one or more guest virtual machines. These guest virtual machines are managed by what is called a hypervisor, which runs on the host machine and coordinates both the filesystem and hardware resources of the host machine among the guest virtual machines.
What’s the advantage of allowing a host machine to run one or more virtual machines? The ability to host a variety of different workloads and use cases via multiple guest virtual machines on a single host machine gives organizations and cloud providers a lot of flexibility when it comes to resource utilization.
A large and powerful physical machine may require a certain portfolio of virtual machines one year, but as business needs change or architectures evolve, the ability to scale those infrastructure resources up or down and vary their properties without having to necessarily buy a new set of hardware is of huge value to both cloud providers and users managing their own data centers.
Containerization and Microservice Architecture
While there are advantages to operating monolithic services, such as uniform tooling, lower latencies for various operations, and simpler observability and deploys, many complex applications are being broken down into smaller pieces, or services, that interact with each other in what’s called a microservice architecture. How microservice architectures are designed and how monoliths are broken into microservices is a complex topic in and of itself, and outside the scope of this article. But it’s easy to see how application containerization becomes a very relevant and beneficial tool when deploying and hosting microservices.
A simple example: Imagine a monolithic application that serves web requests, processes those requests with business logic, and also maintains connections with the database layer. As the complexity of each layer grows over time, the business decides that it would be a good strategic move to separate this application into three separate services: a web service tier, a core logic API service, and a database service.
Now, instead of running large heavyweight processes in VMs, the organization decides to containerize these separate applications with narrowly scoped, specific concerns. It can either manage each specialized service in its own elastic cluster of virtual machines, scaling the number of containers per VM, or use a platform like Kubernetes to abstract away the management of the infrastructure.
Put simply, container orchestration is the automation of managing containerized applications. A single application may comprise hundreds, if not thousands, of microservices, which quickly becomes impossible for a development team to manage manually. Container orchestration tools reduce human error and help scale cloud applications.
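As a minimal sketch of what orchestration looks like in practice, a Kubernetes Deployment for the hypothetical core logic API service from the example above might be declared like this (all names, the image reference, and the replica count are illustrative):

```yaml
# Hypothetical Kubernetes Deployment for the core logic API service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core-api
spec:
  replicas: 3              # the orchestrator keeps three containers running
  selector:
    matchLabels:
      app: core-api
  template:
    metadata:
      labels:
        app: core-api
    spec:
      containers:
        - name: core-api
          image: my-registry.example.com/core-api:1.0.2
          ports:
            - containerPort: 8080
```

Given this declaration, the orchestrator continuously reconciles reality with it, restarting crashed containers and rescheduling them across machines without manual intervention.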
How Does Containerization Work?
The following is a simplified, high-level overview of what a containerization workflow might look like.
The development lifecycle of a containerized application falls into roughly three phases.
First, as you develop the application and commit the source code, you define the application's dependencies in a container image definition file, such as a Dockerfile. Traditional source code management is very compatible with the containerization model because all container configuration is stored as code, usually alongside the source code of the application. In some cases, for example when using Docker Engine, containerizing an existing application can be as simple as adding and configuring a Dockerfile and its associated dependency manifests in the source code.
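As a sketch, a Dockerfile for a hypothetical small Python web service might declare its dependencies like this (the base image, file paths, and start command are all illustrative):

```dockerfile
# Hypothetical Dockerfile for a small Python web service.
FROM python:3.12-slim

WORKDIR /app

# Declare dependencies in code: copying the manifest first lets
# the dependency layer be cached until requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source into the image.
COPY . .

# The command the container runs on startup.
CMD ["python", "app.py"]
```

Because this file lives in version control next to the application code, the runtime environment is reviewed, versioned, and reproduced exactly like any other code change.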
Next, you build the image and publish it to a container repository, where it is immutable, versioned, and tagged. Once an application includes an image definition file such as a Dockerfile and is configured to install and pull its required dependencies into an image, it's time to materialize the image and store it, either locally or in a remote repository where it can be referenced and downloaded.
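Using the Docker CLI as an example, building, tagging, and publishing an image might look like the following (the registry hostname and version tag are hypothetical, and pushing requires access to a real registry):

```shell
# Build an image from the Dockerfile in the current directory
# and tag it with an immutable version.
docker build -t my-registry.example.com/my-app:1.0.2 .

# Publish the tagged image to the remote container repository.
docker push my-registry.example.com/my-app:1.0.2
```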
Lastly, you deploy and run the containerized application locally, in CI/CD pipelines or testing environments, in staging, or in a production environment. Once a published image is accessible by an environment, the image represents an executable that can then be run. Again using Docker Engine as an example, the target environment that will be running the container will need to have the Docker Daemon installed, which is a long-running service that manages the creation and running of the containerized processes. The Docker CLI provides an interface to manually or programmatically run the containerized application.
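Continuing the Docker example, running the published image in a target environment with the Docker Daemon installed might look like this (the port mapping and image name are illustrative):

```shell
# Pull the image if it is not present locally, then run it in the
# background, mapping host port 8080 to the container's port 8080.
docker run --detach --publish 8080:8080 \
  my-registry.example.com/my-app:1.0.2
```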
There are many different containerization technologies and container orchestration methods and platforms to choose from, so each organization should do a thorough evaluation when selecting the technology they’d like to adopt. That being said, the Open Container Initiative, founded in part by the Docker team, works to define industry standards and specifications for container runtimes and images.
Containerization Use Cases
These are some of the most common containerization use cases:
- Microservices: Because microservices tend to be independent, small services working together, it is very common to have these containerized.
- CI/CD: Continuous integration and continuous deployment involve doing just that, continuously integrating code and deploying it as quickly and reliably as possible. When that code is containerized, it is much easier for the development team to automate deployments.
- Cloud Migration: Also called the lift-and-shift approach, this strategy containerizes legacy applications and deploys them in cloud environments, modernizing them without rewriting all of their code.
- IoT Devices: Because IoT devices have limited computing resources, their software has traditionally been updated through slow, arduous manual processes. Containerizing these applications lets developers ship updates much faster, improving IoT security along the way.
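To illustrate the CI/CD case, a hypothetical GitHub Actions workflow could build and publish an image on every push. The actions used here are real, but the registry, secret names, and tag scheme are assumptions:

```yaml
# Hypothetical CI workflow: build and push an image on every push.
name: build-and-push
on: [push]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: my-registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: my-registry.example.com/my-app:${{ github.sha }}
```

Tagging each image with the commit SHA keeps deployments traceable back to the exact source that produced them.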
Is Containerization Right for You?
The developer-friendliness of modern containerization technologies like Docker makes them approachable to incorporate into a proof of concept. If you can introduce containerization into your software deployment cycle iteratively, such as by containerizing a single service or a sidecar service, that can be a good way to gain operational experience with the technology before making a decision.
The choice to leverage containerization may or may not be simple, depending on the size and scale of your current organization. Introducing and adopting any new technology, however developer-friendly, requires an understanding of what benefits and what tradeoffs may be involved, especially where observability and security may be concerned.
That being said, there is a broad and growing community of developer support, and containerization is increasingly an industry standard. Especially if your software is in its early stages or you're working on a greenfield project, containerization may be a good option, allowing you to take advantage of some of the most modern advances in software development and deployment.