Docker enables engineers to package and deploy applications as containers. Containerization provides developers with consistent and reproducible environments for running applications, regardless of the underlying infrastructure. Over the past several years, developers have fully embraced Docker, leveraging it as a container runtime on popular platforms like Kubernetes or Nomad.
Docker is an excellent vehicle for modern application architectures built around microservices. However, Docker’s approach to containerizing applications also presents challenges when dealing with logs.
In this article, we’ll consider the logging challenges of using Docker, covering strategies and best practices to overcome them. We’ll also demonstrate how to get started with Docker logs and provide examples of retrieving logs from containers.
The Challenges of Logging with Docker
Containers are lightweight execution environments that run applications as isolated processes. In a microservices architecture, each container serves a specific function, and multiple containers work together to deliver a complex application or service.
One advantage of using containers in a microservices architecture is that containers are ephemeral: you can create or destroy them as needed. However, the short-lived and ephemeral nature of containers means that logging requires extra consideration to ensure efficiency and reliability.
Containerized applications do not provide insight into the underlying host system’s behavior, since they run within isolated environments with their own logs. In general, containers emit logs to standard output (stdout) and standard error (stderr), which are stored by default on the host system as JSON files.
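Since these JSON log files live on the host, you can locate them with docker inspect. Here is a quick sketch, assuming the default json-file logging driver and a running container named my-nginx:

```shell
# Print the host path of the container's JSON log file
# (assumes the default json-file logging driver)
docker inspect --format '{{.LogPath}}' my-nginx

# View the last few raw JSON-formatted log entries directly
# (reading the file typically requires root on the host)
sudo tail -n 5 "$(docker inspect --format '{{.LogPath}}' my-nginx)"
```

Note that if a container uses a non-default logging driver, the log path may be empty, because logs are no longer written to a local JSON file.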
Once stored, container logging then involves two levels of log aggregation:
- Application-level logging captures logs generated by the containerized application itself, such as errors, warnings, and events.
- Host-level logging captures system-level logs and logs generated by the Docker daemon.
For logging to be effective, it’s important to correlate logs between the containerized application and the host. Identifying relationships between the two gives you a more comprehensive understanding of how the system is behaving, helps you pinpoint issues impacting application performance for faster troubleshooting and resolution, and provides valuable insights for security monitoring and compliance purposes.
Overall, container logging is critical for managing microservice architectures. By implementing the right strategies and best practices, organizations can quickly identify and resolve issues, ensuring their services function reliably.
Docker Logging Strategies and Best Practices
The following strategies show different approaches to handling many of the common problems associated with Docker logging.
Log within the application, then ship logs externally
When logging within an individual microservice or application, each service handles its own logging, including log shipping. Letting each service control its own logging gives you more flexibility. For example, in a Go application, you can use libraries such as Logrus or Zap to implement logging within your application code.
Log with data volumes
With this approach, your applications send logs to a data volume from which they are collected. This strategy provides a centralized location for storing logs, making it easier to collect, aggregate, and store logs. You can use tools like Fluentd or Logstash to collect and parse logs from data volumes.
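As a minimal sketch of this approach (the volume, container, and image names below are illustrative), you create a named volume, mount it at the path where the application writes its log files, and mount the same volume into a collector container:

```shell
# Create a named volume to hold application log files
docker volume create app-logs

# Run the application with the volume mounted at its log directory.
# "my-app-image" is a placeholder for your own image; note that many
# official images (e.g. nginx) symlink logs to stdout/stderr, so a real
# setup may need to reconfigure the app to write files instead.
docker run -d --name my-app -v app-logs:/var/log/app my-app-image

# A collector such as Fluentd can mount the same volume read-only
# and ship the files elsewhere (image tag is illustrative)
docker run -d --name log-collector -v app-logs:/logs:ro fluent/fluentd:v1.16-1
```

Because the volume outlives any single container, logs survive container restarts and replacements, which mitigates the ephemerality problem described earlier.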
Use logging drivers
When logging in Docker, you can also leverage different logging drivers, such as syslog and journald. A logging driver reads log data directly from a container’s output streams and can forward it to a central location.
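You can select a driver per container with the --log-driver flag, or set a host-wide default in the Docker daemon configuration. A sketch, where the syslog server address is an illustrative placeholder:

```shell
# Send this container's logs to a remote syslog server
# (the address below is a placeholder for your own syslog endpoint)
docker run -d --name my-nginx \
  --log-driver syslog \
  --log-opt syslog-address=udp://192.168.0.42:514 \
  nginx

# Or write logs to the local systemd journal instead
docker run -d --name my-nginx-journald --log-driver journald nginx
```

To change the default driver for every container on a host, set "log-driver" (and optionally "log-opts") in /etc/docker/daemon.json and restart the Docker daemon.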
Use a dedicated logging container
With this strategy, a dedicated logging container running on the same host handles log collection, rather than relying on a logging service installed on the host itself. The logging container can run tools like Fluentd or Logstash to collect and parse logs from other containers.
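A minimal sketch of this pattern pairs a Fluentd container with Docker’s fluentd logging driver (the image tag is illustrative; 24224 is Fluentd’s default forward-input port):

```shell
# Start a dedicated logging container that accepts forwarded logs
docker run -d --name fluentd -p 24224:24224 fluent/fluentd:v1.16-1

# Route another container's logs to it via the fluentd logging driver
docker run -d --name my-nginx \
  --log-driver fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=my-nginx \
  nginx
```

The stock Fluentd image simply echoes received events to its own stdout; a real deployment would mount a Fluentd configuration that ships the events to centralized storage.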
Use a sidecar container
Similar to the previous strategy, you can use a sidecar container for logging. With this strategy, you would configure a container alongside each application container, and this sidecar container would be responsible for collecting and handling logs. The sidecar can then handle all the processing and shipping of logs to a centralized location for storage.
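The sidecar pattern can be sketched with two containers sharing the application’s log directory (image and paths here are illustrative):

```shell
# Application container writing log files into its own volume
# ("my-app-image" is a placeholder for your own image)
docker run -d --name my-app -v /var/log/app my-app-image

# Sidecar container attached to the same volumes, tailing the log file.
# In practice the sidecar would run a shipper such as Fluentd or
# Filebeat rather than a plain tail.
docker run -d --name my-app-logger \
  --volumes-from my-app \
  busybox tail -F /var/log/app/app.log
```

Compared with a single dedicated logging container per host, the sidecar approach costs more resources but lets you tailor log handling (parsing, tagging, destinations) per application.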
Getting Started with Docker Container Logs
Docker provides an easy-to-use CLI for viewing container logs. In this section, we will discuss the basic usage of the docker logs command, providing examples of how to view and interact with container logs.
First, we’ll start a Docker container. This command starts a container named my-nginx based on the official nginx image:

~$ docker run --name my-nginx -p 80:80 -d nginx
b0dc9f5109d37b5c959f1969dcb2f570c7f5f0f33c1751847b7af662397bcfce

Since the command maps port 80, you can check that the container is running by accessing http://localhost in your browser.
To check the logs of the my-nginx container, we run the following command:

~$ docker logs my-nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/03/06 18:44:43 [notice] 1#1: using the "epoll" event method
2023/03/06 18:44:43 [notice] 1#1: nginx/1.23.3
2023/03/06 18:44:43 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/03/06 18:44:43 [notice] 1#1: OS: Linux 5.15.49-linuxkit
2023/03/06 18:44:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/03/06 18:44:43 [notice] 1#1: start worker processes
2023/03/06 18:44:43 [notice] 1#1: start worker process 29
2023/03/06 18:44:43 [notice] 1#1: start worker process 30
2023/03/06 18:44:43 [notice] 1#1: start worker process 31
2023/03/06 18:44:43 [notice] 1#1: start worker process 32
By default, docker logs displays the entire log output of the container, from the time it started. If you want to follow the logs in real time, providing visibility into new log entries as they are generated, you can use the -f (or --follow) option:

~$ docker logs -f my-nginx
As you access the nginx server at http://localhost a few times, you’ll see new log entries for those requests appear at the command line. This option is useful when troubleshooting or monitoring the behavior of a running container.
Another useful option is --tail, which allows you to specify the number of lines of logs to display. This is useful for seeing a small set of the most recent log entries, rather than the entire log history. For example, to show the last five log lines of my-nginx, use the following command:
~$ docker logs --tail 5 my-nginx
You can also filter logs by time using the --since and --until arguments. The --since argument displays logs generated after a specified time, while --until displays logs generated before a specified time. For example, to show logs generated in the past hour, use the following command:
~$ docker logs --since 1h my-nginx
Or, if you want to see the logs generated up until 15 minutes ago, you can run:
~$ docker logs --until 15m my-nginx
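In addition to relative durations, both flags accept absolute timestamps, and you can combine them with -t (--timestamps) to prefix each log line with its timestamp. For example, to show a specific 45-minute window (the timestamps below are illustrative):

```shell
# Show timestamped logs from a specific time window
docker logs -t \
  --since 2023-03-06T18:00:00 \
  --until 2023-03-06T18:45:00 \
  my-nginx
```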
Docker containers are a popular technology for microservices architectures, but due to their ephemeral nature, they present unique challenges when it comes to logging. However, these challenges can be addressed with some simple logging best practices and strategies.
The docker logs command is a powerful tool for quickly finding and analyzing relevant container log entries, making troubleshooting and monitoring containerized applications much easier.
Log Everything, Answer Everything—For Free
Falcon LogScale Community Edition (previously Humio) offers a free modern log management platform for the cloud. Leverage streaming data ingestion to achieve instant visibility across distributed systems and prevent and resolve incidents.
Falcon LogScale Community Edition, available instantly at no cost, includes the following:
- Ingest up to 16GB per day
- 7-day retention
- No credit card required
- Ongoing access with no trial period
- Index-free logging, real-time alerts and live dashboards
- Access our marketplace and packages, including guides to build new packages
- Learn and collaborate with an active community