The -d flag in docker-compose up -d
stands for detached mode, not daemon mode.
In detached mode, your services (i.e. containers) run in the background, detached from your terminal, so you can’t see their logs there.
To see the logs of all your services, run this command:
docker-compose logs -f
The -f flag stands for “Follow log output”.
This will output the logs of every running service defined in your docker-compose.yml.
From my understanding, you want to fire up your services with:
docker-compose up -d
This lets the services run in the background and keeps your console output clean.
To print only the errors from the logs, add a pipe operator and search for “error” with the grep command:
docker-compose logs | grep error
This will output all the errors logged by your Docker services.
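Log levels aren’t capitalized consistently (“ERROR”, “Error”, “error”), so a case-insensitive match with grep -i is safer. A self-contained sketch, using fabricated lines standing in for docker-compose logs output:

```shell
# Fabricated sample lines standing in for `docker-compose logs` output
printf '%s\n' \
  'app_1   | INFO  request served' \
  'app_1   | ERROR connection refused' \
  'redis_1 | Error: could not open file' \
  | grep -i error
```

With a real project, the equivalent is docker-compose logs | grep -i error.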
You’ll find the official documentation for the docker-compose up and docker-compose logs commands in the Docker Compose CLI reference, along with further reading on log handling.
- Docker container logs in the foreground
- Docker logs of detached containers
- Follow container logs
- Tail logs
- Display timestamps
- Show logs since timestamp
- Show logs until timestamp
- Docker Compose logs
- View logs of a single service
- Docker Compose logs options
- Docker Swarm logs
- Do not map IDs to names
- Omit task ids
- Do not truncate output
- Get raw output
- Docker Engine logs
Let’s explore Docker logs from containers to Compose, Swarm and the Docker Engine in a single post.
Docker container logs in the foreground
The easiest way to see Docker logs is to start containers in the foreground. Your log messages are printed to your terminal:
$ docker run -p 80:80 nginx:latest
172.17.0.1 - - [02/Oct/2018:12:57:59 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:62.0) Gecko/20100101 Firefox/62.0" "-"
You start a Docker container in the foreground by not specifying the -d
(detached) flag of the docker run
command. Docker will attach to stdout and stderr in this mode.
Docker logs of detached containers
Use the docker container logs
command to see the logs of detached Docker containers.
Let’s start a container in detached mode:
$ docker run -p 80:80 -d nginx:latest
3f840a82aabe788ecf7c7d3865e4fd3403f8de5e4b93ced5e0fd807dfc2d7180
Check the logs:
$ docker container logs 3f840a82aabe
172.17.0.1 - - [02/Oct/2018:13:08:18 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
The docker logs
command is an alias of docker container logs
. They show the logs of a single container.
Follow container logs
Often you need to keep the log open and follow new messages; you can do this with the --follow or -f flag.
$ docker container logs --follow 3f840a82aabe
Tail logs
Limit the output to a certain number of lines from the end of the log.
$ docker container logs --tail 1 3f840a82aabe
172.17.0.1 - - [02/Oct/2018:13:08:18 +0000] "GET /favicon.ico HTTP/1.1" 404 571 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Display timestamps
Display timestamps in Docker log output with --timestamps
or -t
.
$ docker container logs --tail 1 --timestamps 3f840a82aabe
2018-10-02T13:08:18.661924100Z 172.17.0.1 - - [02/Oct/2018:13:08:18 +0000] "GET /favicon.ico HTTP/1.1" 404 571 "http://localhost/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Show logs since timestamp
Use relative time.
$ docker container logs --since 15m --timestamps 3f840a82aabe
2018-10-02T13:08:18.004863500Z 172.17.0.1 - - [02/Oct/2018:13:08:18 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Use absolute time.
$ docker container logs --since 2018-10-02T13:08:15 --timestamps 3f840a82aabe
2018-10-02T13:08:18.004863500Z 172.17.0.1 - - [02/Oct/2018:13:08:18 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Show logs until timestamp
The --until
option is similar to --since
, but shows logs before a certain point in time. It works with absolute and relative time values.
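Hand-writing absolute timestamps for --since and --until is error-prone; one way to build them is with date. A sketch (3f840a82aabe is the container ID from the earlier examples; the docker line is shown commented out for illustration):

```shell
# Current UTC time in the format --since/--until accept
ts=$(date -u +%Y-%m-%dT%H:%M:%S)
echo "$ts"

# For example, logs between an absolute start time and now:
#   docker container logs --since 2018-10-02T13:08:15 --until "$ts" 3f840a82aabe
```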
Docker Compose logs
Docker Compose will send logs to stdout and stderr if started in the foreground (not with -d
). It will display all log messages from all Compose services.
When starting your application in detached mode, i.e. with docker-compose up -d
, then the docker-compose logs
command is used to analyze logs. Let’s see an example of an app that has two services:
$ docker-compose logs
Attaching to flask-redis-final_app_1, flask-redis-final_redis_1
redis_1 | 1:C 02 Oct 13:25:30.164 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 02 Oct 13:25:30.164 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:M 02 Oct 13:25:30.165 * Running mode=standalone, port=6379.
redis_1 | 1:M 02 Oct 13:25:30.165 # Server initialized
redis_1 | 1:M 02 Oct 13:25:30.165 * Ready to accept connections
app_1 | * Serving Flask app "app.py" (lazy loading)
app_1 | * Environment: development
app_1 | * Debug mode: on
app_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 119-685-546
The docker-compose logs
command will display the logs from all services in the application defined in the Compose file by default.
View logs of a single service
View logs of a single service in the form docker-compose logs SERVICE
:
$ docker-compose logs app
Attaching to flask-redis-final_app_1
app_1 | * Serving Flask app "app.py" (lazy loading)
app_1 | * Environment: development
app_1 | * Debug mode: on
app_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 119-685-546
Docker Compose logs options
You can use the --follow
, --timestamps
and --tail
options with docker-compose logs
with the same meaning you have seen earlier in the post. The --since
and --until
options are not available.
Docker Swarm logs
In Swarm mode you analyze logs at the Docker service level. Use the docker service logs SERVICE
command.
$ docker service logs mystifying_lamarr
mystifying_lamarr.1.<task-id>@<node> | 10.255.0.2 - - [02/Oct/2018:14:14:17 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
You can use --follow
, --since
, --tail
and --timestamps
just like before.
The docker service logs command has some extra options.
Do not map IDs to names
The above log output contains the service name (mystifying_lamarr
). You can tell Docker to use the service ID instead with --no-resolve
.
$ docker service logs --no-resolve mystifying_lamarr
<service-id>.1.<task-id>@<node-id> | 10.255.0.2 - - [02/Oct/2018:14:14:17 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Omit task ids
You can tell Docker not to print task IDs with --no-task-ids
.
$ docker service logs --no-task-ids mystifying_lamarr
mystifying_lamarr.1@<node> | 10.255.0.2 - - [02/Oct/2018:14:14:17 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Do not truncate output
If you use the --no-trunc
option, Docker will not truncate the output; this gives you the full IDs in the logs, such as full task IDs.
$ docker service logs --no-trunc mystifying_lamarr
mystifying_lamarr.1.<full-task-id>@<node> | 10.255.0.2 - - [02/Oct/2018:14:14:17 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Get raw output
Using the --raw
option, you’ll get raw output; note how the service identifier has been replaced with the IP address.
$ docker service logs --raw mystifying_lamarr
10.255.0.2 - - [02/Oct/2018:14:14:17 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" "-"
Docker Engine logs
Docker logs are not only about containers, they’re also about the logs of the Docker Engine itself. The location of the Docker Engine logs is different on different platforms.
You can find the up-to-date spec on the Docker site.
Let me share the current info here:
- RHEL, Oracle Linux —
/var/log/messages
- Debian —
/var/log/daemon.log
- Ubuntu 16.04+, CentOS —
$ journalctl -u docker.service
- Ubuntu 14.10- —
/var/log/upstart/docker.log
- macOS (Docker 18.01+) —
~/Library/Containers/com.docker.docker/Data/vms/0/console-ring
- macOS (Docker <18.01) —
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/console-ring
- Windows —
AppData\Local
When building containerized applications, logging is definitely one of the most important things to get right from a DevOps standpoint. Log management helps DevOps teams debug and troubleshoot issues faster, making it easier to identify patterns, spot bugs, and make sure they don’t come back to bite you!
In this article, we’ll refer to Docker logging in terms of container logging, meaning logs that are generated by containers. These logs are specific to Docker and are stored on the Docker host. Later on, we’ll check out Docker daemon logs as well. These are the logs that are generated by Docker itself. You will need those to debug errors in the Docker engine.
Docker Logging: Why Are Logs Important When Using Docker
The importance of logging applies to a much larger extent to Dockerized applications. When an application in a Docker container emits logs, they are sent to the application’s stdout and stderr output streams.
The container’s logging driver can access these streams and send the logs to a file, a log collector running on the host, or a log management service endpoint.
By default, Docker uses a json-file driver, which writes JSON-formatted logs to a container-specific file on the host where the container is running. More about this in the section below called “What’s a Logging Driver?”
The example below shows JSON logs created using the json-file driver:
{"log":"Hello World!n","stream":"stdout","time":"2020-03-29T22:51:31.549390877Z"}
If that wasn’t complicated enough, you have to deal with Docker daemon logs and host logs apart from container logs. All of them are vital in troubleshooting errors and issues when using Docker.
We know how challenging handling Docker logs can be. Check out Top 10 Docker Logging Gotchas to see some of the best practices we discovered along the years.
Before moving on, let’s go over the basics.
What Is a Docker Container?
A container is a unit of software that packages an application, making it easy to deploy and manage no matter the host. Say goodbye to the infamous “it works on my machine” statement!
How? Containers are isolated and stateless, which enables them to behave the same regardless of the differences in infrastructure. A Docker container is a runtime instance of an image that’s like a template for creating the environment you want.
What Is a Docker Image?
A Docker image is an executable package that includes everything that the application needs to run. This includes code, libraries, configuration files, and environment variables.
Why Do You Need Containers?
Containers allow breaking down applications into microservices — multiple small parts of the app that can interact with each other via functional APIs. Each microservice is responsible for a single feature so development teams can work on different parts of the application at the same time. That makes building an application easier and faster.
Popular Docker Logging Topics
How Is Docker Logging Different
Most conventional log analysis methods don’t work on containerized logging; troubleshooting becomes more complex than with traditional hardware-centric apps that run on a single node. You need more data to work with, and you must extend your search to get to the root of the problem.
Here’s why:
Containers are Ephemeral
Docker containers emit logs to the stdout
and stderr
output streams. Because containers are stateless, the logs are stored on the Docker host in JSON files by default. Why?
The default logging driver is json-file. The logs are then annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container.
You can find these JSON log files in the /var/lib/docker/containers/
directory on a Linux Docker host. Here’s how you can access them:
/var/lib/docker/containers/<container id>/<container id>-json.log
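Each line in such a file is a JSON object with log, stream and time fields. A crude sed sketch for pulling out just the message (the sample line is fabricated to match the format; a real pipeline would typically use jq):

```shell
# A fabricated line in Docker's json-file format
line='{"log":"GET / HTTP/1.1 200\n","stream":"stdout","time":"2018-10-02T13:08:18.004863500Z"}'

# Extract the "log" field (breaks on embedded escaped quotes; jq is more robust)
printf '%s\n' "$line" | sed -n 's/.*"log":"\([^"]*\)".*/\1/p'
# prints: GET / HTTP/1.1 200\n
```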
That’s where logging comes into play. You can collect the logs with a log aggregator and store them in a place where they’ll be available forever. It’s dangerous to keep logs on the Docker host because they can build up over time and eat into your disk space. That’s why you should use a central location for your logs and enable log rotation for your Docker containers.
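The log rotation mentioned above can be enabled for the json-file driver through its max-size and max-file options. A minimal daemon.json sketch (the values are arbitrary examples; the Docker daemon must be restarted for the change to apply):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```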
Containers are Multi-Tiered
This is one of the biggest challenges to Docker logging. However basic your Docker installation is, you will have to work with two levels of aggregation. One refers to the logs from the Dockerized application inside the container. The other involves the logs from the host servers, which consist of the system logs, as well as the Docker Daemon logs which are usually located in /var/log
or a subdirectory within this directory.
A simple log aggregator that has access to the host can’t just pull application log files as if they were host log files. Instead, it must be able to access the file system inside the container to collect the logs. Furthermore, your infrastructure will, inevitably, extend to more containers and you’ll need to find a way to correlate log events to processes rather than their respective containers.
Docker Logging Strategies and Best Practices
Needless to say, logging in Docker could be challenging. But there are a few best practices to have in mind when working with containerized apps.
Logging via Application
This technique means that the application inside the container handles its own logging using a logging framework. For example, a Java app could use Log4j2 to format and send the logs from the app to a remote centralized location, skipping both Docker and the OS.
On the plus side, this approach gives developers the most control over the logging event. However, it creates extra load on the application process. If the logging framework is limited to the container itself, considering the transient nature of containers, any logs stored in the container’s filesystem will be wiped out if the container is terminated or shut down.
To keep your data, you’ll have to either configure persistent storage or forward logs to a remote destination like a log management solution such as Elastic Stack or Sematext Cloud. Furthermore, application-based logging becomes difficult when deploying multiple identical containers, since you would need to find a way to tell which log belongs to which container.
Logging Using Data Volumes
As we’ve mentioned above, one way to work around containers being stateless when logging is to use data volumes.
With this approach you create a directory inside your container that links to a directory on the host machine where long-term or commonly-shared data will be stored regardless of what happens to your container. Now, you can make copies, perform backups, and access logs from other containers.
You can also share a volume across multiple containers. On the downside, using data volumes makes it difficult to move the containers to different hosts without any loss of data.
Logging Using the Docker Logging Driver
Another option to logging when working with Docker, is to use logging drivers. Unlike data volumes, the Docker logging driver reads data directly from the container’s stdout and stderr output. The default configuration writes logs to a file on the host machine, but changing the logging driver will allow you to forward events to syslog, gelf, journald, and other endpoints.
Since containers will no longer need to write to and read from log files, you’ll likely notice improvements in terms of performance. However, there are a few disadvantages to this approach as well: the docker logs command works only with certain drivers (such as json-file and journald); the log driver has limited functionality, allowing only log shipping without parsing; and containers can shut down when a remote logging endpoint, such as a TCP server, becomes unreachable.
Logging Using a Dedicated Logging Container
Another solution is to have a container dedicated solely to logging and collecting logs, which makes it a better fit for the microservices architecture. The main advantage of this approach is that it doesn’t depend on a host machine. Instead, the dedicated logging container allows you to manage log files within the Docker environment. It will automatically aggregate logs from other containers, monitor, analyze, and store or forward them to a central location.
This logging approach makes it easier to move containers between hosts and scale your logging infrastructure by simply adding new logging containers. At the same time, it enables you to collect logs through various streams of log events, Docker API data, and stats.
This is the approach we suggest you should use. You can set up Logagent as a dedicated logging container and have all Docker logs ship to Sematext Logs in under a few minutes as explained a bit further down.
Logging Using the Sidecar Approach
For larger and more complex deployments, using a sidecar is among the most popular approaches to logging microservices architectures.
Similarly to the dedicated container solution, it uses logging containers. The difference is that this time, each application container has its own dedicated container, allowing you to customize each app’s logging solution. The first container saves log files to a volume which are then tagged and shipped by the logging container to a third-party log management solution.
One of the main advantages of using sidecars is that it allows you to set up additional custom tags to each log, making it easier for you to identify their origin.
There are some drawbacks, however — it can be complex and difficult to set up and scale, and it can require more resources than the dedicated logging method. You must ensure that both application container and sidecar container are working as a single unit, otherwise, you might end up losing data.
Get Started with Docker Container Logs
When you’re using Docker, you work with two different types of logs: daemon logs and container logs.
What Are Docker Container Logs?
Docker container logs are generated by the Docker containers and need to be collected directly from them. Any messages that a container sends to stdout or stderr are logged, then passed on to a logging driver that forwards them to a remote destination of your choosing.
Here are a few basic Docker commands to help you get started with Docker logs and metrics:
- Show container logs:
docker logs containerName
- Follow log output:
docker logs -f containerName
- Show CPU and memory usage:
docker stats
- Show CPU and memory usage for specific containers:
docker stats containerName1 containerName2
- Show running processes in a container:
docker top containerName
- Show Docker events:
docker events
- Show storage usage:
docker system df
Watching logs in the console is nice for development and debugging, however in production you want to store the logs in a central location for search, analysis, troubleshooting and alerting.
What Is a Logging Driver?
Logging drivers are Docker’s mechanisms for gathering data from running containers and services to make it available for analysis. Whenever a new container is created, Docker automatically provides the json-file log driver if no other log driver option has been specified. At the same time, it allows you to implement and use logging driver plugins if you would like to integrate other logging tools.
Here’s an example of how to run a container with a custom logging driver, in this case syslog:
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-server:514 alpine echo hello world
How to Configure the Docker Logging Driver?
When it comes to configuring the logging driver, you have two options:
- set up a default logging driver for all containers
- specify a logging driver for each container
In the first case, the default logging driver is a JSON file, but, as mentioned above, you have many other options such as logagent, syslog, fluentd, journald, splunk, etc. You can switch to another logging driver by editing the Docker configuration file and changing the log-driver parameter, or using your preferred log shipper.
# /etc/docker/daemon.json
{
  "log-driver": "journald"
}
systemctl restart docker
Alternatively, you can choose to configure a logging driver on a per-container basis. As Docker provides a default logging driver when you start a new container, you need to specify the new driver from the very beginning by using the --log-driver
and --log-opt
parameters.
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-server:514 alpine echo hello world
Where Are Docker Logs Stored By Default?
The logging driver enables you to choose how and where to ship your data. The default logging driver, as mentioned above, is json-file, which writes JSON-formatted logs to a file on the local disk of your Docker host:
/var/lib/docker/containers/[container-id]/[container-id]-json.log
Keep in mind, though, that when you use a logging driver other than json-file
or journald
, you will not find any log files on your disk. Docker will send the logs over the network without storing any local copies. This is risky if you ever have to deal with network issues.
In some cases, Docker might even stop your container when the logging driver fails to ship the logs. Whether this happens depends on the delivery mode you are using.
Learn more about where Docker logs are stored from our post about Docker logs location.
What Are Delivery Modes?
Docker containers can write logs by using either the blocking or non-blocking delivery mode. The mode you choose will determine how the container prioritizes logging operations relative to its other tasks.
Direct/Blocking
Blocking is Docker’s default mode. It will interrupt the application each time it needs to deliver a message to the driver.
It makes sure all messages are sent to the driver, but can introduce latency in the performance of your application: if the logging driver is busy, the container delays the application’s other tasks until it has delivered the message.
Depending on the logging driver you use, the latency differs. The default json-file driver writes logs very quickly since it writes to the local filesystem, so it’s unlikely to block and cause latency. However, log drivers that need to open a connection to a remote server can block for longer periods and cause noticeable latency.
That’s why we suggest you use the json-file driver and blocking mode with a dedicated logging container to get the most of your log management setup. Luckily it’s the default log driver setup, so you don’t need to configure anything in the /etc/docker/daemon.json
file.
Non-blocking
In non-blocking mode, a container first writes its logs to an in-memory ring buffer, where they’re stored until the logging driver is available to process them. Even if the driver is busy, the container can immediately hand off application output to the ring buffer and resume executing the application. This ensures that a high volume of logging activity won’t affect the performance of the application running in the container. But there are downsides.
Non-blocking mode does not guarantee that the logging driver will log all the events. If the buffer runs out of space, buffered logs will be deleted before they are sent. You can use the max-buffer-size option to set the amount of RAM used by the ring buffer. The default value for max-buffer-size is 1 MB, but if you have more RAM available, increasing the buffer size can increase the reliability of your container’s logging.
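The buffer size goes into log-opts alongside the mode. A daemon.json sketch (4m is an arbitrary example value):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "4m"
  }
}
```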
Although blocking mode is Docker’s default for new containers, you can set this to non-blocking mode by adding a log-opts item to Docker’s daemon.json
file.
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking"
  }
}
Alternatively, you can set non-blocking mode on an individual container by using the --log-opt
option in the command that creates the container:
docker run --log-opt mode=non-blocking alpine echo hello world
Logging Driver Options
The log file format for the json-file
logging driver is machine-readable JSON with a timestamp, stream name and the log message. Since raw JSON is tedious to read, users prefer the docker logs command to see the logs on their console.
On the other hand the machine readable log format is a good base for log shippers to ship the logs to log management platforms, where you can search, visualise and alert on log data.
However, you have other log driver options as follows:
- logagent: A general purpose log shipper. The Logagent Docker image is pre-configured for log collection on container platforms. Logagent collects not only logs, it also adds meta-data such as image name, container id, container name, Swarm service or Kubernetes meta-data to all logs. Plus it handles multiline logs and can parse container logs.
- syslog: Ships log data to a syslog server. This is a popular option for logging applications.
- journald: Sends container logs to the systemd journal.
- fluentd: Sends log messages to the Fluentd collector as structured data.
- gelf: Writes container logs to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
- awslogs: Sends log messages to AWS CloudWatch Logs.
- splunk: Writes log messages to Splunk using HTTP Event Collector (HEC).
- gcplogs: Ships log data to Google Cloud Platform (GCP) Logging.
- logentries: Writes container logs to Rapid7 Logentries.
- etwlogs: Writes log messages as Event Tracing for Windows (ETW) events, thus only available on Windows platforms.
Use the json-file Log Driver With a Log Shipper Container
The most reliable and convenient way of log collection is to use the json-file driver and set up a log shipper to ship the logs. You always have a local copy of logs on your server and you get the advantage of centralized log management.
If you were to use Sematext Logagent there are a few simple steps to follow in order to start sending logs to Sematext. After creating a Logs App, run these commands in a terminal.
docker pull sematext/logagent
docker run -d --restart=always --name st-logagent \
  -e LOGS_TOKEN=YOUR_LOGS_TOKEN \
  -e LOGS_RECEIVER_URL="https://logsene-receiver.sematext.com" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  sematext/logagent
This will start sending all container logs to Sematext.
How to Work With Docker Container Logs Using the docker logs Command?
Docker has a dedicated command that lists container logs: the docker logs command. The flow will usually involve checking your running containers with docker ps, then checking the logs by using a container’s ID.
docker logs <container_id>
This command will list all logs for the specified container. You can add a timestamp flag and list logs for particular dates.
docker logs <container_id> --timestamps
docker logs <container_id> --since (or --until) YYYY-MM-DD
What you’ll often end up doing is tailing these logs, either to check the last N lines or to follow the logs in real time.
The --tail
flag will show the last N lines of logs:
docker logs <container_id> --tail N
Using the --follow
flag will tail -f
(follow) the Docker container logs:
docker logs <container_id> --follow
But what if you only want to see specific logs? Luckily, grep works with docker logs as well.
docker logs <container_id> | grep pattern
This command will only show errors:
docker logs <container_id> | grep -i error
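One caveat: docker logs replays the container’s stdout on its own stdout and the container’s stderr on its own stderr, so a plain pipe only filters the stdout half. A small simulation (fake_logs is a made-up stand-in for docker logs):

```shell
# Stand-in for `docker logs`: one line on stdout, one on stderr
fake_logs() {
  echo 'INFO  all good'
  echo 'ERROR something broke' >&2
}

fake_logs 2>/dev/null | grep -i error   # finds nothing: the ERROR line went to stderr
fake_logs 2>&1       | grep -i error    # matches: stderr is merged into the pipe
```

With a real container that logs to stderr, the equivalent is docker logs <container_id> 2>&1 | grep -i error.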
Once an application starts growing, you tend to start using Docker Compose. Don’t worry, it has a logs command as well.
docker-compose logs
This will display the logs from all services in the application defined in the Docker Compose configuration file.
Get started with Docker with our Docker Commands Cheat Sheet!
How to Work with Docker Container Logs Using a Log Shipper?
As everyone’s infrastructure grows, nowadays mostly in the container space, so do the monitoring needs. However, monitoring containers is different from, and more challenging than, traditional server monitoring.
Unlike non-containerized applications that write logs into files, containers write their logs to the standard output and standard error stream. Container logs can be a mix of plain text messages from start scripts and structured logs from applications, which makes it difficult for you to tell which log event belongs to what container and app, then parse it correctly and so on.
Although Docker log drivers can ship logs to log management tools, most of them don’t allow you to parse container logs. You need a separate tool called a log shipper, such as Logagent, Logstash or rsyslog to structure logs before shipping them to storage. The problem is that when your logging solution uses multiple tools with dependencies for log processing, the chances your logging pipeline will crash increases with every new tool.
But there are a few Docker logging driver alternatives that can help make your job easier, one of them being Sematext Logagent.
Logagent is an all-in-one general-purpose solution for container log processing that allows you to monitor container logs, as well as your whole infrastructure and applications if paired with the Sematext Agent container.
You can read more about how Logagent works and how to use it for monitoring logs in our post on Docker Container Monitoring with Sematext.
What About Docker Daemon Logs
Docker daemon logs are generated by the Docker platform and located on the host. Depending on the host operating system, daemon logs are written to the system’s logging service or to a log file.
If you were to collect only container logs, you’d get insight into the state of your services. However, you also need to be aware of the state of your Docker platform itself, which is what Docker daemon logs are for. Together, they paint a clear picture of your overall microservices architecture.
On that note, the Docker daemon logs two types of events:
- Events generated by the Docker service itself
- Commands sent to the daemon through Docker’s Remote API
Depending on your Operating System, the Docker daemon log file is stored in different locations.
Check out Guide to Docker Logs Location to find out more.
Popular Docker Logging Topics
Logging is a key part of gathering insight into the state of your infrastructure, but only if it’s analyzed. However, log data comes in huge volumes so doing it manually would be like looking for a needle in a haystack. Which is why you need a log data analysis platform. You can opt for open-source solutions or commercial software to get the most out of your Docker logs.
Open-Source Log Analysis Solutions
With open-source solutions, you need an expert team ready to handle everything from setup to configuration, providing infrastructure, maintenance, and management.
The most popular open source log analysis software is Elastic Stack (formerly known as ELK Stack). It’s a robust platform comprising three different tools — Elasticsearch to store log data, Logstash to process it, and Kibana to visualize log data.
For more information on Elasticsearch, check out our Elasticsearch Complete Guide.
If you still want to use Elasticsearch and Kibana but don’t want to manage it yourself, Sematext Cloud has an Elasticsearch API and integrated Kibana in the UI, if you feel like using it instead of the default Sematext Dashboards. This makes migrating to a managed Elasticsearch cluster a walk in the park. In your log shipper configuration, you’d only change the Elasticsearch endpoints from your local Elasticsearch cluster to the Sematext Cloud Elasticsearch API endpoint.
Commercial Log Analysis Tools: Logging as a Service
If you don’t have the resources to deal with Docker log data on your own, you can reach out to vendors who provide “logging as a service” as part of a full log management solution. You only need to point out the Docker logs and they’ll take over managing your log data from collection to storage, analysis, monitoring, and presentation.
Sematext as a Log Management Solution for Docker Logs
The Docker logging driver and log delivery mode you choose can have a noticeable effect on the performance of your containerized applications. We recommend using the json-file driver for reliable logging and consistent performance, paired with a centralized logging tool like Sematext Logs for observability. It’s an all-in-one solution that provides hassle-free log management and analytics for your infrastructure and applications, letting you filter, analyze, and alert on logs from all your applications. By storing your Docker logs, you can detect and troubleshoot issues more easily, and also gather actionable insights from both your infrastructure and Dockerized applications.
Check out the video below to see how easy it is to set up Docker log shipping with Sematext Logs. Or watch this video tutorial to learn how to set up Docker log monitoring using Sematext to get the visibility you need to make sure your containers are performing as they should.
If you want to learn more about Sematext Logs, see how it stacks up against similar solutions in our dedicated articles about the best log management tools, log aggregation tools, log analysis software, and cloud logging services.
For optimum performance, we recommend you collect logs along with metrics and traces. We talked more about this in our Docker monitoring series. Check it out if you’re into that:
- Docker Container Monitoring and Management Challenges
- Docker Container Performance Metrics to Monitor
- Docker Container Monitoring Tools
- Docker Monitoring with Sematext
Now that you know how logging works in Docker, you can take a look at Kubernetes logs as well. Learn more from our Kubernetes logging guide.
When developing applications based on Docker, being able to find specific information in the logs and save that data to a file can speed up troubleshooting and debugging. Here are some tips on using log options, tail, and grep to find what you are looking for in your Docker containers’ log data.
Displaying all logs
When you spin up a Docker container, for example with docker-compose up, it automatically shows the logs. If you run containers in the background, such as with docker-compose up -d, or from a different terminal, then you can display logs using:
docker logs [OPTIONS] CONTAINER
docker-compose logs (all containers)
However, this will give you a large output of information.
Target and follow specific containers
Using docker-compose you can specify which container logs you want to show using:
docker-compose logs [options] [SERVICE…]
When debugging specific applications, a useful option is to follow log output. This means you can start a container, test a feature and see what is sent to the logs as you use it.
--follow, -f
An alternative is to test your application, then search the logs for specific information to show you how well it worked (or not!). There are two Unix commands you can use for this.
Slice and search logs using tail and grep
The tail command outputs the last n number of lines from the end of a file. For example:
tail -n5 docker-compose.yml
=>
    NAME: "rabbit"
    tty: true
    volumes:
      - ./rabbitmq.conf:/etc/rabbitmq.conf:ro
      - ./definitions.json:/etc/rabbitmq/definitions.json:ro
To see the most recent output in the Docker logs, you can either use tail directly on a log file or use Docker’s --tail option:
--tail    Number of lines to show from the end of the logs
docker-compose logs --tail 5 rabbit
=>
Attaching to docker_rabbit_1
rabbit_1 | completed with 3 plugins.
rabbit_1 | 2018-03-03 14:02:01.377 [info] <0.5.0> Server startup complete; 3 plugins started.
rabbit_1 | * rabbitmq_management
rabbit_1 | * rabbitmq_web_dispatch
rabbit_1 | * rabbitmq_management_agent
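Conceptually, a tail option just slices off the last N entries of the log. A minimal Python sketch (the line content here is invented for illustration):

```python
# A minimal illustration of what a "tail" option does: keep only the
# last N entries of a log. The line content here is invented.
log_lines = [f"rabbit_1  | event {i}" for i in range(1, 11)]  # 10 entries

n = 5
last_five = log_lines[-n:]  # same idea as `--tail 5`
print(last_five)
```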
Another standard Unix command you can use with the logs is grep, which returns only the lines containing a specified string. For example:
docker-compose logs | grep error
…will show you all the errors logged by your Docker containers. Very useful for seeing what you need to focus your development on.
docker-compose logs | grep error
=>
rabbit_1 | 2018-03-03 16:40:41.938 [error] <0.192.0> CRASH REPORT Process <0.192.0> with 0 neighbours exited with reason: {error,<<"Please create virtual host \"/my_vhost\" prior to importing definitions.">>} in application_master:init/4 line 134
…
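The grep step is plain pattern matching over log lines. A small Python equivalent (the sample lines are invented, not real service output):

```python
import re

# A grep-style filter over log lines, mimicking `docker-compose logs | grep error`.
# The sample lines are invented, not real service output.
log_lines = [
    "rabbit_1 | Server startup complete; 3 plugins started.",
    "rabbit_1 | [error] CRASH REPORT Process <0.192.0> exited",
    "web_1    | GET / HTTP/1.1 200",
]

errors = [line for line in log_lines if re.search(r"error", line, re.IGNORECASE)]
print(errors)  # only the line containing "[error]"
```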
Logs by time
If you know which time period you want to focus on, for example a time when you know there was an issue, you can tell Docker to show timestamps using:
--timestamps, -t
docker-compose logs -t
=>
Attaching to docker_rabbit_1
rabbit_1 | 2018-03-03T14:01:58.109383986Z 2018-03-03 14:01:58.099 [info] <0.33.0> Application lager started on node rabbit@71bfa4cd9dc2
…
Pick a specific time period with the --since and --until options (available for docker logs, but not docker-compose logs):
--since    Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
--until    Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
For example, if I wanted to see the logs close to the warning message in the earlier example I would execute:
docker logs -t --since 2018-03-03T16:52:45.000996884Z --until 2018-03-03T16:52:45.001996884Z docker_rabbit_1
=>
2018-03-03T16:52:45.000996884Z 2018-03-03 16:52:45.000 [warning] <0.417.0> Message store "61NHVEJY8W4BPTRU4AS08KK9D/msg_store_persistent": rebuilding indices from scratch
2018-03-03T16:52:45.001801685Z 2018-03-03 16:52:45.001 [info] <0.410.0> Started message store of type persistent for vhost '/my_vhost'
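This time-window filtering amounts to keeping only entries whose timestamp falls between the two bounds. A small Python sketch (not how Docker implements it; two of the entries below are invented padding, and string comparison works here only because these RFC 3339 timestamps share one fixed layout):

```python
# The --since/--until window is just a timestamp-range filter. Because these
# RFC 3339 timestamps share one fixed layout, plain string comparison orders
# them correctly. The middle two entries come from the example above; the
# first and last are invented padding.
entries = [
    "2018-03-03T16:52:44.999000000Z store starting",
    "2018-03-03T16:52:45.000996884Z rebuilding indices from scratch",
    "2018-03-03T16:52:45.001801685Z started message store",
    "2018-03-03T16:52:46.100000000Z later entry",
]

since = "2018-03-03T16:52:45.000996884Z"
until = "2018-03-03T16:52:45.001996884Z"

window = [e for e in entries if since <= e.split(" ", 1)[0] <= until]
for e in window:
    print(e)  # only the two entries inside the window
```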
Combining commands
You can combine these options and commands to target very specific areas of the logs with the information you need. In the example below we combine the -t timestamps option with --tail for the last 10 lines of the logs for container docker_rabbit_1, then search these for lines containing "info" to see only entries logged at the info level.
docker logs -t --tail 10 docker_rabbit_1 | grep info
=>
2018-03-03T16:52:45.029907263Z 2018-03-03 16:52:45.020 [info] <0.33.0> Application rabbitmq_web_dispatch started on node rabbit@d5d541f785f4
2018-03-03T16:52:45.042007695Z 2018-03-03 16:52:45.041 [info] <0.517.0> Management plugin started. Port: 15672
2018-03-03T16:52:45.042222660Z 2018-03-03 16:52:45.041 [info] <0.623.0> Statistics database started.
2018-03-03T16:52:45.043026767Z 2018-03-03 16:52:45.042 [info] <0.33.0> Application rabbitmq_management started on node rabbit@d5d541f785f4
2018-03-03T16:52:45.361845369Z 2018-03-03 16:52:45.361 [info] <0.5.0> Server startup complete; 3 plugins started.
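The combined pipeline is just a slice followed by a filter. Sketched in Python (the entries are invented for illustration):

```python
# The pipeline `docker logs -t --tail 10 <container> | grep info`, sketched as
# two steps: slice the last N entries, then keep only the matching ones.
# The entries are invented for illustration.
log = [f"[info] step {i}" if i % 2 == 0 else f"[debug] step {i}" for i in range(20)]

tail = log[-10:]                                        # --tail 10
info_only = [line for line in tail if "info" in line]   # | grep info
print(len(info_only))  # 5
```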
Writing logs to file
Now that you have mastery over the docker logs commands and can find exactly what you want, you can send that data to a log file. In Bash, or an alternative shell such as Zsh, the >> redirection operator followed by a file name appends the output to that file.
docker logs -t --tail 10 docker_rabbit_1 | grep info >> my_log_file.txt
You might want to use this to create log files for specific log data. For example, when debugging you could create ones for warnings or errors.
docker logs -t docker_rabbit_1 | grep warning >> logs_warnings.txt
Now the contents of my logs_warnings.txt file contains:
2018-03-03T16:52:45.000996884Z 2018-03-03 16:52:45.000 [warning] <0.417.0> Message store "61NHVEJY8W4BPTRU4AS08KK9D/msg_store_persistent": rebuilding indices from scratch
2018-03-03T16:52:51.036989298Z 2018-03-03 16:52:51.036 [warning] <0.654.0> HTTP access denied: user 'guest' - invalid credentials
This means you can use all other applications and commands you use with text files and apply them to this log data.
Why not try some of your own custom log commands and save the output to your own log file?
This tutorial was originally posted on the SigNoz Blog and was written by Muskan Paliwal.
Log analysis is a very powerful feature for an application when it comes to debugging and finding out which flow is working properly in the application and which is not. Log management helps the DevOps team debug and troubleshoot issues faster, making it easier to identify patterns, spot bugs, and ensure they don’t come back to bite you!
In this article, we will discuss log analysis in Docker and how logging in Docker containers differs from logging in other applications. These logs are specific to Docker and are stored on the Docker host. We’ll take a thorough look at the docker logs command and how to configure a logging driver for containers.
Why is Docker logging different?
Life would be much simpler if applications running inside containers always behaved correctly. But unfortunately, as every developer knows, that is never the case.
With other systems, recording application log messages can be done explicitly by writing those messages to the system logger, using the syslog() system call from the application. But this doesn’t work for containerized logging; here’s why:
- Containers are multi-leveled

  Containers are just like boxes inside a big box. No matter how simple the Docker installation is, you have to deal with two levels of aggregation. One level is the logs inside the container of your Dockerized application, known as Docker container logs. The second level is the logs from the host servers (that is, system logs or Docker daemon logs), which are generally located in /var/log.

  A log aggregator that has access to the host cannot pull log files from the Dockerized application as if they were host log files. In these scenarios, you have to find a way to correlate the logs.
- Containers are ephemeral

  Container-based environments change very often, which doesn’t play well with monitoring. Docker containers emit logs to the stdout and stderr output streams. Logs are often stored on the Docker host because containers are stateless (failing to remember or save data from previous actions).

  json-file is the default logging driver; it writes JSON-formatted logs to a container-specific file stored in the /var/lib/docker/containers/ directory on a Linux Docker host. Here’s how you can access the file:

  /var/lib/docker/containers/<container-id>/<container-id>-json.log

  It is risky to keep logs only on the Docker host, because Docker doesn’t impose any size limit on log files by default, and they can build up over time and eat into your disk space. It is advised to store logs in a centralized location and enable log rotation for all the Docker containers.
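To make the json-file format concrete, here is a small Python sketch that parses one such line (the sample entry is invented, but it follows the log/stream/time shape described above):

```python
import json

# Parsing one json-file driver entry. The sample line mirrors the
# log/stream/time shape of the json-file format; the message is invented.
raw = '{"log":"Hello there!\\n","stream":"stdout","time":"2022-07-28T22:51:31.549390877Z"}'

entry = json.loads(raw)
print(entry["stream"], entry["log"].rstrip("\n"))  # stdout Hello there!
```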
The docker logs command

The docker logs command is used to get all the information logged by a running container. The docker service logs command does the same for all the containers participating in a service.
The example below shows JSON logs created by the hello-world Docker image using the json-file driver:
{"log":"Hello there!\n","stream":"stdout","time":"2022-07-28T22:51:31.549390877Z"}
{"log":"This message shows that everything seems to be working correctly.\n","stream":"stdout","time":"2022-07-28T22:51:31.549396749Z"}
The log follows a pattern of printing:
- The log’s origin
- Either stdout or stderr
- A timestamp
In order to review a container’s logs from the command line, you can use the docker logs <container-id> command. Using this command, the logs shown above are displayed this way:
Hello there!
This message shows that everything seems to be working correctly.
Here are a few options in the command that you can use to modify the output of your log:
docker logs [OPTIONS] <container-id>
- Use the -f or --follow option if you want to follow the logs:
  docker logs <container-id> --follow
- If you want to see the last N log lines:
  docker logs <container-id> --tail N
- If you want to find specific logs, use the grep command:
  docker logs <container-id> | grep pattern
- If you want to show only errors:
  docker logs <container-id> | grep -i error
Once an application starts growing, you tend to start using Docker Compose. The docker-compose logs command shows logs from all the services running in the containerized application.
Note that the output of the docker logs command may vary based on the Docker version you are using. In Docker Community Engine, docker logs can only read logs created by the json-file, local, and journald drivers, whereas in Docker Enterprise, docker logs can read logs created by any logging driver.
Configure a Docker container to use a logging driver
Step 1: Configure the Docker daemon to use a logging driver
Set the value of the log-driver key to the name of the logging driver in the daemon.json configuration file, then restart Docker for the changes to take effect for all newly created containers. All the existing containers will remain as they are.
Let’s set up a default driver with some additional information:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m",
    "max-file": "10"
  }
}
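A quick back-of-the-envelope check of the disk-usage bound this configuration implies (a sketch; the numbers are taken from the daemon.json above):

```python
# With the daemon.json above, the json-file driver keeps at most
# `max-file` rotated files of `max-size` each per container, so the
# worst-case disk usage per container is bounded.
max_size_mb = 20  # "max-size": "20m"
max_file = 10     # "max-file": "10"

print(max_size_mb * max_file, "MB of logs per container at most")  # 200 MB
```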
To find the current logging driver for the Docker daemon:
$ docker info --format '{{.LoggingDriver}}'
json-file
In order to use a different driver for a single container, you can override the default by adding the --log-driver option to the docker run command that creates the container.
For example, the following command creates an Apache httpd container, overriding the default logging driver to use the journald driver instead.
docker run --log-driver journald httpd
Step 2: Deciding the delivery mode of log messages from container to log driver
Docker provides two types of delivery modes for log messages.
- Blocking (default mode)

  As the name suggests, this mode blocks the main process inside a container to deliver log messages, which can add latency to the application. In exchange, it ensures that all log messages are successfully delivered to the log driver. The default log driver (json-file) writes to the local file system and logs messages very quickly, so it is unlikely to cause latency. But drivers like gcplogs and awslogs open a connection to a remote server and are more likely to block and cause latency.
Non-blocking
In this mode, the container writes logs to an in-memory ring buffer. This in-memory ring buffer works like a mediator between logging-driver and the container. When the logging-driver isn’t busy processing the logs, the container shares the logs to the driver immediately. But when the driver is busy, these logs are put into the ring-buffer.This provides you a safety check that a high volume of logging activity won’t affect the application’s performance running inside the container. But there is a downside. It doesn’t guarantee that all the log messages will be delivered to the logging driver. In cases where log broadcasts are faster than the driver processor, the ring buffer will soon run out of space. As a result, buffered logs are deleted to make space for the next set of incoming logs. The default value for
max-buffer-size
is 1 MB.To change the mode:
# /etc/docker/daemon.json { "log-driver": "json-file", "log-opts": { "max-size": "20m", "max-file": "10", "mode": "non-blocking" } }
Alternatively, you can set the non-blocking mode on an individual container by using the
--log-opt
option in the command that creates the container:docker run --log-opt mode=non-blocking alpine echo hello world
The default log driver stores data in a local file, but if you want more features, then you can opt for other log drivers as well, such as syslog, journald, gelf, awslogs, or Sematext Logagent.
Logging strategies
Docker logging means logging the events of the Dockerized application, the host OS, and the Docker service. There are various ways to log events for a Docker container.
Some of them are:
- Application logging: in this strategy, the application inside the container uses its own logging framework. The logs can either be stored locally or sent to a centralized location using a log management tool.
- Data volumes: because containers are stateless, you can bind a container’s directory to a host OS directory to avoid losing log data. Containers can then be terminated or shut down while the logs from multiple containers remain accessible, and you can run regular backups to prevent data corruption or loss in case of failure.
- Docker logging driver: this type has already been discussed in detail. The configured driver reads the data broadcast by the container’s stdout and stderr streams and writes it to a file on the host machine. You can then send this log data anywhere you want.
Final Thoughts
Containerization surely provides an easy way to deal with application portability and scalability issues, but it does require maintenance from time to time. Container environments are just like a box inside a box, with multiple layers of abstraction, so debugging in such environments is hard. Performed correctly, log analysis can be your go-to friend for finding performance-related issues.
In this guide, you learned how to configure the Docker logging driver for log analysis in containerized applications, how Docker logging differs from application logging on a physical machine or virtual host, and studied the docker logs command in detail.
There are various logging strategies that you can follow for log analysis. This post thoroughly discussed the default logging strategy, json-file, and the two delivery modes of log messages. Since containers are stateless and don’t ensure data persistence, you can use log management tools to prevent data loss.
But logs are just one aspect of getting insights from your software systems. Modern applications are complex distributed systems. For debugging performance issues, you need to make your systems observable. Logs, when combined with metrics and traces form an observability dataset that can help you debug performance issues quickly.
SigNoz, an open source APM, can help you monitor your application by collecting all types of telemetry data. It correlates all your telemetry data (logs, metrics, and traces) into a single monitoring suite. It is built to support OpenTelemetry natively, which is becoming the world standard for instrumenting cloud-native applications.
You can check out the SigNoz GitHub repo.
If you want to read more about SigNoz, check out the following blog:
SigNoz — an open source alternative to DataDog
We currently have two containers running from the ubuntu:latest image:
$ docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED       STATUS       PORTS   NAMES
85417e60b40d   ubuntu:latest   "/bin/bash"   2 hours ago   Up 2 hours           laughing_clarke
157ffc6166fc   ubuntu:latest   "/bin/bash"   2 hours ago   Up 2 hours           eloquent_goodall
Let's run the ping google.com command inside the first container, while watching the traffic from outside with the tcpdump utility on the virtual interface veth59134c1:
$ sudo tcpdump -i veth59134c1 icmp   # on the host system
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth59134c1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:12:49.026100 IP 172.17.0.2 > lq-in-f138.1e100.net: ICMP echo request, id 372, seq 1, length 64
15:12:49.067953 IP lq-in-f138.1e100.net > 172.17.0.2: ICMP echo reply, id 372, seq 1, length 64
15:12:50.027507 IP 172.17.0.2 > lq-in-f138.1e100.net: ICMP echo request, id 372, seq 2, length 64
15:12:50.055363 IP lq-in-f138.1e100.net > 172.17.0.2: ICMP echo reply, id 372, seq 2, length 64
15:12:51.028917 IP 172.17.0.2 > lq-in-f138.1e100.net: ICMP echo request, id 372, seq 3, length 64
15:12:51.055577 IP lq-in-f138.1e100.net > 172.17.0.2: ICMP echo reply, id 372, seq 3, length 64
# apt install -y iputils-ping
# ping -c 3 google.com   # inside the container
PING google.com (173.194.73.138) 56(84) bytes of data.
64 bytes from lq-in-f138.1e100.net (173.194.73.138): icmp_seq=1 ttl=42 time=42.0 ms
64 bytes from lq-in-f138.1e100.net (173.194.73.138): icmp_seq=2 ttl=42 time=27.9 ms
64 bytes from lq-in-f138.1e100.net (173.194.73.138): icmp_seq=3 ttl=42 time=26.7 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 26.738/32.239/42.028/6.942 ms
In the same way, we can ping one container from another. First, let's see which IP addresses the containers have on the bridge network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "4db4885e345cbad5662da7cc0fe753e2f323a84fd2ba01947067cb1cca2b70c5",
        "Created": "2020-03-30T08:56:17.5621013+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "157ffc6166fc55506cd971c9b731e7a32cf17935a8176dc9ad0ea8c0d17f1f22": {
                "Name": "eloquent_goodall",
                "EndpointID": "6c9f7d4675cec8bf982716bc4a524426462dafbfbbe5ca186174b5d48b4ab339",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "85417e60b40d2dc2b1a92ba0ca4f1ffc00c7911b8bea2884b4be3db426b281e5": {
                "Name": "laughing_clarke",
                "EndpointID": "2006fd9dd9627863312261215e909887be461b54d54f1dfad1b786473d5db035",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Start tcpdump on the host:
$ sudo tcpdump -ni docker0 host 172.17.0.2 and host 172.17.0.3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:30:52.638834 IP 172.17.0.2 > 172.17.0.3: ICMP echo request, id 374, seq 1, length 64
15:30:52.638925 IP 172.17.0.3 > 172.17.0.2: ICMP echo reply, id 374, seq 1, length 64
15:30:53.643881 IP 172.17.0.2 > 172.17.0.3: ICMP echo request, id 374, seq 2, length 64
15:30:53.643935 IP 172.17.0.3 > 172.17.0.2: ICMP echo reply, id 374, seq 2, length 64
15:30:54.668023 IP 172.17.0.2 > 172.17.0.3: ICMP echo request, id 374, seq 3, length 64
15:30:54.668095 IP 172.17.0.3 > 172.17.0.2: ICMP echo reply, id 374, seq 3, length 64
And run ping from the first container to the second:
# ping -c 3 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.270 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.191 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2031ms
rtt min/avg/max/mdev = 0.180/0.213/0.270/0.043 ms
Networking with docker-compose
Let's start the three containers described in docker-compose.yml:
$ cd ~/www/
$ cat docker-compose.yml
version: '3'
services:
  apache:
    build:  # build instructions for the image
      # build context (path to the directory containing the Dockerfile)
      context: ./apache/
      # optional, since the context is already set
      dockerfile: Dockerfile
    ports:
      # the container will be reachable on port 80 of the host
      - 80:80
    volumes:
      # paths are relative to the directory containing docker-compose.yml
      # mount the directory with the PHP script into the container
      - ./apache/html/:/var/www/html/
      # mount the php.ini configuration file into the container
      - ./apache/php.ini:/usr/local/etc/php/php.ini
      # mount the Apache2 configuration file into the container
      - ./apache/httpd.conf:/etc/apache2/apache2.conf
      # mount the Apache2 access log file into the container
      - ./apache/logs/access.log:/var/log/apache2/access.log
      # mount the Apache2 error log file into the container
      - ./apache/logs/error.log:/var/log/apache2/error.log
  mysql:
    build:  # build instructions for the image
      # build context (path to the directory containing the Dockerfile)
      context: ./mysql/
      # optional, since the context is already set
      dockerfile: Dockerfile
    environment:
      # root user password
      MYSQL_ROOT_PASSWORD: qwerty
    ports:
      - 3306:3306
    volumes:
      # paths are relative to the directory containing docker-compose.yml
      # mount the MySQL configuration file into the container
      - ./mysql/mysql.cnf:/etc/mysql/my.cnf
      # mount the database directory into the container
      - ./mysql/data/:/var/lib/mysql/
      # mount the MySQL log directory into the container
      - ./mysql/logs/:/var/log/mysql/
  pma:
    # use the ready-made phpmyadmin image
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    environment:
      # hostname on the www_default network
      PMA_HOST: mysql
      MYSQL_USERNAME: root
      MYSQL_ROOT_PASSWORD: qwerty
$ docker-compose up -d
Creating network "www_default" with the default driver
Creating www_pma_1    ... done
Creating www_apache_1 ... done
Creating www_mysql_1  ... done
$ docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED              STATUS              PORTS                               NAMES
7bed4b88ceac   www_mysql               "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:3306->3306/tcp, 33060/tcp   www_mysql_1
a9ab191d3f89   phpmyadmin/phpmyadmin   "/docker-entrypoint.…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp                www_pma_1
f736a3a70e05   www_apache              "docker-php-entrypoi…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp                  www_apache_1
$ docker network ls
NETWORK ID     NAME          DRIVER    SCOPE
4db4885e345c   bridge        bridge    local
1a425e4362b4   host          host      local
9246f826508b   none          null      local
c3379a4dea9e   www_default   bridge    local
Let's inspect the network the containers use to communicate:
$ docker network inspect www_default
[
    {
        "Name": "www_default",
        "Id": "c3379a4dea9e54238418f256c408598eeaa47ead9333213a51ac32dff917e565",
        "Created": "2020-03-30T15:36:57.477223789+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7bed4b88ceacc4ea8b0beba043648a0c647dbe3e663278bc8364e0e213062fc5": {
                "Name": "www_mysql_1",
                "EndpointID": "fdda9874a47774392148ab1b2bf3d1a6d34d6f8001167c918d2cefc608de3534",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "a9ab191d3f89f1f333966f18202e00df5826a0d2ed7ffe1752348b7b54dfa4a7": {
                "Name": "www_pma_1",
                "EndpointID": "1a82eea63694b0d63481eda742ff584804b9715b0c4a9992a1ccbae584590dba",
                "MacAddress": "02:42:ac:16:00:04",
                "IPv4Address": "172.22.0.4/16",
                "IPv6Address": ""
            },
            "f736a3a70e050fbdef9bd0a85d30e786acf2cf0e36f7378b084135c91e5243f3": {
                "Name": "www_apache_1",
                "EndpointID": "258c5e73a366db9afc69d10394746cea286b1838eebb1749a17c0eb024f9fda8",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "www",
            "com.docker.compose.version": "1.25.4"
        }
    }
]
Here we can see the containers' IP addresses: 172.22.0.2/16, 172.22.0.3/16, and 172.22.0.4/16. But within the network, a container is reachable not only by IP address but also by service name. Let's step inside the www_apache_1 container (the apache service from the YAML file):
$ docker-compose exec apache /bin/bash
# apt update
# apt install -y iputils-ping
# ping -c 3 mysql   # service name from the YAML file
PING mysql (172.22.0.3) 56(84) bytes of data.
64 bytes from www_mysql_1.www_default (172.22.0.3): icmp_seq=1 ttl=64 time=0.200 ms
64 bytes from www_mysql_1.www_default (172.22.0.3): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from www_mysql_1.www_default (172.22.0.3): icmp_seq=3 ttl=64 time=0.215 ms
--- mysql ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 10ms
rtt min/avg/max/mdev = 0.200/0.205/0.215/0.017 ms
# exit
Viewing logs in Docker
First, let's see which containers are running:
$ docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED              STATUS              PORTS                               NAMES
756794a08941   phpmyadmin/phpmyadmin   "/docker-entrypoint.…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp                www_pma_1
d4e1f0ab03c4   www_mysql               "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:3306->3306/tcp, 33060/tcp   www_mysql_1
cff6fc8e9f03   www_apache              "docker-php-entrypoi…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp                  www_apache_1
Let's look at the Apache logs:
$ docker logs www_apache_1
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.23.0.3. Set the 'ServerName' directive globally to...
[Tue Mar 31 12:01:51.924142 2020] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.4.4 configured -- resuming normal operations
[Tue Mar 31 12:01:51.924294 2020] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
Now the MySQL logs:
$ docker logs www_mysql_1
2020-03-31 12:01:52+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.19-1debian10 started.
2020-03-31 12:01:52+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-03-31 12:01:52+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.19-1debian10 started.
This command only works for containers started with the json-file or journald logging driver.
The logs live on the host, so it's easy to find out where they are stored:
$ docker inspect www_apache_1 | grep LogPath
    "LogPath": "/var/lib/docker/containers/cff...253/cff...253-json.log",
$ sudo cat /var/lib/docker/containers/cff...253/cff...253-json.log
{"log":"AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.23.0.3. Set the 'ServerName' directive globally to suppress this message\n","stream":"stderr","time":"2020-03-31T12:01:51.900770702Z"}
{"log":"[Tue Mar 31 12:01:51.924142 2020] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.4.4 configured -- resuming normal operations\n","stream":"stderr","time":"2020-03-31T12:01:51.927844571Z"}
{"log":"[Tue Mar 31 12:01:51.924294 2020] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'\n","stream":"stderr","time":"2020-03-31T12:01:51.927901498Z"}
{"log":"192.168.110.18 - - [31/Mar/2020:12:02:54 +0000] \"GET / HTTP/1.1\" 200 390 \"-\" \"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0\"\n","stream":"stdout","time":"2020-03-31T12:02:54.454904936Z"}
Logs can also be followed in real time:
$ docker logs www_pma_1 --follow
Docker includes several logging mechanisms called logging drivers. The default driver is json-file, but journald, syslog, and others can be used instead. The driver can be changed globally via the /etc/docker/daemon.json file:
$ sudo nano /etc/docker/daemon.json
{
    "log-driver": "journald"
}
$ sudo systemctl restart docker.service
Or when starting a container:
$ docker run -d -p 80:80 --name apache-server --log-driver=journald httpd:latest
Or in the docker-compose.yml file:
apache:
  image: httpd:latest
  logging:
    driver: journald
    options:
      tag: http-daemon
With the journald driver, the logs can be viewed like this:
$ journalctl -u docker.service CONTAINER_NAME=apache-server
-- Logs begin at Sat 2020-02-29 16:06:12 MSK, end at Tue 2020-03-31 16:09:53 MSK. --
мар 31 16:26:37 test-server 2f5f00f1e480[14225]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'Serv
мар 31 16:26:37 test-server 2f5f00f1e480[14225]: [Tue Mar 31 13:26:37.904324 2020] [mpm_event:notice] [pid 1:tid 139724697171072] AH00489: Apache/2.4.43 (Unix) configu
мар 31 16:26:37 test-server 2f5f00f1e480[14225]: [Tue Mar 31 13:26:37.904729 2020] [core:notice] [pid 1:tid 139724697171072] AH00094: Command line: 'httpd -D FOREGROUN

$ journalctl -u docker CONTAINER_TAG=http-daemon
-- Logs begin at Sat 2020-02-29 16:06:12 MSK, end at Tue 2020-03-31 16:17:05 MSK. --
мар 31 16:16:55 test-server http-daemon[14225]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'Serve
мар 31 16:16:55 test-server http-daemon[14225]: [Tue Mar 31 13:16:55.540692 2020] [mpm_event:notice] [pid 1:tid 140145144452224] AH00489: Apache/2.4.43 (Unix) configur
мар 31 16:16:55 test-server http-daemon[14225]: [Tue Mar 31 13:16:55.541064 2020] [core:notice] [pid 1:tid 140145144452224] AH00094: Command line: 'httpd -D FOREGROUND
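journald can also emit each record as structured JSON (`journalctl -o json`), which combines well with jq for extracting a single field. A sketch on a sample record (`MESSAGE` and `CONTAINER_NAME` are fields journald actually stores for Docker containers; the full record on a real host contains many more):

```shell
# On a real host:
#   journalctl CONTAINER_NAME=apache-server -o json | jq -r '.MESSAGE'
# Same extraction on a sample journald record:
echo '{"CONTAINER_NAME":"apache-server","MESSAGE":"AH00094: Command line: httpd -D FOREGROUND"}' \
  | jq -r '.MESSAGE'
```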
To filter logs by tag, set the tag in docker-compose.yml or when starting the container:
$ docker run -d -p 80:80 --name apache-server --log-opt tag=http-daemon httpd:latest