InfraRunBook

    Docker Logs Not Showing Output

    Docker
    Published: Apr 20, 2026
    Updated: Apr 20, 2026

    Docker logs not showing output is almost always caused by one of seven fixable issues. This guide walks through each one with real commands and actual output so you can diagnose and resolve it fast.

    Symptoms

    You run docker logs <container> and get nothing. Or the command returns immediately with an empty line. Maybe you're using the -f flag and the terminal just hangs there silently while you know the application is active — you can see CPU usage, traffic is going through, but not a single log line appears. Sometimes you get a few lines from hours ago and then complete silence. Other times you get an outright error from the Docker daemon itself.

    Before you go hunting for a bug in your application, you need to eliminate the logging pipeline as the culprit. In my experience, at least half of the "my app isn't logging" incidents I've responded to turned out to be infrastructure problems — wrong container ID, wrong logging driver, application writing to a file instead of stdout. This guide covers every common root cause with real commands and real output so you can identify the problem in minutes.


    Root Cause 1: You're Looking at the Wrong Container ID

    This sounds embarrassingly simple, but it catches engineers constantly — especially in environments with frequent restarts, rolling deployments, or multiple replicas. You grab a container ID from a terminal session you opened an hour ago, run docker logs against it, and get nothing because that container was replaced two restarts ago.

    Why It Happens

    Docker assigns a new container ID every single time a container starts. If your app crashes and respawns via a --restart policy, the old container ID still exists in Docker's history — it just isn't the running instance anymore. The same happens after a docker-compose up --force-recreate or any redeployment that tears down and recreates the container. The old ID is queryable, but it represents a dead container with whatever output it managed to emit before it stopped.

    How to Identify It

    Start by seeing what's actually running right now:

    docker ps --filter "name=myapp"

    Then check the full history including stopped containers:

    docker ps -a --filter "name=myapp" --format "table {{.ID}}\t{{.Status}}\t{{.CreatedAt}}"
    CONTAINER ID   STATUS                      CREATED AT
    a3f91c2d4e10   Up 4 minutes                2026-04-20 14:22:01 +0000 UTC
    7b80d1e5f923   Exited (1) 14 minutes ago   2026-04-20 14:12:44 +0000 UTC
    c1d4a9e22b77   Exited (1) 29 minutes ago   2026-04-20 13:57:12 +0000 UTC

    If you were running docker logs 7b80d1e5f923, you were querying a dead container — not the one currently serving traffic. The logs were there, just from the wrong instance.

    How to Fix It

    Stop hardcoding or copying container IDs. Resolve them dynamically at query time:

    docker logs $(docker ps -q --filter "name=myapp" --latest)

    Or just use the container name directly if it's deterministic:

    docker logs myapp

    In automation scripts, never capture a container ID at the top of the script and reuse it later. By the time the log query runs, that ID may already be stale. Re-resolve it at the point of use.
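    That "resolve at the point of use" rule is easy to wrap in a helper. A minimal sketch — latest_id is a hypothetical function name, not a Docker command, and it assumes your name filter matches only one service:

```shell
# Resolve the newest container matching a name filter at call time,
# so the ID can never go stale between resolution and use.
# latest_id is a hypothetical helper, not a built-in Docker command.
latest_id() {
  docker ps -q --latest --filter "name=$1"
}

# Usage in a script: resolve immediately before each query, never earlier.
# docker logs "$(latest_id myapp)"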


    Root Cause 2: The Container Exited Too Fast

    The container launched, hit a fatal error, and exited in under a second. By the time you run docker ps to check on it, it's gone. When you run docker logs against the container ID you just started, you get nothing — or at most one or two lines — because the process barely had time to emit anything before dying.

    Why It Happens

    Startup failures are the usual trigger: missing config file, unresolvable environment variable, failed TCP connection to a dependency, malformed command-line argument. The process starts, immediately hits an unrecoverable error, prints a message to stderr (maybe), and exits. If the container is configured with --restart=always, it'll loop: start, fail, exit, start, fail, exit. That rapid cycling makes it look like the container is running when in fact it's in a crash loop.

    How to Identify It

    Look for exited containers:

    docker ps -a --filter "status=exited"
    CONTAINER ID   IMAGE         COMMAND                  CREATED         STATUS
    f4c7e921a033   myapp:1.4     "/usr/local/bin/start"   9 seconds ago   Exited (2) 8 seconds ago

    That non-zero exit code is a red flag. Pull the logs from the exited container directly:

    docker logs f4c7e921a033
    Fatal: config file /etc/myapp/config.yaml not found
    exit status 2

    You can also inspect the exit code and any Docker-level error message:

    docker inspect f4c7e921a033 --format '{{.State.ExitCode}} | {{.State.Error}}'

    For crash-looping containers, catch one mid-cycle with:

    watch -n 1 "docker ps -a --filter 'name=myapp'"

    You'll see the container ID changing on every iteration as it dies and respawns.

    How to Fix It

    The log is usually right there in those one or two lines the container managed to emit. Fix the underlying startup failure — mount the missing config, set the required environment variable, make sure dependent services are reachable before starting this one. The diagnostic step is consistent: always query the exited container ID, not a running one.
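    For the "dependent services reachable" case specifically, a small entrypoint guard can prevent the crash loop outright. A sketch under assumptions: nc (netcat) exists in the image, and wait_for_dep plus the example host/port are illustrative names, not from any real setup:

```shell
# Block until a TCP dependency answers, retrying once per second,
# then let the caller exec the real application.
# Assumes `nc` (netcat) is present in the image.
wait_for_dep() {
  host=$1; port=$2; tries=${3:-30}
  while [ "$tries" -gt 0 ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0            # dependency is up
    fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "dependency $host:$port unreachable" >&2
  return 1
}

# In an entrypoint script (illustrative names):
# wait_for_dep db 5432 && exec /usr/local/bin/start
```

    The failure message goes to stderr, so even if the dependency never comes up, the container dies with a log line explaining why — exactly the breadcrumb this root cause otherwise denies you.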


    Root Cause 3: The Application Is Writing to a File, Not stdout

    This is the most common cause I run into. The application is alive, it's doing work — but docker logs shows nothing because the app is writing its output to a log file inside the container filesystem, not to standard output. Docker's logging driver only captures stdout and stderr. Everything written to a file is completely invisible to it.

    Why It Happens

    Most applications were designed to run on bare metal or VMs, where writing to /var/log/appname/app.log was standard practice. When you containerize them without adjusting the logging configuration, they keep doing exactly what they were built to do. Nginx writes to /var/log/nginx/access.log and /var/log/nginx/error.log by default. Java apps using Log4j or Logback typically write to a rolling file appender. PHP-FPM logs to a file. None of those end up in docker logs.

    How to Identify It

    First confirm the container is actually doing work:

    docker stats myapp --no-stream
    CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT     MEM %   NET I/O
    a3f91c2d4e10   myapp   11.8%   241MiB / 2GiB         11.7%   1.1GB / 940MB

    CPU and memory consumption, network I/O — something is definitely running. Now look inside the container for log files that are actively being written:

    docker exec myapp find /var/log -name "*.log" -newer /proc/1/exe 2>/dev/null
    /var/log/myapp/application.log
    /var/log/myapp/error.log

    Tail one of those to confirm live output:

    docker exec myapp tail -f /var/log/myapp/application.log

    If you see log lines streaming there, you've found the problem. The app is healthy and logging — just not to the right place for Docker to pick it up.

    How to Fix It

    The cleanest solution is redirecting the application's log output to stdout and stderr. For Nginx, add this to your Dockerfile:

    RUN ln -sf /dev/stdout /var/log/nginx/access.log \
        && ln -sf /dev/stderr /var/log/nginx/error.log

    For Java apps using Logback, swap out the RollingFileAppender for a ConsoleAppender targeting System.out. For Python applications, use a StreamHandler pointed at sys.stdout rather than a FileHandler. The principle is the same regardless of language or framework: reconfigure the logging destination, not the logging behavior.

    If you absolutely cannot change the application configuration — legacy app, third-party binary, no rebuild possible — you can tail the file and pipe it to stdout as a wrapper:

    CMD ["sh", "-c", "myapp & tail -f /var/log/myapp/application.log"]

    That works, but it introduces two processes in one container, which complicates signal handling and graceful shutdown. Treat it as a temporary workaround, not a permanent architecture decision.


    Root Cause 4: Logging Driver Is Misconfigured

    Docker supports a range of logging drivers: json-file, syslog, journald, gelf, fluentd, awslogs, splunk, and others. The docker logs command only works with drivers that support reading back — json-file and journald (plus the local driver on recent Docker Engine releases). If your daemon or container is configured to use any other driver, docker logs simply cannot read from it — and it tells you so with an error that's easy to miss if you're not looking for it.

    Why It Happens

    Production environments frequently route container logs to a centralized system. The team configures Fluentd, Graylog, or CloudWatch at the daemon level in /etc/docker/daemon.json, or at the container level in the compose file or run command. This is the right architecture for production — but it breaks the assumption that docker logs will work on that host. I've watched engineers dig through application source code for an hour because they missed the two-line error that docker logs returned.

    How to Identify It

    Check the daemon-level logging driver first:

    docker info --format '{{.LoggingDriver}}'
    fluentd

    Then check the specific container's logging configuration:

    docker inspect myapp --format '{{.HostConfig.LogConfig}}'
    {fluentd map[fluentd-address:172.16.10.15:24224 fluentd-async:true tag:myapp]}

    When you run docker logs against a container using a non-readable driver, you'll see:

    Error response from daemon: configured logging driver does not support reading

    That error is the answer. The logs exist — they're just somewhere else.
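    If your tooling queries logs in scripts, you can check the driver up front so the failure is explicit instead of a buried two-line error. A sketch — readable_logs is a hypothetical helper name, and the inclusion of the local driver assumes a recent Docker Engine where it is readable:

```shell
# Return 0 if `docker logs` can read this container's logging driver.
# readable_logs is a hypothetical helper name; `local` is included
# on the assumption of a recent Docker Engine release.
readable_logs() {
  driver=$(docker inspect "$1" --format '{{.HostConfig.LogConfig.Type}}')
  case "$driver" in
    json-file|journald|local) return 0 ;;
    *) echo "driver '$driver' is not readable by docker logs" >&2; return 1 ;;
  esac
}
```

    A script can then branch: fall through to docker logs when the check passes, or print a pointer to the centralized system when it doesn't.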

    How to Fix It

    If you need docker logs to work for local debugging, override the logging driver when starting the container:

    docker run --log-driver json-file --log-opt max-size=100m myapp:1.4

    Or add a per-service logging override in your compose file for the dev environment specifically. In production, don't fight the configured driver — go query the actual log destination. If logs are going to Fluentd forwarding to Elasticsearch on sw-infrarunbook-01, use Kibana or the Elasticsearch query API. The logs are there; docker logs just isn't the right tool for that environment.


    Root Cause 5: Log Rotation Has Consumed the Logs

    The json-file driver writes container logs to disk on the host. When log rotation is configured with a small size limit and a low file count, older log data gets dropped as files rotate. If your service is high-volume and your rotation settings are aggressive, you might have only a few minutes of log history available — and docker logs --since 1h returns almost nothing because that data is already gone.

    Why It Happens

    Docker's json-file driver doesn't buffer logs in memory. It writes directly to files at /var/lib/docker/containers/<id>/<id>-json.log and reads from those files when you query. If you configure max-size=5m and max-file=2, your maximum log retention is 10MB per container. A service that emits even modest log volume — a few hundred lines per second — can burn through that in a matter of minutes. Once a rotated file is dropped, that history is unrecoverable.
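    The arithmetic is worth running explicitly for your own services. A back-of-envelope sketch with assumed figures — 500 lines per second at roughly 200 bytes per JSON-encoded line; substitute your own measurements:

```shell
# Rough retention estimate for the json-file driver.
max_size_mb=5
max_file=2
bytes_per_line=200        # assumption: average JSON-encoded log line
lines_per_sec=500         # assumption: a busy but plausible service

total_bytes=$((max_size_mb * 1024 * 1024 * max_file))
seconds=$((total_bytes / (bytes_per_line * lines_per_sec)))
echo "retention: $((total_bytes / 1024 / 1024)) MiB ~ ${seconds}s of history"
```

    At those assumed rates the full 10 MiB of retention covers under two minutes of history, which is exactly the "docker logs --since 1h returns almost nothing" symptom.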

    How to Identify It

    Find the log file for your container:

    docker inspect myapp --format '{{.LogPath}}'
    /var/lib/docker/containers/a3f91c2d4e10.../a3f91c2d4e10...-json.log

    Check the size and how many rotated files exist:

    ls -lh /var/lib/docker/containers/a3f91c2d4e10.../
    -rw-r----- 1 root root 5.0M Apr 20 14:30 a3f91c2d4e10...-json.log
    -rw-r----- 1 root root 5.0M Apr 20 14:28 a3f91c2d4e10...-json.log.1

    The active file is already at max size, there's only one backup, and nothing older than two minutes is available. Confirm the rotation settings:

    docker inspect myapp --format '{{.HostConfig.LogConfig.Config}}'
    map[max-file:2 max-size:5m]

    You can also spot this by checking what time range docker logs actually covers:

    docker logs myapp 2>&1 | head -1
    docker logs myapp 2>&1 | tail -1

    If the oldest line is only a few minutes old for a container that's been running for hours, rotation is eating your history.

    How to Fix It

    Update your Docker daemon defaults on sw-infrarunbook-01 by editing /etc/docker/daemon.json:

    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m",
        "max-file": "10"
      }
    }

    Restart the daemon to apply — note that existing containers keep their old settings until they're recreated:

    systemctl restart docker

    For containers that need higher retention, override per-container in your compose file. And if you genuinely need more than a gigabyte of log history queryable in real time, that's a signal to move to a centralized log aggregation stack rather than relying on local JSON files.


    Root Cause 6: Output Buffering Is Swallowing Logs

    The application is technically writing to stdout — but the output is sitting in a userspace buffer that hasn't been flushed yet. You're watching docker logs -f and nothing appears, then suddenly a burst of lines arrives all at once. Or the container exits and the logs only appear afterward, because the buffer flushes on process exit.

    Why It Happens

    When a process writes to stdout connected to a terminal, most runtimes use line buffering — every newline triggers a flush. When stdout is connected to a pipe (which is what Docker does internally), the runtime typically switches to full block buffering, accumulating output until a 4KB or 8KB buffer fills before flushing. Python is the most common offender. A Python script making print() calls will silently accumulate output, with nothing visible to docker logs until the buffer threshold is hit.

    How to Identify It

    The signature pattern is logs that appear in large bursts separated by silent intervals, or logs that only appear after docker stop. Check whether the Python entrypoint uses unbuffered mode:

    docker inspect myapp --format '{{.Config.Cmd}}'
    [python app.py]

    No -u flag. Check the environment variables for the unbuffering setting:

    docker inspect myapp --format '{{.Config.Env}}' | tr ' ' '\n' | grep PYTHON
    PYTHONPATH=/app

    PYTHONUNBUFFERED is missing. That confirms the buffering issue.
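    Both checks can be collapsed into one scripted test. A sketch — python_unbuffered is a hypothetical helper name that looks for either signal, PYTHONUNBUFFERED in the environment or a -u flag in the command:

```shell
# Return 0 if the container shows either Python unbuffering signal.
# python_unbuffered is a hypothetical helper name.
python_unbuffered() {
  docker inspect "$1" --format '{{.Config.Env}} {{.Config.Cmd}}' \
    | grep -Eq 'PYTHONUNBUFFERED=1|python3? -u '
}
```

    If the function returns non-zero for a Python container with bursty logs, buffering is almost certainly your problem.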

    How to Fix It

    Add the environment variable to your Dockerfile:

    ENV PYTHONUNBUFFERED=1

    Or pass it at runtime:

    docker run -e PYTHONUNBUFFERED=1 myapp:1.4

    Alternatively, add the -u flag to the Python invocation in your CMD or entrypoint. For Ruby, add STDOUT.sync = true at the top of your entrypoint script. Writing to stderr instead of stdout sidesteps this entirely, since stderr is unbuffered (or at most line-buffered) in virtually every runtime — though mixing all meaningful log output into stderr has its own downstream complications.


    Root Cause 7: Log Level Is Filtering Out Everything You Need

    Everything in the logging pipeline is correct — stdout, right driver, no rotation issues — but the application is configured to emit only ERROR-level messages. The INFO and DEBUG output you're looking for is filtered at the application level before it ever reaches stdout. docker logs works fine; it's just faithfully reporting what the application chose to emit.

    Why It Happens

    Production deployments typically suppress verbose logging. An application that runs with LOG_LEVEL=error in production will appear completely silent during normal operation — no requests logged, no startup info, nothing. When you're troubleshooting, you expect to see traffic or operational output, and you see silence instead. The gap between expectation and reality looks like a Docker problem but is actually an application configuration issue.

    How to Identify It

    Check the environment variables the container was started with:

    docker inspect myapp --format '{{.Config.Env}}' | tr ' ' '\n' | grep -i log
    LOG_LEVEL=error

    Or check raw log count for a long-running container:

    docker logs myapp 2>&1 | wc -l
    4

    Four lines from a container running for six hours. Either nothing happened — unlikely — or most log output is being suppressed.

    How to Fix It

    Restart the container with a more verbose log level for your troubleshooting session:

    docker run -e LOG_LEVEL=debug myapp:1.4

    Be deliberate about this in production. DEBUG logging from a high-traffic service can produce enormous log volume, expose sensitive request data in plaintext, and fill disks quickly. Enable it in a targeted, time-limited way, and revert it when the troubleshooting session ends.


    Prevention

    Most of these problems are preventable at container design time. Establish a logging baseline in every Dockerfile: configure the application to write to stdout and stderr, set PYTHONUNBUFFERED=1 for Python images, and add explicit symlinks for any application that defaults to file-based logging. Treat logging configuration the same way you treat port exposure and health checks — it's not optional.

    Set sensible log rotation defaults in /etc/docker/daemon.json on every Docker host from the moment it's provisioned. A 100MB max-size with 10 rotated files per container gives you reasonable retention without disk pressure for most workloads. If you deploy a centralized logging driver like Fluentd or CloudWatch, document it explicitly for the team — engineers shouldn't discover that docker logs doesn't work by hitting the error in production.

    Build log verification into your deployment validation. After a container comes up, a smoke check that runs docker logs <id> 2>&1 | wc -l and alerts on zero output will catch the stdout misconfiguration and buffering cases before they become incidents. If your service processes any traffic at startup — health checks, initialization routines, anything — it should emit at least a few log lines. Zero lines is a signal worth acting on.
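    That smoke check fits in a few lines of shell. A sketch — check_container_logs is a hypothetical name; wire it into whatever post-deploy hook you already run:

```shell
# Alert when a freshly started container has emitted zero log lines.
# check_container_logs is a hypothetical helper name.
check_container_logs() {
  count=$(docker logs "$1" 2>&1 | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "ALERT: container $1 produced no log output" >&2
    return 1
  fi
  echo "ok: $1 emitted $count log lines"
}
```

    The non-zero return code makes it trivial to fail a deployment pipeline step on silent containers.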

    Finally, know which tool is right for which environment. In local development and staging, docker logs is fast and effective. In production with a centralized logging stack, it's the wrong tool — and expecting it to work there creates confusion. Document your log query path per environment so the first thing engineers do during an incident isn't spending ten minutes figuring out where the logs actually are.

    Frequently Asked Questions

    Why does docker logs return nothing even though my container is running?

    The most common reasons are that the application is writing logs to a file inside the container rather than to stdout, or that a non-default logging driver like fluentd or awslogs is configured and docker logs cannot read from it. Check docker inspect <container> and look at the LogConfig section to identify the driver, then use docker exec to look for log files inside the container.

    How do I see logs from a container that already stopped?

    Use docker ps -a to list all containers including exited ones, find the container ID of the stopped instance, and run docker logs <container-id> against it. As long as the container hasn't been removed with docker rm, the logs stored by the json-file driver are still accessible.

    Why do docker logs only show output in bursts with long gaps of silence?

    This is almost always output buffering. When a process writes to a pipe rather than a terminal, runtimes like Python switch to block buffering and accumulate output until the buffer fills. Set PYTHONUNBUFFERED=1 for Python containers, or use the -u flag in your CMD. The logs are being generated — they're just not being flushed to stdout frequently enough for Docker to pick them up in real time.

    Does docker logs work with all logging drivers?

    No. The docker logs command only supports drivers that can be read back locally — json-file and journald (and the local driver on recent Docker Engine releases). If your container or daemon is configured to use fluentd, syslog, gelf, awslogs, splunk, or any other driver, docker logs will return an error: 'configured logging driver does not support reading'. In those cases, you need to query the destination system directly — Elasticsearch, CloudWatch, Graylog, etc.

    How do I prevent Docker from rotating away logs I need?

    Configure larger retention limits in /etc/docker/daemon.json using the max-size and max-file options under log-opts. Setting max-size to 100m and max-file to 10 gives you up to 1GB of log history per container. For services where you need longer retention, route logs to a centralized log aggregation system rather than relying on Docker's local file storage.

    How can I tell which log file an application is writing to inside a container?

    Use docker exec <container> find /var/log -name '*.log' -newer /proc/1/exe to find log files that have been written to since the process started. You can also use docker exec <container> ls -lh /var/log/ to browse the log directory, or docker exec <container> lsof | grep '.log' to see which files the process currently has open.
