InfraRunBook

    Docker Permission Denied Errors

    Docker
    Published: Apr 12, 2026
    Updated: Apr 12, 2026

    Permission denied errors in Docker have multiple distinct root causes that all look identical on the surface. This guide walks through how to diagnose and fix each one, from wrong container users to AppArmor policies and rootless Docker quirks.


    Symptoms

    Permission denied errors in Docker are among the most common — and most frustrating — issues you'll encounter day to day. The symptoms vary depending on what's actually going wrong, but the signature message is unmistakable: "permission denied", appearing in your container logs, your entrypoint script, or returned directly by the Docker daemon itself.

    You might see any of these:

    open /var/data/config.yml: permission denied
    mkdir /app/tmp: permission denied
    exec /entrypoint.sh: permission denied
    dial unix /var/run/docker.sock: connect: permission denied

    Sometimes the error surfaces at container startup and the container exits immediately with code 1 or 126. Other times the container runs but certain operations fail mid-execution — a write to a mounted volume, a bind to a privileged port, or an attempt to access a kernel feature. In my experience, the hardest cases are when the error only appears under a specific code path that isn't exercised during testing but blows up in production at 2am.

    The important thing to understand is that "permission denied" in Docker is almost never one problem — it's a family of problems that look identical on the surface but have completely different root causes and fixes. This article walks through each one systematically so you can diagnose what's actually happening instead of guessing.


    Root Cause 1: Container Running as the Wrong User

    Why It Happens

    Docker containers run as root by default unless the image or runtime overrides it. Many official images — particularly well-hardened ones — are built with a non-root USER directive (for example, www-data, node, nginx, or postgres), but that user doesn't own the files baked into the image. Or you've added a bind-mounted volume whose contents are owned by a different UID. Either way, the container user ends up with no permission to read, write, or execute the file in question.

    This is especially common when you switch from running an image as root during development to running it as a non-root user for security hardening in staging or production. What worked fine before suddenly doesn't, because your application was silently relying on root access to write temp files, read restricted configs, or create lock files.

    How to Identify It

    First, check what user the container is actually running as:

    docker run --rm your-image id
    uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)

    Then inspect the ownership and permissions of the problematic path inside the container:

    docker run --rm your-image ls -la /app/
    total 32
    drwxr-xr-x 1 root root 4096 Apr 10 12:00 .
    drwxr-xr-x 1 root root 4096 Apr 10 12:00 ..
    -rw-r--r-- 1 root root  512 Apr 10 12:00 config.yml
    drwx------ 2 root root 4096 Apr 10 12:00 cache

    There's your problem. The cache directory is owned by root with mode 700. Your process running as uid 1000 cannot touch it.
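    A check like this can be folded into an entrypoint or a CI smoke test so the mismatch surfaces before production. A sketch, assuming POSIX sh and GNU coreutils; the paths in the usage comment are placeholders:

```shell
#!/bin/sh
# check_writable: verify the current user can write to each given path,
# and print a diagnostic naming the user and the path's owner if not.
check_writable() {
  for dir in "$@"; do
    if [ ! -w "$dir" ]; then
      echo "ERROR: $(id -un) (uid $(id -u)) cannot write to $dir" >&2
      stat -c '  owner: %U:%G mode %a' "$dir" >&2
      return 1
    fi
  done
  return 0
}

# In an entrypoint, before exec'ing the app (paths are examples):
# check_writable /app/cache /app/tmp || exit 1
```

    Failing fast here turns a vague mid-request "permission denied" into a startup error that names the user and the offending directory.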

    How to Fix It

    The cleanest fix is in the Dockerfile itself. Before switching to the non-root user, set up the directory ownership explicitly:

    FROM node:20-alpine
    
    WORKDIR /app
    COPY --chown=node:node . .
    
    RUN mkdir -p /app/cache && chown -R node:node /app/cache
    
    USER node
    
    CMD ["node", "server.js"]

    If you can't rebuild the image — maybe it's a third-party image — you can write a wrapper entrypoint that runs as root to fix permissions, then drops privileges using su-exec or gosu before executing the main process:

    #!/bin/sh
    chown -R appuser:appuser /app/cache
    exec su-exec appuser "$@"

    Don't use --user root as a permanent fix in production. It papers over the real issue and reintroduces exactly the attack surface you were trying to reduce.


    Root Cause 2: Volume Owned by Root

    Why It Happens

    Bind mounts and named volumes are a constant source of permission headaches. When Docker creates a named volume, the data directory on the host is initialized owned by root. When you bind-mount a host directory, it arrives in the container with whatever ownership it has on the host — which is frequently root, especially if the directory was created by a CI pipeline, a root process, or an accidental sudo mkdir.

    The container process, running as a non-root user, tries to write to that mount point and immediately hits a wall. I've seen this trip up teams repeatedly when they move from a dev environment (where everything runs as root anyway) to a properly hardened production setup and suddenly nothing works.

    How to Identify It

    Exec into the running container and inspect the mount:

    docker exec -it my-container sh -c "ls -la /data"
    total 8
    drwxr-xr-x 2 root root 4096 Apr 11 08:30 .
    drwxr-xr-x 1 root root 4096 Apr 11 08:30 ..

    The directory is root-owned. Your app user can traverse it (the execute bit is set for others) but cannot create files inside it. Confirm on the host:

    ls -la /srv/app-data/
    drwxr-xr-x 2 root root 4096 Apr 11 08:30 /srv/app-data/

    Confirmed. The host directory was created as root and the container has no write access.
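    This host-side check is easy to script so ownership drift is caught before a deploy rather than after. A sketch, assuming GNU stat; the directory and UID in the usage comment are examples:

```shell
#!/bin/sh
# check_owner: verify a host directory is owned by the UID the container
# process will run as. Usage: check_owner <dir> <expected-uid>
check_owner() {
  dir=$1; want=$2
  have=$(stat -c '%u' "$dir") || return 1
  if [ "$have" != "$want" ]; then
    echo "MISMATCH: $dir is owned by uid $have, container runs as uid $want" >&2
    return 1
  fi
}

# Example: the app container runs as uid 1000
# check_owner /srv/app-data 1000 || sudo chown -R 1000:1000 /srv/app-data
```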

    How to Fix It

    If you control the host directory, fix ownership on the host to match the UID the container runs as:

    sudo chown -R 1000:1000 /srv/app-data/

    Or by username if the UID matches a local user:

    sudo chown -R infrarunbook-admin:infrarunbook-admin /srv/app-data/

    For named Docker volumes, fix permissions by running a one-off root container before your workload starts:

    docker run --rm -v myapp-data:/data alpine chown -R 1000:1000 /data

    In Docker Compose you can wire this up as an init service with depends_on, or use an entrypoint that handles the chown step as root before dropping to the application user. The pattern is clean: a root init step fixes ownership, then the real workload runs unprivileged with the corrected volume.
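    As a sketch of that init-service wiring (service, image, and volume names are placeholders; the completion condition requires a Compose version that supports service_completed_successfully):

```yaml
services:
  volume-init:
    image: alpine
    user: "0"                     # root, only to fix volume ownership
    volumes:
      - app-data:/data
    command: chown -R 1000:1000 /data

  app:
    image: your-image             # the real workload, unprivileged
    user: "1000:1000"
    volumes:
      - app-data:/data
    depends_on:
      volume-init:
        condition: service_completed_successfully

volumes:
  app-data:
```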


    Root Cause 3: AppArmor Policy Blocking

    Why It Happens

    AppArmor is a Linux security module that enforces mandatory access control policies on a per-process basis. On Ubuntu and Debian hosts — which covers the majority of Docker deployments — Docker applies its own default AppArmor profile called docker-default to every container unless you override it. This profile restricts certain filesystem paths, system calls, and operations that a container process might attempt.

    The tricky part is that AppArmor denials look exactly like ordinary filesystem permission errors. The kernel returns EACCES (permission denied) regardless of whether the denial came from standard Unix DAC permissions or from AppArmor's MAC layer. You won't know AppArmor is involved unless you think to look. I've spent embarrassing amounts of time staring at file ownership and modes before remembering to check the audit log.

    How to Identify It

    First, confirm AppArmor is active and the Docker profile is loaded:

    sudo aa-status
    apparmor module is loaded.
    35 profiles are loaded.
    33 profiles are in enforce mode.
       docker-default
       /usr/sbin/tcpdump
       ...

    Now check the kernel audit log for AppArmor denials:

    sudo dmesg | grep -i apparmor | tail -20
    [ 4823.112456] audit: type=1400 audit(1712834562.882:47): apparmor="DENIED" operation="open" profile="docker-default" name="/proc/sysrq-trigger" pid=3847 comm="app" requested_mask="w" denied_mask="w" fsuid=1000 ouid=0

    You can also check /var/log/syslog or run sudo journalctl -k | grep apparmor. The key field is apparmor="DENIED" — that's your smoking gun. If you see it, AppArmor is blocking the operation, and the standard Unix permissions are irrelevant to the fix.
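    On a busy host the audit log is noisy, so it helps to boil the denials down to a frequency table. A sketch, assuming GNU sed and the audit line format shown above:

```shell
#!/bin/sh
# summarize_denials: tally AppArmor denials by profile, operation, path,
# and requested mask. Feed it `sudo dmesg` or `sudo journalctl -k` output.
summarize_denials() {
  grep 'apparmor="DENIED"' |
    sed -n 's/.*operation="\([^"]*\)".*profile="\([^"]*\)".*name="\([^"]*\)".*requested_mask="\([^"]*\)".*/\2 \1 \3 (mask: \4)/p' |
    sort | uniq -c | sort -rn
}

# Example: sudo dmesg | summarize_denials
```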

    How to Fix It

    The fix depends on what's being denied and why your container legitimately needs it. Don't jump straight to disabling AppArmor entirely — you want the minimum necessary access.

    For debugging, put the container in an unconfined profile temporarily to confirm AppArmor is the cause:

    docker run --security-opt apparmor=unconfined your-image

    If that makes the error go away, AppArmor was definitely the culprit. Now build a proper fix. Set the default profile to complain mode so it logs what it would block without actually blocking:

    sudo aa-complain /etc/apparmor.d/docker-default

    Run your workload, collect the audit log, then generate a custom profile using aa-logprof:

    sudo aa-logprof

    Once you have a custom profile, load it and reference it at runtime:

    sudo apparmor_parser -r -W /etc/apparmor.d/my-app-profile
    docker run --security-opt apparmor=my-app-profile your-image

    In Docker Compose, apply it under security_opt:

    services:
      app:
        image: your-image
        security_opt:
          - apparmor=my-app-profile

    Never use unconfined permanently in production. It's a debugging tool, not a solution.


    Root Cause 4: Missing Linux Capabilities

    Why It Happens

    Docker drops most Linux capabilities by default. A full root process on the host has access to capabilities like CAP_NET_ADMIN, CAP_SYS_PTRACE, CAP_SYS_ADMIN, and about 37 others. Docker's default container runtime grants only a subset of roughly 14 capabilities — even if you're running as root inside the container.

    When your container process tries to do something requiring a capability it doesn't have, the kernel returns EPERM or EACCES. Common culprits: binding to ports below 1024 requires CAP_NET_BIND_SERVICE; changing file ownership across user boundaries requires CAP_CHOWN or CAP_FOWNER; manipulating network interfaces or routing tables requires CAP_NET_ADMIN; and anything touching raw sockets requires CAP_NET_RAW.

    How to Identify It

    Check what capabilities the running container actually has by inspecting the process status:

    docker run --rm your-image cat /proc/self/status | grep -i cap
    CapInh: 0000000000000000
    CapPrm: 00000000a80425fb
    CapEff: 00000000a80425fb
    CapBnd: 00000000a80425fb
    CapAmb: 0000000000000000

    Decode those hex bitmasks with capsh:

    capsh --decode=00000000a80425fb
    0x00000000a80425fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap

    Cross-reference against what your application actually needs. If it needs cap_net_admin and you don't see it in that list, that's your problem. You can also strace the failing process to catch the exact syscall being blocked — though you'll need the SYS_PTRACE capability to do this:
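    If you only need a yes/no answer for one capability, you can test the bit in the CapEff mask directly instead of reading the full capsh decode. A sketch in POSIX sh; the capability numbers come from linux/capability.h (CHOWN is 0, NET_BIND_SERVICE is 10, NET_ADMIN is 12):

```shell
#!/bin/sh
# has_cap: report whether a capability bit is set in a CapEff hex mask.
# Usage: has_cap <hexmask> <cap-number>  (numbers per linux/capability.h)
has_cap() {
  mask=$1; bit=$2
  if [ $(( (0x$mask >> bit) & 1 )) -eq 1 ]; then
    echo "yes"
  else
    echo "no"
  fi
}

# With the default mask from the example above:
# has_cap 00000000a80425fb 10   -> yes (NET_BIND_SERVICE is granted)
# has_cap 00000000a80425fb 12   -> no  (NET_ADMIN is not)
```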

    docker run --rm --cap-add SYS_PTRACE your-image strace -e trace=all your-command 2>&1 | grep EPERM

    How to Fix It

    Add only the specific capability your container actually needs. Don't reach for --privileged as a band-aid — it grants every capability, disables seccomp filtering, and disables AppArmor confinement. That's not a fix; it's tearing down all your defenses at once.

    docker run --cap-add NET_ADMIN your-image

    For binding to privileged ports without full root, you only need NET_BIND_SERVICE:

    docker run --cap-add NET_BIND_SERVICE your-image

    In Docker Compose, the right pattern is to drop everything first and then add back only what's required:

    services:
      app:
        image: your-image
        cap_drop:
          - ALL
        cap_add:
          - NET_BIND_SERVICE
          - CHOWN

    Starting from cap_drop: ALL and adding back explicitly makes the security posture visible and auditable. Anyone reading the Compose file can immediately see exactly what the service is allowed to do at the kernel level.


    Root Cause 5: Rootless Docker

    Why It Happens

    Rootless Docker — where the Docker daemon itself runs as a non-root user rather than as root — is increasingly standard as teams harden their infrastructure. It's a meaningful security improvement. But it introduces an entire category of permission issues that simply don't exist in conventional Docker setups, and engineers who haven't worked with user namespaces before are often baffled by the behavior.

    In rootless mode, the daemon runs inside a user namespace. The user namespace maps the container's root (uid 0) to your actual host user — say, uid 1000. This means what appears to be "root" inside the container is an unprivileged user on the host. Anything that requires genuine root on the host — certain filesystem mounts, some storage drivers, hardware device access — will fail. And because the Docker socket lives at a different path ($XDG_RUNTIME_DIR/docker.sock rather than /var/run/docker.sock), tools that hardcode the socket path will fail to connect with a permission denied error that's actually a socket-not-found problem in disguise.

    The other surprise is volume ownership. When a container process running as uid 0 (container root) writes a file, that file on the host is owned by uid 1000 (your host user). When a container process running as uid 1000 (a non-root app user) writes a file, it lands on the host owned by uid 100999 — the subuid range start plus 999, because container uid 1 (not uid 0) maps to the first subuid. This catches almost everyone off guard the first time.

    How to Identify It

    Confirm you're running rootless Docker:

    docker info | grep -i rootless
     rootlesskit
     rootless: true

    Check the socket location and whether DOCKER_HOST is set correctly:

    echo $DOCKER_HOST
    unix:///run/user/1000/docker.sock

    If DOCKER_HOST is unset or pointing to /var/run/docker.sock while rootless is active, any tool trying to contact the daemon will get:

    dial unix /var/run/docker.sock: connect: permission denied

    Inspect your subuid mapping to understand the UID translation:

    cat /etc/subuid | grep infrarunbook-admin
    infrarunbook-admin:100000:65536

    And confirm what the user namespace looks like from inside a container:

    docker run --rm alpine cat /proc/self/uid_map
             0       1000          1
             1     100000      65536

    That output tells you uid 0 inside the container maps to uid 1000 on the host, and uids 1 through 65536 inside the container map to 100000 through 165535 on the host. So a non-root app running as uid 1000 inside the container creates files owned by uid 100999 on the host (100000 + 1000 − 1).
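    The arithmetic is easy to get wrong off by one, since container uid 1 (not uid 0) maps to the first subuid. A sketch of the translation as a helper; the host uid and subuid start are parameters you would read from id -u and /etc/subuid:

```shell
#!/bin/sh
# host_uid: translate a container UID to the host UID it maps to under the
# usual rootless layout: 0 -> <your uid>, N>=1 -> subuid_start + N - 1.
# Usage: host_uid <container-uid> <your-host-uid> <subuid-start>
host_uid() {
  cuid=$1; me=$2; sub_start=$3
  if [ "$cuid" -eq 0 ]; then
    echo "$me"
  else
    echo $(( sub_start + cuid - 1 ))
  fi
}

# With the /etc/subuid entry above (start 100000) and host user 1000:
# host_uid 0 1000 100000      -> 1000   (container root is you)
# host_uid 1000 1000 100000   -> 100999 (non-root app user)
```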

    How to Fix It

    Set DOCKER_HOST so the CLI and any tools can find the rootless daemon:

    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

    Add this to your shell profile so it persists across sessions. For systemd-managed rootless Docker, you can also enable socket activation:

    systemctl --user enable --now docker.socket

    For volume permission issues, fix host directory ownership to match the mapped UID. If your application runs as uid 1000 inside the container, you need host ownership at uid 100999 (subuid start + container uid − 1):

    sudo chown -R 100999:100999 /srv/app-data/

    If your storage driver doesn't support rootless overlayfs properly — this happens on older kernels without native unprivileged overlayfs support — switch to fuse-overlayfs by editing ~/.config/docker/daemon.json:

    {
      "storage-driver": "fuse-overlayfs"
    }

    Then restart the user-level daemon:

    systemctl --user restart docker

    Root Cause 6: SELinux Context Mismatch

    Why It Happens

    On RHEL, CentOS, Fedora, Rocky Linux, and Amazon Linux hosts, SELinux is often active in enforcing mode. Docker is SELinux-aware and labels container processes with the container_t type, but if you bind-mount a host directory that has the wrong SELinux context, the container process will get permission denied even when the standard Unix permission check passes. DAC says yes; MAC says no; the process fails.

    How to Identify It

    Search the audit log for AVC denials:

    sudo ausearch -m avc -ts recent | grep denied
    type=AVC msg=audit(1712834831.032:108): avc:  denied  { write } for  pid=4291 comm="app" name="data" dev="sda1" ino=524311 scontext=system_u:system_r:container_t:s0:c123,c456 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=dir permissive=0

    The tcontext shows user_home_t — that's a home directory context. Container processes are not allowed to write to directories labeled user_home_t. The fix is to relabel the directory correctly.
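    As with AppArmor, a quick summary of the denial records is often more useful than raw audit lines. A sketch that extracts the denied action, target class, and target SELinux type, assuming the ausearch output format shown above:

```shell
#!/bin/sh
# avc_targets: tally AVC denials by action, target class, and target type.
# Feed it `sudo ausearch -m avc -ts recent` output.
avc_targets() {
  grep 'avc:  denied' |
    sed -n 's/.*denied  { \([^}]*\) }.*tcontext=[^:]*:[^:]*:\([^:]*\):.*tclass=\([a-z_]*\).*/\1 on \3 labeled \2/p' |
    sort | uniq -c | sort -rn
}

# Example: sudo ausearch -m avc -ts recent | avc_targets
```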

    How to Fix It

    Relabel the host directory with a container-accessible context (svirt_sandbox_file_t is an alias for container_file_t on newer policies):

    sudo chcon -Rt svirt_sandbox_file_t /srv/app-data/

    Or use Docker's built-in relabeling flags on the mount — :z for shared (multiple containers) or :Z for private (single container):

    docker run -v /srv/app-data:/data:Z your-image

    The :Z flag tells Docker to relabel the host directory automatically with a private, container-specific SELinux label. Use :z when multiple containers need to share the same volume — it applies a shared label instead. Never use :Z on a directory shared between multiple containers; they'll fight over the private label and one will lose access.


    Root Cause 7: Docker Socket Permission Denied

    Why It Happens

    Mounting the Docker socket into a container is a common pattern for CI runners, container management tools, and monitoring agents that need to control or observe other containers. The socket file on the host is owned by root and group docker. If the process inside the container isn't running as root or in the docker group — which is exactly the scenario in a hardened, non-root container — you'll get an immediate connection error that looks like a permission problem but is really a group membership problem.

    How to Identify It

    ls -la /var/run/docker.sock
    srw-rw---- 1 root docker 0 Apr 11 08:00 /var/run/docker.sock
    docker exec -it ci-runner id
    uid=1001(runner) gid=1001(runner) groups=1001(runner)

    The runner user isn't in the docker group, and the socket is mode 660, so the process can't read or write it.

    How to Fix It

    Rather than hardcoding a group name that may not exist inside the image, pass the actual GID of the socket dynamically using --group-add:

    docker run -v /var/run/docker.sock:/var/run/docker.sock \
      --group-add $(stat -c '%g' /var/run/docker.sock) \
      your-ci-image

    This works regardless of whether the group is called docker or something else on the host, and regardless of whether that GID maps to anything meaningful inside the image. The process gets the supplemental GID it needs to access the socket.

    Be aware that any container with access to the Docker socket can spin up new containers with root on the host — it's effectively a privilege escalation path. Use it deliberately and scope access carefully.
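    A tiny helper keeps the GID lookup in one place so scripts and CI jobs don't each reimplement it. A sketch, assuming GNU stat; the socket path in the usage comment is the conventional rootful one:

```shell
#!/bin/sh
# sock_gid: print the owning GID of a file or socket, suitable for --group-add.
sock_gid() {
  stat -c '%g' "$1"
}

# Example (rootful Docker socket path):
# docker run -v /var/run/docker.sock:/var/run/docker.sock \
#   --group-add "$(sock_gid /var/run/docker.sock)" your-ci-image
```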


    Prevention

    Most Docker permission errors are preventable if you bake the right habits into your workflow from the beginning rather than retrofitting them when things break.

    Build your images with explicit non-root users and set up directory ownership in the Dockerfile before switching users. Use COPY --chown so files land with the right owner from the start, not as root. Test your images early with --user 1000:1000 overrides — don't wait until production deployment to discover what your app was silently relying on as root.

    For volumes, establish a convention on your team: know the UID your container runs as, and either create host directories owned by that UID before mounting, or use an init container pattern to fix ownership at startup. Don't rely on the behavior of Docker-created volumes without verifying what ownership they get. Document this in your runbooks so the next engineer doesn't have to rediscover it.

    On hosts with AppArmor or SELinux active, audit your containers in complain or permissive mode before enabling enforce. Catch the denials during staging and build profiles that explicitly grant what's needed. The :Z bind mount flag is a reasonable default habit on SELinux systems — use it unless you know you need shared access.

    Follow least privilege for Linux capabilities consistently. Start every service definition with cap_drop: ALL in Compose and add back only what's required. Keep a note in your infrastructure runbook of which capabilities each service needs and why — that context is invaluable during security audits and incident investigations six months from now.

    In rootless Docker environments, document the UID mapping and include it in your team's onboarding materials. Engineers unfamiliar with user namespaces will hit volume ownership issues repeatedly until someone explains the subuid arithmetic. A quick reference — container uid 0 maps to host uid 1000, container uid 1000 maps to host uid 100999 — saves hours of confusion.

    Finally, when you hit a permission denied error, resist the temptation to reach for --privileged or chmod 777. Both fixes work. Both trade away real security for temporary convenience, and both have a way of staying in place long after you intended to replace them. Take the extra ten minutes to identify the actual root cause and apply the minimal fix. The audit trail will be cleaner, the container will be safer, and you won't be explaining the --privileged flag to a security reviewer later.

    Frequently Asked Questions

    Why does my Docker container get permission denied even though I'm running as root inside the container?

    Running as root inside a container doesn't mean having full host root privileges. Docker drops most Linux capabilities by default, AppArmor or SELinux may be enforcing policies, and in rootless Docker mode, container root is actually mapped to an unprivileged host user. Check kernel audit logs for AppArmor or SELinux denials, and verify which capabilities the container has using /proc/self/status and capsh --decode.

    How do I fix permission denied when writing to a Docker bind mount?

    The bind-mounted host directory is most likely owned by root. Check with ls -la on the host path. Fix it by chown-ing the directory to match the UID the container process runs as — for example, sudo chown -R 1000:1000 /srv/app-data/. On SELinux systems, also apply the correct label using chcon -Rt svirt_sandbox_file_t or use the :Z volume mount flag.

    What is the difference between --privileged and --cap-add in Docker?

    --privileged grants all Linux capabilities, disables seccomp filtering, and disables AppArmor confinement. It is effectively equivalent to running without any container security controls. --cap-add adds a specific named capability while leaving all other security mechanisms intact. Always use --cap-add with the minimum required capability instead of --privileged.

    Why do I get 'dial unix /var/run/docker.sock: permission denied' in rootless Docker?

    In rootless Docker, the socket lives at $XDG_RUNTIME_DIR/docker.sock (typically /run/user/1000/docker.sock), not at /var/run/docker.sock. Tools expecting the traditional path will fail. Fix it by setting DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock in your shell profile.

    How can I tell if AppArmor is causing a Docker permission denied error?

    Run sudo dmesg | grep apparmor or sudo journalctl -k | grep apparmor and look for lines containing apparmor="DENIED". If you see them with profile="docker-default", AppArmor is blocking the operation. You can confirm by temporarily running the container with --security-opt apparmor=unconfined — if the error disappears, AppArmor was the cause. Then build a proper custom profile rather than leaving it unconfined permanently.
