Symptoms
When Docker volume mounts go wrong, you'll typically see one of a handful of failure patterns. The container might start but immediately exit with a non-zero status. You might see a blank or empty directory inside the container where you expected your application data. The app inside the container could throw "file not found" errors even though the files clearly exist on the host. Or Docker itself refuses to start the container at all, throwing a mount error before the process ever gets going.
The specific error messages vary wildly depending on the root cause. Some show up in docker logs, some on docker run's stderr, and some only surface when you docker inspect the container. Here are the common failure messages you'll encounter:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /opt/solvethenetwork/configs.
docker: Error response from daemon: error while creating mount source path '/var/data/app': mkdir /var/data/app: permission denied.
container_linux.go: starting container process caused "process_linux.go: container init caused \"rootfs_linux.go: mounting \"/host/path\" to rootfs at \"/container/path\" caused \\\"mount through fs_path, error: /host/path: permission denied\\\"\""
These errors are frustrating because they're often vague and the actual root cause isn't obvious from the message alone. Let me walk through the most common causes I've seen in production environments, along with exactly how to identify and fix each one.
Root Cause 1: Wrong Path in Bind Mount
A bind mount ties a specific host filesystem path to a path inside the container. Get the host path wrong and Docker either refuses to start the container or silently mounts an empty directory — depending on your Docker version and configuration.
Why It Happens
This is almost always a typo, a path that existed during development but not on the target host, or a case where someone confused the host path with the container path in the -v flag. The syntax is host_path:container_path, and it's easy to swap them or mistype a directory name. In my experience, this comes up constantly when teams copy docker run commands from development laptops to production servers without verifying that the paths actually exist on the target machine. A path like /home/infrarunbook-admin/app/configs that exists on a developer's workstation absolutely will not exist on sw-infrarunbook-01 in production.
How to Identify It
Docker will usually tell you directly with an error like this:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /opt/solvethenetwork/configs.
If the container starts but the directory is empty inside, inspect the actual mount points:
docker inspect app-container --format '{{json .Mounts}}' | python3 -m json.tool
Which gives you output like this:
[
    {
        "Type": "bind",
        "Source": "/opt/solvethenetwork/configs",
        "Destination": "/app/configs",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
]
Then verify the source path actually exists on the host:
ls -la /opt/solvethenetwork/configs
If you get "No such file or directory", that's your problem.
How to Fix It
Verify and correct the host path before launching the container. On sw-infrarunbook-01, the correct workflow looks like this:
# Verify the path exists
stat /opt/solvethenetwork/configs
# If it doesn't, create it and set ownership
mkdir -p /opt/solvethenetwork/configs
chown infrarunbook-admin:infrarunbook-admin /opt/solvethenetwork/configs
# Then relaunch with the correct path
docker run -d \
-v /opt/solvethenetwork/configs:/app/configs \
--name app-container \
my-app:latest
If the path exists but you swapped host and container paths in the -v argument, just flip them. In Compose files, double-check the volumes: block; the left side of the colon is always the host path, the right side is always the container path. Always.
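For reference, a minimal Compose fragment for this same bind mount keeps that order explicit. The service and image names here are illustrative:

```yaml
# Host path left of the colon, container path right of it.
services:
  app:
    image: my-app:latest
    volumes:
      - /opt/solvethenetwork/configs:/app/configs
```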
Root Cause 2: Permission Denied
This is probably the root cause I see most often in real environments. The container process runs as a specific user — often not root — and the host directory isn't readable or writable by that user's UID. The container starts, the mount succeeds, and then the application quietly dies trying to open a file it can't access.
Why It Happens
Docker bind mounts preserve the host filesystem's ownership and permissions as-is. If the host directory is owned by root:root with mode 700, and your container process runs as UID 1000, it will fail to read or write that directory. The container sees the same UID and GID numbers as the host because Linux user namespaces are not remapped by default in Docker. A file owned by UID 0 on the host is owned by UID 0 inside the container too; there is no automatic translation happening. This catches people off guard because they expect Docker to somehow handle the mismatch.
How to Identify It
The container will either fail to start or fail silently, with errors showing up in application logs rather than Docker's own output. Check both:
docker logs app-container
You might see something like:
Error opening config file: open /app/configs/app.yaml: permission denied
FATAL: could not read configuration, exiting
To dig deeper, check the permissions on the host path and compare them to the UID the container process runs as:
# Check host directory ownership and permissions
ls -la /opt/solvethenetwork/
# drwx------ 2 root root 4096 Apr 17 09:00 configs/
# Find out what UID the container process runs as
docker inspect app-container --format '{{.Config.User}}'
# Or exec into a running container and check
docker exec -it app-container id
# uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
UID 1000 trying to read a directory owned by root with 700 permissions: that's your problem right there. The numbers don't lie.
How to Fix It
You have a few options depending on your constraints. The most direct fix is to align host directory ownership with the container's UID:
# Change ownership to match the container's UID
chown -R 1000:1000 /opt/solvethenetwork/configs
# Or relax permissions if changing ownership isn't appropriate
chmod 755 /opt/solvethenetwork/configs
chmod 644 /opt/solvethenetwork/configs/*.yaml
The cleanest long-term solution is to build your container image with a known, fixed UID and make sure host directories are provisioned with matching ownership. In your Dockerfile:
FROM ubuntu:22.04
RUN groupadd -g 1000 appuser && useradd -u 1000 -g appuser appuser
USER appuser
Then in your infrastructure provisioning — Ansible, Terraform, a shell script — create the host directories owned by UID 1000 before the container ever starts. This makes the relationship explicit and reproducible across every environment.
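As one hedged sketch of that provisioning step, an Ansible task using the standard ansible.builtin.file module could look like the following; the path and UID mirror the examples above:

```yaml
# Provision the host directory with ownership matching the
# container's fixed UID 1000 before any container starts.
- name: Create app config directory owned by container UID
  ansible.builtin.file:
    path: /opt/solvethenetwork/configs
    state: directory
    owner: "1000"
    group: "1000"
    mode: "0755"
```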
Root Cause 3: SELinux Blocking
SELinux is silent and merciless. If you're on a RHEL, CentOS, Rocky Linux, or Fedora host with SELinux in enforcing mode, bind mounts will fail in ways that look exactly like permission errors, yet ls -la shows the permissions are perfectly fine. This trips up engineers who are used to Debian or Ubuntu systems where SELinux isn't enabled by default.
Why It Happens
SELinux uses security contexts, also called labels, to control access beyond standard Unix permissions. Files on the host carry an SELinux context, and Docker container processes run in a confined domain (svirt_lxc_net_t on older policies, container_t on newer ones) that is only allowed to touch content labeled for container use, such as svirt_sandbox_file_t (container_file_t on newer systems). When the container process tries to access a host path labeled with a context the policy doesn't allow, the kernel denies access at the MAC layer, after standard DAC permission checks have already passed. Standard permissions can be wide open, even 777, and SELinux will still block it. The result looks like a permission error, but chmod and chown won't fix it.
How to Identify It
First, confirm SELinux is active and in enforcing mode:
getenforce
# Enforcing
Then check the audit log for denials. On sw-infrarunbook-01, use ausearch:
ausearch -m avc -ts recent
Or grep the audit log directly:
grep "denied" /var/log/audit/audit.log | tail -20
You'll see entries like:
type=AVC msg=audit(1713340800.123:456): avc: denied { read } for pid=12345 comm="app" name="app.yaml" dev="sda1" ino=123456 scontext=system_u:system_r:svirt_lxc_net_t:s0:c1,c2 tcontext=unconfined_u:object_r:var_t:s0 tclass=file permissive=0
That denied entry with the confined container domain (svirt_lxc_net_t) as the source context is your smoking gun. The container is running confined, but the target file carries var_t, which the container domain isn't permitted to read.
How to Fix It
Docker provides a shorthand for correct SELinux relabeling through the :z and :Z mount options. Use :z for content shared between multiple containers, and :Z for content private to a single container, which is the most common case:
docker run -d \
-v /opt/solvethenetwork/configs:/app/configs:Z \
--name app-container \
my-app:latest
In Docker Compose, add the option directly to the volume definition:
services:
  app:
    volumes:
      - /opt/solvethenetwork/configs:/app/configs:Z
You can also relabel directories manually with chcon if you prefer not to use the Docker shorthand (note that chcon labels can be undone by a filesystem relabel; pairing semanage fcontext with restorecon makes the mapping persistent):
chcon -Rt svirt_sandbox_file_t /opt/solvethenetwork/configs
Do not disable SELinux to work around this. Setting SELINUX=permissive in /etc/selinux/config removes a meaningful security layer from your host and is not a fix; it's surrendering. Use the proper relabeling approach. It takes thirty seconds and doesn't compromise the system.
Root Cause 4: Volume Driver Failure
Named volumes in Docker can use custom volume drivers — plugins that provide storage backends like NFS, GlusterFS, Ceph, or cloud block storage. When a volume driver fails, crashes, or loses connectivity to its backend, volumes fail to mount and containers can't start. This is a particularly nasty failure mode because the error message often doesn't tell you much about what went wrong on the storage side.
Why It Happens
Volume driver plugins run as separate processes and communicate with the Docker daemon via a local socket API. If the plugin crashes, is not installed on the current host, or loses network connectivity to its storage backend, Docker can't fulfill the mount request. The daemon knows what volume to mount but can't reach the driver responsible for providing it. This scenario is especially common in Docker Swarm clusters, where a service task can migrate to a node that never had the volume plugin installed — or had it installed with different credentials.
How to Identify It
The error usually names the socket it can't reach:
docker: Error response from daemon: error while mounting volume with options map[]: VolumeDriver.Mount: dial unix /run/docker/plugins/rexray.sock: connect: no such file or directory.
For network-backed volumes where the driver is present but the storage backend is unreachable:
docker: Error response from daemon: error while mounting volume '/var/lib/docker/volumes/app-data': VolumeDriver.Mount: mount failed: exit status 32
Check which plugins are installed and their enabled status on sw-infrarunbook-01:
docker plugin ls
# Output:
ID             NAME                DESCRIPTION                  ENABLED
a1b2c3d4e5f6   rexray/ebs:latest   REX-Ray EBS Volume Plugin    false
A plugin listed as false in the ENABLED column isn't running. Also check the Docker daemon logs for plugin-related errors:
journalctl -u docker -n 100 --no-pager | grep -i plugin
For NFS-backed volumes, verify the NFS server is reachable from the host and the export is accessible:
showmount -e 192.168.10.50
mount | grep nfs
How to Fix It
For a disabled or crashed plugin, disable and re-enable it to force a restart:
docker plugin disable rexray/ebs:latest
docker plugin enable rexray/ebs:latest
For a missing plugin on a node, install it with the appropriate driver-specific configuration:
docker plugin install rexray/ebs:latest \
EBS_ACCESSKEY=AKIAIOSFODNN7EXAMPLE \
EBS_SECRETKEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
EBS_REGION=us-east-1
If the plugin is fundamentally broken and you need to recover access to data quickly, migrate to a local volume temporarily while you fix the storage backend:
# Create a local replacement and copy existing data over
docker volume create app-data-local
docker run --rm \
-v app-data:/source:ro \
-v app-data-local:/dest \
busybox sh -c "cp -av /source/. /dest/"
# Start container with local volume while you fix the driver
docker run -d \
-v app-data-local:/data \
--name app-container \
my-app:latest
Root Cause 5: Named Volume Not Created
This one is surprisingly common and genuinely easy to miss, because Docker's behavior here is inconsistent enough to be misleading. You reference a named volume in your docker run command or Compose file, but the volume doesn't exist yet, and Docker silently auto-creates it empty. The container starts without any error, your application fails trying to read data that isn't there, and you spend an hour debugging what looks like an application bug.
Why It Happens
With docker run, Docker auto-creates a named volume if it doesn't exist. This sounds convenient but it's a footgun. The container starts, the volume gets created empty, and your application discovers it has no data. Worse, if you intend to use docker volume create with specific options (a custom driver, labels, driver-specific options), simply referencing the volume in a run command won't apply those options. Docker creates a plain local volume instead of the NFS-backed or cloud-backed volume you intended. I've seen this cause hours of debugging when a team migrates a stack to a new host and assumes named volumes carry over automatically. They don't.
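One way to defuse this footgun is a pre-flight check in your deployment script that refuses to rely on auto-creation. The following is a sketch; "app-data" is an illustrative volume name:

```shell
# Fail fast if a required named volume is missing, instead of letting
# `docker run` silently auto-create an empty local volume.

volume_exists() {
  docker volume inspect "$1" >/dev/null 2>&1
}

require_volume() {
  if volume_exists "$1"; then
    echo "volume $1 exists"
  else
    echo "ERROR: volume $1 not found; create it explicitly with 'docker volume create'" >&2
    return 1
  fi
}

# In a deployment script, call this before any `docker run`:
#   require_volume app-data || exit 1
```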
How to Identify It
List existing volumes and compare against what your container expects:
docker volume ls
DRIVER    VOLUME NAME
local     nginx-logs
local     redis-data
If app-data is missing from the list before the container's first start, Docker will auto-create it empty the moment you run the container. Once it exists, inspect what Docker actually created:
docker volume inspect app-data
[
    {
        "CreatedAt": "2026-04-17T09:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/app-data/_data",
        "Name": "app-data",
        "Options": {},
        "Scope": "local"
    }
]
If you needed "Driver": "rexray/ebs" and got "Driver": "local", Docker silently created the wrong volume type when the container first started. The data you expected isn't there because the volume was never populated; it was just created empty on this host.
How to Fix It
Explicitly create the volume with the correct parameters before starting any container that depends on it. Never rely on auto-creation for anything beyond the simplest local development workflows:
# Create with default local driver
docker volume create app-data
# Create with a specific driver and driver options
docker volume create \
--driver rexray/ebs \
--opt size=20 \
--opt volumetype=gp3 \
--label env=production \
--label team=infrarunbook \
app-data
In Docker Compose, declare volumes explicitly in the top-level volumes: block with their full configuration rather than relying on implicit creation:
volumes:
  app-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.10.50,rw,hard,intr
      device: ":/exports/app-data"
If you need to restore data into a newly created volume from a backup archive:
docker run --rm \
-v app-data:/data \
-v /opt/solvethenetwork/backups:/backups:ro \
busybox tar xzf /backups/app-data-backup.tar.gz -C /data
Root Cause 6: Read-Only Filesystem or Mount Options
Sometimes a volume mounts successfully (no errors at startup, docker inspect shows the mount present) but the container can't write to it at runtime. The cause is either an explicit :ro flag on the volume spec, or the host filesystem itself being mounted read-only.
Why It Happens
The :ro option in a volume spec is intentional in many cases: config files, TLS certificates, secrets. But if that option ends up applied to a volume that needs write access, you'll get application-level write errors that don't look like mount problems at first glance. Separately, if the host filesystem hosting a bind mount is itself mounted read-only (a recovery mode boot, a read-only NFS export, or an intentionally hardened partition), writes will fail regardless of what Docker's own mount options say.
How to Identify It
Check the mount's RW field in docker inspect output:
docker inspect app-container --format '{{json .Mounts}}' | python3 -m json.tool
Look for "RW": false:
{
    "Type": "bind",
    "Source": "/opt/solvethenetwork/data",
    "Destination": "/app/data",
    "Mode": "ro",
    "RW": false,
    "Propagation": "rprivate"
}
For host filesystem read-only status, check the mount output on sw-infrarunbook-01:
mount | grep /opt/solvethenetwork
# /dev/sda1 on /opt/solvethenetwork type ext4 (ro,relatime)
The ro flag in the host mount options means nothing on that partition can be written, regardless of Docker configuration.
How to Fix It
Remove :ro from the volume spec in your run command or Compose file if write access is actually needed. For a read-only host filesystem, remount it writable:
# Remount read-write (takes effect immediately, does not survive reboot)
mount -o remount,rw /opt/solvethenetwork
# Verify the remount succeeded
mount | grep /opt/solvethenetwork
# /dev/sda1 on /opt/solvethenetwork type ext4 (rw,relatime)
# To make it permanent, edit /etc/fstab and remove the 'ro' option,
# then verify with: mount -a
Prevention
Preventing volume mount problems is mostly about being explicit and verifying assumptions before containers start. The biggest category of failures comes from implicit behavior — Docker auto-creating volumes, paths existing on one host but not another, permissions that work on a developer's laptop but not in production. The antidote is to make everything explicit and check it automatically as part of your deployment process.
Always verify host paths exist before writing Compose files or run commands. Make path validation part of your deployment scripts on sw-infrarunbook-01 rather than a manual pre-flight check that gets skipped under pressure:
#!/bin/sh
REQUIRED_PATHS="/opt/solvethenetwork/configs /opt/solvethenetwork/data /var/log/app"
for path in $REQUIRED_PATHS; do
  if [ ! -d "$path" ]; then
    echo "ERROR: Required path $path does not exist on $(hostname)" >&2
    exit 1
  fi
done
echo "All required paths verified on $(hostname)."
Declare volumes explicitly in Compose files rather than relying on implicit creation. The top-level volumes: block forces you to specify the driver, options, and labels upfront. This prevents silent auto-creation of wrong volume types and makes your storage configuration reviewable in code alongside everything else.
For SELinux hosts, which should include any RHEL-based host you run in production, make the :Z label option standard practice for application bind mounts (avoid relabeling shared system paths such as /etc or /usr, where it can break the host). Encode this in your team's Compose templates and Ansible roles so it's the default, not something that relies on individual engineers remembering to add it to every mount definition.
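Encoded as a Compose template default, that might look like this sketch; the paths and service name are illustrative:

```yaml
# Template default for SELinux hosts: every application bind mount
# carries :Z so Docker relabels it for the container.
services:
  app:
    volumes:
      - /opt/solvethenetwork/configs:/app/configs:Z
      - /opt/solvethenetwork/data:/app/data:Z
```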
Use docker volume inspect and docker inspect as part of your deployment verification steps, not just as a debugging tool when things break. A quick automated check after each deployment, confirming that mounts are present, pointing at the correct source paths, and reporting "RW": true where write access is expected, will catch configuration drift before users encounter it.
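A minimal sketch of such a post-deployment check, shown here against a canned JSON sample; in practice you would pipe in the output of docker inspect app-container --format '{{json .Mounts}}':

```shell
# Verify every mount reports RW=true; the canned sample stands in
# for real `docker inspect` output.
mounts_json='[{"Source":"/opt/solvethenetwork/data","Destination":"/app/data","RW":true}]'
result=$(printf '%s' "$mounts_json" | python3 -c '
import json, sys
mounts = json.load(sys.stdin)
bad = [m["Destination"] for m in mounts if not m["RW"]]
print("OK" if not bad else "READ-ONLY: " + ", ".join(bad))
')
echo "mount check: $result"
```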
Finally, test volume mounts in your CI/CD pipeline alongside your application logic. A simple step that execs into a freshly started container and attempts to write a test file to each mounted path catches permission issues and SELinux misconfigurations in staging rather than production. Linters and type checkers won't catch a missing :Z option or a wrong UID; only actually running the mount will tell you if it works.
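That CI step can be sketched as a small shell function. The container name and paths below are hypothetical, and a real pipeline would fail the build where the comment indicates:

```shell
# Attempt a scratch-file write to each mounted path inside a
# freshly started container.
write_test() {
  c="$1"; shift
  for p in "$@"; do
    if docker exec "$c" sh -c "touch '$p/.write-test' && rm '$p/.write-test'" 2>/dev/null; then
      echo "write OK: $p"
    else
      echo "WRITE FAILED: $p" >&2  # a real pipeline would exit 1 here
    fi
  done
}

# Example invocation against a running container:
#   write_test app-container /app/configs /app/data
```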
