InfraRunBook

    Docker Image Pull Errors

    Docker
    Published: Apr 11, 2026
    Updated: Apr 11, 2026

    A practical troubleshooting guide covering every major Docker image pull failure — from authentication and rate limits to network timeouts and missing tags — with real CLI commands and fixes.


    Symptoms

    You run docker pull — or trigger a container start, a Compose stack, or a CI/CD pipeline — and something breaks before a single layer lands on disk. The failure surface is wide. Sometimes Docker produces a clean, specific error message. Sometimes it just hangs for two minutes and then dies with a vague I/O error. The common thread is that the image never arrives, and your workload stays dead.

    Here's what these failures look like across the most common scenarios:

    Error response from daemon: pull access denied for myimage, repository does not exist or may require 'docker login'
    Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading.
    Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.1.1:53: no such host
    Error response from daemon: manifest for nginx:1.99 not found: manifest unknown: manifest unknown

    Each of those points to a completely different underlying cause. Treating them all the same is how you waste 45 minutes restarting the Docker daemon when the real problem is an expired registry token. This guide walks through every major failure mode — what's actually happening, how to confirm it, and how to fix it.


    Root Cause 1: Registry Authentication Failure

    This is the most common pull error I've seen across environments of every size. Authentication failures happen when Docker can't prove your identity to the registry — either because you've never logged in on that host, your session has expired, your credentials are wrong, or a credential helper has broken silently. The daemon doesn't always distinguish between "auth failed" and "image doesn't exist" in its error output, which makes this one particularly annoying to diagnose.

    The error typically looks like one of these:

    Error response from daemon: pull access denied for registry.solvethenetwork.com/api-service, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
    Error response from daemon: unauthorized: authentication required

    To confirm authentication is the actual problem rather than a missing image, try pulling a known-public image from the same registry. Then inspect your stored credentials:

    cat ~/.docker/config.json

    If config.json is empty, missing the registry entry entirely, or references a credential helper that isn't installed on the current host, you've found your culprit. On Linux CI runners, I've seen this constantly: the config.json contains "credsStore": "desktop" because someone generated it on a Mac, but the runner on sw-infrarunbook-01 has no docker-credential-desktop binary — so every pull silently fails authentication.
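    You can catch this class of breakage mechanically by checking whether the configured credential helper actually exists on the host. A minimal sketch — it assumes python3 is available and uses Docker's default config path; creds_store is a hypothetical helper name:

    # Read the credsStore value (if any) out of a Docker config.json.
    creds_store() {
      python3 -c 'import json,sys; print(json.load(open(sys.argv[1])).get("credsStore",""))' "$1"
    }

    cfg="$HOME/.docker/config.json"
    if [ -f "$cfg" ]; then
      helper=$(creds_store "$cfg")
      # Docker looks for a binary named docker-credential-<credsStore> on PATH.
      if [ -n "$helper" ] && ! command -v "docker-credential-$helper" >/dev/null 2>&1; then
        echo "credsStore is '$helper' but docker-credential-$helper is not on PATH" >&2
      fi
    fi

    Dropping a check like this into runner provisioning turns a silent auth failure into an explicit, greppable error.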

    The interactive fix is straightforward:

    docker login registry.solvethenetwork.com

    For CI/CD pipelines, never bake credentials into the runner image or config files. Inject them as secrets and log in as an explicit pipeline step:

    echo "$REGISTRY_PASSWORD" | docker login registry.solvethenetwork.com \
      -u infrarunbook-admin \
      --password-stdin

    If you're using AWS ECR, be aware that the authentication token expires every 12 hours. I've seen this bite teams whose long-running CI workers pull fine at 9 AM and start failing by mid-afternoon. The fix is to call aws ecr get-login-password before every pull or set up a cron-based token refresh on the runner:

    aws ecr get-login-password --region us-east-1 | docker login \
      --username AWS \
      --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
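    A cron-based refresh can be as small as one crontab entry. This is a sketch — the file path, runner user, region, and account ID are illustrative, and it assumes the runner user has working AWS credentials:

    # /etc/cron.d/ecr-login — refresh the ECR token every 6 hours,
    # comfortably inside the 12-hour expiry window.
    0 */6 * * * runner aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    Note the entry stays on a single line: cron does not support backslash line continuations the way a shell script does.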

    Root Cause 2: Network Timeout

    Network timeouts during pulls are frustrating because Docker's retry behavior is minimal and the error messages can be maddeningly vague. The pull either hangs indefinitely before dying at the TCP layer, or dies immediately because DNS resolution fails before a connection is even attempted.

    Typical timeout output:

    Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp 54.236.113.205:443: i/o timeout
    error pulling image configuration: download failed after attempts=6: dial tcp: lookup registry-1.docker.io on 192.168.1.53:53: no such host

    That second error is a DNS failure disguised as a network problem. Start your diagnosis there. Test resolution from the Docker host directly:

    nslookup registry-1.docker.io
    dig registry-1.docker.io

    If DNS resolves fine from the host but fails from within Docker, the daemon is using a different resolver than the OS. Check /etc/docker/daemon.json for a hardcoded DNS entry:

    cat /etc/docker/daemon.json
    {
      "dns": ["192.168.10.5"]
    }

    If that internal resolver is down, unreachable from the Docker network namespace, or doesn't forward external queries, you'll get exactly this failure. Add a reliable fallback:

    {
      "dns": ["192.168.10.5", "1.1.1.1"]
    }

    Then restart the daemon:

    systemctl restart docker
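    One caution before that restart: a JSON syntax error in daemon.json will prevent dockerd from starting at all, which turns a DNS fix into an outage. A quick validation sketch — check_daemon_json is a hypothetical helper, and it assumes python3 is installed (any JSON validator works):

    # Validate daemon.json before restarting the daemon; a malformed
    # file stops dockerd from starting entirely.
    check_daemon_json() {
      python3 -m json.tool "$1" >/dev/null 2>&1
    }

    f=/etc/docker/daemon.json
    if [ -f "$f" ]; then
      check_daemon_json "$f" && echo "daemon.json OK" || echo "daemon.json is invalid JSON" >&2
    fi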

    For connectivity failures that aren't DNS — a firewall blocking port 443, a transparent proxy intercepting TLS, or an overly strict egress security group — test the path directly from the host:

    curl -v https://registry-1.docker.io/v2/

    If curl hangs or gets refused, the problem is upstream of Docker entirely. (A 401 Unauthorized response here is actually fine — it proves you reached the registry; you're testing connectivity, not auth.) In corporate environments, the HTTP proxy is a frequent culprit. The Docker daemon doesn't inherit proxy environment variables from the shell; you have to inject them into the systemd service:

    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
    [Service]
    Environment="HTTP_PROXY=http://192.168.1.100:3128"
    Environment="HTTPS_PROXY=http://192.168.1.100:3128"
    Environment="NO_PROXY=localhost,127.0.0.1,registry.solvethenetwork.com"
    EOF
    systemctl daemon-reload && systemctl restart docker

    The NO_PROXY entry matters. Without it, pulls from your internal registry go through the corporate proxy and often fail with a different TLS error on top of everything else.


    Root Cause 3: Image Not Found

    An image-not-found error means Docker reached the registry and authenticated, but the specific image path doesn't exist in that registry's catalog. The error message Docker produces is unhelpfully identical to an auth failure in many cases:

    Error response from daemon: pull access denied for registry.solvethenetwork.com/ghost-service, repository does not exist or may require 'docker login'
    Error response from daemon: manifest unknown: manifest unknown

    The first message is Docker's generic failure response — it covers both auth rejections and missing repositories. The second is more diagnostic: the registry's routing layer found the request but the content doesn't exist, which usually means the image was deleted, never pushed, or lives under a different path than you're using.

    Confirm auth is working first by pulling any known-good image from the same registry. Then query the registry API directly to check whether the repository exists at all:

    curl -u infrarunbook-admin:$TOKEN \
      https://registry.solvethenetwork.com/v2/ghost-service/tags/list

    A 404 or empty result confirms the repository doesn't exist under that name. Common causes include a typo in the image path, the image being deleted during a registry cleanup or garbage collection pass, or a build pipeline that failed silently and never completed the push step.

    Check your pipeline logs. Verify the push step ran and exited cleanly. If you're running a registry with scheduled GC (Harbor does this, for example), check whether a recent GC run swept an untagged manifest you were referencing by digest. To see everything that actually exists in the registry catalog:

    curl -u infrarunbook-admin:$TOKEN \
      https://registry.solvethenetwork.com/v2/_catalog

    Don't guess at the correct image name. Query first, then update your docker pull command, Dockerfile FROM line, or Compose file to match what's actually in the registry.


    Root Cause 4: Wrong Tag

    Wrong tag errors are closely related to image-not-found, but the distinction matters for diagnosis: the repository itself exists, but the specific tag you're requesting doesn't. The error is:

    Error response from daemon: manifest for nginx:1.99.99 not found: manifest unknown: manifest unknown

    In my experience, this comes up in a handful of repeating scenarios. The most common is developers pinning to a specific patch version — say nginx:1.25.3 — and then that tag being removed when the upstream maintainer reorganizes their release tagging. It also happens when teams use mutable convenience tags like :stable or :release that get reassigned or deleted after a deployment, without anyone realizing a downstream system still references the old one.

    To identify the problem, check what tags are actually available for the image. For public Docker Hub images:

    curl -s "https://registry.hub.docker.com/v2/repositories/library/nginx/tags/?page_size=50" \
      | python3 -m json.tool | grep '"name"'

    For a private registry at solvethenetwork.com:

    curl -u infrarunbook-admin:$TOKEN \
      https://registry.solvethenetwork.com/v2/nginx/tags/list

    Once you can see the available tags, update your references accordingly. But if reproducibility matters — and in production, it should — stop using tags for pinning. Tags are mutable; they can be silently reassigned to a different image. Use digest pinning instead:

    docker pull nginx@sha256:a4c4106df5b4e4dab97f72f7f4f09bc1e72ab2f06a5e5e43b3e5a7c5f09ac3f0

    Digests are immutable — they're a cryptographic hash of the manifest content. If the image changes, the digest changes. Pinning to a digest in your Dockerfile or deployment manifests means you'll always get exactly the build you tested against.
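    To find the digest for an image you've already pulled and tested, docker inspect exposes it under RepoDigests. The pin_to_digest helper below is a hypothetical sketch for assembling the pinned reference, and it assumes the simple repo:tag form — a registry hostname with a port would confuse the naive colon split:

    # Print the repository@digest form for a locally pulled image:
    #   docker inspect --format '{{index .RepoDigests 0}}' nginx:1.25.3

    # Build a pinned reference from a repo:tag plus a digest. Assumes the
    # first ':' separates repo from tag (no registry port in the name).
    pin_to_digest() {
      local ref="$1" digest="$2"
      printf '%s@%s\n' "${ref%%:*}" "$digest"
    }

    Running pin_to_digest nginx:1.25.3 sha256:... prints nginx@sha256:..., ready to drop into a FROM line or a Compose file.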


    Root Cause 5: Registry Rate Limit Hit

    Docker Hub introduced pull rate limits in late 2020 and they've been catching teams off guard ever since. Anonymous pulls are capped at 100 per 6 hours per public IP address. Authenticated free-tier accounts get 200 per 6 hours. Hit the cap and every pull attempt fails with:

    Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

    In CI environments, this is a particularly nasty problem. Every build job on a shared runner pulls from the same public IP. A team running 30 parallel builds on sw-infrarunbook-01 can exhaust the anonymous limit in minutes. And because the limit is per-IP, all your pipelines fail simultaneously.

    You can check your current rate limit status without burning a full pull by requesting a manifest from Docker's test repository and reading the response headers:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | python3 -c "import sys,json; print(json.load(sys.stdin)['token'])")
    
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
      | grep -i ratelimit

    The response headers tell you exactly where you stand:

    ratelimit-limit: 100;w=21600
    ratelimit-remaining: 4;w=21600

    Four pulls left and it's 10 AM on a Monday. That's a crisis waiting to happen.

    There are three practical fixes, in order of how much they actually solve the problem. First, authenticate your pulls — even a free Docker Hub login doubles your limit and ties it to the account rather than the IP:

    docker login -u infrarunbook-admin

    Second, set up a pull-through cache registry on your internal network. Harbor, Nexus Repository Manager, and the open-source Docker Registry all support this. Your build workers pull from 192.168.10.20:5000, cache misses fetch from Docker Hub transparently, and cached hits never touch the rate limit at all. Configure workers to use it via /etc/docker/daemon.json:

    {
      "registry-mirrors": ["https://registry.solvethenetwork.com"]
    }

    Third — and the cleanest long-term solution — mirror the specific images you depend on into your own registry and reference those directly. Your builds become completely immune to Docker Hub availability issues, rate limits, and surprise tag deletions. A daily sync job that pulls, retags, and pushes your dependency images takes about 30 minutes to set up and saves you hours of firefighting later.
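    A minimal version of that sync job might look like the following. This is a sketch: the mirror/ path under the internal registry and the image list are assumptions, and it defaults to a dry run that prints the commands so you can review them before running for real:

    #!/usr/bin/env bash
    # Daily mirror sync sketch. Keep the image list in source control.
    set -u

    MIRROR=registry.solvethenetwork.com/mirror
    IMAGES="nginx:1.25.3 redis:7.2 postgres:16"
    DRY_RUN=${DRY_RUN:-1}   # default: print commands instead of running them

    # Map an upstream reference to its mirrored name.
    mirror_ref() { printf '%s/%s\n' "$MIRROR" "$1"; }

    run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

    for img in $IMAGES; do
      dst=$(mirror_ref "$img")
      run docker pull "$img"
      run docker tag "$img" "$dst"
      run docker push "$dst"
    done

    Set DRY_RUN=0 in the cron environment once you've confirmed the output, and the same script does the real pull/tag/push cycle.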


    Root Cause 6: TLS Certificate Issues

    Self-signed or internally-issued certificates on private registries cause Docker to refuse connections outright:

    Error response from daemon: Get "https://registry.solvethenetwork.com/v2/": x509: certificate signed by unknown authority
    Error response from daemon: Get "https://registry.solvethenetwork.com/v2/": x509: certificate has expired or is not yet valid

    The correct fix is to install your CA certificate into Docker's per-registry trust store on the Docker host. No daemon restart required — Docker reads this directory automatically:

    mkdir -p /etc/docker/certs.d/registry.solvethenetwork.com
    cp internal-ca.crt /etc/docker/certs.d/registry.solvethenetwork.com/ca.crt

    The directory name must match the registry hostname exactly, including the port if it's non-standard (e.g., registry.solvethenetwork.com:5000). The alternative — adding the registry to insecure-registries in daemon.json — disables TLS verification entirely for that host. Don't do that in production; it makes your image supply chain trivially spoofable.

    For expired certificates, the fix is to renew the cert on the registry server, not to work around it on the client side. Check expiry with:

    echo | openssl s_client -connect registry.solvethenetwork.com:443 2>/dev/null | openssl x509 -noout -dates

    Root Cause 7: Disk Space Exhaustion

    This one is easy to overlook because Docker's error message when it runs out of local disk space isn't always obvious. The pull downloads correctly from the network, but writing layers to the storage driver fails mid-stream:

    failed to register layer: Error processing tar file(exit status 1): write /usr/bin/app: no space left on device

    Check available space on the partition where Docker stores its data, which is usually /var/lib/docker:

    df -h /var/lib/docker

    If it's above 85%, that's your problem. Reclaim space with Docker's built-in prune commands. On a CI host where containers are short-lived, this is usually safe to run aggressively:

    docker image prune -a
    docker container prune
    docker volume prune

    On production hosts, be more conservative. Run docker system df first to see exactly what's consuming space before pruning anything. Avoid docker system prune -a --volumes on any server running stateful containers — it will remove volumes not attached to a currently running container, which includes stopped containers whose data you may still need.
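    The two postures can be combined into a guard that only prunes when the partition is actually under pressure. A sketch — usage_pct is a hypothetical helper, GNU df is assumed (Linux), and the 85% threshold and one-week filter are arbitrary starting points to tune:

    # Report a filesystem's usage as a bare integer percentage.
    usage_pct() {
      df --output=pcent "$1" | tail -n 1 | tr -dc '0-9'
    }

    # Only prune when the Docker partition is above 85% full, and even
    # then keep any image used within the last week (168h).
    if [ -d /var/lib/docker ] && command -v docker >/dev/null 2>&1 \
        && [ "$(usage_pct /var/lib/docker)" -ge 85 ]; then
      docker image prune -af --filter "until=168h"
    fi

    Dropped into a daily cron, this keeps reclamation automatic without touching layers a recent build might still want.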


    Prevention

    Most pull failures are avoidable with a modest amount of upfront infrastructure work. The payoff is high — a pull error in the middle of a deployment is one of the most frustrating incidents to debug under pressure.

    Run a pull-through cache for Docker Hub. Whether you deploy Harbor at 192.168.10.20 or a lightweight nginx-based cache, this single change eliminates rate limit exposure and makes your builds resilient to Docker Hub outages. Configure all Docker hosts to use it via registry-mirrors in daemon.json and you're done — existing image references don't need to change.

    Mirror your critical images. Any image a production workload depends on should also live in your own registry. If Docker Hub removes an image, changes a tag, or goes down, you're completely unaffected. A scheduled job that syncs a curated list of upstream images daily takes less than an hour to set up with a basic shell script and cron.

    Use digest pinning for production deployments. Tags are mutable — they can be silently reassigned to a different image at any time. Pin production manifests to the digest SHA instead of a tag name. This applies to your Dockerfiles, Kubernetes manifests, and Compose files alike.

    Automate credential refresh for short-lived tokens. If you use ECR, token refresh should be part of your runner initialization sequence, not something done once at setup time. Add the aws ecr get-login-password call as the first step in every pipeline that pulls from ECR. For static credentials on private registries, rotate them on a regular schedule and update the corresponding secrets in your CI platform at the same time.

    Monitor disk utilization on Docker hosts. Set alerts at 70% and 85% capacity on the /var/lib/docker partition. The 70% threshold gives you time to clean up calmly; by 85% you're one large image pull away from a failed deployment. Incorporate docker image prune -a into your regular maintenance cron on sw-infrarunbook-01 to keep dangling image accumulation under control.

    Add a registry connectivity health check to your pipelines. A single docker pull registry.solvethenetwork.com/healthcheck:latest step at the start of every pipeline catches authentication problems, network issues, and rate limit exhaustion before they fail your real workload. When it fails, the error is isolated and immediately obvious — not buried in a multi-step build log after three minutes of wasted work.

    Centralize Docker daemon configuration management. Use configuration management (Ansible, Puppet, whatever you're running) to keep /etc/docker/daemon.json consistent across all your Docker hosts. DNS settings, registry mirrors, log driver configuration, and insecure registry entries should be defined in source control and applied consistently — not manually edited per-host and forgotten about.

    Frequently Asked Questions

    Why does Docker say 'pull access denied' even when the image exists?

    Docker uses the same error message for both authentication failures and missing images. If you're not logged in to the registry, or your session has expired, Docker can't confirm whether the image exists — so it returns this generic denial. Run 'docker login' for the relevant registry first, then retry the pull.

    How do I check how many Docker Hub pulls I have left before hitting the rate limit?

    Request a token from auth.docker.io scoped to the ratelimitpreview/test repository, then send a HEAD request to that manifest endpoint. The response headers 'ratelimit-limit' and 'ratelimit-remaining' tell you exactly how many pulls remain in the current 6-hour window without consuming a real pull.

    What's the difference between 'image not found' and 'manifest unknown'?

    'Image not found' or 'pull access denied' typically means the repository path doesn't exist in the registry or you lack permission to see it. 'Manifest unknown' means the registry found the repository but the specific tag or digest you requested doesn't exist — the image was likely deleted, never pushed, or you have a typo in the tag name.

    Why do Docker pull timeouts happen in CI but not on developer machines?

    CI runners often run in restricted network environments — behind stricter firewalls, using different DNS resolvers, or requiring a proxy that developer laptops bypass. The Docker daemon on a CI runner also doesn't inherit proxy environment variables from the shell; you have to explicitly set HTTP_PROXY and HTTPS_PROXY in the Docker systemd service configuration.

    Is it safe to use 'insecure-registries' in daemon.json to fix TLS errors?

    Only on isolated lab or development networks where you fully control all traffic. Marking a registry as insecure disables TLS certificate verification entirely for that host, making it trivial for an attacker on the same network to serve a malicious image. The correct fix is to install your CA certificate into /etc/docker/certs.d/<registry-hostname>/ on the Docker host.
