
    Linux Package Install Failures

    Published: Apr 20, 2026
    Updated: Apr 20, 2026

    A practical troubleshooting guide covering the most common Linux package install failures — repository connectivity, GPG key errors, dependency conflicts, full disks, and lock file contention — with real commands and outputs for APT and DNF systems.


    Symptoms

    You run apt install or dnf install and instead of a clean progress bar, you get a screen full of red text, warnings, and a final E: Sub-process /usr/bin/dpkg returned an error code (1) that tells you almost nothing useful. Package install failures are among the most common issues on Linux servers, and they have a way of showing up at the worst possible moment — mid-deployment, during an incident, or right before a maintenance window closes.

    The symptoms vary depending on what's actually wrong. You might see connection timeouts when the package manager tries to reach a repository. You might get a cryptic warning about signatures that couldn't be verified. In other cases the resolver just refuses to proceed because two packages are fighting over a shared library. Or the install starts successfully and then abruptly dies with a write error because the disk filled up mid-extraction. And then there's the classic lock file error, where apt insists something else is already running even when your terminal shows no other sessions open.

    Regardless of the surface error, these failures all have identifiable root causes and straightforward fixes. Let's work through each one.


    Repository Unreachable

    Why It Happens

    Package managers pull metadata and packages from remote HTTP or HTTPS repositories. If those repositories are unreachable — because of a firewall rule blocking outbound traffic, a DNS resolution failure, a downed upstream mirror, or a misconfigured proxy — the install fails before it even touches a package file. In my experience, this is the most common failure on freshly provisioned servers where network configuration hasn't been properly validated. It also shows up on hardened environments where all outbound traffic is routed through a proxy that nobody configured the package manager to use.

    How to Identify It

    On a Debian or Ubuntu system, running apt update will show connection failures like this:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt update
    Err:1 http://archive.ubuntu.com/ubuntu jammy InRelease
      Could not connect to archive.ubuntu.com:80 (185.125.190.36). - connect (110: Connection timed out)
    W: Some index files failed to download. They have been ignored, or old ones used instead.

    On RHEL, Rocky, or AlmaLinux with DNF the error looks different but means the same thing:

    [infrarunbook-admin@sw-infrarunbook-01 ~]$ sudo dnf install curl
    Error: Failed to download metadata for repo 'baseos': Cannot prepare internal mirrorlist: No URLs in mirrorlist

    Start by testing basic connectivity directly from the server:

    curl -v --max-time 10 http://archive.ubuntu.com/ubuntu/dists/jammy/Release
    ping -c 3 8.8.8.8
    dig archive.ubuntu.com
    ip route show

    If the ping reaches 8.8.8.8 but DNS fails, you have a resolver problem. If even the ping times out, check your default route and firewall rules. Also check whether a proxy is required but not configured:

    env | grep -i proxy
    cat /etc/apt/apt.conf.d/proxy.conf 2>/dev/null
    grep proxy /etc/dnf/dnf.conf

    How to Fix It

    If DNS is broken, fix /etc/resolv.conf or your network manager configuration. If the upstream mirror is down, switch to a working one. On Ubuntu, edit /etc/apt/sources.list to point at a different mirror:

    sudo sed -i 's|http://archive.ubuntu.com|http://mirror.solvethenetwork.com|g' /etc/apt/sources.list
    sudo apt update

    For environments where outbound traffic goes through a proxy, configure APT to use it:

    echo 'Acquire::http::Proxy "http://10.0.1.5:3128";' | sudo tee /etc/apt/apt.conf.d/01proxy
    echo 'Acquire::https::Proxy "http://10.0.1.5:3128";' | sudo tee -a /etc/apt/apt.conf.d/01proxy

    For DNF, add the proxy setting directly to /etc/dnf/dnf.conf:

    proxy=http://10.0.1.5:3128

    On completely air-gapped systems, your only option is a local mirror. Tools like apt-mirror or reposync can create one, and apt-cacher-ng works well as a caching proxy for mixed environments.
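    If you do stand up apt-cacher-ng, pointing a client at it is a single line of APT configuration. The cache host below is an example; 3142 is apt-cacher-ng's default listen port:

```shell
# Point this client at an apt-cacher-ng instance (example host; 3142 is
# the daemon's default port).
echo 'Acquire::http::Proxy "http://10.0.1.5:3142";' \
  | sudo tee /etc/apt/apt.conf.d/02cacher
sudo apt update
```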


    GPG Key Mismatch

    Why It Happens

    Every legitimate repository signs its metadata and packages with a GPG private key. Your package manager verifies those signatures against a trusted public key before it will use anything from that repository. If the public key is missing, expired, or has been rotated by the repository operator without much announcement, you'll hit a hard verification failure. I've seen this most often when someone adds a third-party repo by copying a single sources.list line from the internet without following the full setup instructions that include importing the key. It also happens after a distribution upgrade where old key files don't carry over cleanly.

    How to Identify It

    APT reports this during apt update with a clear NO_PUBKEY message:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt update
    W: An error occurred during the signature verification. The repository is not updated.
    GPG error: https://packages.solvethenetwork.com/repo stable InRelease: The following
    signatures couldn't be verified because the public key is not available:
    NO_PUBKEY B53DC80D13EDEF05
    W: Failed to fetch https://packages.solvethenetwork.com/repo/dists/stable/InRelease
    W: Some index files failed to download. They have been ignored, or old ones used instead.

    That key ID at the end — B53DC80D13EDEF05 — is your target. On DNF-based systems the failure appears at install time:

    [infrarunbook-admin@sw-infrarunbook-01 ~]$ sudo dnf install htop
    warning: /var/cache/dnf/epel-9/packages/htop-3.2.2-1.el9.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3228467c: NOKEY
    Error: GPG check FAILED

    To see which keys are currently trusted on your system:

    # Modern Debian/Ubuntu approach
    ls /etc/apt/trusted.gpg.d/
    gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --list-keys
    
    # RHEL/Rocky
    rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
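    If you're scripting recovery across a fleet, a small helper can pull the missing key IDs straight out of APT's output. This is a sketch; the pattern simply matches the NO_PUBKEY token shown in the error above:

```shell
# Print the key IDs of all NO_PUBKEY errors, one per line.
# Usage: sudo apt update 2>&1 | extract_missing_keys
extract_missing_keys() {
  grep -oE 'NO_PUBKEY [0-9A-Fa-f]+' | awk '{print $2}' | sort -u
}
```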

    How to Fix It

    The modern approach on Debian/Ubuntu systems is to download the key directly from the repository operator and place it in /etc/apt/trusted.gpg.d/ as a dearmored binary GPG file. Avoid the old apt-key add method — it's deprecated and will generate warnings on newer systems.

    curl -fsSL https://packages.solvethenetwork.com/gpg.key \
      | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/solvethenetwork.gpg
    sudo apt update

    If you need to import by key ID from a keyserver:

    sudo gpg --keyserver keyserver.ubuntu.com --recv-keys B53DC80D13EDEF05
    sudo gpg --export B53DC80D13EDEF05 | sudo tee /etc/apt/trusted.gpg.d/solvethenetwork.gpg > /dev/null
    sudo apt update

    For DNF, import the vendor's public key directly:

    sudo rpm --import https://packages.solvethenetwork.com/RPM-GPG-KEY-solvethenetwork
    sudo dnf install htop

    Never set gpgcheck=0 in a repo file as a permanent fix. It silences the error by disabling the security check entirely, which means you're installing unverified packages. Use it only for isolated testing, and revert it immediately after.
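    For that isolated-testing case, dnf has a per-transaction escape hatch that avoids editing the repo file at all:

```shell
# One-off bypass for a single transaction; the repo file keeps gpgcheck=1.
sudo dnf install --nogpgcheck htop
```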


    Dependency Conflict

    Why It Happens

    Package managers do dependency resolution automatically, but the resolver can only work with what's available in your enabled repositories. When two packages require different, incompatible versions of a shared library, or when the package you want conflicts with something already installed, the resolver hits a dead end and refuses to proceed. This is especially common when you mix packages from different repositories — the distro's official repo plus a vendor's custom repo — that were built against different versions of the same runtime. I've also seen it frequently after partial upgrades, where a system update was interrupted and the package database is in an inconsistent state with some packages upgraded and their dependents still at the old version.

    How to Identify It

    APT is fairly explicit about what's conflicting:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt install libssl1.1
    Reading package lists... Done
    Building dependency tree... Done
    The following packages have unmet dependencies:
     libssl1.1 : Conflicts: libssl3 but 3.0.2-0ubuntu1.10 is to be installed
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    Or the more frustrating variant where apt just tells you the package state is broken without saying why:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt install python3-requests
    E: Unable to correct problems, you have held broken packages.

    DNF gives you more detail upfront:

    [infrarunbook-admin@sw-infrarunbook-01 ~]$ sudo dnf install package-foo
    Error:
     Problem: package-foo-2.1-1.el9.x86_64 requires libbar.so.2()(64bit), but none of the
              providers can be installed
      - conflicting requests
      - nothing provides libbar.so.2()(64bit) needed by package-foo-2.1-1.el9.x86_64

    Use these commands to investigate further before trying any fix:

    # Check overall package state
    sudo apt-get check
    sudo dpkg --audit
    
    # Show version candidates and pinning
    apt-cache policy libssl1.1
    apt-cache showpkg libssl1.1
    
    # DNF: show dependency tree
    sudo dnf deplist package-foo
    sudo dnf repoquery --requires package-foo

    How to Fix It

    If the system is in a broken state from a previous interrupted install, the first thing to try is letting the package manager finish what it started:

    sudo dpkg --configure -a
    sudo apt --fix-broken install

    If the conflict is between the package you want and something already installed, figure out whether you actually need the installed package. If you can remove it safely, do so:

    sudo apt remove conflicting-package
    sudo apt install target-package

    When repo-mixing causes version conflicts, APT pinning gives you fine-grained control over which repository wins for a given package. Create a preferences file under /etc/apt/preferences.d/:

    # /etc/apt/preferences.d/solvethenetwork-pin
    Package: libssl*
    Pin: release o=solvethenetwork
    Pin-Priority: 100

    A priority of 100 means "use this version only if nothing better is available" — it's the same priority APT assigns to already-installed packages. 500 is the default for versions offered by enabled repositories, and anything above 1000 forces a downgrade if necessary. Get the pinning wrong and you'll make conflicts worse, so test with apt-cache policy after adding any preferences file.

    On DNF systems, version locking prevents a package from being upgraded into a conflicting state:

    sudo dnf install python3-dnf-plugin-versionlock
    sudo dnf versionlock add conflicting-package-1.2.3-4.el9.x86_64
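    Locks are easy to forget about, so make them discoverable. The same versionlock plugin can report and remove them once the conflict is resolved:

```shell
# Review existing locks, then drop one that's no longer needed.
sudo dnf versionlock list
sudo dnf versionlock delete conflicting-package
```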

    Disk Full

    Why It Happens

    Package managers need disk space at every stage of an install: downloading the package archive to the local cache, extracting it, and writing the installed files to their destinations. If the filesystem hosting /var, /tmp, or / runs out of space at any point, the install dies mid-operation and often leaves things in a partially installed state. On servers with separate partitions — which is the right way to set things up — it's almost always /var that fills up first, since that's where package caches, log files, journal data, container layers, and application data all compete for the same blocks.

    How to Identify It

    The error message is usually unmistakable:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt install nginx
    Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 nginx amd64 1.18.0-6ubuntu14 [3,592 B]
    Fetched 3,592 B in 0s (18.7 kB/s)
    E: Write error - write (28: No space left on device)
    E: IO Error saving source cache, aborting

    Sometimes it fails at a later stage during extraction:

    dpkg: error: error creating new backup file '/var/lib/dpkg/status-old': No space left on device

    Verify with df and then find what's consuming the space:

    infrarunbook-admin@sw-infrarunbook-01:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda2        20G   20G     0 100% /var
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    
    du -sh /var/* 2>/dev/null | sort -rh | head -20
    du -sh /var/cache/apt/archives/
    du -sh /var/log/
    sudo journalctl --disk-usage

    How to Fix It

    The fastest recovery is cleaning the package cache, which frequently holds gigabytes of downloaded archives that are no longer needed:

    # APT: aggressive cache cleanup
    sudo apt clean
    sudo apt autoclean
    sudo apt autoremove --purge
    
    # DNF: purge everything
    sudo dnf clean all

    apt clean wipes every downloaded package from /var/cache/apt/archives/. On a server that's been running for months without a cache flush, this alone can recover 5-10 GB. After freeing space, re-run your install. Then figure out what actually caused the filesystem to fill up so it doesn't happen again. Common culprits beyond the package cache:

    # Old kernel images accumulating on Ubuntu/Debian
    dpkg --list | grep linux-image
    sudo apt autoremove --purge
    
    # Journal eating space
    sudo journalctl --disk-usage
    sudo journalctl --vacuum-size=500M
    
    # Large individual log files
    find /var/log -name '*.log' -size +100M -ls

    If the partition is genuinely undersized for your workload, extend it. On LVM — which you should be using on any managed server — this is non-disruptive:

    sudo lvextend -L +10G /dev/mapper/ubuntu--vg-var--lv
    sudo resize2fs /dev/mapper/ubuntu--vg-var--lv
    df -h /var
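    The resize2fs step above assumes an ext4 filesystem. RHEL-family systems default to XFS, which grows by mount point instead of device. The volume path below is an example; check yours with lsblk:

```shell
# XFS variant: grow the LV, then grow the filesystem by mount point.
sudo lvextend -L +10G /dev/mapper/rl-var   # example VG/LV name
sudo xfs_growfs /var
df -h /var
```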

    Lock File Held by Another Process

    Why It Happens

    Linux package managers use lock files to serialize access to the package database. Only one process should be modifying installed packages at a time — concurrent modifications would corrupt the dpkg or rpm database. APT holds /var/lib/dpkg/lock-frontend, /var/lib/dpkg/lock, and /var/lib/apt/lists/lock while it's running. DNF uses /var/run/dnf.pid. The lock gets held whenever any apt or dnf process is active — which includes background processes like unattended-upgrades, packagekit, or apt-daily.service that run on a schedule and are invisible unless you look for them. A stale lock left behind by a crashed process is the other scenario, though it's less common.

    How to Identify It

    APT gives you the PID of the holding process right in the error:

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt install vim
    E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 3842 (unattended-upgr)
    N: Be aware that removing the lock file is not a solution and may break your system.
    E: Unable to lock the directory /var/lib/apt/lists/

    Use that PID to find out exactly what's running:

    infrarunbook-admin@sw-infrarunbook-01:~$ ps aux | grep 3842
    root  3842  0.0  1.2 123456 50432 ?  Ss  09:14  0:03 /usr/bin/python3 /usr/bin/unattended-upgrade

    That's unattended-upgrades doing its job. Entirely legitimate — you should wait, not fight it. To distinguish a live process from a stale lock:

    # Check which process actually holds the file descriptor
    sudo lsof /var/lib/dpkg/lock-frontend
    sudo lsof /var/lib/apt/lists/lock
    
    # For DNF
    cat /var/run/dnf.pid
    ps -p $(cat /var/run/dnf.pid 2>/dev/null) 2>/dev/null || echo "Process not found — stale lock"

    If lsof returns nothing but the lock file exists, the lock is stale and safe to remove.

    How to Fix It

    If the process is legitimate, wait for it to finish. Killing unattended-upgrades mid-run can leave packages half-configured. Monitor it:

    sudo tail -f /var/log/unattended-upgrades/unattended-upgrades.log
    # Or just watch for the process to disappear
    watch -n 5 'ps aux | grep -E "apt|dpkg|unattended"'

    If it's a stale lock — confirmed by lsof returning nothing — remove it and reconfigure any partially installed packages:

    sudo lsof /var/lib/dpkg/lock-frontend  # must return nothing
    sudo rm /var/lib/dpkg/lock-frontend
    sudo rm /var/lib/dpkg/lock
    sudo rm /var/lib/apt/lists/lock
    sudo dpkg --configure -a
    sudo apt update

    For DNF on RHEL-based systems:

    DNFPID=$(cat /var/run/dnf.pid 2>/dev/null)
    if [ -n "$DNFPID" ] && ! ps -p "$DNFPID" > /dev/null 2>&1; then
      echo "Stale DNF lock, PID $DNFPID is not running"
      sudo rm -f /var/run/dnf.pid
    fi
    sudo dnf install vim

    Never blindly delete lock files without first confirming no process holds them. Killing an active package manager mid-operation is one of the surest ways to corrupt your package database and turn a minor inconvenience into a recovery exercise.
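    If you hit this often enough to script around it, a polling wrapper is safer than any lock deletion. A sketch; it relies on fuser exiting non-zero once nothing holds the files:

```shell
# Wait (up to a timeout, in seconds) for the APT/dpkg locks to be freed.
wait_for_apt_locks() {
  local timeout="${1:-300}" waited=0
  while sudo fuser /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock >/dev/null 2>&1; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "still locked after ${timeout}s" >&2
      return 1
    fi
    sleep 5
    waited=$((waited + 5))
  done
  echo "apt/dpkg locks are free"
}

wait_for_apt_locks 300
```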


    Corrupted Package Cache

    Why It Happens

    The local package cache — downloaded .deb or .rpm files plus repository metadata — can become corrupted during interrupted downloads, unexpected system shutdowns, or filesystem errors. APT stores metadata in /var/lib/apt/lists/ and downloaded packages in /var/cache/apt/archives/. DNF uses /var/cache/dnf/. When the package manager tries to use a truncated or partially written metadata file, it fails with a parse error rather than a network error, which can be confusing if you're not expecting it.

    How to Identify It

    infrarunbook-admin@sw-infrarunbook-01:~$ sudo apt update
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_jammy_main_binary-amd64_Packages
    E: The package lists or status file could not be parsed or opened.

    The path in that error points directly at the corrupted file. On DNF:

    [infrarunbook-admin@sw-infrarunbook-01 ~]$ sudo dnf install vim
    Error: Failed to download metadata for repo 'baseos': repomd.xml GPG signature verification error: Bad GPG signature

    How to Fix It

    Wipe the cache and rebuild it from scratch:

    # APT
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt clean
    sudo apt update
    
    # DNF
    sudo dnf clean all
    sudo rm -rf /var/cache/dnf/*
    sudo dnf makecache

    If cache corruption keeps recurring, check for underlying filesystem or storage errors. Corrupted writes are often a symptom, not the root cause:

    sudo dmesg | grep -iE 'error|ata|blk|i/o' | tail -30
    grep -i 'I/O error' /var/log/syslog
    sudo smartctl -H /dev/sda

    Prevention

    Most of these failures are avoidable with deliberate habits at provisioning time rather than reactive troubleshooting during an incident. The repository configuration is the most important thing to get right upfront. Every repo you add should be documented and ideally managed through configuration management — Ansible, Puppet, or even a simple shell provisioning script in version control. If you can't explain why a repo is present on a server, it probably shouldn't be. Mystery repos are the source of GPG failures two years after the server was set up by someone who's since left the team.
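    As a concrete version of that habit, the repo-add step can live in a small idempotent script with the justification written next to it. Everything below (names, URLs, the ticket reference) is a placeholder:

```shell
# Idempotent, documented repo-add step. Override the *_DIR variables to
# dry-run into a scratch directory.
# Usage: add_repo solvethenetwork https://packages.solvethenetwork.com/repo \
#          https://packages.solvethenetwork.com/gpg.key
add_repo() {
  name="$1" url="$2" key_url="$3"
  keyring="${APT_KEYRING_DIR:-/etc/apt/trusted.gpg.d}/${name}.gpg"
  list="${APT_SOURCES_DIR:-/etc/apt/sources.list.d}/${name}.list"
  # Why this repo exists: vendor packages for the foo service (OPS-123).
  if [ ! -f "$keyring" ]; then
    curl -fsSL "$key_url" | gpg --dearmor -o "$keyring"
  fi
  printf 'deb %s stable main\n' "$url" > "$list"
}
```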

    Disk space on /var needs active monitoring, not just a one-time sizing decision at provisioning. Set alerts at 80% usage, not 95%. Configure logrotate properly for any application writing to /var/log. Cap the systemd journal so it doesn't grow without bound:

    # /etc/systemd/journald.conf (restart systemd-journald after editing)
    SystemMaxUse=2G
    SystemKeepFree=500M
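    The 80% alert threshold can also double as a pre-flight gate in install scripts. A sketch using only df and awk, so it needs no root:

```shell
# Fail early if the filesystem holding a path is above a usage threshold.
# e.g. gate an install: check_free_space /var 80 && sudo apt install nginx
check_free_space() {
  path="${1:-/var}" limit="${2:-80}"
  used=$(df -P "$path" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
  if [ "$used" -ge "$limit" ]; then
    echo "WARN: $path is ${used}% full (limit ${limit}%)" >&2
    return 1
  fi
  echo "OK: $path at ${used}% usage"
}
```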

    On servers running unattended-upgrades, configure it to clean up its own cache after each run. Check /etc/apt/apt.conf.d/20auto-upgrades:

    APT::Periodic::AutocleanInterval "7";
    APT::Periodic::Download-Upgradeable-Packages "1";
    APT::Periodic::Unattended-Upgrade "1";

    In automated pipelines and CI/CD systems, always run the update step immediately before the install step in the same script. Stale metadata in a cached Docker layer is a constant source of "package not found" failures that work fine locally but break in CI:

    sudo apt-get update && sudo apt-get install -y --no-install-recommends \
      curl \
      vim \
      htop

    Finally, build the habit of auditing package state periodically on long-running servers — especially after any interrupted upgrade or maintenance window:

    # Debian/Ubuntu: check for broken packages and unconfigured installs
    sudo apt-get check
    sudo dpkg --audit
    
    # RHEL/Rocky
    sudo dnf check

    Catching a partially broken package state before it becomes a problem is far easier than debugging it in the middle of your next deployment. These are fast commands — run them as part of a periodic health check or hook them into your monitoring pipeline and save yourself the 2am incident.

    Frequently Asked Questions

    How do I fix 'E: Could not get lock /var/lib/dpkg/lock-frontend' on Ubuntu?

    First identify what process holds the lock with 'sudo lsof /var/lib/dpkg/lock-frontend'. If it's a legitimate process like unattended-upgrades, wait for it to finish. If the PID is stale (the process no longer exists), remove the lock files with 'sudo rm /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock /var/lib/apt/lists/lock' then run 'sudo dpkg --configure -a && sudo apt update'.

    What does 'NO_PUBKEY' mean in apt update output?

    It means APT found a repository whose metadata is signed with a GPG key that isn't in your trusted keyring. The key ID is printed in the error. Import it with 'curl -fsSL <repo-gpg-url> | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/repo.gpg' then re-run 'sudo apt update'.

    How do I fix broken package dependencies in apt?

    Run 'sudo dpkg --configure -a' to finish any interrupted installs, then 'sudo apt --fix-broken install' to resolve unmet dependencies. Use 'apt-cache policy <package>' to inspect version candidates and pinning state if you're mixing repositories.

    Why does apt fail with 'No space left on device' and how do I recover?

    The filesystem holding /var is full. Run 'df -h' to confirm, then free space immediately with 'sudo apt clean && sudo apt autoremove --purge'. After the install succeeds, investigate what consumed the space using 'du -sh /var/* | sort -rh | head -20' and address the root cause — often old logs or accumulated package cache.

    How do I fix 'GPG check FAILED' when installing an RPM package with dnf?

    Import the repository's GPG public key with 'sudo rpm --import <URL-to-GPG-key>'. The key URL is usually listed in the repository's setup documentation. After importing, retry the install. Never permanently set gpgcheck=0 in a repo file — it disables signature verification entirely and is a security risk.
