InfraRunBook

    Jenkins Build Agent Not Connecting

    CI/CD
    Published: Apr 19, 2026
    Updated: Apr 19, 2026

    Step-by-step runbook for diagnosing and fixing Jenkins build agents that refuse to connect, covering every common cause from missing JARs to firewall blocks and Java version mismatches.


    Symptoms

    If you landed here, your Jenkins agent is probably sitting in the Nodes list as "offline," your build queue is growing, and the launch log is showing something unhelpful. Maybe the agent process on the remote machine exits immediately. Maybe it connects for three seconds and drops. Maybe the Jenkins UI just shows a spinning indicator that never resolves.

    Here are the most common things you'll see before tracking down the root cause:

    • The agent appears as "offline" or stuck in "pending" under Manage Jenkins → Nodes and Clouds
    • Builds accumulate in the queue with the message "Waiting for next available executor"
    • The agent launch log shows java.net.ConnectException, an HTTP 401, or an IOException
    • On the agent machine, running the launch command returns a non-zero exit code with no useful output
    • Jenkins shows the agent as briefly connected, then it drops within seconds

    There's no single cause for this. I've tracked down agent connectivity failures that turned out to be a firewall rule somebody added last Tuesday, a wrong URL hardcoded in a startup script two years ago, a Java 8 runtime left behind after a Jenkins upgrade, and once, a DNS entry that hadn't propagated after a datacenter migration. Work through each cause below methodically — you'll find it.


    Root Cause 1: Agent JAR Not Downloaded or Corrupted

    Why It Happens

    Jenkins inbound agents — the kind that connect back to the controller rather than being launched by SSH — work by running a file called agent.jar. Jenkins serves this JAR from its own web interface at /jnlpJars/agent.jar. When you provision a new agent or automate the launch process, the typical pattern is to curl or wget that JAR from the controller before running it. If that download fails — due to a transient network error, an untrusted TLS certificate, a redirect that wasn't followed, or a partial write — the JAR on disk is either missing entirely or silently corrupted. The agent process then exits immediately, and you're left wondering what went wrong.
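    When the download step is automated, a slightly more defensive pattern avoids most of these failure modes. The sketch below fails on HTTP errors, follows redirects, retries transient failures, and downloads to a temp file so a partial download never overwrites a known-good JAR; the function name is hypothetical and the URL/path follow this runbook's examples:

```shell
# Hedged sketch: fail on HTTP errors (-f), follow redirects (-L),
# retry transient failures, and write atomically via a temp file so a
# truncated download never replaces a good JAR.
download_agent_jar() {
  local url="$1" dest="$2" tmp
  tmp=$(mktemp "${dest}.XXXXXX") || return 1
  if curl -fsSL --retry 3 --retry-delay 2 "$url" -o "$tmp"; then
    mv "$tmp" "$dest"
  else
    rm -f "$tmp"
    echo "download failed: $url" >&2
    return 1
  fi
}

# Example (runbook's controller address):
# download_agent_jar http://10.10.1.50:8080/jnlpJars/agent.jar /opt/jenkins-agent/agent.jar
```

    The atomic mv is the important part: if the controller hiccups mid-transfer, the previous working JAR stays in place instead of being clobbered by a half-written one.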

    How to Identify It

    On the agent machine, first confirm whether the JAR exists and has a plausible file size:

    ls -lh /opt/jenkins-agent/agent.jar

    The real JAR is typically 1–4 MB depending on your Jenkins version. A zero-byte file or a file under 100 KB is corrupted. Try running it directly and watch the output:

    java -jar /opt/jenkins-agent/agent.jar
    Error: Unable to access jarfile /opt/jenkins-agent/agent.jar

    A corrupted JAR might produce this instead:

    java -jar /opt/jenkins-agent/agent.jar
    Exception in thread "main" java.lang.reflect.InvocationTargetException
      at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    Caused by: java.util.zip.ZipException: zip file is empty

    How to Fix It

    Re-download the JAR directly from your Jenkins controller. Use the controller's actual address:

    curl -fsSL http://10.10.1.50:8080/jnlpJars/agent.jar -o /opt/jenkins-agent/agent.jar

    If Jenkins is running HTTPS with a self-signed certificate, you'll need to either trust the cert in your Java truststore or temporarily use --insecure for the download step:

    curl -fsSL --insecure https://jenkins.solvethenetwork.com/jnlpJars/agent.jar \
      -o /opt/jenkins-agent/agent.jar

    After downloading, confirm the file is a valid JAR:

    file /opt/jenkins-agent/agent.jar
    /opt/jenkins-agent/agent.jar: Java archive data (JAR)
    
    ls -lh /opt/jenkins-agent/agent.jar
    -rw-r--r-- 1 infrarunbook-admin infrarunbook-admin 3.2M Apr 19 10:14 /opt/jenkins-agent/agent.jar
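    Since this size-and-type check recurs every time an agent is provisioned, it's worth scripting. A sketch, where the 100 KB floor and the function name are assumptions (the PK magic bytes are the standard ZIP/JAR signature):

```shell
# Hedged sanity check for a downloaded agent.jar.
verify_agent_jar() {
  local jar="$1" size
  [ -f "$jar" ] || { echo "missing: $jar"; return 1; }
  size=$(wc -c < "$jar")
  # Anything under ~100 KB is a truncated download or an HTML error page.
  [ "$size" -ge 102400 ] || { echo "too small ($size bytes): $jar"; return 1; }
  # Valid JARs are ZIP archives and begin with the magic bytes "PK".
  [ "$(head -c 2 "$jar")" = "PK" ] || { echo "not a zip archive: $jar"; return 1; }
  echo "ok: $jar ($size bytes)"
}

# Example: verify_agent_jar /opt/jenkins-agent/agent.jar
```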

    If the download itself is failing, the problem is likely a firewall or a wrong URL — keep reading.


    Root Cause 2: Firewall Blocking the JNLP Port

    Why It Happens

    Jenkins uses a dedicated TCP port for inbound agent connections — port 50000 by default. This is entirely separate from the web UI port (typically 8080 or 443). Most network environments allow standard web ports outbound without a second thought, but port 50000 is unusual enough that it gets silently dropped by default firewall policies, AWS security groups, or Azure NSG rules. In my experience, this is the most common cause of agent connection failures on fresh cloud deployments. Someone opens port 8080 for the Jenkins UI and assumes that's enough.

    There's an additional wrinkle: if Jenkins is configured to use a random JNLP port (the default after installation), the port changes every time Jenkins restarts. That makes firewall rules nearly impossible to maintain. Check your setting under Manage Jenkins → Security → TCP port for inbound agents — if it says "Random," change it to a fixed port now.

    How to Identify It

    From the agent machine, test direct TCP connectivity to the JNLP port on the controller:

    nc -zv 10.10.1.50 50000
    nc: connect to 10.10.1.50 port 50000 (tcp) failed: Connection timed out

    "Connection timed out" means a firewall is silently dropping the packet. "Connection refused" means the port is reachable but nothing is listening — a different problem. On the controller itself, verify Jenkins is actually listening on that port:

    ss -tlnp | grep 50000
    LISTEN 0  50  0.0.0.0:50000  0.0.0.0:*  users:(("java",pid=12348,fd=89))

    If that command returns nothing, Jenkins isn't listening on any agent port — check the System Configuration page and make sure inbound agents are enabled with a fixed port.
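    You can also ask the controller which inbound-agent port it's advertising without shell access to the controller: the /tcpSlaveAgentListener/ endpoint on the web port reports it in an X-Jenkins-JNLP-Port response header. A sketch, with the runbook's example controller address and a hypothetical parsing helper:

```shell
# Extract the advertised JNLP port from the controller's
# /tcpSlaveAgentListener/ response headers.
parse_jnlp_port() {
  # Reads HTTP response headers on stdin, prints the advertised port.
  awk -F': ' 'tolower($1) == "x-jenkins-jnlp-port" { gsub(/\r/, "", $2); print $2 }'
}

# Example:
# curl -fsSI http://10.10.1.50:8080/tcpSlaveAgentListener/ | parse_jnlp_port
```

    If that prints a different number than the one in your firewall rule, you've found the mismatch.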

    How to Fix It

    On the controller host, open port 50000 in the local firewall:

    # firewalld
    firewall-cmd --permanent --add-port=50000/tcp
    firewall-cmd --reload
    
    # iptables
    iptables -A INPUT -p tcp --dport 50000 -j ACCEPT
    iptables-save > /etc/iptables/rules.v4

    If you're in a cloud environment, update the security group or NSG inbound rules to allow TCP 50000 from the agent subnet (e.g., 10.10.1.0/24). If you want to sidestep this problem entirely, consider switching to WebSocket-based agents — available since Jenkins 2.217. Agents connect over port 443 or 80 using WebSocket, which passes through nearly every corporate firewall without special rules. Enable it under Manage Jenkins → Security → Agent protocols by ticking the WebSocket option.


    Root Cause 3: Wrong Jenkins URL Configured

    Why It Happens

    Jenkins needs to know its own public URL so it can tell agents where to connect. This URL lives under Manage Jenkins → System → Jenkins URL. The problem is that the installer defaults this to http://localhost:8080. When an agent on a remote machine gets the connection instructions, it tries to connect to localhost on its own machine — which is obviously not where Jenkins lives. The connection fails immediately, and the error message isn't always clear about why.

    This also surfaces after infrastructure changes: Jenkins gets moved to a new IP or hostname, the URL in System Configuration doesn't get updated, and agents that were working fine the day before suddenly can't connect. I've also seen it happen when someone copies a startup script from one environment to another and the hardcoded URL still points to a staging controller.

    How to Identify It

    Start by checking what URL Jenkins thinks it has. Go to Manage Jenkins → System → Jenkins URL and look at the value. Then, from the agent machine, try to reach that URL:

    curl -v http://10.10.1.50:8080/
    *   Trying 10.10.1.50:8080...
    * Connected to 10.10.1.50 (10.10.1.50) port 8080
    > GET / HTTP/1.1
    < HTTP/1.1 200 OK

    If you see a connection refused or a name resolution failure, the URL is wrong. Also inspect the agent launch command that Jenkins generates — go to Manage Jenkins → Nodes → [your agent] → Status page and look at the connection instructions. Specifically check the -url parameter:

    java -jar agent.jar -url http://localhost:8080/ -secret abc123... -name build-agent -workDir "/opt/jenkins-agent"

    Seeing localhost or an old hostname in that command is a clear confirmation.
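    A scriptable version of that eyeball check: pull the -url value out of a launch command and flag loopback addresses. Both helper names are hypothetical:

```shell
launch_url() {
  # Reads a launch command on stdin, prints the value after -url.
  grep -o -- '-url [^ ]*' | head -n1 | cut -d' ' -f2
}

check_launch_url() {
  # Fails loudly if the agent would be connecting to itself.
  local url
  url=$(launch_url)
  case "$url" in
    *localhost*|*127.0.0.1*)
      echo "BAD: agent would connect to itself ($url)"; return 1 ;;
    *)
      echo "ok: $url" ;;
  esac
}

# Example:
# echo 'java -jar agent.jar -url http://localhost:8080/ -name a' | check_launch_url
```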

    How to Fix It

    Update the Jenkins URL in Manage Jenkins → System → Jenkins URL to the address agent machines can actually reach:

    http://10.10.1.50:8080/
    # or with a resolvable hostname:
    https://jenkins.solvethenetwork.com/

    Save the configuration. The agent launch command will immediately reflect the corrected URL. If you're running the agent via a static script or a systemd unit file on the agent machine, update the -url argument there too, then restart the service. After updating, always verify curl connectivity from the agent before relaunching the agent process.


    Root Cause 4: Credential (Secret Token) Mismatch

    Why It Happens

    Every Jenkins agent node has a unique HMAC-based secret token that Jenkins generates when the node is created. The agent must present this token when connecting — Jenkins uses it to verify the connection is legitimate and from the right machine. If the token on the agent side doesn't match what Jenkins has on record, the connection is rejected with a 401.

    There are several ways this gets out of sync. You delete an agent node and recreate it with the same name — Jenkins generates a new secret, but your startup script still has the old one. You restore Jenkins from a backup that predates the current agent configuration. Or — and I see this more than I'd like — someone copies the launch command from one agent's config page and uses it to configure a different agent. Every node has its own unique secret; they aren't transferable.

    How to Identify It

    Look at the agent's launch log on the agent machine. A credential mismatch produces output like this:

    INFO: Connecting to http://10.10.1.50:8080/ with secret: 7f3a91bc...
    SEVERE: Failed to obtain http://10.10.1.50:8080/computer/build-agent/slave-agent.jnlp
    hudson.remoting.jnlp.Main$AbortException: Failed to obtain ...
    Status code: 401

    A 401 is the giveaway. On the controller side, the Jenkins system log will show:

    WARNING: An attempt to connect an agent with an incorrect secret was made.
    Agent name: build-agent
    Remote address: 10.10.1.51

    How to Fix It

    Get the correct secret directly from the Jenkins UI. Navigate to Manage Jenkins → Nodes → [your agent name] and look at the connection instructions on that node's status page:

    Run from agent command line:
    java -jar agent.jar -url http://10.10.1.50:8080/ \
      -secret 9d2e7a1f4bc83e50a291... \
      -name build-agent \
      -workDir "/opt/jenkins-agent"

    Copy that exact secret and update the agent's startup configuration on the agent machine. If you're using a systemd service on sw-infrarunbook-01, store the secret in a protected environment file rather than directly in the unit file:

    # /etc/jenkins-agent/secrets.env (mode 0600, owned by infrarunbook-admin)
    JENKINS_AGENT_SECRET=9d2e7a1f4bc83e50a291...
    
    sudo systemctl daemon-reload
    sudo systemctl restart jenkins-agent

    Never reuse secrets between agent nodes. Never commit secrets to version control. The secrets.env file should be readable only by the service account running the agent process.
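    Those last three rules are easy to enforce with a pre-start check. A sketch, where the function name is hypothetical, `stat -c` is GNU coreutils, and the path and variable name follow this runbook's examples:

```shell
# Hedged pre-start check for the agent secrets file.
check_secret_file() {
  local f="$1" mode
  [ -f "$f" ] || { echo "missing: $f"; return 1; }
  # Only the service account should be able to read the secret.
  mode=$(stat -c '%a' "$f")
  [ "$mode" = "600" ] || { echo "bad mode $mode (want 600): $f"; return 1; }
  grep -q '^JENKINS_AGENT_SECRET=..*' "$f" \
    || { echo "JENKINS_AGENT_SECRET not set in $f"; return 1; }
  echo "ok: $f"
}

# Example: check_secret_file /etc/jenkins-agent/secrets.env
```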


    Root Cause 5: Java Version Incompatibility

    Why It Happens

    Jenkins has been steadily raising its Java requirements over the past few years. Jenkins 2.357 (mid-2022) dropped Java 8 support on agents. Jenkins 2.361 requires Java 11 or Java 17 — Java 16 and other non-LTS interim releases are explicitly not supported. If the agent machine is running Java 8, or some unsupported in-between version, the agent.jar will fail to start with a class version error before it ever tries to connect to the controller.

    In my experience, this bites teams most often right after a Jenkins controller upgrade. The controller gets updated to the latest LTS, which requires a newer Java, but the agent machines are still running whatever Java was on them when they were provisioned years ago. The controller is happy; the agents are dead.

    How to Identify It

    On the agent machine, check the installed Java version:

    java -version
    java version "1.8.0_352"
    Java(TM) SE Runtime Environment (build 1.8.0_352-b08)
    Java HotSpot(TM) 64-Bit Server VM (build 25.352-b08, mixed mode)

    If you try to run agent.jar with that runtime, you'll see something like this:

    java -jar /opt/jenkins-agent/agent.jar -url http://10.10.1.50:8080/ ...
    Exception in thread "main" java.lang.UnsupportedClassVersionError:
      hudson/remoting/Launcher has been compiled by a more recent version of
      the Java Runtime (class file version 55.0), this version of the Java
      Runtime only recognizes class file versions up to 52.0

    Class file version 55.0 corresponds to Java 11; version 52.0 is Java 8. That error is telling you exactly what's happening. For reference: class file version 61.0 = Java 17, 65.0 = Java 21. Check the controller's Java requirements under Manage Jenkins → System Information and look at the java.version property to confirm what version the controller itself is running.
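    You can read that class file major version straight from the JAR, without needing a working java at all. The byte layout (major version in bytes 6–7, big-endian, after the magic number) comes from the class file format; the helper name is hypothetical and the JAR path is this runbook's example:

```shell
class_major_version() {
  # Reads a .class file on stdin, prints its major version number
  # (52 = Java 8, 55 = Java 11, 61 = Java 17, 65 = Java 21).
  head -c 8 | od -An -tu1 | awk '{ print $7 * 256 + $8 }'
}

# Inspect the agent JAR's entry point, if the JAR is present:
jar=/opt/jenkins-agent/agent.jar
if [ -f "$jar" ]; then
  unzip -p "$jar" hudson/remoting/Launcher.class | class_major_version
fi
```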

    How to Fix It

    Install a compatible Java version on the agent machine. Java 17 is the safe choice for any Jenkins LTS release from 2022 onward:

    # RHEL / CentOS / Rocky Linux
    sudo dnf install java-17-openjdk -y
    
    # Debian / Ubuntu
    sudo apt-get update && sudo apt-get install openjdk-17-jre -y
    
    # Verify
    java -version
    openjdk version "17.0.9" 2023-10-17
    OpenJDK Runtime Environment (build 17.0.9+9)
    OpenJDK 64-Bit Server VM (build 17.0.9+9, mixed mode, sharing)

    If multiple Java versions are installed, make sure the correct one is active. On systems using update-alternatives:

    sudo update-alternatives --config java
    There are 2 choices for the alternative java.
    
      Selection    Path                                           Priority   Status
    ----------------------------------------------------------------------
    * 0            /usr/lib/jvm/java-17-openjdk-amd64/bin/java   1711      auto mode
      1            /usr/lib/jvm/java-8-openjdk-amd64/bin/java     1081      manual mode
    
    Press <enter> to keep the current choice[*], or type selection number: 0

    After switching, verify with java -version one more time, then relaunch the agent process.


    Root Cause 6: Agent Node Stuck in a Stale Disconnected State

    Why It Happens

    Jenkins occasionally holds a stale connection record for an agent. The agent process on the remote machine dies — an OOM kill, an unplanned reboot, a network blip — but Jenkins doesn't immediately recognize the connection as gone. It sits in a limbo state where it thinks the connection is still active (or just barely timed out) and may not immediately accept a new inbound connection from the agent. You'll see the node listed as "offline" in the UI, but relaunching the agent process on the machine doesn't seem to make it connect.

    How to Identify It

    On the agent machine, confirm there's no leftover agent process running:

    ps aux | grep agent.jar
    infrarunbook-admin  8823  0.0  0.0  14432  1024 pts/0  S+  10:41  0:00 grep agent.jar

    Nothing running, but Jenkins still shows the node as pending or offline with a recent timestamp. The agent log in Jenkins will typically show the last successful connection and then nothing new despite the agent process being restarted.

    How to Fix It

    From the Jenkins UI, navigate to Manage Jenkins → Nodes → [agent name], click Disconnect to force Jenkins to clear the stale state, and then relaunch the agent process fresh on the agent machine:

    sudo systemctl restart jenkins-agent
    # or manually:
    java -jar /opt/jenkins-agent/agent.jar \
      -url http://10.10.1.50:8080/ \
      -secret 9d2e7a1f4bc83e50a291... \
      -name build-agent \
      -workDir /opt/jenkins-agent 2>&1 | tee /var/log/jenkins-agent/launch.log

    Piping to tee gives you a persistent log for the next time this happens. The node should come online within 10–15 seconds of the agent process starting.
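    If you'd rather confirm the reconnect from a script than by refreshing the UI, the node's JSON API reports its state. A sketch: credentials, the node name, and both helper names are assumptions, and the grep-based parsing is deliberately crude:

```shell
node_offline() {
  # Reads a node's /api/json document on stdin, prints its "offline" value.
  grep -o '"offline": *[a-z]*' | head -n1 | sed 's/.*: *//'
}

wait_online() {
  # Polls for up to ~30 seconds; returns 0 once the node reports online.
  for _ in $(seq 1 15); do
    state=$(curl -fsS -u admin:API_TOKEN \
      "http://10.10.1.50:8080/computer/build-agent/api/json" | node_offline)
    if [ "$state" = "false" ]; then echo "agent online"; return 0; fi
    sleep 2
  done
  echo "still offline after 30s"
  return 1
}
```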


    Root Cause 7: DNS Resolution Failure on the Agent

    Why It Happens

    If your Jenkins URL uses a hostname rather than an IP address — which is the right call for production — the agent machine must be able to resolve that hostname. In isolated build environments, private subnets, or after a DNS infrastructure change, resolution can silently fail. The agent.jar tries to connect, the DNS lookup returns NXDOMAIN, and the connection attempt fails immediately. The error isn't always obvious about it being a DNS problem.

    How to Identify It

    From the agent machine, try resolving the Jenkins hostname directly:

    nslookup jenkins.solvethenetwork.com
    Server:   127.0.0.53
    Address:  127.0.0.53#53
    
    ** server can't find jenkins.solvethenetwork.com: NXDOMAIN

    Then confirm the controller is reachable by IP to rule out everything else:

    curl -v http://10.10.1.50:8080/
    * Trying 10.10.1.50:8080...
    * Connected to 10.10.1.50 (10.10.1.50) port 8080
    < HTTP/1.1 200 OK

    If IP works but the hostname doesn't resolve, it's purely DNS.
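    One caveat on tooling: nslookup and dig query DNS servers directly, but the JVM resolves hostnames through the system resolver, which also consults /etc/hosts. getent follows that same path, so it's the more faithful test of what the agent will actually see. A sketch with a hypothetical helper name:

```shell
resolves_to() {
  # Usage: resolves_to <hostname> <expected_ipv4>
  # Succeeds if the system resolver (same path the JVM uses, including
  # /etc/hosts) maps the hostname to the expected address.
  getent ahostsv4 "$1" | awk '{print $1}' | grep -qx "$2"
}

# Example: resolves_to jenkins.solvethenetwork.com 10.10.1.50 && echo "resolver ok"
```

    This distinction matters for the short-term hosts-file fix below: dig will still show NXDOMAIN after you add the entry, while getent (and the agent) will resolve it fine.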

    How to Fix It

    Short-term: add a hosts entry on the agent machine to unblock yourself:

    echo "10.10.1.50 jenkins.solvethenetwork.com" >> /etc/hosts

    Long-term: fix the actual DNS record. Add an A record in your internal DNS zone for jenkins.solvethenetwork.com pointing to 10.10.1.50. After updating, verify from the agent:

    dig jenkins.solvethenetwork.com +short
    10.10.1.50

    Remove the hosts entry once DNS is working correctly so you don't end up with a hidden override that bites someone six months from now.


    Prevention

    Most of these issues are avoidable with a bit of setup discipline. Here's what I recommend for any production Jenkins agent deployment.

    Pin a fixed JNLP port. Change the TCP port for inbound agents from "Random" to a specific value — 50000 is the convention. Document that port. This makes firewall rules stable and predictable across Jenkins restarts. A random port and a firewall rule are fundamentally incompatible.

    Use WebSocket agents where your network allows it. WebSocket agents communicate over port 443 or 80 using the Upgrade mechanism, which passes through nearly every corporate firewall and proxy without special rules. Enable it in Manage Jenkins → Security → Agent protocols and add -webSocket to the agent launch command. This eliminates the JNLP port problem entirely.

    Manage secrets with an environment file, not hardcoded scripts. Store the agent secret in a file with mode 0600, owned by the service account, and reference it as an environment variable in the systemd unit:

    [Service]
    EnvironmentFile=/etc/jenkins-agent/secrets.env
    ExecStart=/usr/bin/java -jar /opt/jenkins-agent/agent.jar \
      -url https://jenkins.solvethenetwork.com/ \
      -secret ${JENKINS_AGENT_SECRET} \
      -name build-agent \
      -workDir /opt/jenkins-agent
    Restart=on-failure
    RestartSec=10

    The Restart=on-failure directive alone will handle most transient connection drops automatically.

    Bake the Java version into your provisioning. Whether you're using Ansible, Terraform user data, or a golden base image, include the Java installation step explicitly and pin the version. Don't rely on whatever java happens to be present. When you upgrade Jenkins, update the agent provisioning playbook at the same time.

    Run connectivity checks before you connect the agent. After provisioning a new agent machine, run nc -zv 10.10.1.50 50000 and curl http://10.10.1.50:8080/jnlpJars/agent.jar manually before attempting to bring it online in Jenkins. Thirty seconds of testing saves thirty minutes of debugging.
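    Those manual checks consolidate naturally into one pre-flight script. A sketch: the controller address and port come from this runbook's examples, and the java_ok helper accepts only the LTS versions recent Jenkins releases support:

```shell
CONTROLLER=10.10.1.50

java_ok() {
  # Accepts `java -version` output on stdin; succeeds for Java 11/17/21.
  grep -Eq 'version "(11|17|21)\.'
}

preflight() {
  # Run on a freshly provisioned agent machine, before adding the node.
  nc -zv -w 5 "$CONTROLLER" 50000 \
    || { echo "FAIL: JNLP port 50000 unreachable"; return 1; }
  curl -fsS -o /dev/null "http://$CONTROLLER:8080/jnlpJars/agent.jar" \
    || { echo "FAIL: cannot download agent.jar"; return 1; }
  java -version 2>&1 | java_ok \
    || { echo "FAIL: unsupported Java version"; return 1; }
  echo "preflight ok"
}
```

    Wire preflight into your provisioning pipeline as a gate, and the first five root causes in this runbook get caught before the node ever appears in Jenkins.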

    Set a sensible Jenkins URL immediately after installation. The first thing you should do on a fresh Jenkins install — before creating any agents — is set the correct Jenkins URL in System Configuration. Use the hostname or IP that remote machines can reach, not localhost. This single step prevents the most common misconfiguration I see on new deployments.

    Monitor agent health proactively. Use the Jenkins Monitoring plugin or expose metrics via the Prometheus plugin and alert when agent count drops below your expected baseline. A build queue that's backed up for twenty minutes during a release window is a much bigger problem than a 2 AM pager alert the moment an agent first goes offline.

    Frequently Asked Questions

    Where do I find the Jenkins agent secret token?

    Navigate to Manage Jenkins → Nodes and Clouds → [your agent name]. On the node's status page, Jenkins displays the full connection command including the -secret parameter. Copy that exact value — each node has a unique secret that can't be shared between agents.

    What Java version do Jenkins agents need?

    Jenkins 2.361 and later require Java 11 or Java 17 on agent machines. Java 8 is no longer supported. Java 17 is the recommended choice for any Jenkins LTS release from 2022 onward. Check the class file version error in the agent launch output — version 52.0 means Java 8, 55.0 means Java 11, 61.0 means Java 17.

    Can I use WebSocket instead of the JNLP port for agent connections?

    Yes. Since Jenkins 2.217, agents can connect over WebSocket using standard port 443 or 80, bypassing the need to open TCP port 50000. Enable it under Manage Jenkins → Security → Agent protocols, then add -webSocket to the agent launch command. This is the preferred approach in environments with strict firewall rules.

    Why does my Jenkins agent connect and then immediately disconnect?

    Immediate disconnects after a brief connection are often caused by a credential mismatch (Jenkins rejects the secret after the handshake begins), a Java version incompatibility that causes the remote classloader to fail, or a network issue that drops the TCP connection mid-handshake. Check the agent launch log for IOException or UnsupportedClassVersionError messages immediately after the connection attempt.

    How do I run a Jenkins agent as a systemd service so it reconnects automatically?

    Create a unit file at /etc/systemd/system/jenkins-agent.service with ExecStart pointing to your java -jar agent.jar command, set Restart=on-failure and RestartSec=10, and store the agent secret in a separate EnvironmentFile with mode 0600. Run systemctl daemon-reload && systemctl enable --now jenkins-agent to activate it.
