InfraRunBook

    TLS Handshake Failure Troubleshooting

    Security
    Published: Apr 14, 2026
    Updated: Apr 14, 2026

    A practical, command-driven guide to diagnosing and resolving TLS handshake failures, covering protocol mismatches, broken certificate chains, SNI issues, cipher incompatibility, and mutual TLS problems.

    TLS Handshake Failure Troubleshooting

    Symptoms

    TLS handshake failures surface in a dozen different ways depending on where in your stack the problem lives. The most common thing you'll see on the command line is curl throwing one of these:

    curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.solvethenetwork.com:443
    curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
    curl: (60) SSL certificate problem: unable to get local issuer certificate
    curl: (35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure

    In nginx or Apache error logs, you're more likely to see something like:

    SSL_do_handshake() failed (SSL: error:1417C0C7:SSL routines:tls_process_client_certificate:peer did not return a certificate) while SSL handshaking
    SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

    Browsers speak their own dialect. Chrome raises ERR_SSL_PROTOCOL_ERROR, ERR_SSL_VERSION_OR_CIPHER_MISMATCH, or ERR_CERT_AUTHORITY_INVALID. Firefox shows SSL_ERROR_RX_RECORD_TOO_LONG or SEC_ERROR_UNKNOWN_ISSUER. Java applications surface javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure buried in a stack trace. In every case the underlying problem is the same: the client and server couldn't agree on how to communicate securely, or one side presented credentials the other side refused to accept.

    Before you start chasing root causes, run a quick baseline with `openssl s_client`. It's the single most useful TLS diagnostic tool in existence, and I reach for it before anything else:

    openssl s_client -connect api.solvethenetwork.com:443 -servername api.solvethenetwork.com

    Read the full output methodically. The Verify return code at the bottom tells you whether the certificate chain validated. The Protocol line tells you what TLS version negotiated. The Cipher line tells you what cipher suite was selected. Between those three fields, you can usually narrow the problem down before you even open a log file.
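    Those three fields can be filtered out in one shot. A minimal sketch — `tls_triage` is my name for the helper, not a standard tool; it just greps saved or piped s_client output:

```shell
# tls_triage: reduce openssl s_client output on stdin to the three fields
# worth reading first. Hypothetical helper name, not a standard tool.
tls_triage() {
    grep -E "Protocol[[:space:]]*:|Cipher[[:space:]]*:|Verify return code"
}

# Usage (network required):
#   echo | openssl s_client -connect api.solvethenetwork.com:443 \
#     -servername api.solvethenetwork.com 2>/dev/null | tls_triage
```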


    Root Cause 1: Protocol Version Mismatch

    Why It Happens

    Protocol version mismatches are increasingly common now that TLS 1.0 and 1.1 are disabled by default in OpenSSL 3.x and in most modern Linux distributions. A server configured with `ssl_protocols TLSv1.2 TLSv1.3;` will outright reject any client that only offers TLS 1.0. The alert the server sends back is protocol_version (70), which curl translates into that cryptic SSL_ERROR_SYSCALL message. In my experience, this bites hardest when an old Java 7 application tries to reach a hardened endpoint, when a legacy hardware load balancer strips newer protocol offers from the ClientHello, or when a developer manually sets `SSLProtocol -all +TLSv1` in Apache on a staging box that then starts receiving production traffic.

    How to Identify It

    Force specific protocol versions with `s_client` and observe which ones fail:

    # Test TLS 1.0 — expect failure on hardened servers
    openssl s_client -connect api.solvethenetwork.com:443 -tls1
    
    # Test TLS 1.1
    openssl s_client -connect api.solvethenetwork.com:443 -tls1_1
    
    # Test TLS 1.2 — should succeed on any modern server
    openssl s_client -connect api.solvethenetwork.com:443 -tls1_2
    
    # Test TLS 1.3
    openssl s_client -connect api.solvethenetwork.com:443 -tls1_3

    A version mismatch fails immediately — the connection closes before any certificate is exchanged:

    CONNECTED(00000003)
    140234567890:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:
    ssl/record/rec_layer_s3.c:1544:SSL alert number 70
    ---
    no peer certificate available
    ---
    No client certificate CA names sent
    ---
    SSL handshake has read 7 bytes and written 116 bytes
    Verify return code: 0 (ok)

    That SSL alert number 70 is the protocol_version alert defined in RFC 5246. Notice "no peer certificate available" — the server didn't even bother sending a certificate because it killed the handshake at the ClientHello stage. Seven bytes read confirms this: that's just the TLS alert record itself.

    How to Fix It

    On the server side, ensure your nginx configuration explicitly states the accepted TLS versions:

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    On the client side, if you're dealing with an older Java application, add the JVM flag `-Djdk.tls.client.protocols=TLSv1.2,TLSv1.3` at startup, or update `/etc/java-11-openjdk/security/java.security` to enable the appropriate protocol. For curl-based integrations, enforce a minimum version with `--tlsv1.2`. The server configuration is almost always the right place to draw the line — don't widen protocol support just to accommodate a client you should be updating instead.
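    The four version checks above can be wrapped into a loop. A minimal sketch — `probe_tls_versions` is my name, not a standard tool; note that an OpenSSL 3.x client may refuse `-tls1`/`-tls1_1` locally regardless of what the server supports, which still reports correctly as a failure:

```shell
# Probe each TLS protocol flag against host:port and report which
# handshakes succeed. s_client exits nonzero when the handshake fails.
probe_tls_versions() {
    host_port="$1"
    for flag in tls1 tls1_1 tls1_2 tls1_3; do
        if echo | openssl s_client -connect "$host_port" "-$flag" \
            >/dev/null 2>&1; then
            echo "$flag: ok"
        else
            echo "$flag: FAILED"
        fi
    done
}

# probe_tls_versions api.solvethenetwork.com:443
```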


    Root Cause 2: Cipher Suite Incompatibility

    Why It Happens

    Even when both sides agree on a TLS version, the handshake fails if they share no overlapping cipher suites. The server sends a handshake_failure alert (40) and the client gets a frustratingly opaque error. This is most common when a server has been hardened to ECDHE-only forward-secret suites while a client — often an older embedded system, IoT device, or legacy enterprise Java application — only supports RSA key exchange ciphers like `TLS_RSA_WITH_AES_128_CBC_SHA`. The diagnostic process requires mapping what the server accepts against what the client offers.

    How to Identify It

    Use nmap's `ssl-enum-ciphers` script to get a complete list of what the server accepts:

    nmap --script ssl-enum-ciphers -p 443 api.solvethenetwork.com
    PORT    STATE SERVICE
    443/tcp open  https
    | ssl-enum-ciphers:
    |   TLSv1.2:
    |     ciphers:
    |       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
    |       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
    |       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
    |_  least strength: A

    If you know your client's cipher list, you can simulate it directly with `s_client`:

    openssl s_client -connect api.solvethenetwork.com:443 \
      -cipher "AES128-SHA:AES256-SHA" \
      -tls1_2

    A cipher incompatibility returns:

    CONNECTED(00000003)
    error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
    ---
    no peer certificate available

    That's your confirmation. The server received a ClientHello listing only RSA key-exchange ciphers and found nothing it was willing to use.

    How to Fix It

    The right fix is updating the client to support modern ECDHE cipher suites — not weakening the server. That said, if you're dealing with a third-party integration you can't control, you may need to front it with an nginx proxy that accepts the weaker cipher from the legacy client on the ingress side while using a strong cipher suite on the backend connection. For nginx, a well-balanced cipher string that covers virtually all reasonable modern clients:

    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;
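    Before deploying any cipher string, expand it locally to see exactly which suites it selects. `openssl ciphers` exits nonzero when the string matches nothing, which makes it an easy CI check:

```shell
# Expand an OpenSSL cipher string into the concrete suites it selects.
# -v adds protocol / key-exchange / authentication columns per suite.
openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'

# A typo'd or empty selection fails fast:
openssl ciphers 'NO-SUCH-CIPHER' >/dev/null 2>&1 || echo "invalid cipher string"
```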

    Root Cause 3: Certificate Chain Broken

    Why It Happens

    A broken certificate chain is one of the most operationally frustrating TLS failures because it works fine in most browsers — which aggressively cache intermediate CA certificates from previous sessions — but fails consistently in curl, Python requests, Java, Go's net/http, and every other programmatic client. What happens is the server is only serving its leaf (end-entity) certificate without the intermediate CA certificates needed to build a path to a trusted root. The client can't complete the chain, so it rejects the connection even though the root CA is trusted.

    I've also seen the reverse: someone concatenates the chain in the wrong order — leaf, root, intermediate — and certain strict TLS implementations reject it because the chain must be presented in order from leaf to root.

    How to Identify It

    openssl s_client -connect api.solvethenetwork.com:443 -servername api.solvethenetwork.com

    A broken chain shows only depth 0 in the certificate chain section:

    Certificate chain
     0 s:CN = api.solvethenetwork.com
       i:CN = SolveTheNetwork Intermediate CA
    ---
    Verify return code: 21 (unable to verify the first certificate)

    A healthy chain includes the intermediate:

    Certificate chain
     0 s:CN = api.solvethenetwork.com
       i:CN = SolveTheNetwork Intermediate CA
     1 s:CN = SolveTheNetwork Intermediate CA
       i:CN = SolveTheNetwork Root CA
    ---
    Verify return code: 0 (ok)

    You can also manually verify a local cert file against a CA bundle:

    openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt \
      -untrusted /etc/nginx/ssl/intermediate.crt \
      /etc/nginx/ssl/api.solvethenetwork.com.crt
    
    /etc/nginx/ssl/api.solvethenetwork.com.crt: OK

    How to Fix It

    In nginx, the `ssl_certificate` directive must point to a full chain file — not just the leaf certificate. Build it by concatenating leaf first, then intermediates in order toward the root:

    cat /etc/nginx/ssl/api.solvethenetwork.com.crt \
        /etc/nginx/ssl/intermediate.crt \
        > /etc/nginx/ssl/fullchain.crt

    Then update the server block:

    ssl_certificate     /etc/nginx/ssl/fullchain.crt;
    ssl_certificate_key /etc/nginx/ssl/api.solvethenetwork.com.key;

    Do not include the root CA in the chain you serve. Every TLS client has root CAs built into its trust store. Serving the root just adds unnecessary bytes to every handshake without improving anything. Intermediates only.
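    The whole leaf/intermediate/root relationship can be reproduced offline with a throwaway CA, which is a useful way to internalize what `openssl verify -untrusted` is doing. Every name below is illustrative:

```shell
# Build a disposable three-tier chain and verify it the way a TLS client
# would: root in the trust store, intermediate supplied as untrusted.
set -e
dir=$(mktemp -d); cd "$dir"

# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 2 -subj "/CN=Demo Root CA" 2>/dev/null

# 2. Intermediate CA: CSR signed by the root, with CA:TRUE set
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Demo Intermediate CA" 2>/dev/null
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 2 -extfile ca.ext -out int.crt 2>/dev/null

# 3. Leaf signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=api.example.test" 2>/dev/null
openssl x509 -req -in leaf.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -days 2 -out leaf.crt 2>/dev/null

# Succeeds only because the intermediate is provided; drop -untrusted and
# verification fails, the offline analogue of the broken chain above.
openssl verify -CAfile root.crt -untrusted int.crt leaf.crt
```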


    Root Cause 4: Client Certificate Required (Mutual TLS)

    Why It Happens

    Some endpoints require mutual TLS — the server presents its certificate, and the client must present one too, signed by a CA the server trusts. This is standard in internal service meshes, zero-trust architectures, and B2B API integrations. When a client doesn't send a certificate — because nobody told it one was required, the cert path is misconfigured, or the cert itself expired — the server either sends an alert (certificate_required, 116, in TLS 1.3; typically handshake_failure, 40, in TLS 1.2) or closes the connection after sending its CertificateRequest message and receiving nothing in return.

    How to Identify It

    Connect to the endpoint without a client certificate and look for the Acceptable client certificate CA names block in the s_client output:

    openssl s_client -connect internal-api.solvethenetwork.com:8443 \
      -servername internal-api.solvethenetwork.com
    CONNECTED(00000003)
    ---
    Acceptable client certificate CA names
    CN = SolveTheNetwork Internal CA
    ---
    Requested Signature Algorithms: ECDSA+SHA256:RSA+SHA256:ECDSA+SHA384:RSA+SHA384
    Shared Requested Signature Algorithms: ECDSA+SHA256:RSA+SHA256
    ---
    SSL handshake has read 3291 bytes and written 390 bytes
    Verify return code: 0 (ok)
    ---
    write:errno=104

    That Acceptable client certificate CA names block is the tell. The server is demanding a certificate signed by `SolveTheNetwork Internal CA`, and the `write:errno=104` (connection reset by peer) confirms it dropped the connection when we didn't supply one.

    How to Fix It

    Supply the client certificate and private key to confirm the credential pair works before deploying it anywhere:

    openssl s_client \
      -connect internal-api.solvethenetwork.com:8443 \
      -servername internal-api.solvethenetwork.com \
      -cert /etc/ssl/client/infrarunbook-admin.crt \
      -key /etc/ssl/client/infrarunbook-admin.key

    For curl-based callers:

    curl --cert /etc/ssl/client/infrarunbook-admin.crt \
         --key /etc/ssl/client/infrarunbook-admin.key \
         https://internal-api.solvethenetwork.com:8443/health

    On the server side, configuring mutual TLS in nginx requires three directives:

    ssl_client_certificate /etc/nginx/ssl/internal-ca.crt;
    ssl_verify_client      on;
    ssl_verify_depth       2;

    If you need to make client certs optional for some paths but required for others, set `ssl_verify_client optional` at the server level and use `$ssl_client_verify` in individual location blocks to enforce it selectively. This avoids running two separate virtual hosts with different TLS configurations for the same domain.
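    A sketch of that pattern — the directives are real nginx, but the paths, upstream name, and the choice of which locations to protect are illustrative:

```nginx
server {
    listen 443 ssl;
    server_name internal-api.solvethenetwork.com;

    ssl_certificate        /etc/nginx/ssl/internal-fullchain.crt;
    ssl_certificate_key    /etc/nginx/ssl/internal.key;
    ssl_client_certificate /etc/nginx/ssl/internal-ca.crt;
    ssl_verify_client      optional;

    # Health endpoint: reachable without a client certificate
    location /health {
        proxy_pass http://backend;
    }

    # Everything else requires a verified client certificate
    location / {
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        proxy_pass http://backend;
    }
}
```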


    Root Cause 5: SNI Mismatch

    Why It Happens

    Server Name Indication is the TLS extension that lets a single server — one IP address — host multiple domains with different certificates. The client includes the target hostname in the ClientHello, and the server selects the matching certificate before the handshake continues. Without SNI, or when the SNI value doesn't match any configured virtual host, the server falls back to its default certificate. That default is often the wrong one — maybe it's the certificate for `sw-infrarunbook-01.solvethenetwork.com` when the client is trying to reach `api.solvethenetwork.com`.

    SNI mismatches show up most often when clients connect via IP address directly, when older HTTP clients don't send the SNI extension, or when an intermediate proxy rewrites the TLS SNI value to something the backend doesn't have a virtual host configured for.

    How to Identify It

    Compare a connection made with and without the `-servername` flag:

    # Without SNI — server returns its default cert
    openssl s_client -connect 192.168.10.45:443
    
    # With correct SNI — server should return the right cert
    openssl s_client -connect 192.168.10.45:443 -servername api.solvethenetwork.com

    If the Subject CN in the first connection differs from your target hostname, that's your SNI problem. You can also use curl's `--resolve` flag to force a DNS mapping without bypassing SNI:

    curl -vI --resolve "api.solvethenetwork.com:443:192.168.10.45" \
      https://api.solvethenetwork.com 2>&1 | grep -A5 "Server certificate"
    
    * Server certificate:
    *  subject: CN=api.solvethenetwork.com
    *  start date: Jan  1 00:00:00 2026 GMT
    *  expire date: Jan  1 00:00:00 2027 GMT
    *  issuer: CN=SolveTheNetwork Intermediate CA
    *  SSL certificate verify ok.

    How to Fix It

    On the server, ensure each virtual host has an explicit `server_name` directive that matches the certificate's Common Name or Subject Alternative Names:

    server {
        listen 443 ssl;
        server_name api.solvethenetwork.com;
    
        ssl_certificate     /etc/nginx/ssl/api-fullchain.crt;
        ssl_certificate_key /etc/nginx/ssl/api.key;
    }
    
    server {
        listen 443 ssl default_server;
        server_name sw-infrarunbook-01.solvethenetwork.com;
    
        ssl_certificate     /etc/nginx/ssl/default-fullchain.crt;
        ssl_certificate_key /etc/nginx/ssl/default.key;
    
        # Reject requests with unknown SNI
        return 444;
    }

    On the client side, SNI is sent automatically by curl whenever you use a hostname. If you're connecting via a raw IP, use `--resolve` to preserve the hostname in the SNI extension. For HAProxy backends, verify that SNI handling is configured correctly — by default HAProxy terminates TLS and won't forward the original SNI unless you run the frontend in TCP mode (passthrough) or explicitly set `sni` on the backend server line.


    Root Cause 6: Certificate Expiration

    Why It Happens

    Certificates have a hard expiry encoded in the `NotAfter` field. Once that date passes, every validating TLS client rejects the connection. Expiry failures are among the most operationally common TLS issues — not because they're hard to understand, but because certificate lifecycle management is often treated as an afterthought. Renewals slip, monitoring isn't wired up, and suddenly Monday morning brings a wave of alert tickets. Let's Encrypt's 90-day validity window has made this worse in some ways: shorter validity means more frequent renewals, and more chances for automation to silently fail.

    How to Identify It

    # Check expiry date on a live endpoint
    echo | openssl s_client -connect api.solvethenetwork.com:443 \
      -servername api.solvethenetwork.com 2>/dev/null \
      | openssl x509 -noout -dates
    
    notBefore=Jan  1 00:00:00 2025 GMT
    notAfter=Jan  1 00:00:00 2026 GMT
    
    # Check expiry on a local cert file
    openssl x509 -in /etc/nginx/ssl/api.solvethenetwork.com.crt -noout -dates
    
    # Pass/fail test: is the cert valid for at least another 30 days?
    openssl x509 -in /etc/nginx/ssl/api.solvethenetwork.com.crt \
      -noout -checkend 2592000 \
      && echo "OK: cert valid for more than 30 days" \
      || echo "WARNING: cert expires within 30 days"
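    The same `-checkend` test scales to a sweep you can run from cron. A minimal sketch — `expiry_sweep` is my name, and it assumes certs sit as `.crt` files in one directory:

```shell
# Flag every cert in a directory that expires within N days (default 30).
# Returns nonzero if anything is close to expiry, so cron/CI can alert.
expiry_sweep() {
    dir="$1"; days="${2:-30}"
    secs=$((days * 86400))
    status=0
    for cert in "$dir"/*.crt; do
        [ -e "$cert" ] || continue
        if ! openssl x509 -in "$cert" -noout -checkend "$secs" >/dev/null; then
            echo "WARNING: $cert expires within $days days"
            status=1
        fi
    done
    return $status
}

# expiry_sweep /etc/nginx/ssl 30
```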

    How to Fix It

    Renew the certificate, rebuild the full chain, and reload nginx without dropping connections:

    # Back up the old cert before replacing it
    cp /etc/nginx/ssl/api.solvethenetwork.com.crt \
       /etc/nginx/ssl/api.solvethenetwork.com.crt.$(date +%Y%m%d)
    
    # Install renewed cert and chain, then test config before reloading
    nginx -t && systemctl reload nginx

    If you're using certbot, the renewal itself is usually fine — but verify that the deploy hook is actually reloading your service. I've seen many environments where certbot renewed the cert on disk and quietly logged success, but nginx kept serving the expired certificate because the `--deploy-hook` was never configured.

    # Check certbot timer and confirm hook is set
    systemctl status certbot.timer
    certbot renew --dry-run --deploy-hook "systemctl reload nginx"

    Root Cause 7: Clock Skew

    Why It Happens

    X.509 certificates carry both a `NotBefore` and `NotAfter` date. If a client's system clock is significantly wrong — either behind (making a freshly issued cert look "not yet valid") or ahead (making a valid cert look expired) — the TLS library rejects it. Clock drift is especially common in virtual machines that were suspended and resumed without an NTP sync, containers that don't inherit reliable time from the host, and embedded devices without persistent RTC or NTP configuration.

    How to Identify It

    timedatectl status
    
                   Local time: Mon 2026-04-14 12:00:00 UTC
               Universal time: Mon 2026-04-14 12:00:00 UTC
                     RTC time: Mon 2026-04-14 12:00:00
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: active
              RTC in local TZ: no

    If you see System clock synchronized: no, that's your lead. Cross-reference the system time against the certificate's validity window. The openssl error in this case is unambiguous:

    verify error:num=9:certificate is not yet valid
    # or
    verify error:num=10:certificate has expired
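    The cross-reference can be made mechanical. A sketch assuming GNU `date` (for the `-d` parsing); `cert_clock_check` is my name, not a standard tool:

```shell
# Report whether the local clock falls inside a certificate's validity
# window. Requires GNU date for -d parsing of the openssl date strings.
cert_clock_check() {
    cert="$1"
    now=$(date +%s)
    start=$(date -d "$(openssl x509 -in "$cert" -noout -startdate | cut -d= -f2)" +%s)
    end=$(date -d "$(openssl x509 -in "$cert" -noout -enddate | cut -d= -f2)" +%s)
    if [ "$now" -lt "$start" ]; then
        echo "MISMATCH: cert not yet valid (is the clock behind?)"; return 1
    elif [ "$now" -gt "$end" ]; then
        echo "MISMATCH: cert expired (or the clock is ahead)"; return 1
    fi
    echo "OK: system time is inside the cert validity window"
}

# cert_clock_check /etc/nginx/ssl/api.solvethenetwork.com.crt
```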

    How to Fix It

    # Force an immediate NTP sync with chrony
    chronyc makestep
    
    # Or enable and restart systemd-timesyncd
    timedatectl set-ntp true
    systemctl restart systemd-timesyncd
    
    # Confirm sync
    timedatectl status

    For Docker containers, the container clock is inherited from the host kernel — fix the host's NTP and the containers follow automatically. For systemd-nspawn containers or other setups where time namespace isolation is in play, verify that the node's NTP is synchronized and that no namespace configuration is decoupling the container's clock from the host's.


    Root Cause 8: Hostname / SAN Mismatch

    Why It Happens

    A certificate's Subject Common Name or Subject Alternative Names must match the hostname the client is connecting to. If you're connecting to `api.solvethenetwork.com` but the certificate only has a SAN for `www.solvethenetwork.com`, the TLS client will reject it even though the chain validates perfectly. Modern TLS implementations ignore the CN field entirely for hostname verification and rely solely on SANs — so issuing a certificate with only a CN and no SAN entries will fail in current versions of Chrome, curl, and Go's TLS stack.

    How to Identify It

    # Inspect the SAN and CN on a live cert
    echo | openssl s_client -connect api.solvethenetwork.com:443 \
      -servername api.solvethenetwork.com 2>/dev/null \
      | openssl x509 -noout -text \
      | grep -A2 "Subject Alternative Name"
    
                X509v3 Subject Alternative Name:
                    DNS:api.solvethenetwork.com, DNS:solvethenetwork.com
    
    # Check the CN
    echo | openssl s_client -connect api.solvethenetwork.com:443 \
      -servername api.solvethenetwork.com 2>/dev/null \
      | openssl x509 -noout -subject
    
    subject=CN = api.solvethenetwork.com

    curl's verbose output gives you the human-readable hostname check result directly:

    curl -v https://api.solvethenetwork.com 2>&1 | grep -i "ssl\|host"
    
    * SSL: certificate subject name 'www.solvethenetwork.com' does not match target host name 'api.solvethenetwork.com'

    How to Fix It

    Reissue the certificate with the correct SAN entries. If you're generating certificates with openssl directly, you must use a config file or the `-addext` flag — the `-subj` flag alone won't add SANs:

    openssl req -new -key /etc/nginx/ssl/api.key \
      -out /etc/nginx/ssl/api.csr \
      -subj "/CN=api.solvethenetwork.com" \
      -addext "subjectAltName=DNS:api.solvethenetwork.com,DNS:solvethenetwork.com"
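    It's worth confirming the extension actually landed before submitting anything; `-addext` requires OpenSSL 1.1.1 or newer. A throwaway example, with illustrative names:

```shell
# Generate a disposable key and CSR with SANs, then confirm the extension
# is really present before sending the CSR to a CA.
set -e
dir=$(mktemp -d); cd "$dir"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out demo.key 2>/dev/null
openssl req -new -key demo.key -out demo.csr \
  -subj "/CN=api.example.test" \
  -addext "subjectAltName=DNS:api.example.test,DNS:example.test"
openssl req -in demo.csr -noout -text | grep -A1 "Subject Alternative Name"
```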

    Submit that CSR to your CA, install the issued certificate, rebuild the full chain, and reload nginx. If your internal CA is running on Vault, make sure the PKI role's `allowed_domains` and `allow_subdomains` settings cover the SANs you need before requesting the cert.


    Prevention

    Most TLS handshake failures are preventable. The highest-return investment is certificate expiry monitoring. Deploy Prometheus with the `blackbox_exporter` ssl probe targeting every public and internal TLS endpoint, alert at 30 days remaining, and page at 7 days. Certificate rotation should never be an emergency — if it is, your monitoring is broken.

    For protocol and cipher configuration, follow the Mozilla SSL Configuration Generator and pin your stack to the Intermediate profile as a minimum. Review it annually. Package upgrades — particularly OpenSSL major versions — have been known to shift default cipher lists in ways that silently break old clients. Test your TLS configuration after every nginx, OpenSSL, or Java runtime upgrade.

    Build SNI sanity checks into your deployment pipeline. After any certificate deployment to `sw-infrarunbook-01.solvethenetwork.com`, run a smoke test that connects via both hostname and direct IP and verifies that the correct certificate CN and SAN are presented. A 10-line shell script using `openssl s_client` catches most deployment mistakes before users hit them.
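    A sketch of such a script; the function names are mine, and `openssl x509 -ext` needs OpenSSL 1.1.1 or newer:

```shell
# Print the subject and DNS SANs of a PEM certificate read from stdin.
cert_names() {
    openssl x509 -noout -subject -ext subjectAltName 2>/dev/null
}

# Fetch the cert actually served at addr (host:port or ip:port) using the
# given SNI name, and fail unless that name appears in its subject or SANs.
check_endpoint() {
    addr="$1"; expect="$2"
    names=$(echo | openssl s_client -connect "$addr" \
        -servername "$expect" 2>/dev/null | cert_names)
    case "$names" in
        *"$expect"*) echo "OK: $addr presents a cert for $expect" ;;
        *) echo "FAIL: $addr did not present a cert for $expect"; return 1 ;;
    esac
}

# check_endpoint api.solvethenetwork.com:443 api.solvethenetwork.com
# check_endpoint 192.168.10.45:443 api.solvethenetwork.com
```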

    For internal services using mutual TLS, maintain a certificate inventory mapping each client certificate to the service that uses it, its expiry date, and the team responsible for renewal. mTLS client certificates are forgotten far more often than server certificates — there's no browser security warning to alert users, just a silent service-to-service failure in the middle of the night.

    Log TLS negotiation details at your ingress. Nginx can emit the negotiated protocol and cipher in the access log:

    log_format tls_detail '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '$ssl_protocol $ssl_cipher $ssl_client_verify';
    
    access_log /var/log/nginx/access.log tls_detail;

    Parsing those fields over time gives you a real picture of what TLS versions and ciphers your actual client population is using. That data lets you make evidence-based decisions about deprecating TLS 1.2 or removing older cipher suites — rather than guessing and hoping nothing breaks.
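    A quick way to get that picture from the format above. The awk indices count from the end of the line, assuming `$ssl_client_verify` is the last field — adjust them if your log_format differs:

```shell
# Count negotiated protocol/cipher pairs in a tls_detail access log.
# With the log_format above, $ssl_cipher is the next-to-last field and
# $ssl_protocol sits just before it.
tls_mix() {
    awk '{ print $(NF-2), $(NF-1) }' "$1" | sort | uniq -c | sort -rn
}

# tls_mix /var/log/nginx/access.log
```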

    TLS failures are almost never mysterious once you have the right diagnostic output in front of you. The key is reaching for `openssl s_client` as your first tool, reading the full output rather than skimming for a familiar error string, and working through the verify return code, certificate chain, and negotiated protocol systematically before drawing conclusions.

    Frequently Asked Questions

    What is the fastest way to diagnose a TLS handshake failure?

    Run `openssl s_client -connect hostname:443 -servername hostname` and examine three things: the certificate chain depth, the Verify return code at the bottom, and the Protocol/Cipher lines. Together these three fields narrow down most root causes before you need to look at any log files.

    Why does a TLS connection work in Chrome but fail in curl or Java?

    Browsers aggressively cache intermediate CA certificates from prior sessions, so a broken certificate chain often works in browsers but fails in programmatic clients. Check whether the server is serving the full chain including intermediate CA certificates, not just the leaf certificate.

    How do I check whether my server is serving the correct certificate for a given hostname?

    Use `openssl s_client -connect IP:443 -servername hostname` and compare the Subject CN and SAN fields against the hostname you expect. Connecting with and without the `-servername` flag will reveal whether an SNI mismatch is causing the server to return the wrong certificate.

    What does 'SSL alert number 70' mean?

    Alert number 70 is the protocol_version alert defined in RFC 5246. The server is rejecting the client's offered TLS version as unsupported. This happens when a client offers only TLS 1.0 or 1.1 to a server configured to accept TLS 1.2 and above.

    How can I monitor certificate expiry before it causes an outage?

    Use `openssl x509 -noout -checkend 2592000` in a cron job or wire up Prometheus blackbox_exporter with an ssl probe. Alert at 30 days remaining and page at 7 days. If using certbot, verify that the deploy hook is correctly reloading your web server after renewal.
