Symptoms
SSL handshake failures in Nginx terminate the connection before a single HTTP byte is exchanged. The TLS negotiation collapses during or immediately after the ClientHello/ServerHello phase, leaving clients with cryptic error messages and operators staring at terse log entries. Common symptoms include:
- Browsers display ERR_SSL_PROTOCOL_ERROR, ERR_CERT_DATE_INVALID, or SSL_ERROR_RX_RECORD_TOO_LONG
- curl exits with SSL connect error, SSL peer certificate or SSH remote key was not OK, or SSL handshake failed
- Nginx error log entries containing SSL_do_handshake() failed, no shared cipher, or sslv3 alert certificate expired
- Java applications throwing javax.net.ssl.SSLHandshakeException: PKIX path building failed
- Intermittent 502 or 504 responses when Nginx proxies to an upstream over HTTPS
- Connections hang at the TLS negotiation phase and eventually time out with no HTTP response
- Mobile or IoT clients silently failing while desktop browsers succeed (cipher or protocol version gap)
Diagnosing which phase of the handshake failed is the critical first step. The openssl s_client command is the primary tool for that triage — it exposes exactly what the server presented and where verification failed.
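Once an error string has been captured from a client or the Nginx error log, the first-pass triage can be scripted. The helper below is a hypothetical sketch (classify_tls_error is not a standard tool); it maps the alert strings covered in the sections below to their likely root cause:

```shell
#!/usr/bin/env bash
# Hypothetical helper: bucket an openssl s_client / Nginx error string
# into the root-cause categories used in this runbook. The patterns are
# the alert substrings shown in the sections below; extend as needed.
classify_tls_error() {
  case "$1" in
    *"certificate expired"*)        echo "expired certificate" ;;
    *"no shared cipher"*)           echo "cipher suite incompatibility" ;;
    *"protocol version"*)           echo "protocol version mismatch" ;;
    *"unable to get local issuer"*) echo "incomplete certificate chain" ;;
    *"bad certificate"*)            echo "certificate rejected by client (check CN/SAN match)" ;;
    *)                              echo "unclassified; inspect manually" ;;
  esac
}

# Example: feed it a line lifted from the Nginx error log.
classify_tls_error "SSL: error:14094415:SSL routines:ssl3_read_bytes:sslv3 alert certificate expired"
```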
Root Cause 1: Certificate Mismatch
Why It Happens
A certificate mismatch occurs when the Subject Alternative Names (SANs) recorded in the X.509 certificate do not include the hostname the client is connecting to; modern clients validate only the SANs and ignore the legacy Common Name (CN) field. RFC 6125 mandates this check, and every modern browser and TLS library enforces it strictly. Mismatches arise from several operational mistakes: a wildcard certificate applied at the wrong depth (a wildcard matches exactly one label, so *.solvethenetwork.com does not cover deep.sub.solvethenetwork.com), a multi-domain certificate missing a newly added hostname, or Nginx serving the wrong virtual host certificate because server_name directives are misconfigured or a reload was not performed after a configuration change.
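The one-label wildcard rule can be verified locally with a throwaway self-signed certificate. This sketch assumes OpenSSL 1.1.1 or later for the -addext flag; the /tmp paths are illustrative only:

```shell
# Issue a throwaway wildcard certificate (scratch testing only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/wild.key -out /tmp/wild.crt \
  -subj "/CN=*.solvethenetwork.com" \
  -addext "subjectAltName=DNS:*.solvethenetwork.com" 2>/dev/null

# One label below the wildcard: covered.
openssl x509 -in /tmp/wild.crt -noout -checkhost api.solvethenetwork.com

# Two labels below: NOT covered; a wildcard matches exactly one label.
openssl x509 -in /tmp/wild.crt -noout -checkhost deep.sub.solvethenetwork.com
```

The -checkhost flag applies the same hostname-matching rules a TLS client uses, so it answers "would this certificate cover that name" without any live server.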
How to Identify It
Retrieve the certificate's CN and SANs directly from the live server:
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 \
-servername sw-infrarunbook-01.solvethenetwork.com 2>/dev/null \
| openssl x509 -noout -subject -ext subjectAltName
Example output revealing a mismatch:
subject=CN = portal.solvethenetwork.com
X509v3 Subject Alternative Name:
DNS:portal.solvethenetwork.com, DNS:www.solvethenetwork.com
If you connected to api.solvethenetwork.com but the certificate covers only portal.solvethenetwork.com and www.solvethenetwork.com, the handshake fails. curl will report:
curl: (60) SSL: no alternative certificate subject name matches target host name 'api.solvethenetwork.com'
More details here: https://curl.se/docs/sslcerts.html
The Nginx error log for a client-reported certificate alert looks like:
2026/04/06 09:14:22 [error] 1492#1492: *83 SSL_do_handshake() failed
(SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate)
while SSL handshaking, client: 10.10.5.22, server: 0.0.0.0:443
How to Fix It
Reissue the certificate with all required hostnames listed as SANs. Using Certbot with the Nginx plugin:
certbot certonly --nginx \
-d solvethenetwork.com \
-d www.solvethenetwork.com \
-d api.solvethenetwork.com \
-d portal.solvethenetwork.com
Alternatively, if multiple Nginx server blocks are serving the same IP and the wrong certificate is being selected, ensure each block's server_name matches the certificate it references:
server {
listen 443 ssl;
server_name api.solvethenetwork.com;
ssl_certificate /etc/nginx/ssl/api.solvethenetwork.com.crt;
ssl_certificate_key /etc/nginx/ssl/api.solvethenetwork.com.key;
}
After updating, validate and reload:
nginx -t && systemctl reload nginx
Root Cause 2: Protocol Version Mismatch
Why It Happens
TLS protocol version mismatches occur when the server and client cannot agree on a common TLS version during the ClientHello/ServerHello exchange. This failure happens in both directions. An Nginx instance configured with ssl_protocols TLSv1.3; only will reject any client that tops out at TLS 1.2. Conversely, an Nginx server that has disabled TLS 1.0 and 1.1 (which is correct security practice per RFC 8996) will refuse connections from older Java runtimes (Java 7 defaults to TLS 1.0), embedded IoT firmware, or legacy enterprise middleware that never shipped TLS 1.2 support. The mismatch is a hard failure — there is no fallback once both sides have declared their supported version ranges.
How to Identify It
Force a specific protocol version with openssl to probe what Nginx accepts:
# Force TLS 1.0 — should fail on a hardened server
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -tls1
# Force TLS 1.2 — should succeed
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -tls1_2
# Force TLS 1.3 — should succeed if OpenSSL >= 1.1.1
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -tls1_3
A version mismatch produces output like the following, where the cipher is (NONE) and the byte counts are minimal:
CONNECTED(00000003)
140594085787456:error:1409442E:SSL routines:ssl3_read_bytes:
tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1543:
SSL alert number 70
---
no peer certificate available
---
New, (NONE), Cipher is (NONE)
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Start Time: 1743933600
Timeout : 7200 (sec)
Verify return code: 0 (ok)
The corresponding Nginx error log entry:
2026/04/06 10:22:01 [error] 1492#1492: *91 SSL_do_handshake() failed
(SSL: error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version)
while SSL handshaking, client: 192.168.10.15, server: 0.0.0.0:443
Use nmap to enumerate which protocols Nginx is advertising:
nmap --script ssl-enum-ciphers -p 443 sw-infrarunbook-01.solvethenetwork.com
Example output showing enabled protocols:
PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_256_GCM_SHA384 - A
|_ least strength: A
How to Fix It
Set ssl_protocols in /etc/nginx/nginx.conf (or a shared include file) to cover the versions that balance security requirements with client compatibility:
# Recommended for most deployments: TLS 1.2 and 1.3 only
ssl_protocols TLSv1.2 TLSv1.3;
# For environments with legacy clients that require TLS 1.1 (not recommended):
# ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
Reload Nginx and re-verify:
nginx -t && systemctl reload nginx
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -tls1_2 2>/dev/null \
| grep Protocol
Expected output:
Protocol : TLSv1.2
Root Cause 3: Cipher Suite Incompatibility
Why It Happens
Even when server and client agree on a TLS version, the handshake fails if there is no cipher suite both sides list as acceptable. Nginx passes its cipher configuration directly to OpenSSL via the ssl_ciphers directive. Overly aggressive hardening, such as stripping all non-AEAD cipher suites, breaks older Java applications running on JVM 8 update 60 or earlier, Android 4.x clients, and legacy enterprise middleware that was never updated beyond AES-CBC-SHA cipher families. The reverse also causes failures: a server whose cipher list offers only RC4 or 3DES will be refused by modern clients, which prohibit RC4 per RFC 7465 and reject 3DES because of the Sweet32 attack. The error is explicit in the Nginx log and points directly to the root cause.
How to Identify It
The Nginx error log will contain the string no shared cipher:
2026/04/06 11:05:33 [error] 1492#1492: *104 SSL_do_handshake() failed
(SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher)
while SSL handshaking, client: 10.10.5.30, server: 0.0.0.0:443
Enumerate the ciphers the server currently accepts:
nmap --script ssl-enum-ciphers -p 443 sw-infrarunbook-01.solvethenetwork.com 2>/dev/null \
| grep -E "TLS_|SSL_"
For Java clients, enable SSL debug logging to see what cipher suites the client is offering:
java -Djavax.net.debug=ssl:handshake -jar /opt/app/client.jar 2>&1 | grep "Cipher Suites"
Sample Java debug output revealing its supported cipher list:
Cipher Suites: [TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
TLS_RSA_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_RC4_128_MD5]
If none of those overlap with what Nginx accepts, the handshake fails immediately.
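The overlap can be computed without guesswork by expanding both cipher strings with OpenSSL and intersecting them. This is a sketch: the server string below is an assumed example, and the client list would be translated from the debug output above. Note that Java prints IANA suite names while OpenSSL uses its own; openssl ciphers -stdname (OpenSSL 1.1.1+) prints both, which helps translate between the two:

```shell
#!/usr/bin/env bash
# Expand each cipher string into individual OpenSSL cipher names and
# print the intersection. An empty result means "no shared cipher".
# Note: OpenSSL 1.1.1+ prepends the TLS 1.3 suites to both expansions
# by default; they appear in the output as well.
server_ciphers='ECDHE+AESGCM'                            # example ssl_ciphers value
client_ciphers='AES128-SHA:ECDHE-RSA-AES256-GCM-SHA384'  # translated from client debug log

comm -12 \
  <(openssl ciphers "$server_ciphers" | tr ':' '\n' | sort) \
  <(openssl ciphers "$client_ciphers" | tr ':' '\n' | sort)
```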
How to Fix It
Adopt the Mozilla intermediate cipher configuration, which covers the widest compliant client base without enabling broken suites:
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
For legacy Java 8 or embedded systems that only support RSA key exchange:
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!SRP:!CAMELLIA;
ssl_prefer_server_ciphers on;
After updating, reload and confirm the negotiated cipher:
nginx -t && systemctl reload nginx
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -tls1_2 2>/dev/null \
| grep "Cipher is"
Expected output:
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Root Cause 4: Expired Certificate
Why It Happens
Every X.509 certificate contains a notBefore and a notAfter timestamp. When the current UTC time passes the notAfter value, every compliant TLS client rejects the certificate with a fatal alert — there is no override at the TLS layer. This is the single most common SSL handshake failure in production environments. It results from broken auto-renewal pipelines (Certbot's HTTP-01 challenge blocked by a WAF rule, DNS validation failing due to a propagation delay, a systemd timer that silently failed), manually issued certificates with no calendar reminder, or certificate chains where an intermediate CA expired before the leaf certificate was renewed. Note that the Nginx error log shows the client-side alert message rather than the expiry date itself, so cross-referencing with the certificate file is necessary.
How to Identify It
Check expiry against the live server:
echo | openssl s_client \
-connect sw-infrarunbook-01.solvethenetwork.com:443 \
-servername sw-infrarunbook-01.solvethenetwork.com 2>/dev/null \
| openssl x509 -noout -dates
Example output showing an expired certificate:
notBefore=Jan 1 00:00:00 2025 GMT
notAfter=Apr 1 00:00:00 2026 GMT
Check the certificate file on disk directly:
openssl x509 -enddate -noout -in /etc/nginx/ssl/solvethenetwork.com.crt
Bulk check all certificate files under the Nginx SSL directory:
for cert in /etc/nginx/ssl/*.crt; do
echo -n "$cert: "
openssl x509 -enddate -noout -in "$cert" 2>/dev/null || echo "UNREADABLE"
done
The Nginx error log entry for an expired certificate (reported as a client alert):
2026/04/06 08:00:14 [error] 1492#1492: *12 SSL_do_handshake() failed
(SSL: error:14094415:SSL routines:ssl3_read_bytes:sslv3 alert certificate expired)
while SSL handshaking, client: 10.10.8.44, server: 0.0.0.0:443
How to Fix It
For Let's Encrypt certificates managed by Certbot, force immediate renewal of the affected certificate (certbot renew does not accept -d flags; target a specific certificate lineage with --cert-name):
certbot renew --cert-name solvethenetwork.com --force-renewal
systemctl reload nginx
For manually managed certificates, deploy the new certificate bundle and key, then verify and reload:
cp /tmp/new_solvethenetwork.crt /etc/nginx/ssl/solvethenetwork.com.crt
cp /tmp/new_solvethenetwork.key /etc/nginx/ssl/solvethenetwork.com.key
chmod 600 /etc/nginx/ssl/solvethenetwork.com.key
chown root:root /etc/nginx/ssl/solvethenetwork.com.key
nginx -t && systemctl reload nginx
Confirm the new expiry date:
echo | openssl s_client \
-connect sw-infrarunbook-01.solvethenetwork.com:443 2>/dev/null \
| openssl x509 -noout -dates
Root Cause 5: SNI Not Configured
Why It Happens
Server Name Indication (SNI) is a TLS extension defined in RFC 6066 that allows a client to include the target hostname in the ClientHello message, before any certificate is exchanged. Nginx uses the SNI value to select the correct virtual host and its associated certificate when multiple HTTPS server blocks share the same IP address and port. When SNI is absent from the client request — or when Nginx has no matching server_name and no properly configured default_server — the server either presents the first certificate loaded (which may be wrong), presents no certificate at all, or fails the handshake with an error about a missing ssl_certificate. SNI absence is encountered with older HTTP clients, some legacy Java HttpsURLConnection implementations, and environments where an L4 load balancer or NAT device strips TLS extensions before forwarding to Nginx.
How to Identify It
Compare openssl behavior with and without the -servername flag:
# Without SNI — Nginx selects the default_server certificate
openssl s_client -connect 10.10.1.50:443 2>/dev/null \
| openssl x509 -noout -subject
# With SNI — Nginx selects the correct vhost certificate
openssl s_client -connect 10.10.1.50:443 \
-servername api.solvethenetwork.com 2>/dev/null \
| openssl x509 -noout -subject
If the two subjects differ, SNI routing is working but no-SNI clients get the wrong certificate. Nginx error log entry when no ssl_certificate is available for a connection with no SNI match:
2026/04/06 13:45:01 [error] 1492#1492: *201
no "ssl_certificate" is defined in server listening on SSL port
while SSL handshaking, client: 192.168.20.11, server: 0.0.0.0:443
Confirm which server block is acting as default_server:
grep -rn "default_server" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/
Capture a ClientHello packet to verify whether the SNI extension is present:
tcpdump -i eth0 -w /tmp/tls_debug.pcap port 443 &
curl -k https://10.10.1.50/healthz
kill %1
# Analyse in Wireshark: TLS > ClientHello > Extensions > server_name
How to Fix It
Define a catch-all default server block on port 443 that handles no-SNI clients gracefully. Pair it with all named virtual host blocks using explicit server_name values:
# Catch-all for unknown SNI or no-SNI clients
server {
listen 443 ssl default_server;
server_name _;
ssl_certificate /etc/nginx/ssl/default.solvethenetwork.com.crt;
ssl_certificate_key /etc/nginx/ssl/default.solvethenetwork.com.key;
# Drop the connection cleanly
return 444;
}
# Named virtual host — only matches when SNI includes this hostname
server {
listen 443 ssl;
server_name api.solvethenetwork.com;
ssl_certificate /etc/nginx/ssl/api.solvethenetwork.com.crt;
ssl_certificate_key /etc/nginx/ssl/api.solvethenetwork.com.key;
...
}
If the problematic client cannot be upgraded to send SNI, route it to a dedicated IP address bound to a separate Nginx server block where SNI selection is not needed.
Root Cause 6: Incomplete Certificate Chain (Missing Intermediates)
Why It Happens
Certificates issued by commercial CAs and Let's Encrypt are not signed directly by a root CA. They are signed by an intermediate CA whose certificate must also be presented during the handshake so the client can build a trusted chain to the root. Nginx requires the operator to bundle the full chain manually — it does not auto-fetch intermediates via Authority Information Access (AIA). When only the leaf certificate is configured in ssl_certificate, clients that do not cache the intermediate (curl with strict verify, Java TrustManagers, many mobile applications) fail with an unknown issuer error. Desktop browsers often succeed anyway because they cache intermediates from previous visits to other sites or fetch them via AIA, which masks the problem in browser testing while API clients and mobile apps fail in production.
How to Identify It
openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 \
-servername sw-infrarunbook-01.solvethenetwork.com 2>/dev/null \
| grep -A 10 "Certificate chain"
Incomplete chain — only depth 0 is present:
Certificate chain
0 s:CN = sw-infrarunbook-01.solvethenetwork.com
i:CN = R11, O = Let's Encrypt, C = US
---
Verify return code: 20 (unable to get local issuer certificate)
Complete chain — depth 0 (leaf) and depth 1 (intermediate):
Certificate chain
0 s:CN = sw-infrarunbook-01.solvethenetwork.com
i:CN = R11, O = Let's Encrypt, C = US
1 s:CN = R11, O = Let's Encrypt, C = US
i:CN = ISRG Root X1, O = Internet Security Research Group, C = US
---
Verify return code: 0 (ok)
How to Fix It
Concatenate the leaf certificate and all intermediate certificates into a single PEM bundle:
cat /etc/letsencrypt/live/solvethenetwork.com/cert.pem \
/etc/letsencrypt/live/solvethenetwork.com/chain.pem \
> /etc/nginx/ssl/solvethenetwork.com-fullchain.pem
# Alternatively, use Certbot's pre-assembled fullchain.pem directly:
ssl_certificate /etc/letsencrypt/live/solvethenetwork.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/solvethenetwork.com/privkey.pem;
Verify the assembled chain validates against the system CA bundle:
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt \
/etc/nginx/ssl/solvethenetwork.com-fullchain.pem
Expected output:
/etc/nginx/ssl/solvethenetwork.com-fullchain.pem: OK
Root Cause 7: Private Key Mismatch
Why It Happens
Nginx validates at startup that the public key embedded in the configured certificate matches the private key file. If they do not match — a situation that arises when a certificate is renewed with a new key pair but the Nginx configuration still references the old key path, or when the wrong key file is deployed by an automation script — Nginx will fail to start or emit a hard error on reload. In certain scenarios involving nginx -s reload (SIGHUP), existing workers may continue serving the old (still-matching) key while new workers fail, causing intermittent handshake failures that are difficult to reproduce.
How to Identify It
Compare the modulus of the certificate and the key; for an RSA pair, the two MD5 hashes must be identical (for ECDSA keys, compare the public keys instead):
openssl x509 -noout -modulus -in /etc/nginx/ssl/solvethenetwork.com.crt | md5sum
openssl rsa -noout -modulus -in /etc/nginx/ssl/solvethenetwork.com.key | md5sum
If the two hashes differ, the key does not match the certificate. Nginx startup error:
nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/solvethenetwork.com.key")
failed (SSL: error:0B080074:x509 certificate routines:
X509_check_private_key:key values mismatch)
How to Fix It
Scan all available key files to find the one that pairs with the certificate:
CERT_MOD=$(openssl x509 -noout -modulus -in /etc/nginx/ssl/solvethenetwork.com.crt | md5sum)
for k in /etc/nginx/ssl/*.key; do
KEY_MOD=$(openssl rsa -noout -modulus -in "$k" 2>/dev/null | md5sum)
[ "$KEY_MOD" = "$CERT_MOD" ] && echo "Matching key: $k"
done
Update ssl_certificate_key to point to the matched key file, then reload:
nginx -t && systemctl reload nginx
Prevention
The most effective way to handle SSL handshake failures is to prevent them entirely through automation, monitoring, and configuration discipline. Apply the following practices across every Nginx deployment:
- Automate certificate renewal and verify the timer. Use Certbot's systemd timer or a cron job to renew certificates at least 30 days before expiry. Verify the timer is active and has not silently failed:
systemctl status certbot.timer
journalctl -u certbot.service --since yesterday
- Monitor certificate expiry externally. Deploy the Prometheus blackbox exporter and alert on probe_ssl_earliest_cert_expiry dropping below 14 days, or use a dedicated check such as check_ssl_cert in Nagios/Icinga pointed at each virtual hostname.
- Centralise TLS parameters in a shared include. Store ssl_protocols, ssl_ciphers, ssl_prefer_server_ciphers, ssl_session_cache, and ssl_session_timeout in a single file (e.g., /etc/nginx/includes/tls_params.conf) and include it in every server block. This eliminates per-virtual-host drift.
- Always deploy full certificate chains. Never configure ssl_certificate to point to only the leaf certificate. Use Certbot's fullchain.pem or concatenate manually and run openssl verify before deployment.
- Run testssl.sh after every TLS configuration change. Install testssl.sh and run it against each hostname before promoting configuration changes to production:
./testssl.sh sw-infrarunbook-01.solvethenetwork.com
All critical checks should return green.
- Define a default_server with a valid certificate on every HTTPS port. This prevents obscure handshake errors for clients that omit SNI and gives you a clean rejection path.
- Validate cert/key pairs in your deployment pipeline. Add a CI/CD step that runs the modulus MD5 comparison before any certificate is deployed to production hosts.
- Enable OCSP stapling to avoid handshake failures caused by CA OCSP responder unavailability and to reduce TLS latency:
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/solvethenetwork.com-fullchain.pem;
resolver 10.10.1.1 valid=300s;
resolver_timeout 5s;
- Log TLS metadata in every access log. Adding $ssl_protocol, $ssl_cipher, and $ssl_server_name to your Nginx log format makes future incident diagnosis dramatically faster:
log_format tls_detail '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'$ssl_protocol $ssl_cipher "$ssl_server_name"';
access_log /var/log/nginx/access.log tls_detail;
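The cert/key pipeline gate from the list above can be sketched as a self-contained script. The throwaway certificate generation exists only to make the demo runnable offline; in a real pipeline, check_pair (a hypothetical helper name) would point at the artifacts being deployed:

```shell
#!/usr/bin/env bash
set -e

# Demo setup only: a matching pair plus a deliberately wrong key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/pair.key -out /tmp/pair.crt -subj "/CN=ci-test" 2>/dev/null
openssl genrsa -out /tmp/wrong.key 2048 2>/dev/null

check_pair() {  # returns 0 when the cert and key share an RSA modulus
  local cert_mod key_mod
  cert_mod=$(openssl x509 -noout -modulus -in "$1" | md5sum)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | md5sum)
  [ "$cert_mod" = "$key_mod" ]
}

check_pair /tmp/pair.crt /tmp/pair.key  && echo "pair.key: MATCH"
check_pair /tmp/pair.crt /tmp/wrong.key || echo "wrong.key: MISMATCH"
```

Wiring this check into CI as a required step turns a silent runtime failure into a loud build failure before anything reaches production hosts.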
Frequently Asked Questions
Q: How do I quickly determine whether an SSL handshake is succeeding from the command line?
A: Run openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 -servername sw-infrarunbook-01.solvethenetwork.com. A successful handshake displays Verify return code: 0 (ok) and a populated Certificate chain block. Any non-zero verify return code maps directly to an X.509 or TLS error — code 10 is an expired certificate, code 20 is an untrusted issuer, and code 62 is a hostname mismatch (on newer OpenSSL versions).
Q: What is the difference between ssl_certificate and ssl_trusted_certificate in Nginx?
A: ssl_certificate is the PEM file Nginx sends to connecting clients — it must contain the leaf certificate followed by all intermediate CAs. ssl_trusted_certificate is used only for two internal purposes: verifying OCSP stapling responses, and validating client certificates when mutual TLS is configured. It should point to a CA bundle, not to the server's own leaf certificate.
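As a concrete sketch, a server block using the standard Certbot file layout (paths are assumptions based on a default Certbot install) separates the two directives like this:

```nginx
# Sent to clients during the handshake: leaf + intermediates.
ssl_certificate     /etc/letsencrypt/live/solvethenetwork.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/solvethenetwork.com/privkey.pem;

# Used only internally: OCSP stapling verification (and client-certificate
# validation when mutual TLS is enabled). Points at a CA bundle.
ssl_trusted_certificate /etc/letsencrypt/live/solvethenetwork.com/chain.pem;
```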
Q: Why does my site work in Chrome but fail in curl or a Java application?
A: Chrome aggressively caches intermediate certificates and can fetch missing ones via AIA (Authority Information Access), so it completes the chain even when the server only sends the leaf certificate. By contrast, curl with strict verification and Java TrustManagers perform chain validation without fetching missing intermediates. Run curl -v https://sw-infrarunbook-01.solvethenetwork.com to see the real error, and fix the chain at the server so both clients succeed without workarounds.
Q: How do I enable TLS 1.3 in Nginx?
A: Set ssl_protocols TLSv1.2 TLSv1.3; in your configuration. TLS 1.3 is available when Nginx was compiled against OpenSSL 1.1.1 or later; verify with nginx -V 2>&1 | grep OpenSSL. Note that TLS 1.3 cipher suites are fixed by the protocol specification and are not configurable via ssl_ciphers — only TLS 1.2 and earlier ciphers are controlled by that directive.
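If you do need to restrict the TLS 1.3 suites, newer Nginx builds (1.19.4+, built against OpenSSL 1.1.1+) expose OpenSSL's configuration interface directly. Treat this as an advanced option and confirm support in your build first; the suite selection below is only an illustration:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;

# TLS 1.2 and earlier suites: controlled by ssl_ciphers as usual.
# TLS 1.3 suites: passed through to OpenSSL via ssl_conf_command.
ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;
```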
Q: Nginx reloads successfully but SSL connections still fail immediately afterward — why?
A: A reload (SIGHUP) spawns new worker processes with the updated configuration while existing workers finish their current connections. If you replaced a certificate file in-place, verify the new file is a valid PEM: openssl x509 -noout -text -in /etc/nginx/ssl/solvethenetwork.com.crt. Also check whether a CDN, upstream load balancer (e.g., at 10.10.1.1), or client-side TLS session resumption cache is presenting a stale certificate. A full service restart (systemctl restart nginx) forces all workers to reload.
Q: What does "no shared cipher" mean and how do I fix it quickly?
A: no shared cipher means the client and Nginx have zero cipher suites in common — the intersection of their cipher lists is empty. The fastest diagnostic path is to temporarily broaden your cipher list to HIGH:!aNULL:!eNULL, confirm the client connects, check the $ssl_cipher access log field to see which cipher it negotiated, and then selectively add that cipher to your production cipher string. Never leave a broad cipher list in production; narrow it once the required cipher is identified.
Q: How do I test that SNI routing is working correctly across all my Nginx virtual hosts?
A: Loop over all your hostnames with openssl and verify each returns the expected certificate:
for host in solvethenetwork.com api.solvethenetwork.com portal.solvethenetwork.com; do
subject=$(openssl s_client -connect 10.10.1.50:443 \
-servername $host 2>/dev/null \
| openssl x509 -noout -subject 2>/dev/null)
echo "$host -> $subject"
done
Each line should show the correct certificate CN for the corresponding hostname. Any that return a mismatched subject indicate an SNI routing problem in Nginx.
Q: Can I capture detailed TLS handshake information from Nginx for debugging?
A: Nginx does not expose deep TLS internals through its own logging, but you can add $ssl_protocol, $ssl_cipher, $ssl_server_name, and $ssl_client_verify to your access log format to capture handshake outcomes. For packet-level tracing, use tcpdump -i eth0 -w /tmp/tls.pcap port 443 and analyse the capture in Wireshark with the TLS dissector. If you need to decrypt the capture for inspection, set the SSLKEYLOGFILE=/tmp/tls_keys.log environment variable in the curl client and point Wireshark at that file.
Q: My Nginx upstream proxy_pass over HTTPS is failing with SSL errors — is this the same problem?
A: The underlying TLS mechanics are identical, but the Nginx directives for upstream SSL are separate from the frontend directives. For upstream connections, use proxy_ssl_protocols, proxy_ssl_ciphers, proxy_ssl_certificate, proxy_ssl_certificate_key, proxy_ssl_trusted_certificate, and proxy_ssl_verify on. A common mistake is hardening the global ssl_protocols directive and assuming it also governs proxy SSL — it does not. The proxy directives must be set independently.
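A minimal upstream TLS sketch, assuming a hypothetical upstream hostname and the Debian/Ubuntu CA bundle path:

```nginx
location /api/ {
    proxy_pass https://upstream.solvethenetwork.com;   # hypothetical upstream

    proxy_ssl_protocols           TLSv1.2 TLSv1.3;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;

    # Off by default: without this, Nginx sends no SNI to the upstream,
    # which itself causes handshake or certificate-selection failures.
    proxy_ssl_server_name         on;
    proxy_ssl_name                upstream.solvethenetwork.com;
}
```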
Q: How do I script a certificate expiry check across multiple hostnames?
A: Use a shell loop that queries each host directly and compares the expiry date against a threshold:
WARN_DAYS=14
for domain in solvethenetwork.com api.solvethenetwork.com portal.solvethenetwork.com; do
expiry=$(echo | openssl s_client \
-connect ${domain}:443 \
-servername ${domain} 2>/dev/null \
| openssl x509 -noout -enddate 2>/dev/null \
| cut -d= -f2)
exp_epoch=$(date -d "$expiry" +%s 2>/dev/null)
now_epoch=$(date +%s)
days_left=$(( (exp_epoch - now_epoch) / 86400 ))
echo "$domain: expires $expiry ($days_left days)"
[ $days_left -lt $WARN_DAYS ] && echo " WARNING: renewal required"
done
Integrate this into a monitoring cron job running as infrarunbook-admin and pipe warnings to your alerting channel.
Q: What is HSTS and why does it make an expired certificate worse than other TLS failures?
A: HTTP Strict Transport Security (HSTS), delivered via the Strict-Transport-Security response header, instructs browsers to refuse plain HTTP connections and to refuse bypassing TLS certificate errors for the specified max-age period. When a certificate expires on an HSTS-protected domain, browsers that have cached the HSTS policy will not display a bypass button — users are completely locked out until the certificate is replaced. Never configure a long HSTS max-age (e.g., 31536000 seconds) until your certificate renewal pipeline has been proven reliable over multiple renewal cycles.
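A cautious HSTS rollout can be sketched as follows; the one-day max-age is an illustrative starting value, to be raised only after several successful automated renewal cycles:

```nginx
# Phase 1: short max-age while the renewal pipeline proves itself.
add_header Strict-Transport-Security "max-age=86400" always;

# Phase 2 (later): long max-age once renewals are demonstrably reliable.
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```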
Q: After fixing an SSL handshake issue, how do I confirm the full TLS configuration is correct before returning traffic?
A: Run testssl.sh for a comprehensive audit that checks protocol versions, cipher grades, certificate chain completeness, expiry, HSTS policy, OCSP stapling status, and known vulnerabilities (BEAST, POODLE, Heartbleed, ROBOT). Install it and run it against the target host:
git clone --depth 1 https://github.com/drwetter/testssl.sh /opt/testssl
/opt/testssl/testssl.sh sw-infrarunbook-01.solvethenetwork.com
All critical checks should return green (OK) or informational before the host is returned to production load.
