What a Certificate Chain Actually Is
If you've ever run openssl s_client against a server and watched a chain of three certificates scroll past, you've already observed a PKI in action — you just might not have known what you were looking at. Let me break it down properly.
PKI — Public Key Infrastructure — is the broader ecosystem: the policies, procedures, software, hardware, and the certificates themselves that enable verifiable trust in digital communications. A TLS certificate chain is the specific artifact that PKI produces: an ordered list of X.509 certificates, each one cryptographically vouching for the next, connecting your server's leaf certificate back to a root certificate that clients already trust.
The chain has three tiers. At the bottom is the leaf certificate — the one bound to your server or service, containing the hostname or IP the client is connecting to. In the middle are one or more intermediate CAs (Certificate Authorities). At the top is the root CA. Clients trust the root because it was installed deliberately into their OS or browser trust store. They trust your leaf because it's signed by an intermediate, which is in turn signed either by another intermediate or directly by that trusted root. Remove any link from that chain and verification fails — full stop.
Every certificate in the chain is an X.509 document. It contains a public key, metadata (subject, issuer, serial number, validity period), a list of SANs (Subject Alternative Names) for leaf certs, extension fields controlling allowed usages, and a digital signature produced by the CA above it. The root CA is self-signed — its issuer and subject fields are identical — which is exactly why it must be explicitly trusted rather than verified. You can't verify a root cert the way you verify everything else in the chain. You simply decide to trust it, and that decision anchors everything downstream.
How the Three-Tier Hierarchy Works
The three-tier model exists for a very specific operational reason. You never want your root CA online and actively signing leaf certificates day-to-day. The root is precious in a way nothing else in your infrastructure is. Compromise the root key and you compromise every certificate it has ever signed or will ever sign — potentially millions of certificates across thousands of systems. So the root stays offline. Air-gapped. On a hardware security module (HSM), in a physically secured room, with strict access controls and audit logs on every access.
The root CA's only job is to sign intermediate CA certificates. Those intermediates do the actual day-to-day signing work. If an intermediate is compromised, you revoke it, publish the revocation, and issue a new one signed by the still-intact root. The blast radius is contained. The root is never touched. This is the core design rationale for the three-tier architecture, and it's why any internal PKI worth running follows the same pattern even if you only have a handful of services.
Intermediate CAs also enable policy separation in ways that matter operationally. You might have one intermediate dedicated to internal HTTPS services, another for client authentication certificates issued to employees, and a third for code signing. Each can carry different validity periods, different key usage extensions, and be operated by different teams under different policies. The root CA policy can be strict and locked down. Intermediate CA operations can be delegated to the teams that need them without ever touching the root.
The TLS Handshake and Chain Validation
During a TLS 1.3 handshake, the server presents its certificate chain in the Certificate message. Specifically, the server sends its leaf certificate followed by any intermediate certificates. It doesn't send the root — the client is expected to already have it. This is an important detail that trips people up when debugging chain problems.
The client then performs chain validation in sequence. First, it checks that the leaf certificate's signature was produced by the private key corresponding to the public key in the next certificate up the chain. It does this by taking the leaf cert's TBSCertificate (the to-be-signed portion), hashing it with the algorithm specified in the signature algorithm field, and verifying the signature using the public key from the issuer certificate. Then it repeats this process for each subsequent certificate in the chain until it reaches a certificate whose issuer is present in the local trust store. At that final step, it verifies using the trusted root's public key directly.
Along the way, the client also checks validity periods (notBefore and notAfter fields), verifies that the Basic Constraints extension marks CA certificates as CA:TRUE, checks that path length constraints aren't violated, and validates key usage extensions — is this certificate actually authorized to sign other certificates? A certificate can be cryptographically valid and still fail validation because an extension says it isn't permitted to do what you're asking it to do.
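You can watch this walk happen with openssl verify's -show_chain flag. Here is a minimal two-link sketch with throwaway names (a real deployment would use the three-tier layout described above); depth 0 is the leaf, and each higher depth is the issuer that verified the certificate below it:

```shell
# Throwaway root CA (self-signed, so subject == issuer)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-root.key -out demo-root.crt -subj "/CN=Demo Root CA"

# Leaf keypair and CSR, then sign the CSR with the demo root
openssl req -new -newkey rsa:2048 -nodes \
  -keyout demo-leaf.key -out demo-leaf.csr -subj "/CN=demo.example"
openssl x509 -req -days 1 -in demo-leaf.csr \
  -CA demo-root.crt -CAkey demo-root.key -CAcreateserial \
  -out demo-leaf.crt

# -show_chain prints each certificate the verifier used, leaf first
openssl verify -show_chain -CAfile demo-root.crt demo-leaf.crt
```

The last command reports demo-leaf.crt: OK and then lists the chain by depth, which is exactly the walk described above made visible.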
Revocation checking adds another layer. CRL Distribution Points (CDP) in the certificate tell clients where to download a Certificate Revocation List. OCSP responder URIs tell clients where to make a real-time revocation query. OCSP stapling — where the server fetches its own OCSP response and includes it in the TLS handshake — is the operationally correct approach for public-facing services because it avoids clients making individual OCSP queries to the CA, which has both performance and privacy implications.
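Whether any of this revocation machinery can work at all is visible in the certificate itself. A quick way to see what a cert advertises, sketched here with a throwaway cert and placeholder example.com URLs (point -in at a real cert in practice):

```shell
# Demo cert with revocation pointers baked in (the URLs are placeholders)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout rev-demo.key -out rev-demo.crt -subj "/CN=revocation-demo" \
  -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.com" \
  -addext "crlDistributionPoints = URI:http://crl.example.com/ca.crl"

# Where a client would send a real-time OCSP query
openssl x509 -in rev-demo.crt -noout -ocsp_uri

# Where the CRL is published (-ext needs OpenSSL 1.1.1 or newer)
openssl x509 -in rev-demo.crt -noout -ext crlDistributionPoints
```

If either command prints nothing for a real certificate, clients have no way to check revocation for it, no matter how they're configured.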
Here's what chain inspection looks like in practice:
openssl s_client -connect mail.solvethenetwork.com:443 -showcerts
The output shows each certificate the server presented:
Certificate chain
 0 s:CN = mail.solvethenetwork.com
   i:CN = SolveTheNetwork Intermediate CA R1
 1 s:CN = SolveTheNetwork Intermediate CA R1
   i:CN = SolveTheNetwork Root CA
Certificate 0 is the leaf. Certificate 1 is the intermediate. The root isn't there — and it shouldn't be. To decode any individual cert and see its full contents:
openssl x509 -in leaf.pem -noout -text
Pay close attention to the extensions section. You'll see CA:FALSE on the leaf (it can't sign other certs), CA:TRUE with path length constraints on the intermediates and root, and Key Usage and Extended Key Usage fields that specify exactly what each certificate is authorized to do — serverAuth, clientAuth, codeSigning, and so on. These aren't decorative; clients enforce them during validation.
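You can also pull just those fields without scanning the whole -text dump: -ext takes a comma-separated list of extension names (OpenSSL 1.1.1 or newer). A self-contained sketch with a throwaway CA and leaf, with the leaf's extensions supplied via an extension file:

```shell
# Extension file for a throwaway leaf
cat > ext-demo.cnf <<'EOF'
basicConstraints = critical, CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
EOF

# Throwaway CA, then a leaf signed with those extensions
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ext-ca.key -out ext-ca.crt -subj "/CN=ext-demo CA"
openssl req -new -newkey rsa:2048 -nodes \
  -keyout ext-demo.key -out ext-demo.csr -subj "/CN=ext-demo"
openssl x509 -req -days 1 -in ext-demo.csr \
  -CA ext-ca.crt -CAkey ext-ca.key -CAcreateserial \
  -out ext-demo.crt -extfile ext-demo.cnf

# Read back only the validation-relevant extensions
openssl x509 -in ext-demo.crt -noout \
  -ext basicConstraints,keyUsage,extendedKeyUsage
```

The final command prints exactly the three extensions validation cares about, including the CA:FALSE that prevents this leaf from signing anything else.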
Building an Internal PKI from Scratch
In my experience, infrastructure teams avoid building an internal PKI until it becomes unavoidable — and then they do it poorly because they never planned for it. Scattered self-signed certificates aren't a PKI. They're technical debt with a built-in expiry date, and managing them at any reasonable scale is a nightmare. If you're running internal services — a service mesh, Kubernetes ingress, internal APIs, mTLS between microservices — you need a real internal CA.
Here's the raw OpenSSL flow, because understanding the mechanics matters even if you end up running step-ca or HashiCorp Vault's PKI secrets engine in production.
First, create the root CA. Keep this key encrypted and offline after creation:
# Generate root CA private key (4096-bit, AES-256 encrypted)
openssl genrsa -aes256 -out root-ca.key 4096
# Self-sign the root CA certificate (10-year validity)
openssl req -new -x509 -days 3650 \
-key root-ca.key \
-out root-ca.crt \
-subj "/CN=SolveTheNetwork Root CA/O=SolveTheNetwork/C=US"
Create the intermediate CA and sign it with the root:
# Generate intermediate CA key
openssl genrsa -aes256 -out intermediate.key 4096
# Create a CSR for the intermediate
openssl req -new \
-key intermediate.key \
-out intermediate.csr \
-subj "/CN=SolveTheNetwork Intermediate CA R1/O=SolveTheNetwork/C=US"
# Sign the intermediate with the root (5-year validity)
openssl x509 -req -days 1825 \
-in intermediate.csr \
-CA root-ca.crt \
-CAkey root-ca.key \
-CAcreateserial \
-out intermediate.crt \
-extensions v3_ca \
-extfile openssl.cnf
Now issue a leaf certificate for sw-infrarunbook-01. The leaf key has no passphrase — the service needs to read it at startup without human interaction:
# Generate leaf private key
openssl genrsa -out sw-infrarunbook-01.key 2048
# Create CSR
openssl req -new \
-key sw-infrarunbook-01.key \
-out sw-infrarunbook-01.csr \
-subj "/CN=sw-infrarunbook-01.solvethenetwork.com"
# Sign with intermediate, including SANs via extension file
openssl x509 -req -days 365 \
-in sw-infrarunbook-01.csr \
-CA intermediate.crt \
-CAkey intermediate.key \
-CAcreateserial \
-out sw-infrarunbook-01.crt \
-extfile san.cnf \
-extensions req_ext
The san.cnf extension file specifies Subject Alternative Names — these are required for modern TLS; the CN field alone isn't enough:
[req_ext]
subjectAltName = @alt_names
[alt_names]
DNS.1 = sw-infrarunbook-01.solvethenetwork.com
DNS.2 = sw-infrarunbook-01
IP.1 = 10.10.1.50
When you deploy this cert to Nginx or any TLS-aware service, bundle the leaf with the intermediate. The server presents both; clients need the intermediate to build the chain:
cat sw-infrarunbook-01.crt intermediate.crt > fullchain.crt
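It's worth checking what actually ended up in the bundle, and in what order. One trick: crl2pkcs7 repackages multiple certs so pkcs7 can list their subject/issuer pairs in presentation order — no CRL is involved. Sketched here with two throwaway self-signed certs standing in for a real leaf and intermediate:

```shell
# Two throwaway certs standing in for a leaf and an intermediate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout bd-leaf.key -out bd-leaf.crt -subj "/CN=bundle-demo-leaf"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout bd-int.key -out bd-int.crt -subj "/CN=bundle-demo-intermediate"
cat bd-leaf.crt bd-int.crt > bundle-demo.crt

# List subject/issuer pairs in the order the bundle presents them
openssl crl2pkcs7 -nocrl -certfile bundle-demo.crt \
  | openssl pkcs7 -print_certs -noout
```

Run the same pipeline against your real fullchain.crt: the leaf should come first, each issuer line should match the subject of the cert after it, and the root shouldn't appear at all.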
Distribute the root CA to your Linux hosts so they trust the chain:
# Debian / Ubuntu
cp root-ca.crt /usr/local/share/ca-certificates/solvethenetwork-root-ca.crt
update-ca-certificates
# RHEL / Rocky / AlmaLinux
cp root-ca.crt /etc/pki/ca-trust/source/anchors/solvethenetwork-root-ca.crt
update-ca-trust extract
Verify the chain is complete and valid before deploying:
openssl verify -CAfile root-ca.crt -untrusted intermediate.crt sw-infrarunbook-01.crt
If that returns sw-infrarunbook-01.crt: OK, your chain is good. If it says anything else, debug it before the service goes live.
Why This Architecture Matters Beyond HTTPS
The chain-of-trust model solves a fundamental problem: how do two parties that have never communicated before agree that a public key belongs to who it claims? Without PKI, there's no scalable answer. You'd need out-of-band key exchange for every pair of systems, which works for a handful of connections and collapses completely at any real scale.
With PKI, the answer is delegation. I trust the root because I installed it deliberately. The root says trust this intermediate. The intermediate says trust this leaf. The leaf says I am sw-infrarunbook-01.solvethenetwork.com and here's my public key. Each assertion is cryptographically bound to the one above it. An attacker can't forge a certificate for your hostname without the intermediate's private key, and can't forge the intermediate without the root's private key. The security of the entire system rests on the security of the root — which is exactly why it stays offline.
Certificate Transparency (CT) logs extend this trust model to the public CA ecosystem. Every publicly-trusted CA is required to log every certificate it issues to publicly auditable, append-only logs. This means if an attacker tricks a public CA into issuing a certificate for solvethenetwork.com that you didn't request, that fraudulent certificate shows up in the CT logs. Tools like certspotter or the crt.sh search interface let you monitor for unexpected certificates against your domains. I've seen this catch real misissuance events that would otherwise have gone unnoticed.
Common Misconceptions That Cause Real Outages
I've untangled enough PKI disasters to have a reliable catalog of the mistakes that keep recurring. These aren't edge cases — they're common enough that I expect to see at least one of them every time I inherit a new environment.
"The root CA needs to be reachable." No. Clients don't communicate with the root CA during TLS validation. They use the root certificate — the public document — which lives in their trust store. The root CA server itself can and should be completely offline. The only thing that needs to be reachable is the intermediate CA's CRL distribution point or OCSP responder, and even those are only consulted if revocation checking is configured and active.
"Self-signed certificates are fine for internal services." This one is pernicious because it sounds pragmatic. A self-signed certificate isn't using PKI — it's skipping it entirely. You can't revoke an individual self-signed cert without touching every client that trusts it. You can't delegate issuance. You can't build automated renewal around a scattered collection of individual trust anchors. The right answer is a proper internal CA with the root distributed to your client trust stores. It's more work upfront, but it's actually manageable at scale. Self-signed certs are manageable for exactly one service and become a nightmare for ten.
"Missing intermediates don't matter if the root is trusted." They matter enormously. In my experience, more TLS failures come from servers configured to serve only the leaf certificate — without bundling the intermediate — than from any other single configuration error. The client can't build the chain without the intermediate, and most TLS clients won't fetch it automatically even if the Authority Information Access extension tells them where to find it. Always bundle leaf plus all intermediates in your fullchain file. Always verify with openssl verify before deploying.
"Long certificate validity periods reduce operational overhead." They reduce one kind of overhead while creating a much worse kind. A two-year certificate that gets compromised on day one is a liability for 729 more days. The CA/Browser Forum is progressively shortening maximum validity periods for publicly-trusted certificates — 47-day maximums are being actively discussed as of early 2026. The correct response to short validity periods isn't to resist them; it's to automate renewal so it's not a manual operation at all. If you're manually renewing certificates, you've already lost. ACME, cert-manager, Vault PKI — pick one and automate it.
"Certificate pinning is a solid defense against chain-of-trust attacks." Pinning is a valid additional control in certain narrow contexts, but it creates severe operational risk. I've seen production outages last hours because a pin rotation wasn't coordinated correctly across all client versions in the field. If you implement pinning, you must have backup pins configured before you rotate, you must have a clear rollback path, and you must test the rotation in a staging environment that accurately reflects your client distribution. The infamous outages caused by failed pin rotations — services going dark for all users while engineers scramble — are a better argument against casual use of pinning than anything I could write.
Certificate Lifecycle Monitoring
A PKI you don't monitor is a time bomb. Certificates expire. When they do, things break — sometimes quietly (internal services returning cryptic connection errors), sometimes loudly (your public-facing site showing a browser security warning to every user while your phone rings off the hook).
At minimum, monitor expiry dates and alert with enough lead time to act. Thirty days before expiry is a reasonable threshold for internal certs; ninety days gives you room to handle organizational friction if your renewal process involves humans. For automated ACME-based systems, monitor that the renewal process is actually succeeding — not just that it's configured. A renewal job that's been silently failing for three weeks isn't a functioning system.
Check a certificate's expiry from the command line:
# Check a local cert file
openssl x509 -in sw-infrarunbook-01.crt -noout -enddate
# Check a live service directly
echo | openssl s_client -connect sw-infrarunbook-01.solvethenetwork.com:443 2>/dev/null \
| openssl x509 -noout -enddate
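For alerting rather than eyeballing dates, -checkend turns the expiry window into an exit code, which drops straight into a cron job or health check. A sketch using a throwaway cert (swap in your real cert path):

```shell
# Throwaway cert with a year of validity; use your real cert in practice
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout exp-demo.key -out exp-demo.crt -subj "/CN=expiry-demo"

# -checkend N exits non-zero if the cert expires within N seconds
# (2592000 seconds = 30 days)
if openssl x509 -in exp-demo.crt -noout -checkend 2592000; then
  echo "OK: more than 30 days of validity remaining"
else
  echo "WARNING: renew now"
fi
```

A non-zero exit from -checkend is what you wire into your alerting; the echo lines here just make the result visible.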
For anything beyond a handful of certificates, use a dedicated lifecycle tool. Cert-manager in Kubernetes handles ACME and internal CA issuance automatically and integrates with Vault, Let's Encrypt, and step-ca. For bare metal and VMs, HashiCorp Vault's PKI secrets engine gives you automated issuance, short-lived certificates, audit trails, and CRL publishing in a single package. The PKI conversation doesn't end at issuance — you're managing renewal, key rotation, intermediate CA expiry (those have validity periods too), CRL publishing, and eventual root CA rotation. Get the fundamentals right from the start, automate everything you can, keep the root offline, and document what you actually built. Everything else follows from that.
