InfraRunBook

    Palo Alto SSL Decryption Not Working

    Palo Alto
    Published: Apr 16, 2026
    Updated: Apr 16, 2026

    SSL decryption on Palo Alto firewalls fails for several distinct reasons — policy mismatches, untrusted CAs, certificate pinning, HSM faults, and resource exhaustion. This runbook walks through every root cause with real CLI commands and actionable fixes.


    Symptoms

    SSL decryption on a Palo Alto firewall fails in ways that range from obvious to infuriatingly subtle. The clearest signal is users reporting certificate errors — browsers throwing NET::ERR_CERT_AUTHORITY_INVALID or the Firefox equivalent SEC_ERROR_UNKNOWN_ISSUER. That's actually one of the better failure modes, because at least it confirms decryption is attempting to work. The worse scenario is traffic silently bypassing decryption entirely, flowing through as encrypted sessions with no decryption log entries, and no one notices until a security audit surfaces it.

    Other symptoms you'll encounter in the field: applications like Microsoft Teams, Zoom, or Outlook refusing to connect on networks where SSL decryption is enabled, while a browser on the same machine works fine. You'll see traffic log entries where the Decrypted column is blank for sessions that should be hitting your policy. Sometimes only users on one subnet are affected — a strong hint that zone or address matching in the decryption policy is off. And then there's the intermittent failure pattern where everything works fine until 9 AM when the office fills up, and then sessions start timing out. That one's almost always a performance story.

    Before digging into root causes, do two things. First, pull up Monitor > Logs > Traffic and add the Decrypted column to the view — look at whether sessions you expect to be decrypted are actually showing as decrypted. Second, check Monitor > Logs > Decryption. If that log is empty while encrypted traffic is clearly flowing, decryption isn't happening at all, which narrows your investigation significantly.


    Root Cause 1: Decryption Policy Not Matching

    This is the most common cause. Palo Alto evaluates decryption policies top-down, exactly like security policies. If your decryption rule has the wrong source zone, destination zone, address object, or user criteria, the traffic won't match it — and unmatched traffic flows through without decryption. The firewall won't log a missed match for decryption policies, so the absence of a decryption log entry is easy to overlook.

    In my experience, the most frequent mismatch is a correct source address paired with a stale zone name — the zone got renamed or restructured during a migration and no one updated the decryption policy. Another common one: the decryption policy references a URL category, but the traffic is going to an IP address that the firewall hasn't resolved to that category yet.

    The fastest way to identify this is the decryption policy match test, which takes the IP protocol number (6 for TCP). SSH into the firewall and run:

    infrarunbook-admin@sw-infrarunbook-01> test decryption-policy-match source 10.10.1.50 destination 172.217.14.196 destination-port 443 protocol 6

    If the policy matches, you'll see output like this:

    Rule Name:    Decrypt-Outbound-General
    Action:       decrypt
    Profile:      Decryption-Profile-Default
    Type:         ssl-forward-proxy

    If you get no output, or the message no matching rule found, the traffic isn't hitting any decryption rule. At that point, pull the actual traffic log entry for the failing session and note the exact source zone and destination zone values. Those need to match exactly what's configured in your decryption rule — not what you think they should be, but what the firewall actually sees.

    You can also check decryption rule hit counts from operational mode:

    infrarunbook-admin@sw-infrarunbook-01> show rule-hit-count vsys vsys-name vsys1 rule-base decryption rules all

    Or from the web UI under Policies > Decryption, check the hit count column. Zero hits on a rule you expect to be active is a red flag. The fix is straightforward: match source and destination zones exactly to what the traffic log shows, use address groups if you're handling many subnets, and place more specific rules above broader ones. Don't forget to commit after changes.
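If you run these checks regularly, the same operational commands can be issued over the PAN-OS XML API. Below is a minimal Python sketch using only the standard library; the management address and API key are placeholders, and the exact cmd XML for any CLI command can be captured on the firewall with debug cli on before scripting it.

```python
import urllib.parse
import urllib.request

def build_op_url(fw_base: str, api_key: str, cmd_xml: str) -> str:
    """Build a PAN-OS XML API URL for an operational (type=op) command."""
    qs = urllib.parse.urlencode({"type": "op", "cmd": cmd_xml, "key": api_key})
    return f"{fw_base}/api/?{qs}"

def run_op(fw_base: str, api_key: str, cmd_xml: str) -> str:
    """Execute the op command and return the raw XML response."""
    with urllib.request.urlopen(build_op_url(fw_base, api_key, cmd_xml)) as resp:
        return resp.read().decode()

# Example (placeholder hostname and key -- generate a key via type=keygen):
# xml = run_op("https://fw.example.net", "LUFRPT1...",
#              "<show><system><info></info></system></show>")
```

Scheduling a script like this against the decryption rule base turns "zero hits on a rule you expect to be active" from an audit finding into an alert.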


    Root Cause 2: Root CA Not Trusted by Client

    When SSL forward proxy decryption is working correctly, the firewall intercepts the TLS handshake, establishes its own session with the destination server, and re-signs the server's certificate using your internal forward-proxy CA. The client then validates this re-signed certificate against its local trust store. If your forward-proxy CA isn't in that trust store, every single SSL-decrypted connection throws a certificate error — which is exactly what users see.

    This is why deployment order matters so much. You push the CA certificate to all endpoints before enabling SSL decryption, not after. I've seen rollouts where the security team enabled decryption across the entire organization on a Friday afternoon, no CA distribution had happened, and the help desk queue went vertical over the weekend. Don't be that team.

    To identify this, first verify what CA the firewall is using for forward proxy:

    infrarunbook-admin@sw-infrarunbook-01> show ssl-decrypt certificate

    You'll see output similar to this:

    Certificate Name:    solvethenetwork-forward-proxy-ca
    Issued By:           solvethenetwork-internal-root-ca
    Valid From:          2024-03-01 00:00:00
    Valid To:            2029-02-28 23:59:59
    Key Size:            2048
    Serial:              04B7A1F93E2C...

    Then check whether that CA is present in the client's trust store. On Windows, open certmgr.msc and look under Trusted Root Certification Authorities > Certificates. Or check programmatically with PowerShell:

    Get-ChildItem -Path Cert:\LocalMachine\Root | Where-Object {$_.Subject -like "*solvethenetwork*"}

    If it's missing, push the CA cert via Group Policy Object under Computer Configuration > Windows Settings > Security Settings > Public Key Policies > Trusted Root Certification Authorities. For macOS and Linux endpoints, deploy via your MDM or configuration management tool. For mobile devices, push via MDM profile.

    One detail that bites people during distribution: export only the certificate from the firewall, not the private key. The CA cert you export should be a PEM or DER file. If the cert has already expired, you'll need to generate a new CA, re-export it, update your Decryption Profile to reference the new certificate, push it to all endpoints, and commit. Don't delete the old cert immediately — give clients time to pick up the new one before removing the old reference.


    Root Cause 3: Certificate Pinning on Applications

    Certificate pinning is a security mechanism where an application validates not just that a certificate chains to a trusted CA, but that the certificate itself — or its public key — matches a specific expected value baked into the application. When your firewall's SSL forward proxy intercepts the connection and re-signs it with your internal CA, the certificate no longer matches the pinned value. The application refuses the connection, usually silently, with no useful error message for the end user.

    This is why enterprise SSL decryption deployments always need a well-maintained exclusion list. Applications like Microsoft Teams, Zoom, Dropbox, most banking and financial apps, corporate MDM agents, and virtually all native mobile applications use certificate pinning. You won't get a browser-style certificate warning — the app just fails to connect, and users blame the network.

    The diagnostic process here is a process of elimination. If a browser works on the same host but a specific application doesn't, and that application is known to use pinning, you've found your culprit. Check your decryption logs to confirm — if the session shows as decrypted but the application is failing, pinning is almost certainly involved:

    infrarunbook-admin@sw-infrarunbook-01> show log decryption direction equal forward | match "fail"

    For deeper packet-level inspection, use the dataplane debug tool to examine what's happening during the TLS handshake for a specific source:

    infrarunbook-admin@sw-infrarunbook-01> debug dataplane packet-diag set filter match source 10.10.1.50 destination-port 443
    infrarunbook-admin@sw-infrarunbook-01> debug dataplane packet-diag set filter on
    infrarunbook-admin@sw-infrarunbook-01> debug dataplane packet-diag set log feature ssl yes
    infrarunbook-admin@sw-infrarunbook-01> debug dataplane packet-diag set log on

    Watch for TLS alert messages from the client side — specifically a fatal alert of type bad_certificate or unknown_ca immediately after the server hello. That's the application rejecting the re-signed cert.

    The fix is a decryption exclusion rule. In Policies > Decryption, create a rule positioned above your main decryption policy that matches the affected application's traffic and sets the action to No Decrypt. Use App-ID where possible rather than raw IP addresses — it's more maintainable as destination IPs change. Palo Alto also ships a predefined SSL Decryption Exclusion list of known decryption-incompatible hosts and applications under Device > Certificate Management > SSL Decryption Exclusion. Review that list when you're standing up a new environment, and save yourself the reactive troubleshooting.


    Root Cause 4: HSM Integration Failures

    If your Palo Alto firewall is integrated with a Hardware Security Module for private key storage — common in financial services, healthcare, and high-compliance environments — an HSM connectivity problem will break SSL decryption entirely. The firewall delegates cryptographic operations during the TLS handshake to the HSM. No HSM access means no key operations, which means no decryption.

    HSM failures are particularly nasty to diagnose because from the traffic log perspective, the failure looks identical to a policy mismatch — sessions just aren't getting decrypted, and no obvious error surfaces to the user. The differentiation is in the system log. That's the first place to look:

    infrarunbook-admin@sw-infrarunbook-01> show log system direction equal backward | match "hsm"

    An unhealthy HSM integration produces entries like these:

    2026-04-15 08:23:14  Error    HSM connection failed - timeout connecting to 10.10.5.20:1792
    2026-04-15 08:23:14  Critical SSL decryption unavailable - HSM unreachable
    2026-04-15 08:25:01  Error    HSM partition authentication failed - check client certificate

    You can also query HSM status directly from operational mode:

    infrarunbook-admin@sw-infrarunbook-01> show system hsm info

    A healthy integration returns something like this:

    HSM Type:       SafeNet Luna Network HSM
    Status:         Connected
    Partition:      solvethenetwork-fw-partition
    Key Count:      3
    Last Synced:    2026-04-15 08:00:01

    An unhealthy one looks like this:

    HSM Type:       SafeNet Luna Network HSM
    Status:         Disconnected
    Last Error:     Connection refused (10.10.5.20:1792)
    Last Synced:    2026-04-14 23:59:42

    Remediation depends on the failure type. For a network connectivity issue, verify the HSM appliance is reachable from the firewall's management interface, confirm that security policy permits the traffic (SafeNet Luna HSMs typically use TCP port 1792), and check whether any certificate rotations on the HSM invalidated the firewall's client identity. For partition authentication failures, you may need to re-authenticate the firewall client to the HSM partition — the procedure varies by vendor but generally involves re-registering the client certificate through the HSM's admin console.

    If the HSM itself has failed and you have a secondary HSM in an HA pair, fail over to it now. If you don't have a secondary, you'll need to temporarily move key storage back to software while the HSM is restored. That's a non-trivial operation — it involves regenerating or re-importing keys and updating your Decryption Profile — so plan for downtime on SSL decryption during the remediation window.

    One operational lesson I keep seeing repeated in the field: HSM failover gets documented in a runbook and then never tested. The failover procedure breaks silently when an HSM firmware upgrade changes the expected behavior, and no one finds out until there's an actual outage. Schedule a tested failover drill annually. Put it on the calendar as a required operational exercise, not a nice-to-have.


    Root Cause 5: Performance Impact Dropping SSL Sessions

    SSL decryption is computationally expensive. The firewall performs full TLS termination on every decrypted session — key exchange, asymmetric operations, symmetric encryption and decryption, certificate validation — on top of all its other inspection tasks. On a firewall that's undersized for its traffic load, or a properly-sized firewall dealing with an unexpected traffic spike, the data plane CPU saturates and sessions start dropping.

    The symptom pattern here is distinctive. Everything works fine at 8 AM with 200 users, but at 9 AM when 2000 users come online, HTTPS connections begin failing intermittently or timing out mid-session. The morning collaboration surge, a Patch Tuesday software update storm, a company-wide video call — any of these can push a borderline firewall over the edge. The key tell is that the failures are load-correlated, not topology-correlated.

    From the CLI, start with data plane resource monitoring:

    infrarunbook-admin@sw-infrarunbook-01> show resource-monitor second last 60

    Pay attention to the dp0 CPU utilization. Above 80% sustained is a problem. For more granular detail:

    infrarunbook-admin@sw-infrarunbook-01> show running resource-monitor

    Then check global counters for session drops, specifically filtering for drop-severity events:

    infrarunbook-admin@sw-infrarunbook-01> show counter global filter delta yes severity drop

    When SSL decryption is exhausting resources, you'll see counters like these incrementing under load:

    Name                    Value   Rate    Severity  Aspect    Description
    ssl_dec_enomem          847     14/s    drop      ssl       SSL decryption out of memory
    flow_dos_pf_drop        1203    20/s    drop      flow      DoS protection zone drop
    pktproc_ssl_dec_fail    391      6/s    drop      ssl       SSL decryption processing failure

    For SSL-specific session load, check decryption statistics and concurrent decrypted session count:

    infrarunbook-admin@sw-infrarunbook-01> show ssl-decrypt statistics
    infrarunbook-admin@sw-infrarunbook-01> show session all filter ssl-decrypt yes count

    Compare the concurrent decrypted session count against your platform's rated SSL decryption session capacity. This number is in the hardware datasheet, and it's substantially lower than the platform's total session capacity — that asymmetry surprises a lot of engineers the first time they see it.
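When you're watching these counters during a load event, a short script is faster than eyeballing the output. The sketch below screens show counter global output for SSL-related drops; it assumes the five leading columns shown in the sample above (Name, Value, Rate, Severity, Aspect), with the remainder of each line treated as the description.

```python
def parse_global_counters(text: str) -> list[dict]:
    """Parse rows of 'show counter global' output into dicts.
    Header and separator lines are skipped because their second
    column is not a bare integer."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 6 and parts[1].isdigit():
            rows.append({
                "name": parts[0], "value": int(parts[1]), "rate": parts[2],
                "severity": parts[3], "aspect": parts[4],
                "description": " ".join(parts[5:]),
            })
    return rows

def ssl_drop_counters(rows: list[dict]) -> list[dict]:
    """Return only drop-severity counters in the ssl aspect --
    the ones that implicate decryption itself rather than general flow drops."""
    return [r for r in rows if r["severity"] == "drop" and r["aspect"] == "ssl"]
```

Feed it two captures taken a minute apart and diff the values, and you have a crude but serviceable load-correlation check.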

    Mitigation comes in several forms. First, tune your decryption policy to exclude categories that don't meaningfully improve your security posture. Financial services sites, healthcare portals, and OS software update services are common candidates for exclusion — decrypting them adds risk without much inspection value, and it reduces your CPU load considerably. Second, if your platform supports SSL hardware offloading (PA-5200 and PA-7000 series have dedicated SSL processing ASICs), verify it's enabled. Third, if you're consistently running the DP above 75% during normal business hours, that's a right-sizing conversation — SSL decryption at scale requires the compute to match the throughput requirement.


    Root Cause 6: Unsupported Cipher Suites or TLS Version Mismatch

    Palo Alto's SSL decryption engine supports a defined set of cipher suites and TLS protocol versions. When a server negotiates a cipher or TLS version that either isn't supported by the firewall or is explicitly blocked by your Decryption Profile, the TLS handshake fails and the session drops. This has become a more frequent issue as TLS 1.3 adoption has accelerated across the internet, particularly for servers that no longer support TLS 1.2 fallback.

    Check your Decryption Profile under Objects > Decryption Profile > SSL Protocol Settings. Confirm that TLS 1.3 is enabled if your user population accesses modern servers, and verify you haven't inadvertently restricted cipher suites that those servers require. PAN-OS 10.2 and later have improved TLS 1.3 forward proxy support — if you're on an older release and seeing handshake failures on TLS 1.3 traffic, a PAN-OS upgrade may be the actual fix.

    From the CLI, look for handshake error entries in your decryption log:

    infrarunbook-admin@sw-infrarunbook-01> show log decryption | match "hs_err"
    infrarunbook-admin@sw-infrarunbook-01> show log decryption | match "err_index 40"
    infrarunbook-admin@sw-infrarunbook-01> show log decryption | match "err_index 54"

    Error index 40 indicates an unsupported protocol version; error index 54 indicates a cipher suite not supported by the firewall. Both are straightforward to fix by aligning your Decryption Profile settings with the protocol versions and ciphers in use in your environment.


    Root Cause 7: Expired or Invalid Forward-Proxy CA Certificate

    Your forward-proxy CA certificate has an expiration date, just like any other certificate. When it expires, the firewall can no longer re-sign server certificates for decrypted sessions, and SSL decryption breaks completely. This is the kind of failure that hits at 2 AM on a Tuesday when the cert quietly rolls over, and the help desk queue fills up before anyone on the infrastructure team is awake.

    Check your forward proxy CA expiration from operational mode:

    infrarunbook-admin@sw-infrarunbook-01> show ssl-decrypt certificate

    Also cross-check from the configuration side:

    infrarunbook-admin@sw-infrarunbook-01> show config running | match "ssl-forward-proxy"

    If the CA is expired or within 30 days of expiring, act now. Generate a new CA certificate, export it, update the Decryption Profile to reference the new certificate, push the new CA cert to all managed endpoints before enabling it on the firewall, then commit. Don't immediately remove the old certificate — keep it in place for a transition period so systems that cached the old cert have time to update.


    Prevention

    Most SSL decryption failures are preventable with disciplined operational practices. Certificate lifecycle management is the most critical area. Palo Alto doesn't have built-in expiration alerting for the forward-proxy CA by default, so you need to manage it externally. Create a monitoring check — a simple script or your NMS platform querying the cert expiration date — that alerts at 90 days and again at 30 days. Put the renewal procedure in a runbook and test it in a non-production environment before you need it in production.

    Build a standard decryption exclusion list before going live with SSL decryption. Maintain it as a custom URL category, an External Dynamic List, or a well-documented address group that the security team can update through a change process. Applications like Microsoft 365, Google Workspace, Zoom, and major CDN providers publish IP ranges and URL lists — integrate these into your exclusion maintenance workflow so your exclusions stay accurate as those providers evolve.

    Establish a performance baseline during normal peak hours. Run show resource-monitor during your busiest business hour and document the DP CPU and memory utilization. That baseline becomes the reference point when you're troubleshooting intermittent SSL failures at 9:30 AM on a Monday. Without it, you're guessing whether you're resource-constrained.

    For HSM-integrated environments, build HSM health monitoring into your existing NMS stack. A TCP reachability check from the firewall management interface to the HSM listener port is the minimum — add it to your standard monitoring templates for any firewall with HSM integration. Test HSM failover annually as a scheduled operational exercise, not just as a line in a runbook. The test will surface configuration drift before an actual outage forces your hand.
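The minimum check described above can be as small as the sketch below; 1792 is the SafeNet Luna default noted earlier, and the probe only proves reachability from wherever it runs — place it on a host that shares the management network's routing, or it says nothing about the firewall's own path to the HSM.

```python
import socket

def hsm_reachable(host: str, port: int = 1792, timeout: float = 3.0) -> bool:
    """Bare-minimum TCP reachability probe for the HSM listener.
    Returns True if a TCP connection completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Wire the boolean into your NMS as a standard service check and page on failure — it catches the "Connection refused" class of outage shown in the system log samples above long before a user notices.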

    Finally, document your decryption policy logic and keep it current. Policy sprawl is real. Environments with SSL decryption enabled for more than two years tend to accumulate redundant, shadowed, or contradictory decryption rules. Schedule an annual decryption policy review — check hit counts, validate that zone references are still accurate, and clean up rules that are no longer serving a purpose. A clean, documented decryption policy is one you can troubleshoot quickly under pressure. A cluttered one is how a twenty-minute problem turns into a three-hour incident.

    Frequently Asked Questions

    How do I verify that SSL decryption is actually working on my Palo Alto firewall?

    Check Monitor > Logs > Decryption to confirm decryption log entries are being generated. In the Traffic log, add the Decrypted column and confirm sessions show as decrypted. From the CLI, run 'show ssl-decrypt statistics' to see active decryption counters, and 'show session all filter ssl-decrypt yes count' to see concurrent decrypted sessions. If the decryption log is empty while HTTPS traffic is flowing, decryption is not happening.

    Why does Palo Alto SSL decryption break Microsoft Teams but not browsers?

    Microsoft Teams uses certificate pinning, meaning it validates that the certificate matches a specific expected value, not just that it chains to a trusted CA. When Palo Alto's forward proxy re-signs the certificate with your internal CA, Teams rejects it because the certificate no longer matches the pinned value. The fix is to create a decryption exclusion rule above your main decryption policy that matches Teams traffic with the action set to No Decrypt. Use App-ID to match the 'ms-teams' application rather than raw IP addresses.

    What is the performance impact of enabling SSL decryption on a Palo Alto firewall?

    SSL decryption significantly increases data plane CPU utilization because the firewall must perform full TLS termination — including key exchange and cryptographic operations — for every decrypted session. The rated SSL decryption session capacity for any given platform is substantially lower than its total session capacity. To assess impact, run 'show resource-monitor' during peak hours before and after enabling decryption. Mitigate by excluding categories that don't add inspection value, enabling hardware SSL offloading on platforms that support it, and right-sizing the platform to the decryption throughput requirement.

    How do I fix the 'HSM connection failed' error on a Palo Alto firewall?

    First check 'show system hsm info' to see the current HSM status and last error. Verify network connectivity from the firewall management interface to the HSM appliance on the expected port (typically TCP 1792 for SafeNet Luna). Check that no firewall policy is blocking HSM traffic and that the firewall's client certificate for HSM authentication has not expired. If the HSM appliance itself has failed, fail over to a secondary HSM if one is deployed. Review system logs with 'show log system direction equal backward | match hsm' for detailed error context.

    How do I push the Palo Alto forward-proxy CA certificate to Windows clients?

    Export the forward-proxy CA certificate from the firewall as a PEM or DER file — export only the certificate, not the private key. Then deploy it via Group Policy Object under Computer Configuration > Windows Settings > Security Settings > Public Key Policies > Trusted Root Certification Authorities. This applies the certificate to the LocalMachine certificate store on all domain-joined Windows machines when Group Policy refreshes. Verify distribution using PowerShell: Get-ChildItem -Path Cert:\LocalMachine\Root | Where-Object {$_.Issuer -like '*your-ca-name*'}.
