InfraRunBook

    Palo Alto GlobalProtect VPN Not Connecting

    Palo Alto
    Published: Apr 12, 2026
    Updated: Apr 13, 2026

    A senior engineer's systematic guide to diagnosing and fixing Palo Alto GlobalProtect VPN connection failures, covering portal reachability, certificate trust errors, authentication failures, split tunnel misconfiguration, client version mismatches, DNS resolution problems, and HIP check failures.


    Symptoms

    You open the GlobalProtect client and click Connect. The wheel spins for thirty seconds, then you get a message like "Unable to connect to portal" or "The portal address is not valid." Sometimes the failure is completely silent — the client times out and drops back to Disconnected with no explanation. In other cases the client gets past the portal handshake but then hangs at the gateway stage, cycling through "Connecting to gateway" indefinitely. Users on Windows report a persistent red X in the system tray. macOS users see the GP menu bar icon fade to grey. Some users report the icon goes green but SSH sessions to internal hosts time out and no internal resources are actually reachable.

    The frustrating part about GlobalProtect failures is how vague the error messages are. "Failed to get configuration from portal" can mean a dozen different things. "Gateway connection failed" tells you almost nothing. You need to approach this systematically, starting from the network layer and working up through certificates, authentication, and client configuration. That's exactly what this guide does.


    Root Cause 1: Portal Unreachable

    Why It Happens

    The GlobalProtect portal is a web service running on a Palo Alto firewall interface — typically TCP 443 on the external-facing IP. If the client can't reach that IP on that port, the connection attempt dies before authentication even begins. This can happen because of a routing failure between the client and the firewall, a security policy on the firewall not permitting inbound traffic to the portal service, an upstream ISP issue, or something as mundane as someone updating the portal's external IP in DNS without pushing the change to the client's portal address configuration.

    In my experience, this is the most common root cause behind "works from the office but not from home" complaints. The portal IP is reachable when you're on the internal LAN because you're hitting it through an internal route, but external clients trying to reach it through the internet are hitting a NAT translation that never got set up, or a security policy that only permits known source IPs.

    How to Identify It

    Start with the simplest possible test from the affected client machine. Try reaching the portal's prelogin endpoint directly with curl:

    curl -v https://vpn.solvethenetwork.com/global-protect/prelogin.esp

    If the portal is unreachable you'll see a connection timeout like this:

    * Trying 203.0.113.45:443...
    * connect to 203.0.113.45 port 443 failed: Connection timed out
    * Failed to connect to vpn.solvethenetwork.com port 443 after 21005 ms: Timed out
    curl: (28) Failed to connect to vpn.solvethenetwork.com port 443 after 21005 ms: Timed out

    A timeout means the port is filtered or the IP is entirely unreachable. A connection refused response means the IP is alive but nothing is listening on 443 — wrong IP, wrong interface, or the GlobalProtect service isn't running. Run a TCP traceroute to see where the path breaks:

    traceroute -T -p 443 vpn.solvethenetwork.com
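    Because curl's exit code already distinguishes a timeout from a refused connection from a DNS failure, you can script the triage. This is a minimal sketch — the `diagnose_exit` helper is hypothetical, not a Palo Alto tool, and the hostname is this guide's example portal:

```shell
#!/bin/sh
# Map curl's exit code to the most likely portal failure mode.
diagnose_exit() {
  case "$1" in
    0)  echo "portal reachable: TLS endpoint answered" ;;
    6)  echo "DNS failure: hostname did not resolve (see Root Cause 6)" ;;
    7)  echo "connection refused: IP alive, nothing listening on 443" ;;
    28) echo "timeout: port filtered or IP unreachable" ;;
    60) echo "TCP/TLS reached, but certificate not trusted (see Root Cause 2)" ;;
    *)  echo "curl exit $1: check the curl man page for this code" ;;
  esac
}

# Typical use (commented out here because it needs network access):
# curl -sv --connect-timeout 10 -o /dev/null \
#   https://vpn.solvethenetwork.com/global-protect/prelogin.esp
# diagnose_exit $?
diagnose_exit 28
```

    Run from the affected client, this gives you a one-line answer instead of eyeballing verbose curl output.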

    On the firewall side, verify the external interface is up and confirm the security policy actually allows traffic inbound to the portal:

    > show interface ethernet1/1
    
    Name              : ethernet1/1
    Link status       : up
    Speed             : 1000/full
    IP address        : 203.0.113.45/28
    Zone              : untrust
    > test security-policy-match protocol 6 source 203.0.113.1 destination 203.0.113.45 destination-port 443 from untrust to untrust

    How to Fix It

    If the policy test shows no matching rule, you need to add or correct a security policy permitting inbound TCP/443 to the portal IP. Make sure the rule's destination zone and IP match what your interface is actually configured with. Don't forget to also verify a NAT rule isn't double-translating the destination in an unexpected way. After adding the rule, commit and test again with curl before telling users to retry. Also confirm the GlobalProtect service objects (application "ssl" and "web-browsing" or the dedicated GlobalProtect app-id) are included in the rule.


    Root Cause 2: Gateway Certificate Not Trusted

    Why It Happens

    GlobalProtect validates the TLS certificate presented by the gateway before it completes the connection handshake. If that certificate is self-signed, issued by an internal CA that isn't in the client's trust store, expired, or has a common name that doesn't match the gateway address the client is connecting to, the client refuses to proceed. This is intentional security behavior — you don't want users silently connecting to a rogue gateway — but it's also one of the most reliable sources of "it broke after the maintenance window" tickets following a certificate renewal cycle.

    I've watched this scenario play out at least half a dozen times: the security team rotates the gateway certificate, uploads the new cert to the firewall, commits, everything looks fine — but they forgot to push the new intermediate CA certificate to client machines via Group Policy. The cert is technically valid, but clients can't build the chain to a trusted root, so they reject it.

    How to Identify It

    Check the GlobalProtect client log first. On Windows the log is at %APPDATA%\Palo Alto Networks\GlobalProtect\PanGPS.log. On macOS it's at /Library/Logs/PaloAltoNetworks/GlobalProtect/PanGPS.log. Look for SSL handshake failures:

    SSL: certificate verify failed (20): unable to get local issuer certificate
    Gateway ssl handshake failed, closing connection.
    Failed to connect to gateway gw-external.solvethenetwork.com

    Confirm with openssl from the command line — this shows you exactly what the client sees during the TLS handshake:

    openssl s_client -connect vpn.solvethenetwork.com:443 -showcerts

    If the chain is broken or terminates at an untrusted issuer, openssl will output:

    Verify return code: 20 (unable to get local issuer certificate)

    On the firewall, check the certificate currently bound to the SSL/TLS service profile used by the GP gateway and verify its expiry:

    > show certificate name GP-GW-2025
    
    Certificate name   : GP-GW-2025
    Common Name        : vpn.solvethenetwork.com
    Not valid before   : 2024-03-01 00:00:00
    Not valid after    : 2025-03-01 00:00:00   <-- EXPIRED
    Issuer             : SolveTheNetwork-InternalCA

    How to Fix It

    If the certificate is expired, generate a new CSR, get it signed, and import the new cert and chain into the firewall under Device > Certificate Management > Certificates. Update the SSL/TLS service profile to reference the new certificate, then commit. If you're using an internal CA, the root CA certificate and any intermediate CA certificates must be distributed to all GP client machines. On Windows, deploy via GPO to the Trusted Root Certification Authorities and Intermediate Certification Authorities stores. On macOS, push via MDM profile to the System keychain. After distribution, clients should automatically trust the new cert chain without any reinstall.

    For production environments, consider using a publicly trusted certificate from a commercial CA. The cost is low and it eliminates the entire CA distribution problem — every mainstream OS already trusts the public root CAs.


    Root Cause 3: Authentication Failure

    Why It Happens

    GlobalProtect supports LDAP, RADIUS, SAML, Kerberos, local database, and certificate-based authentication. Authentication failures can originate from expired user passwords, account lockouts, misconfigured LDAP bind credentials on the firewall, a RADIUS server that's unreachable or returning reject responses, or a SAML identity provider returning an assertion the firewall can't validate. The failure lives on the backend authentication infrastructure, not on the firewall itself — but from the user's perspective it looks like a GP problem.

    SAML failures are worth calling out specifically because they've become more common as organizations migrate toward SSO. If the IdP metadata XML loaded on the firewall is stale after an IdP certificate rotation, if there's more than a few minutes of clock skew between the firewall and the identity provider, or if the Assertion Consumer Service URL is wrong, authentication fails with an error that's opaque on the client side but very clear in the firewall's system log.

    How to Identify It

    The authentication log on the firewall is your first and best resource:

    > show log auth | match infrarunbook-admin
    2025-09-14 14:22:01  auth  deny  infrarunbook-admin  LDAP  LDAP server timeout
    2025-09-14 14:22:05  auth  deny  infrarunbook-admin  LDAP  LDAP server timeout
    2025-09-14 14:22:09  auth  deny  infrarunbook-admin  LDAP  LDAP server timeout

    The firewall also provides a direct authentication test command that bypasses the GP client entirely and tells you exactly what the auth stack returns:

    > test authentication authentication-profile GP-Auth-Profile username infrarunbook-admin password
    Enter password:
    
    Do multi-factor authentication (yes|no)? no
    
    Result    : Authentication failed
    Reason    : LDAP: server sw-infrarunbook-01.solvethenetwork.com:389 is not responding

    For SAML issues, the system log is more useful than the auth log:

    > show log system | match saml
    2025-09-14 14:25:11  SAML SSO: response validation failed - assertion expired or clock skew exceeds 300 seconds
    2025-09-14 14:25:11  SAML SSO: authentication failed for user infrarunbook-admin

    How to Fix It

    If the LDAP server is unreachable, verify the server profile under Device > Server Profiles > LDAP. Confirm the IP, port, and bind DN credentials are correct, and test connectivity from the firewall management plane. If the bind account password has rotated, update it in the server profile. For RADIUS, do the same under Device > Server Profiles > RADIUS and verify the shared secret matches what the RADIUS server expects.

    For SAML clock skew, check the firewall's NTP configuration under Device > Setup > Services and confirm it's synchronized. Then verify the identity provider is also using synchronized time. A delta larger than the IdP's assertion lifetime window — usually 300 seconds — will silently kill every SAML login. For stale IdP metadata, re-export the metadata XML from your identity provider and re-import it on the firewall under Device > Server Profiles > SAML Identity Provider.
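    You can estimate the skew yourself before touching NTP config. The sketch below uses two sample timestamps — in practice, substitute the firewall's `show clock` output and the IdP server's reported time (for example, the Date: header from a HEAD request to the IdP). It assumes GNU date for the `-d` option:

```shell
#!/bin/sh
# Estimate clock skew between firewall and IdP from two timestamps.
# These are sample values standing in for live output.
fw_time="2025-09-14 14:25:11 UTC"
idp_time="2025-09-14 14:31:02 UTC"

fw_epoch=$(date -u -d "$fw_time" +%s)
idp_epoch=$(date -u -d "$idp_time" +%s)

skew=$(( fw_epoch - idp_epoch ))
[ "$skew" -lt 0 ] && skew=$(( -skew ))

echo "clock skew: ${skew}s"
if [ "$skew" -gt 300 ]; then
  echo "skew exceeds the typical 300s assertion window: SAML logins will fail"
fi
```

    Anything over the IdP's assertion lifetime window means every SAML login dies until the clocks converge.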


    Root Cause 4: Split Tunnel Configuration Wrong

    Why It Happens

    Split tunneling controls which traffic gets routed through the VPN tunnel and which exits directly to the internet. When it's misconfigured, the user connects successfully — the GP icon goes green, the authentication works — but they can't reach internal resources because the routes the GP client injects into the OS routing table don't cover the destination subnet. This is a subtle failure mode because the VPN is technically up. It just isn't routing the right traffic.

    The scenario I see most often: an organization adds a new internal network (a new AWS VPC peered back to the datacenter over Direct Connect, for example, or a new VLAN for a recently acquired company) and correctly adds it to the internal routing table — but nobody updates the split tunnel include routes in the GP gateway configuration. Traffic to the original 10.10.0.0/16 works fine over the tunnel. Traffic to the new 10.20.0.0/16 follows the client's local default route, hits nothing internal, and silently drops.

    How to Identify It

    From a connected GP client, inspect the routing table to see what the tunnel adapter has actually injected. On Windows:

    route print | findstr /V "127\. 0\.0\.0"
    Network Destination    Netmask          Gateway          Interface
    10.10.0.0              255.255.0.0      10.200.0.1       PANGP Virtual Ethernet Adapter
    (no route for 10.20.0.0/16 through the tunnel — that traffic follows the default route out the local Wi-Fi adapter)

    On macOS or Linux:

    netstat -rn | grep -E "10\.|utun|gpd0"

    If the subnet you're trying to reach isn't routed through the GP tunnel interface, the split tunnel configuration is missing that prefix. On the firewall, verify what routes the gateway is actually configured to push:

    > show running global-protect gateway <gateway-name> split-tunnel include-acl
    include-acl :
      10.10.0.0/16
      172.16.0.0/12

    Note the absence of 10.20.0.0/16 — that subnet is reachable from the datacenter via routing, but the GP client doesn't know to send that traffic through the tunnel.
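    When the include list is long, checking prefix coverage by eye gets error-prone. This is a small self-contained sketch of the containment math — the prefixes and addresses below are hypothetical examples, and the functions are not part of any Palo Alto tooling:

```shell
#!/bin/sh
# Check whether an address is covered by a split-tunnel include prefix.
ip_to_int() {
  oifs=$IFS; IFS=.
  set -- $1
  IFS=$oifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() {  # in_cidr <ip> <network/len> -> exit 0 if <ip> is inside the prefix
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  len=${2#*/}
  if [ "$len" -eq 0 ]; then
    mask=0
  else
    mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  fi
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# A host inside 172.16.0.0/12 is covered; one outside it is not.
in_cidr 172.16.5.9 172.16.0.0/12 && echo "172.16.5.9: tunneled"
in_cidr 198.51.100.7 172.16.0.0/12 || echo "198.51.100.7: NOT tunneled"
```

    Loop `in_cidr` over every prefix in the include-acl to confirm whether a user's unreachable destination is actually supposed to go through the tunnel.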

    How to Fix It

    In the firewall GUI or Panorama, navigate to Network > GlobalProtect > Gateways > [gateway name] > Agent > Split Tunneling. Under the Include tab, add the missing subnet. Commit the change. The client must fully disconnect and reconnect to receive the updated route injection — no client reinstall required, but a simple tunnel refresh is mandatory. Inform affected users they'll need to toggle the connection.

    For environments where the internal address space grows frequently, consider managing the include list through a dynamic address group or using the "Include All" (full tunnel) mode with explicit excludes for known internet-only traffic like streaming services. That way new internal subnets are covered automatically without manual gateway config updates every time the network expands.

    DNS split tunneling deserves a separate check. Even if the IP routes are correct, internal hostnames may resolve to wrong addresses if the split DNS configuration is incomplete. Verify internal domain suffixes are listed in the gateway's split DNS configuration so internal DNS queries get forwarded to the internal resolver over the tunnel, not the client's local ISP resolver.


    Root Cause 5: Client Version Mismatch

    Why It Happens

    Palo Alto gateways can enforce a minimum GlobalProtect client version. If a user's installed GP client falls below that threshold, the gateway rejects the connection outright after authentication — the client authenticated fine, but it never gets a tunnel. This is configured intentionally to ensure clients are running versions with specific security fixes, but it creates a hard failure mode when the minimum version is bumped on the gateway before a controlled client rollout has completed.

    The reverse scenario also happens. A user self-upgrades their GP client to a newer version that has a compatibility issue with the current PAN-OS release running on the firewall. I've personally worked a case where GP 6.2 clients were connecting to gateways on PAN-OS 9.1 and the TLS feature negotiation during the IPSec/SSL handshake produced errors that weren't obvious without reading both the client log and the firewall system log together.

    How to Identify It

    The GP client log on Windows will be explicit about version rejections:

    type "%APPDATA%\Palo Alto Networks\GlobalProtect\PanGPS.log" | findstr /I "version reject minimum"
    2025-09-14 10:15:33  Error: GP gateway rejected client: version 5.2.7 is below minimum required version 6.0.0
    2025-09-14 10:15:33  Error: Gateway connection rejected, disconnecting tunnel.

    On Linux, check the GP agent version directly:

    globalprotect show --version
    GlobalProtect: 5.2.7-20

    On the firewall, confirm what minimum version is currently enforced on the gateway:

    > show running global-protect gateway <gateway-name> | match min-client-version
    min-client-version: 6.0.0

    Cross-reference the Palo Alto compatibility matrix to verify that your PAN-OS version and GP client version are a supported combination. This lives in the Palo Alto support portal under the compatibility documentation for your PAN-OS release.
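    If you're sweeping a fleet, the version comparison is easy to script with GNU coreutils' version sort. A minimal sketch — the sample version strings are illustrative:

```shell
#!/bin/sh
# Compare an installed GP client version against the gateway's enforced
# minimum. sort -V performs a version-aware sort (GNU coreutils).
version_ge() {  # version_ge <installed> <minimum> -> exit 0 if installed >= minimum
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

installed="5.2.7"   # sample values; pull the real ones from the client
minimum="6.0.0"     # and from the gateway's min-client-version setting
if version_ge "$installed" "$minimum"; then
  echo "client $installed meets minimum $minimum"
else
  echo "client $installed is BELOW minimum $minimum: gateway will reject it"
fi
```

    Plain string comparison gets this wrong (e.g. "5.10.0" sorts before "5.2.7" lexically), which is why the version-aware sort matters.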

    How to Fix It

    The clean fix is to push updated GP client packages to affected endpoints through your endpoint management platform — SCCM, Jamf, Intune, or the GP portal's built-in client upgrade mechanism — before the minimum version enforcement takes effect. The GP portal can deliver client upgrades automatically: under the portal's agent config, upload the new GP installer package and set the upgrade behavior to either "Prompt" or "Transparent." When a user connects to the portal with an out-of-date client, they'll be upgraded before being handed off to the gateway.

    If you've already bumped the minimum version and users are locked out, the fastest remediation is to temporarily lower the minimum version back to the currently deployed client version while you work through the upgrade rollout. Don't leave the reduced minimum version in place longer than necessary — it exists for a reason.

    For the reverse compatibility scenario (newer client, older PAN-OS), check Palo Alto's release notes for the specific GP client version you've deployed. Known compatibility issues with specific PAN-OS branches are documented there. Sometimes a PAN-OS hotfix resolves the incompatibility; sometimes you need to roll back the GP client until the PAN-OS upgrade path is ready.


    Root Cause 6: DNS Resolution Failure for Portal Hostname

    Why It Happens

    GlobalProtect clients almost universally connect to a hostname rather than a raw IP. If DNS resolution fails for that hostname, the client can't even begin the TCP connection to the portal. This is especially common on hotel and airport Wi-Fi networks where a captive portal intercepts all DNS until the user authenticates through the browser splash page. It also crops up on networks where a DNS security appliance blocks resolution for external hostnames that aren't in an approved list.

    How to Identify It

    nslookup vpn.solvethenetwork.com
    Server:  192.168.1.1
    Address:  192.168.1.1#53
    
    ** server can't find vpn.solvethenetwork.com: NXDOMAIN

    If DNS resolution returns NXDOMAIN or times out, the GP client log will show connection attempts that fail immediately with no TCP handshake. On Windows, also check whether a third-party security agent like Zscaler or Cisco Umbrella has taken over the DNS stack — these products sometimes intercept and block resolution for VPN-related hostnames depending on their policy configuration.
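    One quick heuristic: a public VPN portal hostname should never resolve to a private RFC 1918 address. Captive portals frequently hijack DNS and answer every query with the local gateway's IP. This sketch checks the returned A record — the helper functions are illustrative, not standard tools:

```shell
#!/bin/sh
# Flag a resolved address that looks like a captive-portal DNS hijack.
is_rfc1918() {
  case "$1" in
    10.*)                                  return 0 ;;
    192.168.*)                             return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *)                                     return 1 ;;
  esac
}

check_resolved_ip() {  # pass the A record returned for the portal hostname
  if is_rfc1918 "$1"; then
    echo "$1 is a private address: DNS is likely being intercepted (captive portal?)"
  else
    echo "$1 looks like a routable public answer"
  fi
}

# In practice, feed this a live answer, e.g.:
#   dig +short vpn.solvethenetwork.com | head -n 1
check_resolved_ip 192.168.1.1
```

    If the portal hostname resolves to 192.168.1.1, the user hasn't cleared the splash page yet — no amount of GP client troubleshooting will help until they do.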

    How to Fix It

    For captive portal scenarios, GlobalProtect has a built-in captive portal detection feature. When enabled under the GP client configuration, the agent detects when DNS is being intercepted and prompts the user to open a browser and complete the captive portal authentication before reattempting the VPN connection. Enable this under the portal's agent config in the "Internal Host Detection" and "Captive Portal" sections.

    For DNS filtering conflicts with third-party security agents, work with the security team to explicitly whitelist the GP portal and gateway hostnames in the DNS policy. Most DNS security platforms support per-hostname bypass rules that don't require disabling filtering globally.


    Root Cause 7: HIP Check Failure Blocking Post-Connect Access

    Why It Happens

    Host Information Profile checks let the gateway evaluate a client's security posture — OS patch level, antivirus definition currency, disk encryption status, presence of required software — before granting access to network resources. When a client fails a HIP check, the gateway can either block the connection entirely or silently route the user to a restricted network segment. From the user's perspective they "connected" successfully (the GP icon is green) but nothing internal is reachable. This is one of the most confusing failure modes to diagnose remotely because the VPN is genuinely up.

    How to Identify It

    On the firewall, check the HIP report status for the connected user:

    > show global-protect-gateway current-user gateway <gateway-name> user infrarunbook-admin
    username          : infrarunbook-admin
    client-ip         : 192.168.100.45
    hip-report-status : failed
    hip-report-reason : Windows Update: last update 2024-06-01, required within 30 days of today

    The GP client log on the endpoint will also contain the HIP evaluation outcome:

    HIPReport check failed: disk-encryption not found (required by policy GP-HIP-Corporate)
    User routed to restricted-access zone per HIP policy

    How to Fix It

    Remediation depends entirely on what the HIP check is enforcing. Patch-level failures require the user to update their OS. Missing required software — EDR agent, backup client, full-disk encryption — requires installing that software before reconnecting. For immediate emergency unblocking while you investigate, you can temporarily disable the HIP-based security policy rule on the firewall for the specific user or source IP. Scope this carefully and revert it as soon as the root cause is addressed.

    For recurring HIP failures across many users, the root cause is usually an endpoint management gap — machines that aren't getting patch deployments reliably, or a required software package that's not being enrolled on new builds. Work with the endpoint team to enforce the HIP requirements through your MDM or endpoint management platform so that machines arrive at the VPN client in a compliant state.


    Prevention

    Most GlobalProtect outages are entirely preventable with proactive monitoring and disciplined change management. Certificate expiry is the easiest problem to eliminate. Set monitoring alerts — via Nagios, Zabbix, your NMS, or even a cron job running openssl s_client against the portal endpoint — to fire at 60 days and again at 30 days before any certificate in the GP stack expires. The firewall CLI command show certificate can be scripted and its output parsed for the "Not valid after" field. Make cert expiry checks part of your weekly NOC runbook review.
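    The expiry math itself is a few lines of shell. This sketch parses an `openssl x509 -noout -enddate` style line (notAfter=...) and assumes GNU date; the sample line below mirrors the expired certificate shown in Root Cause 2:

```shell
#!/bin/sh
# Days until certificate expiry, from a notAfter= line. In a cron job,
# feed it live output from something like:
#   openssl s_client -connect vpn.solvethenetwork.com:443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -enddate
days_left() {
  end=${1#notAfter=}
  end_epoch=$(date -u -d "$end" +%s)
  now_epoch=$(date -u +%s)
  echo $(( (end_epoch - now_epoch) / 86400 ))
}

sample="notAfter=Mar  1 00:00:00 2025 GMT"   # sample line; already expired
d=$(days_left "$sample")
echo "days until expiry: $d"
if [ "$d" -lt 30 ]; then
  echo "ALERT: certificate is inside the 30-day window"
fi
```

    Wire the alert line into your NMS or a mail pipe and the "it broke after the maintenance window" ticket never gets filed.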

    For client version management, never bump the minimum enforced version on a gateway without first confirming through your endpoint management platform that the new GP package has successfully installed on the vast majority of your fleet. A phased approach works well: update the minimum version on a single secondary gateway, validate with a pilot group of technically capable users, then roll out the change to all gateways once you've confirmed there are no hidden compatibility issues.

    Authentication infrastructure changes — LDAP migrations, RADIUS server replacements, IdP metadata refreshes after a certificate rotation — should always be validated against the GP authentication profile before the old system is taken offline. The test authentication authentication-profile command on the firewall CLI should be a mandatory step in your change validation checklist for any identity infrastructure work. NTP synchronization across all components in the SAML chain — firewall, identity provider, DNS servers — should be verified after every maintenance window involving these systems.

    Split tunnel changes require their own change process with an explicit rollback plan. Always test the new route table injection on at least one lab or pilot client before pushing the gateway config change to production. Maintain a living document of all internal subnets that require VPN tunnel routing, and build a process to update that list whenever new cloud segments, datacenter VLANs, or acquired-company networks are added to the routing fabric.

    Finally, invest in centralizing GP client log collection at scale. Forwarding PanGPS logs to your SIEM — Splunk, Elastic, whatever you're running — means you can build dashboards that surface authentication storm patterns, version rejection spikes after a PAN-OS upgrade, and certificate error trends before the help desk queue fills up. A spike in SSL handshake failures on a Monday morning after a Friday change window tells you exactly where to look before a single ticket is filed. Reactive troubleshooting is expensive and stressful. Build the visibility layer early and you'll spend far less time in the weeds on calls with frustrated remote workers.
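    Even without a SIEM, a short awk pass over collected PanGPS logs surfaces the same patterns. The log lines below are samples standing in for a real extract, and the categories match the failure signatures shown earlier in this guide:

```shell
#!/bin/sh
# Triage a PanGPS.log extract: count recurring failure categories.
cat > /tmp/pangps_sample.log <<'EOF'
2025-09-15 08:01:12 Error: SSL: certificate verify failed (20): unable to get local issuer certificate
2025-09-15 08:01:12 Error: Gateway ssl handshake failed, closing connection.
2025-09-15 08:03:40 Error: GP gateway rejected client: version 5.2.7 is below minimum required version 6.0.0
2025-09-15 08:05:02 Error: Failed to get configuration from portal
EOF

awk '
  /certificate verify failed|ssl handshake failed/ { cert++ }
  /below minimum required version/                 { ver++ }
  /Failed to get configuration from portal/        { portal++ }
  END {
    printf "cert/TLS errors:    %d\n", cert
    printf "version rejections: %d\n", ver
    printf "portal failures:    %d\n", portal
  }
' /tmp/pangps_sample.log
```

    Point the same awk program at logs pulled from a dozen endpoints and a cluster of cert/TLS errors after a change window tells you where to look before the first ticket arrives.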



    Frequently Asked Questions

    Why does GlobalProtect say 'Unable to connect to portal' even though the firewall is running?

    This error usually means the client can't reach TCP port 443 on the portal IP. Check that a security policy rule permits inbound traffic to the portal interface from the untrust zone, verify the external IP is correct in DNS, and test reachability with curl -v https://your-portal-hostname/global-protect/prelogin.esp from the affected client.

    How do I fix a GlobalProtect gateway SSL certificate error?

    Run openssl s_client -connect your-portal:443 -showcerts to inspect the full certificate chain. If the chain doesn't terminate in a trusted root, distribute the issuing CA certificate to client machines via GPO (Windows) or MDM profile (macOS). If the certificate is expired, replace it on the firewall under Device > Certificate Management > Certificates and update the SSL/TLS service profile.

    GlobalProtect connects but I can't reach internal resources — what's wrong?

    If the VPN icon is green but internal hosts are unreachable, check the client's routing table with 'route print' (Windows) or 'netstat -rn' (macOS/Linux) to see whether the destination subnet has a route through the GP tunnel adapter. If not, the split tunnel include routes in the gateway configuration are missing that subnet. Add the subnet, commit, and have the user reconnect.

    Why is GlobalProtect rejecting my credentials even though my password is correct?

    Run 'test authentication authentication-profile <profile> username <user>' from the firewall CLI to test the auth stack independently of the GP client. This will show whether the failure is an LDAP/RADIUS server connectivity issue, an account lockout, or a SAML assertion validation error. Check 'show log auth' and 'show log system | match saml' for detailed failure reasons.

    How do I fix the GlobalProtect 'client version below minimum required' error?

    The gateway is enforcing a minimum GP client version that the installed client doesn't meet. Push the updated GlobalProtect installer via your endpoint management platform (SCCM, Jamf, Intune) or use the GP portal's built-in client upgrade feature. As an emergency measure, you can temporarily lower the minimum version on the gateway under Network > GlobalProtect > Gateways > Agent > Client Configuration while the upgrade rollout completes.

    What causes GlobalProtect authentication to fail with SAML?

    The most common SAML failure causes are stale IdP metadata on the firewall (re-export and re-import the metadata XML from your identity provider), clock skew exceeding the assertion lifetime window (verify NTP sync on both the firewall and IdP), or an incorrect Assertion Consumer Service URL in the IdP application configuration. Check 'show log system | match saml' on the firewall for the specific validation error.
