Prerequisites
Before you touch the firewall, confirm these things are in order. I've watched engineers spend an hour debugging a tunnel that was broken from the start because they skipped this part — don't be that person.
You'll need two FortiGate firewalls running FortiOS 7.x. This guide is tested on 7.2 and 7.4. If you're on 6.x, the commands are largely the same, but some Phase 1 default values differ, so set every proposal parameter explicitly rather than relying on defaults that may not match your remote peer.
Both WAN interfaces need stable, routable IPs. NAT traversal (NAT-T) can handle scenarios where one or both peers sit behind NAT, but for a clean production site-to-site setup, static IPs on both WAN ports make your life significantly simpler. If one side is behind NAT, enable NAT-T — I'll call that out in the config section.
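If one peer does sit behind NAT, NAT-T is enabled per tunnel under the Phase 1 definition. A minimal fragment (tunnel name matches the Phase 1 entry created later in this guide):

```
config vpn ipsec phase1-interface
    edit "to-branch"
        set nattraversal enable
    next
end
```

The setting also accepts forced, which wraps ESP in UDP 4500 even when no NAT is detected; enable is the right choice for most deployments.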
Non-overlapping subnets on both sides are non-negotiable. Here's the IP scheme we'll use throughout this guide:
- HQ Site A: WAN (port1) = 172.16.1.1, LAN (port2) = 192.168.10.0/24
- Branch Site B: WAN (port1) = 172.16.2.1, LAN (port2) = 192.168.20.0/24
Admin access to both FortiGates via CLI is assumed. This guide uses CLI throughout because it's reproducible, diffable, and easier to paste into a runbook than a sequence of GUI screenshots.
Finally, agree on a pre-shared key (PSK) with the remote site team before you start. Use something with real entropy — 24 or more characters, mixed case, numbers, and symbols. Certificate-based auth is the better long-term choice for large deployments, but PSK gets most site-to-site tunnels built without the PKI overhead.
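A quick way to generate a key with real entropy, sketched here in Python using the standard-library secrets module (any equivalent generator works):

```python
import secrets
import string

# 24 characters drawn from mixed case, digits, and symbols.
# Note: random choice doesn't guarantee every class appears;
# regenerate if your policy requires at least one of each.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_=+"
psk = "".join(secrets.choice(alphabet) for _ in range(24))
print(psk)
```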
Step-by-Step Setup
Step 1 — Create Firewall Address Objects
Firewall policies in FortiOS reference named address objects, not raw subnets directly. This step gets skipped in a lot of quick-start guides, which then produces confusing errors when a later firewall policy references an object that doesn't exist. Get it out of the way first.
Run this on both FortiGates — both sides need objects representing each subnet:
config firewall address
edit "HQ-LAN"
set subnet 192.168.10.0 255.255.255.0
set comment "HQ internal subnet"
next
edit "Branch-LAN"
set subnet 192.168.20.0 255.255.255.0
set comment "Branch internal subnet"
next
end
Step 2 — Configure Phase 1 (IKE)
Phase 1 is where the two peers authenticate each other and establish the IKE Security Association. Both sides must agree on the encryption algorithm, the integrity/hash algorithm, the Diffie-Hellman group, and the SA lifetime. A mismatch on any one of these causes a negotiation failure — and the error messages won't always tell you which parameter is the culprit unless you're reading the IKE debug output.
On HQ FortiGate:
config vpn ipsec phase1-interface
edit "to-branch"
set interface "port1"
set ike-version 2
set peertype any
set proposal aes256-sha256
set dhgrp 14
set remote-gw 172.16.2.1
set psksecret "Infra@RunB00k!PSK#Secure"
set keylife 86400
set dpd on-idle
set dpd-retrycount 3
set dpd-retryinterval 10
next
end
On Branch FortiGate:
config vpn ipsec phase1-interface
edit "to-hq"
set interface "port1"
set ike-version 2
set peertype any
set proposal aes256-sha256
set dhgrp 14
set remote-gw 172.16.1.1
set psksecret "Infra@RunB00k!PSK#Secure"
set keylife 86400
set dpd on-idle
set dpd-retrycount 3
set dpd-retryinterval 10
next
end
A few things worth highlighting here. Setting dpd on-idle rather than on-demand means the FortiGate sends keepalives when the tunnel is quiet, so a dead peer is detected quickly even when there's no application traffic. With on-demand, DPD only triggers when you're trying to send outbound traffic — a tunnel that's been idle for hours can appear up in the status table when it's actually dead. In my experience, on-idle is always the right call for production site-to-site tunnels.
DH group 14 (2048-bit MODP) is a solid baseline for most environments. If you're under compliance requirements like PCI-DSS or HIPAA, use group 19 or 20 (NIST elliptic curves). Groups 1, 2, and 5 are cryptographically broken — they'll negotiate just fine without errors, which is exactly what makes them dangerous.
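To require the elliptic-curve groups instead, list them in the Phase 1 entry; FortiOS accepts multiple space-separated groups, and both peers must carry the same change:

```
config vpn ipsec phase1-interface
    edit "to-branch"
        set dhgrp 19 20
    next
end
```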
Step 3 — Configure Phase 2 (IPSec SA)
Phase 2 defines how the actual data traffic is encrypted and which subnets are permitted through the tunnel. Each Phase 2 entry creates one traffic selector pair. If you need multiple subnet pairs protected by the same tunnel, create additional Phase 2 entries under the same Phase 1 name.
On HQ FortiGate:
config vpn ipsec phase2-interface
edit "to-branch-p2"
set phase1name "to-branch"
set proposal aes256-sha256
set dhgrp 14
set pfs enable
set src-subnet 192.168.10.0 255.255.255.0
set dst-subnet 192.168.20.0 255.255.255.0
set keylifeseconds 3600
set auto-negotiate enable
next
end
On Branch FortiGate:
config vpn ipsec phase2-interface
edit "to-hq-p2"
set phase1name "to-hq"
set proposal aes256-sha256
set dhgrp 14
set pfs enable
set src-subnet 192.168.20.0 255.255.255.0
set dst-subnet 192.168.10.0 255.255.255.0
set keylifeseconds 3600
set auto-negotiate enable
next
end
PFS (Perfect Forward Secrecy) is enabled here and I always keep it on for site-to-site tunnels. If a session key is ever compromised, PFS limits the blast radius to that session only — past and future sessions stay protected because each negotiation generates a fresh DH exchange. The overhead is minimal and the protection is meaningful.
The auto-negotiate enable setting tells the FortiGate to initiate Phase 2 proactively rather than waiting for matching traffic to arrive. This keeps the tunnel warm and eliminates the first-packet-drop behavior that application teams reliably complain about. Leaving it disabled is rarely the right call for a production tunnel.
Step 4 — Add Static Routes
This guide uses route-based (interface mode) tunnels, which is the right architecture for virtually every FortiOS deployment. The tunnel interface needs a static route so the routing table knows to send branch-bound traffic into it.
On HQ FortiGate:
config router static
edit 10
set dst 192.168.20.0 255.255.255.0
set device "to-branch"
set comment "Branch LAN via IPSec tunnel"
next
end
On Branch FortiGate:
config router static
edit 10
set dst 192.168.10.0 255.255.255.0
set device "to-hq"
set comment "HQ LAN via IPSec tunnel"
next
end
Step 5 — Create Firewall Policies
Route-based tunnels require explicit firewall policies just like any other interface pair on the FortiGate. You need two policies per side: one for outbound traffic from the LAN into the tunnel, and one for inbound traffic arriving from the tunnel into the LAN. Forgetting the inbound policy is surprisingly common — the tunnel comes up, traffic flows one direction, and you spend 20 minutes wondering why responses never arrive.
On HQ FortiGate:
config firewall policy
edit 100
set name "HQ-to-Branch"
set srcintf "port2"
set dstintf "to-branch"
set srcaddr "HQ-LAN"
set dstaddr "Branch-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
edit 101
set name "Branch-to-HQ-inbound"
set srcintf "to-branch"
set dstintf "port2"
set srcaddr "Branch-LAN"
set dstaddr "HQ-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
end
Mirror these on the Branch FortiGate with interface names and address object references swapped. The set logtraffic all setting is useful during initial setup and troubleshooting. Once the tunnel is stable, switch to utm or reduce logging on high-throughput links to control log storage growth.
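Dialing logging back later is a one-liner per policy. A fragment for the HQ outbound policy (repeat for ID 101 and the Branch-side mirrors):

```
config firewall policy
    edit 100
        set logtraffic utm
    next
end
```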
Step 6 — Verify NAT Exemption
This step is critical if you have a catch-all outbound NAT policy for internet access on the LAN interface. Without an explicit exclusion, traffic destined for 192.168.20.0/24 will match the internet policy first, get NATed to your WAN IP, and never enter the tunnel. The tunnel will show as up, the route will look correct, and you'll spend a long time staring at configs that look perfectly fine.
Policy evaluation in FortiOS is top-down, first-match. Verify your VPN policies (IDs 100 and 101) sit above the internet NAT policy in the list:
show firewall policy
Review the policy order in the output. If the outbound internet policy appears before your VPN policies, reorder them, either by dragging policies in the GUI or with the move command in the CLI. If a full reorder isn't practical, create an explicit NAT-exempt policy above the internet policy for traffic matching the branch subnet.
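A minimal sketch of the CLI reorder, assuming the internet NAT policy has ID 5 (substitute your own ID from the show output):

```
config firewall policy
    move 100 before 5
    move 101 before 5
end
```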
Full Configuration Example
Here's the complete consolidated CLI configuration for both sites — ready to paste into a runbook or adapt directly to your environment.
HQ FortiGate (WAN: 172.16.1.1 — LAN: 192.168.10.0/24)
config firewall address
edit "HQ-LAN"
set subnet 192.168.10.0 255.255.255.0
next
edit "Branch-LAN"
set subnet 192.168.20.0 255.255.255.0
next
end
config vpn ipsec phase1-interface
edit "to-branch"
set interface "port1"
set ike-version 2
set peertype any
set proposal aes256-sha256
set dhgrp 14
set remote-gw 172.16.2.1
set psksecret "Infra@RunB00k!PSK#Secure"
set keylife 86400
set dpd on-idle
set dpd-retrycount 3
set dpd-retryinterval 10
next
end
config vpn ipsec phase2-interface
edit "to-branch-p2"
set phase1name "to-branch"
set proposal aes256-sha256
set dhgrp 14
set pfs enable
set src-subnet 192.168.10.0 255.255.255.0
set dst-subnet 192.168.20.0 255.255.255.0
set keylifeseconds 3600
set auto-negotiate enable
next
end
config router static
edit 10
set dst 192.168.20.0 255.255.255.0
set device "to-branch"
set comment "Branch LAN via IPSec tunnel"
next
end
config firewall policy
edit 100
set name "HQ-to-Branch"
set srcintf "port2"
set dstintf "to-branch"
set srcaddr "HQ-LAN"
set dstaddr "Branch-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
edit 101
set name "Branch-to-HQ-inbound"
set srcintf "to-branch"
set dstintf "port2"
set srcaddr "Branch-LAN"
set dstaddr "HQ-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
end
Branch FortiGate (WAN: 172.16.2.1 — LAN: 192.168.20.0/24)
config firewall address
edit "HQ-LAN"
set subnet 192.168.10.0 255.255.255.0
next
edit "Branch-LAN"
set subnet 192.168.20.0 255.255.255.0
next
end
config vpn ipsec phase1-interface
edit "to-hq"
set interface "port1"
set ike-version 2
set peertype any
set proposal aes256-sha256
set dhgrp 14
set remote-gw 172.16.1.1
set psksecret "Infra@RunB00k!PSK#Secure"
set keylife 86400
set dpd on-idle
set dpd-retrycount 3
set dpd-retryinterval 10
next
end
config vpn ipsec phase2-interface
edit "to-hq-p2"
set phase1name "to-hq"
set proposal aes256-sha256
set dhgrp 14
set pfs enable
set src-subnet 192.168.20.0 255.255.255.0
set dst-subnet 192.168.10.0 255.255.255.0
set keylifeseconds 3600
set auto-negotiate enable
next
end
config router static
edit 10
set dst 192.168.10.0 255.255.255.0
set device "to-hq"
set comment "HQ LAN via IPSec tunnel"
next
end
config firewall policy
edit 100
set name "Branch-to-HQ"
set srcintf "port2"
set dstintf "to-hq"
set srcaddr "Branch-LAN"
set dstaddr "HQ-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
edit 101
set name "HQ-to-Branch-inbound"
set srcintf "to-hq"
set dstintf "port2"
set srcaddr "HQ-LAN"
set dstaddr "Branch-LAN"
set action accept
set schedule "always"
set service "ALL"
set logtraffic all
next
end
Verification Steps
Once both sides are configured, verify the tunnel actually comes up and passes traffic. Don't assume it works because there are no error messages — verify it end-to-end before you hand it off.
Start with a high-level summary of all VPN tunnels on the box:
get vpn ipsec tunnel summary
You should see the tunnel name listed with an up status and a non-zero bytes-in/bytes-out count once traffic has flowed. If you see connecting — or nothing at all — Phase 1 hasn't completed. That's your starting point for digging in.
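For a per-gateway view of Phase 1 state, including the peer IP, IKE version, and SA status, filter the IKE gateway listing by the Phase 1 name (use to-hq on the Branch box):

```
diagnose vpn ike gateway list name to-branch
```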
For real visibility into what's happening during IKE negotiation, enable debug output:
diagnose debug application ike -1
diagnose debug enable
Send traffic across the tunnel (or let auto-negotiate trigger it), and watch the output. You'll see the full IKE exchange — exactly which proposals were sent, which were accepted or rejected, and precisely where the negotiation breaks down if it fails. When you're done capturing:
diagnose debug disable
diagnose debug reset
This is the most useful diagnostic tool in your FortiGate VPN troubleshooting kit. In my experience, reading this output carefully resolves the vast majority of VPN issues without needing anything else.
For detailed SA state information, including traffic selectors, byte counters, and rekey timers, run:
get vpn ipsec tunnel details
If Phase 1 shows as established but Phase 2 doesn't, your traffic selectors are mismatched between the two sides. Check that src-subnet and dst-subnet on HQ are the mirror image of what's configured on Branch — this is a common typo when both sides are configured by the same person in a hurry.
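To compare selectors directly, dump the tunnel's SA entries; the proxyid lines show the src/dst subnets each side actually negotiated, which makes a mirror-image typo obvious:

```
diagnose vpn tunnel list name to-branch
```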
Test connectivity using a source-specific ping from the FortiGate CLI:
execute ping-options source 192.168.10.1
execute ping 192.168.20.1
The ping-options source command is critical here. If you skip it and run a plain execute ping, the FortiGate sources the packet from the WAN interface, which completely bypasses the tunnel and gives you a misleading result. Always pin the source to a LAN-side IP to confirm traffic is actually traversing the IPSec SA.
To review a specific policy's configuration from the CLI (note that show displays the configured values, not live counters):
show firewall policy 100
From the GUI, check Policy & Objects → Firewall Policy and look at the counters next to your VPN policies. Zero bytes on a policy that should be carrying traffic is a clear sign — either routing isn't directing packets to the tunnel interface, or policy ordering is causing a different policy to match first.
Common Mistakes
Mismatched Phase 1 or Phase 2 proposals. This is the most common reason tunnels won't come up, and it's avoidable. One side proposes AES-256/SHA-256/DH-14, the other has AES-128/SHA-1/DH-2 from a leftover default config, and the negotiation fails without a clear error in normal logging. The IKE debug output makes it obvious. Document your crypto parameters before you start and verify both sides explicitly — don't rely on compatible defaults, especially across different FortiOS versions.
Missing NAT exemption. Close second. If you have a catch-all outbound NAT policy for internet access, branch-bound traffic gets masked to the WAN IP before it ever reaches the tunnel. Policy order is almost always the culprit — the NAT policy sits above the VPN policy and matches first. Put your VPN-bound policies at the top of the outbound policy list, above any NAT-enabled internet policies, and confirm the fix by watching the policy byte counters change.
Overlapping subnets. The FortiGate won't refuse to build a tunnel where the remote subnet overlaps with your local addressing, but the routing behavior will be unpredictable. I've seen this in post-merger network integrations and in environments where someone chose 192.168.1.0/24 for everything without a central IP plan. Map your subnets before you scale to multiple sites and keep them documented somewhere authoritative.
MTU and MSS issues. This is the silent killer. The tunnel appears up, some traffic passes, but PDF downloads freeze mid-file, SSH sessions hang during key exchange, and SMB shares connect but won't browse directories. IPSec encapsulation adds overhead — typically 50 to 80 bytes depending on cipher suite and encapsulation mode — which reduces your effective payload MTU through the tunnel. If the WAN path MTU is 1500, you're working with roughly 1420 to 1450 usable bytes inside the tunnel. Fix this with TCP MSS clamping on the tunnel interface:
config system interface
edit "to-branch"
set tcp-mss 1350
next
end
A value of 1350 gives you a conservative safety margin. You can tune upward toward 1400 if needed, but 1350 eliminates the problem in essentially all WAN path scenarios.
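The arithmetic behind that margin can be sketched as follows. The 73-byte overhead figure is an assumption for AES-CBC with SHA-256 in tunnel mode; real overhead varies with cipher, padding, and whether NAT-T's UDP encapsulation is in play:

```python
# Rough usable-TCP-MSS estimate for traffic inside an ESP tunnel.
IP_HEADER = 20   # inner IPv4 header
TCP_HEADER = 20  # TCP header without options

def usable_mss(path_mtu, esp_overhead=73, natt_udp=0):
    inner_mtu = path_mtu - esp_overhead - natt_udp  # MTU left inside the tunnel
    return inner_mtu - IP_HEADER - TCP_HEADER       # strip inner IP + TCP headers

print(usable_mss(1500))              # 1387: why 1350 leaves headroom
print(usable_mss(1500, natt_udp=8))  # 1379: NAT-T adds an 8-byte UDP header
```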
Skipping DPD or leaving it misconfigured. I've encountered tunnels that show as established in the status output but have been dead for hours. The FortiGate's SA table considers them valid, but no traffic passes and no alarm fires. With DPD set to on-idle and a reasonable retry interval, the firewall detects a dead peer within seconds of the keepalive timeout and re-negotiates the tunnel automatically. Without it, that dead tunnel sits in your status table indefinitely while users file tickets about the branch office losing access to shared resources. Always configure DPD. It's a few lines and it saves a lot of middle-of-the-night phone calls.
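With the values configured in Step 2, the worst-case detection window is roughly the retry interval times the retry count. A simplification (it ignores the idle threshold before the first probe fires), but useful for setting expectations:

```python
# Back-of-the-envelope DPD detection time for the Step 2 values.
dpd_retryinterval = 10  # seconds between unanswered probes
dpd_retrycount = 3      # probes before the peer is declared dead

worst_case_seconds = dpd_retryinterval * dpd_retrycount
print(worst_case_seconds)  # 30
```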
