Symptoms
SNAT pool misconfigurations on F5 BIG-IP are among the most deceptive problems in load balancer operations. The failure mode can look identical to backend server outages, network blackholes, or application bugs. Before diving into root causes, recognize the following symptom clusters that consistently point toward a SNAT issue rather than something else.
Connection-level symptoms:
- TCP connections to pool members time out or receive RST immediately after the BIG-IP forwards the SYN, with the backend never completing the handshake
- Health monitors for pool members fluctuate between available and offline without any change on the backend servers
- Connections work intermittently — some requests succeed, others fail — depending on which SNAT address or pool member is selected per round-robin
- Session persistence breaks because the return path bypasses the BIG-IP entirely
- Only certain virtual servers or pool members are affected while others sharing the same BIG-IP work normally
Log-level symptoms visible in /var/log/ltm:
Apr 5 09:14:32 sw-infrarunbook-01 err tmm[17963]: 01230140:3: No SNAT pool available for virtual server /Common/vs_web_443
Apr 5 09:14:45 sw-infrarunbook-01 err tmm[17963]: 01010038:5: SNAT pool /Common/snatpool_web not found
Apr 5 09:15:01 sw-infrarunbook-01 warning tmm[17963]: 01010028:4: RST sent from 10.10.50.11:443 to 10.20.1.15:52331, [0x0000:0x0]
Backend-level symptoms:
- Application access logs show connections from unexpected source IPs such as BIG-IP self IPs instead of the designated SNAT pool addresses
- Backend host-based firewalls log REJECT or DROP for the SNAT source IP
- Packet captures on backend servers show SYN packets arriving but RST packets being sent in reply with no SYN-ACK
- Connections from specific VLANs or route domains succeed while others fail, pointing to a SNAT address reachability gap
Root Cause 1: Missing SNAT Pool
Why It Happens
The most direct cause of a SNAT pool misconfiguration is a virtual server that references a SNAT pool object that no longer exists. This typically occurs when a SNAT pool was deleted without first updating the virtual server that referenced it. It also happens when a configuration is restored from a UCS archive that contained virtual server definitions but not the corresponding SNAT pool object, or during staged rollouts where a virtual server is pushed live before its associated SNAT pool is created.
How to Identify It
Inspect the virtual server configuration and confirm whether the referenced SNAT pool exists:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm virtual vs_web_443 source-address-translation
ltm virtual vs_web_443 {
source-address-translation {
pool snatpool_web
type snat
}
}
Now verify the pool object actually exists:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm snatpool snatpool_web
01020036:3: The requested SNAT Pool (/Common/snatpool_web) was not found.
Error code 01020036 confirms the pool is missing. Cross-reference by listing all SNAT pools to confirm the expected name is absent:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm snatpool
ltm snatpool snatpool_dmz {
members { 10.10.50.20 10.10.50.21 }
}
ltm snatpool snatpool_mgmt {
members { 10.10.60.10 }
}
Note that snatpool_web does not appear. The LTM log will also contain entries confirming the failure:
Apr 5 09:14:45 sw-infrarunbook-01 err tmm[17963]: 01010038:5: SNAT pool /Common/snatpool_web not found
How to Fix It
Create the missing SNAT pool with appropriate member IP addresses. Use RFC 1918 addresses from the same VLAN as the backend servers, ensuring those IPs are also configured as self IPs on the BIG-IP:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh create ltm snatpool snatpool_web members add { 10.10.50.30 10.10.50.31 }
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm snatpool snatpool_web
ltm snatpool snatpool_web {
members {
10.10.50.30
10.10.50.31
}
}
Verify the virtual server now resolves its SNAT pool correctly, then save the configuration:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh show ltm virtual vs_web_443 field-fmt | grep snat
source-address-translation.pool snatpool_web
source-address-translation.type snat
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh save sys config
Root Cause 2: Asymmetric Routing
Why It Happens
Asymmetric routing occurs when a backend server receives a connection from the BIG-IP SNAT address but routes its return traffic on a path that does not pass back through the BIG-IP. This is particularly common in environments with multiple uplinks, ECMP routing, or where backend servers have a default gateway that points to a different device than the BIG-IP's backend VLAN interface. The sequence is: the BIG-IP forwards the client SYN to the pool member with the SNAT address as the source; the pool member sends the SYN-ACK toward its default gateway; that gateway routes it away from the BIG-IP; the BIG-IP never sees the SYN-ACK; the connection times out. This can affect all connections or appear intermittent if multiple gateways are in use.
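The return-path rule described above can be sketched as a small check: reply traffic from a pool member comes back through the BIG-IP only if the route the server selects for the SNAT source address points at a BIG-IP self IP. This is a simplified model for reasoning about the failure, not an F5 API; the function name and route format are illustrative.

```python
import ipaddress

def return_path_via_bigip(server_routes, snat_ip, bigip_self_ips):
    """server_routes: list of (destination_cidr, gateway) tuples, with the
    default route written as '0.0.0.0/0'. Returns True only if the
    longest-prefix match for the SNAT IP uses a BIG-IP self IP as gateway."""
    target = ipaddress.ip_address(snat_ip)
    best = None
    for dest, gw in server_routes:
        net = ipaddress.ip_network(dest)
        # Longest-prefix match wins, mirroring normal kernel route selection.
        if target in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gw)
    return best is not None and best[1] in bigip_self_ips
```

With only a default route via another gateway the check fails; adding a specific return route for the SNAT subnet via the BIG-IP self IP makes it pass, which is exactly the two fixes discussed under this root cause.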
How to Identify It
Use tcpdump on the BIG-IP backend VLAN interface to observe whether SYN packets are sent but no SYN-ACK is received in return:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tcpdump -i internal -nn 'host 10.20.1.50 and tcp'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on internal, link-type EN10MB (Ethernet), capture size 65535 bytes
09:22:11.441023 IP 10.10.50.30.45231 > 10.20.1.50.80: Flags [S], seq 3842910011, win 65535, length 0
09:22:14.441088 IP 10.10.50.30.45231 > 10.20.1.50.80: Flags [S], seq 3842910011, win 65535, length 0
09:22:20.441104 IP 10.10.50.30.45231 > 10.20.1.50.80: Flags [S], seq 3842910011, win 65535, length 0
Repeated SYNs with no SYN-ACK in response indicate the return path is broken. Log into the backend server and verify its routing table and default gateway:
[infrarunbook-admin@web01 ~]$ ip route show
default via 10.20.1.1 dev eth0
10.20.1.0/24 dev eth0 proto kernel scope link src 10.20.1.50
The default gateway is 10.20.1.1. Confirm the BIG-IP's self IP on the backend VLAN:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list net self self_internal
net self self_internal {
address 10.20.1.254/24
allow-service default
floating disabled
traffic-group traffic-group-local-only
vlan internal
}
The backend gateway (10.20.1.1) differs from the BIG-IP's self IP (10.20.1.254), confirming return traffic exits through a different device and never reaches the BIG-IP.
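The two capture signatures in this runbook (repeated SYNs with no reply under this root cause, an immediate RST under Root Cause 5) can be triaged mechanically. The sketch below only inspects the TCP flags field of `tcpdump -nn` output lines, so it is a rough helper under that assumption, not a full packet decoder.

```python
import re

# Extract the TCP flags field, e.g. "S", "S.", or "R.", from a tcpdump line.
FLAG_RE = re.compile(r"Flags \[([^\]]+)\]")

def classify_capture(lines):
    flags = [m.group(1) for m in (FLAG_RE.search(l) for l in lines) if m]
    if any(f.startswith("R") for f in flags):
        return "active-rejection"        # backend answered with an RST
    if "S." in flags:
        return "handshake-progressing"   # a SYN-ACK was seen
    if flags.count("S") > 1:
        return "no-return-path"          # SYN retransmitted, nothing back
    return "inconclusive"
```

Feeding it the three-SYN capture above yields "no-return-path", while the SYN-then-RST capture from Root Cause 5 yields "active-rejection", pointing the investigation at routing versus firewall policy respectively.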
How to Fix It
The cleanest fix is to set the backend server's default gateway to the BIG-IP's floating self IP or the backend VLAN self IP:
[infrarunbook-admin@web01 ~]$ sudo ip route replace default via 10.20.1.254
[infrarunbook-admin@web01 ~]$ ip route show
default via 10.20.1.254 dev eth0
10.20.1.0/24 dev eth0 proto kernel scope link src 10.20.1.50
If changing the backend default gateway is not operationally feasible, add a host-specific or network-specific return route for the SNAT pool subnet on each backend server:
[infrarunbook-admin@web01 ~]$ sudo ip route add 10.10.50.0/24 via 10.20.1.254 dev eth0
For HA environments, always use the floating self IP as the gateway target so return traffic flows correctly regardless of which BIG-IP unit is currently active. Persist the route change in the network configuration manager for your OS distribution to survive reboots.
Root Cause 3: SNAT Address Not Routable
Why It Happens
A SNAT pool member address that is not owned or reachable by the BIG-IP causes silent connection failures. This happens when SNAT pool members are configured with IP addresses from a subnet that does not exist on any interface of the BIG-IP, or that backend servers cannot route back to. Examples include IP addresses assigned to a VLAN that was decommissioned, addresses from a subnet blocked by an upstream firewall, or floating IPs that were defined in the SNAT pool but never added to the BIG-IP's self IP configuration. The backend server receives the SYN from the SNAT address, sends the SYN-ACK, but the packet is dropped in transit because no path back to the SNAT IP exists on the network.
How to Identify It
Inspect the SNAT pool members and compare them against the configured self IPs on the BIG-IP:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm snatpool snatpool_web
ltm snatpool snatpool_web {
members {
10.10.50.30
10.10.50.31
}
}
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list net self
net self self_external {
address 10.10.50.10/24
vlan external
}
net self self_internal {
address 10.20.1.254/24
vlan internal
}
Neither 10.10.50.30 nor 10.10.50.31 appears as a self IP. The BIG-IP will attempt to source traffic from these addresses but has no ARP or routing ownership of them, so reply packets will not be delivered. Validate from the backend server perspective:
[infrarunbook-admin@web01 ~]$ ping -c 3 10.10.50.30
PING 10.10.50.30 (10.10.50.30) 56(84) bytes of data.
--- 10.10.50.30 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2002ms
The SNAT address is completely unreachable from the backend, confirming it is not owned by any device on the network.
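The ownership comparison above can be automated as a quick audit: flag every SNAT pool member that does not match a configured self IP, and report which self IP subnet (if any) would cover it. The addresses are the examples from this section; the function name is illustrative.

```python
import ipaddress

def audit_snat_members(snat_members, self_ips):
    """self_ips: iterable of CIDR strings such as '10.10.50.10/24'.
    Returns {member: (is_self_ip, covering_self_ip_cidr_or_None)}."""
    report = {}
    for member in snat_members:
        addr = ipaddress.ip_address(member)
        # Exact match against a configured self IP address.
        exact = any(addr == ipaddress.ip_interface(s).ip for s in self_ips)
        # First self IP whose connected subnet contains the member.
        subnet = next((s for s in self_ips
                       if addr in ipaddress.ip_interface(s).network), None)
        report[member] = (exact, subnet)
    return report
```

For the configuration above, both members land inside the external VLAN subnet but match no self IP, which is precisely the gap this root cause describes.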
How to Fix It
Add the SNAT pool member IPs as self IPs on the BIG-IP on the appropriate VLAN, with allow-service none to prevent unintended management access through the SNAT addresses:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh create net self snat_web_01 address 10.10.50.30/24 vlan external allow-service none
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh create net self snat_web_02 address 10.10.50.31/24 vlan external allow-service none
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh save sys config
Confirm reachability from the backend after the change:
[infrarunbook-admin@web01 ~]$ ping -c 3 10.10.50.30
PING 10.10.50.30 (10.10.50.30) 56(84) bytes of data.
64 bytes from 10.10.50.30: icmp_seq=1 ttl=64 time=0.412 ms
64 bytes from 10.10.50.30: icmp_seq=2 ttl=64 time=0.388 ms
64 bytes from 10.10.50.30: icmp_seq=3 ttl=64 time=0.401 ms
Alternatively, if adding new self IPs is not desirable, update the SNAT pool to reference addresses that are already configured as self IPs on the correct VLAN. The important constraint is that every SNAT pool member IP must be owned by the BIG-IP and reachable from the backend subnet.
Root Cause 4: Wrong SNAT Type Selected
Why It Happens
F5 BIG-IP supports four source address translation types on a virtual server: None, Auto Map, SNAT (named pool), and LSN (Large Scale NAT). Selecting the wrong type causes incorrect behavior that is difficult to trace because the configuration appears valid in isolation. Common mistakes include setting the type to None on a one-armed deployment where SNAT is required for return routing, configuring Auto Map in a multi-tenant or multi-route-domain environment where the dynamically selected self IP is not permitted by backend ACLs, or specifying SNAT type without providing a pool name — leaving the reference empty and causing the BIG-IP to drop connections silently.
How to Identify It
Inspect the source-address-translation stanza for the affected virtual server:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm virtual vs_app_8443 source-address-translation
ltm virtual vs_app_8443 {
source-address-translation {
type none
}
}
With type none, the BIG-IP preserves the original client IP as the source when forwarding to the backend. In a one-armed topology where the backend default gateway points elsewhere, the backend server cannot route the reply back to the original client IP through the BIG-IP, causing the connection to stall. Another misconfiguration pattern uses Auto Map on a shared multi-tenant environment:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm virtual vs_api_80 source-address-translation
ltm virtual vs_api_80 {
source-address-translation {
type automap
}
}
Determine which self IP Auto Map would select for a given pool member by checking the routing table:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh show net route 10.20.1.50
Net::Routes
Name Destination Type NextHop Origin
/Common/rt_int 10.20.1.0/24 interface 10.20.1.254 static
Auto Map on this path would use 10.20.1.254 as the SNAT source. If backend firewalls only permit a specific SNAT pool subnet and not the self IP, connections will be rejected. The lack of a pool name in the configuration makes this especially hard to identify without checking routing behavior explicitly.
How to Fix It
For predictable and auditable source IPs in production environments, always use a named SNAT pool rather than Auto Map:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh modify ltm virtual vs_app_8443 source-address-translation { type snat pool snatpool_web }
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh list ltm virtual vs_app_8443 source-address-translation
ltm virtual vs_app_8443 {
source-address-translation {
pool snatpool_web
type snat
}
}
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh save sys config
If the deployment uses a two-armed routed topology with correct return routing configured on all backend servers, type none is intentionally valid and should be documented. For one-armed topologies, SNAT is always required and type none will cause connection failures.
Root Cause 5: Backend Rejecting SNAT IP
Why It Happens
Even when the SNAT pool is correctly defined and every member address is routable, backend servers or intermediate firewalls may actively reject connections from the SNAT IP. This occurs because firewall rules, TCP wrappers, /etc/hosts.allow policies, or application-layer ACLs whitelist specific source subnets, and the SNAT pool addresses were never added to those allow lists. It is particularly common after SNAT pool addresses change due to a network redesign: the BIG-IP configuration is updated but the backend security policies are not, creating a gap that causes silent connection drops from the new SNAT addresses.
How to Identify It
Use a packet capture on the BIG-IP backend VLAN to confirm the SYN is being sent and the backend is immediately responding with a RST, which indicates active rejection rather than a routing problem:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tcpdump -i internal -nn 'host 10.20.1.50 and tcp port 443'
09:45:33.221091 IP 10.10.50.30.51204 > 10.20.1.50.443: Flags [S], seq 1234567890, win 65535, length 0
09:45:33.221342 IP 10.20.1.50.443 > 10.10.50.30.51204: Flags [R.], seq 0, ack 1234567891, win 0, length 0
A RST immediately after the SYN — with no SYN-ACK — means the backend is actively rejecting the connection at the TCP layer. On the backend server, inspect the firewall rules:
[infrarunbook-admin@web01 ~]$ sudo iptables -L INPUT -n -v --line-numbers
Chain INPUT (policy DROP)
num pkts bytes target prot opt in out source destination
1 5420 2.1M ACCEPT all -- * * 10.20.1.0/24 0.0.0.0/0
2 0 0 ACCEPT all -- * * 127.0.0.1 0.0.0.0/0
3 1204 480K DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Rule 1 only accepts traffic from 10.20.1.0/24. The SNAT pool address 10.10.50.30 originates from 10.10.50.0/24 and falls through to the DROP policy. Confirm in the kernel log:
[infrarunbook-admin@web01 ~]$ sudo journalctl -k | grep "10.10.50.30"
Apr 5 09:45:33 web01 kernel: iptables DROP: IN=eth0 OUT= SRC=10.10.50.30 DST=10.20.1.50 PROTO=TCP DPT=443 LEN=60
How to Fix It
Add an explicit firewall rule on the backend server to permit the SNAT pool subnet, inserted before the catch-all DROP rule:
[infrarunbook-admin@web01 ~]$ sudo iptables -I INPUT 2 -s 10.10.50.0/24 -j ACCEPT
[infrarunbook-admin@web01 ~]$ sudo iptables -L INPUT -n -v --line-numbers
Chain INPUT (policy DROP)
num pkts bytes target prot opt in out source destination
1 5420 2.1M ACCEPT all -- * * 10.20.1.0/24 0.0.0.0/0
2 0 0 ACCEPT all -- * * 10.10.50.0/24 0.0.0.0/0
3 0 0 ACCEPT all -- * * 127.0.0.1 0.0.0.0/0
4 1204 480K DROP all -- * * 0.0.0.0/0 0.0.0.0/0
Persist the rule across reboots using your distribution's firewall persistence mechanism, then verify end-to-end connectivity by sourcing a test request from the SNAT address:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # curl -sk --interface 10.10.50.30 https://10.20.1.50/ -o /dev/null -w "%{http_code}"
200
If a network-level firewall (not host-based) is blocking the SNAT addresses, work with the network team to add the SNAT pool subnet to the relevant access control lists. Ensure backend firewall rule updates are included in the change record whenever a SNAT pool is created or modified.
Prevention
SNAT pool misconfigurations are largely preventable with disciplined change management, topology documentation, and proactive validation. The following practices reduce the risk of encountering these failures in production.
Validate SNAT Pool Members Before Deployment
Before assigning a SNAT pool to a virtual server, confirm every member IP is configured as a self IP on the BIG-IP and is pingable from the backend servers. Use curl --interface on the BIG-IP to simulate SNAT-originated connections during pre-production testing, confirming the backend responds with HTTP 200 before the virtual server goes live.
Use Named SNAT Pools Instead of Auto Map
Auto Map is convenient for small deployments but produces unpredictable source IPs in complex topologies with multiple VLANs or route domains. In production, always use named SNAT pools so that source IPs are deterministic, auditable, and can be whitelisted on backend firewalls with confidence. Document the pool-to-virtual-server mapping in your CMDB.
Link SNAT Addresses to Backend Firewall Rules in Change Records
Maintain a dependency record that links each SNAT pool to the backend firewall rules that permit it. When SNAT pool addresses change, use this record as a checklist to update all downstream ACLs before the change window closes. Automate the check with a pre-change validation script that confirms reachability from every SNAT IP to every pool member.
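The pre-change validation check mentioned above can be sketched as follows. The probe is injected as a callable so the logic can be dry-run; in a real rollout it might wrap a ping or a TCP connect bound to the SNAT address. All names here are hypothetical, not part of any F5 tooling.

```python
def validate_snat_reachability(snat_ips, pool_members, probe):
    """probe(src, dst) -> bool reports whether dst answers traffic sourced
    from src. Returns the list of failing (src, dst) pairs; an empty list
    means every SNAT IP can reach every pool member."""
    return [(src, dst)
            for src in snat_ips
            for dst in pool_members
            if not probe(src, dst)]
```

Running it with every (SNAT IP, pool member) pair before the change window closes turns the dependency record into an executable gate rather than a manual checklist.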
Enforce Return Routing via Configuration Management
Use a configuration management tool such as Ansible to enforce that backend servers in load-balanced pools always carry a static route for SNAT subnets pointing to the BIG-IP floating self IP. This prevents asymmetric routing from silently re-emerging after OS rebuilds, kernel upgrades, or network reconfigurations that reset the routing table.
Monitor SNAT Pool Port Utilization
Each SNAT pool address supports a finite number of concurrent connections limited by the available ephemeral port range. Monitor utilization and respond before exhaustion causes connection failures:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh show ltm snatpool snatpool_web
-----------------------------------------
Ltm::SNAT Pool: snatpool_web
-----------------------------------------
Translation Current Conns Max Conns
10.10.50.30 41230 65535
10.10.50.31 39870 65535
Set monitoring alerts at 80% utilization per member. When any member exceeds that threshold, add additional IPs to the SNAT pool before exhaustion impacts production traffic.
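A minimal parser for that tmsh table makes the 80% alert easy to wire into existing monitoring. The column layout is assumed from the sample output in this section and may differ across TMOS versions, so treat this as a sketch.

```python
def overloaded_members(show_output, threshold=0.80):
    """Parse `tmsh show ltm snatpool` style output and return
    (translation_ip, utilization) pairs at or above the threshold."""
    flagged = []
    for line in show_output.splitlines():
        parts = line.split()
        # Expect data rows of: <translation-ip> <current-conns> <max-conns>
        if len(parts) == 3 and parts[1].isdigit() and parts[2].isdigit():
            current, peak = int(parts[1]), int(parts[2])
            if peak and current / peak >= threshold:
                flagged.append((parts[0], current / peak))
    return flagged
```

Against the sample above, both members sit near 61-63% utilization, so the default 80% threshold flags nothing yet; lowering the threshold shows the parser picking up both rows.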
Run Configuration Audits After Every Network Change
After any change that could affect routing — adding a VLAN, modifying a default gateway, adjusting firewall policy, or decommissioning a subnet — re-run end-to-end connection tests through every virtual server that uses a SNAT pool. Upload a BIG-IP qkview to F5 iHealth regularly to catch dangling SNAT pool references and other configuration warnings automatically.
Frequently Asked Questions
Q: What is the difference between Auto Map and a named SNAT pool on F5 BIG-IP?
A: Auto Map dynamically selects the BIG-IP self IP closest (in routing terms) to the pool member as the SNAT source address. A named SNAT pool uses a fixed set of administrator-defined IPs. Auto Map is simpler to configure but produces unpredictable source IPs in multi-VLAN or multi-route-domain environments. Named SNAT pools give full control over which IPs backends see, enabling precise firewall whitelisting and deterministic troubleshooting.
Q: How many IP addresses should I put in a SNAT pool?
A: Calculate based on expected peak concurrent connections. Each SNAT IP supports up to 65,535 simultaneous connections limited by the TCP/UDP ephemeral port range, though practical limits are lower when TIME_WAIT sockets accumulate. For high-traffic virtual servers, divide peak concurrent connections by 50,000 (a conservative per-IP ceiling), round up to a whole number of members, then add one for headroom.
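That sizing rule works out to a one-line calculation; the function name and the 50,000 ceiling default are taken from the answer above.

```python
import math

def snat_pool_size(peak_concurrent, per_ip_ceiling=50_000, headroom=1):
    """Members needed: peak load over a conservative per-IP connection
    ceiling, rounded up, plus spare members for headroom."""
    return math.ceil(peak_concurrent / per_ip_ceiling) + headroom
```

For example, a peak of 120,000 concurrent connections needs ceil(120000 / 50000) = 3 members plus one spare, so a four-member pool.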
Q: Will a SNAT pool misconfiguration cause health monitors to fail?
A: Yes, indirectly. Health monitors originate from the BIG-IP self IP by default, not from the SNAT pool. If backend servers have strict firewall rules that only permit the SNAT pool subnet, monitors sent from the self IP will be rejected, causing pool members to be marked offline. Ensure the BIG-IP self IP for the backend VLAN is also permitted on backend firewall rules, or configure monitors to use a SNAT explicitly if the self IP is not allowed.
Q: How do I check which SNAT address is being used for an active connection?
A: Query the BIG-IP connection table filtered by the pool member IP:
[infrarunbook-admin@sw-infrarunbook-01:Active:Standalone] ~ # tmsh show sys connection cs-server-addr 10.20.1.50
Sys::Connections
10.10.50.30:51890 10.20.1.50:443 10.10.50.30:51890 10.20.1.50:443 tcp 290
The source IP in the server-side tuple is the active SNAT address being used for that connection.
Q: Can I use the same SNAT pool across multiple virtual servers?
A: Yes. A single SNAT pool object can be referenced by any number of virtual servers. This simplifies management and ensures consistent source IPs across all services sharing that pool. Ensure the pool has enough members to handle the combined peak concurrent load of all virtual servers that reference it, since pool exhaustion on one virtual server will affect all others sharing the same pool.
Q: What happens when a SNAT pool reaches port exhaustion?
A: When all ephemeral ports on a SNAT IP are in use, new connections assigned to that address will fail with a connection reset. The BIG-IP will attempt to use another pool member if one is available. If all members are exhausted, new connections fail entirely. The LTM log will contain:
01010038:3: SNAT port allocation failed for 10.10.50.30
Resolve by adding more IP addresses to the SNAT pool and monitoring utilization proactively.
Q: Does SNAT affect X-Forwarded-For or original client IP visibility on the backend?
A: SNAT replaces the client source IP at the network layer — the backend server sees the SNAT IP as the connection source. To preserve the real client IP, enable X-Forwarded-For insertion in the HTTP profile attached to the virtual server. The BIG-IP will add an X-Forwarded-For header containing the original client IP before forwarding the request. This is fully compatible with SNAT and is the standard approach for client IP visibility in load-balanced environments.
Q: How do I gracefully drain an IP from a SNAT pool without dropping active connections?
A: Remove the address from the pool definition. New connections will no longer be assigned to that address, but existing flows using it will continue until they close naturally. Monitor the connection table to confirm the address is fully drained before decommissioning it from the self IP configuration: tmsh modify ltm snatpool snatpool_web members delete { 10.10.50.31 } followed by tmsh show sys connection cs-client-addr 10.10.50.31 until no entries remain.
Q: Can SNAT pool misconfiguration cause SSL or TLS handshake errors on the client side?
A: Indirectly, yes. If SSL offload is configured on the virtual server and backend connections fail due to SNAT misconfiguration, the BIG-IP cannot establish the server-side connection needed to complete the proxied TLS session. The client may receive a generic TLS alert or connection reset that masks the underlying SNAT failure. Always test backend connectivity independently — using curl --interface from the BIG-IP — before concluding that an SSL issue is protocol-level rather than connectivity-level.
Q: Should SNAT pool member IPs be in the client-facing subnet or the backend server subnet?
A: SNAT pool member IPs must be in the same subnet as the backend pool members — the server-side VLAN. This ensures that backend servers can send their reply packets to the SNAT source IP via the BIG-IP, keeping the return path symmetric. Using a client-facing subnet for SNAT addresses would require backend servers to route replies across VLAN boundaries, almost always resulting in asymmetric routing and connection failures.
Q: How do I verify SNAT is working correctly after a configuration change?
A: Run three checks in sequence: first, a packet capture on the backend VLAN confirming the expected SNAT IP appears as the source in SYN packets and that SYN-ACKs are returned; second, a connection table inspection showing established connections with the SNAT address; third, an end-to-end application request through the virtual server that returns a successful response. If all three pass, SNAT is functioning correctly.
Q: What BIG-IP log file should I check first when diagnosing SNAT issues?
A: Start with /var/log/ltm, which contains TMM-level messages about SNAT pool resolution failures, connection resets, and port exhaustion. Use tail -f /var/log/ltm while reproducing the issue to capture real-time events. For more verbose output, temporarily enable debug logging with tmsh modify sys db log.ltmlog.level value debug — always revert to notice after troubleshooting to avoid filling the disk.
