InfraRunBook

    F5 BIG-IP Virtual Server and Pool Basic Setup

    F5
    Published: Apr 12, 2026
    Updated: Apr 12, 2026

    A step-by-step guide to building a functional F5 BIG-IP virtual server and pool from scratch using TMSH, covering health monitors, SNAT, profiles, and verification.


    Prerequisites

    Before you touch a single TMSH command, make sure your environment is actually ready. I've seen engineers spend hours troubleshooting a virtual server that never had a chance because the foundation was missing. Here's what you need locked down before starting.

    Your BIG-IP needs to be licensed and provisioned with the Local Traffic Manager (LTM) module. That sounds obvious, but in shared environments it's easy to assume someone else already handled it. Run a quick check: System > Resource Provisioning in the GUI, or on the CLI:

    tmsh show sys provision

    LTM should show nominal. If it shows none, you're not going anywhere until that's corrected.

    You also need a clear picture of your network topology. Know which VLANs the BIG-IP is trunked into, what your self IPs are on each VLAN, and most importantly — what the default gateway is. SNAT configuration decisions live or die on this information. The BIG-IP needs to be able to route return traffic back to clients, and your backend servers need a path to reach the BIG-IP's self IP on the internal VLAN. These details need to be in your head before you write a single line of config.

    • BIG-IP LTM licensed and provisioned (nominal)
    • Management access via SSH or HTTPS GUI on port 443
    • Internal VLAN self IP configured — e.g. 10.20.20.1/24
    • External VLAN self IP configured — e.g. 10.10.10.1/24
    • Default route pointing to upstream gateway already in place
    • Backend server IPs, listening ports, and health check endpoints documented
    • Virtual IP address allocated and confirmed non-conflicting

    If your backend servers aren't using the BIG-IP as their default gateway, you'll need SNAT AutoMap or a static SNAT pool. Get that decision made before you start building — it affects the virtual server configuration directly, and retrofitting it later under pressure is how misconfigurations happen.
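
    If you go the static SNAT pool route instead of AutoMap, the setup is two commands: create the SNAT pool, then attach it to the virtual server once that exists (Step 3). The translation addresses below are hypothetical; pick unused IPs on your internal VLAN:

```shell
# Create a SNAT pool with two translation addresses (hypothetical IPs)
tmsh create ltm snatpool solvethenetwork-snat-pool members add { 10.20.20.5 10.20.20.6 }

# Point the virtual server at the SNAT pool instead of AutoMap
tmsh modify ltm virtual solvethenetwork-web-vs source-address-translation { type snat pool solvethenetwork-snat-pool }
```

    A dedicated SNAT pool scales better than AutoMap under heavy connection counts, since each additional translation address adds roughly 64,000 more ephemeral source ports.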

    Step-by-Step Setup

    Step 1: Create a Health Monitor

    Don't skip the monitor. A pool with no health monitor will blindly send traffic to dead members, and your users will get intermittent failures that are painful to debug. The monitor is what makes LTM intelligent rather than just a dumb load distributor.

    For a standard HTTP application, an HTTP monitor is the right starting point. You configure it to hit a specific URI and look for a known string in the response. In my experience, using the application's dedicated health check endpoint — something like /health or /status — is far better than just checking TCP reachability. A TCP port being open does not mean the application is actually serving requests. I've seen pools full of members that were listening on port 80 with a broken database connection behind them, all passing a TCP monitor just fine.

    tmsh create ltm monitor http solvethenetwork-http-monitor {
        defaults-from http
        interval 5
        timeout 16
        send "GET /health HTTP/1.1\r\nHost: solvethenetwork.com\r\nConnection: close\r\n\r\n"
        recv "200 OK"
        recv-disable "503"
    }

    The interval is 5 seconds, timeout is 16. The rule of thumb is timeout = (interval × 3) + 1. If a member misses three consecutive health checks, it gets marked down. The recv-disable string tells BIG-IP to mark the member as disabled — rather than hard down — when it sees a 503 in the response. This is useful when your app supports graceful drain via a maintenance mode endpoint that starts returning 503 to signal it's ready to be pulled from rotation.
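
    When scripting monitor creation, the timeout can be derived from the interval so the two values never drift apart. A minimal bash sketch:

```shell
#!/usr/bin/env bash
# Derive the monitor timeout from the interval using the (interval * 3) + 1 rule
interval=5
timeout=$(( interval * 3 + 1 ))
echo "interval=${interval}s timeout=${timeout}s"   # prints: interval=5s timeout=16s
```

    If you template your monitors, feed these variables straight into the tmsh create command with normal shell expansion.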

    Step 2: Create the Pool and Add Members

    A pool is the logical group of backend servers. The load balancing method, health monitor, and member list all live here. For most stateless web applications, Round Robin is perfectly reasonable. If your application has meaningful session state that isn't handled by a persistence profile, consider Least Connections — it distributes based on active connection count rather than rotating requests evenly, which helps if your backends have uneven processing times.

    tmsh create ltm pool solvethenetwork-web-pool {
        description "Web tier pool for solvethenetwork.com"
        load-balancing-mode round-robin
        monitor solvethenetwork-http-monitor
        members {
            10.20.20.11:80 {
                address 10.20.20.11
            }
            10.20.20.12:80 {
                address 10.20.20.12
            }
            10.20.20.13:80 {
                address 10.20.20.13
            }
        }
        min-active-members 1
        slow-ramp-time 30
    }

    The min-active-members 1 setting works together with priority group activation: if the number of available members in the highest-priority group drops below this value, BIG-IP activates members from the next priority group. With no priority groups configured it has little practical effect, but 1 is a sensible baseline if you add them later. The slow-ramp-time 30 setting gradually increases traffic to a newly available member over 30 seconds rather than slamming it at full speed the moment it passes a health check. This matters a lot for Java or .NET applications that need JVM warmup time or cache initialization before they can handle real load.
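
    The flip side of slow ramp is taking a member out gracefully. Disabling a member (rather than forcing it down) stops new connections while letting existing and persisted ones finish, which is the manual equivalent of the recv-disable behavior from Step 1:

```shell
# Drain one member: no new connections, existing sessions allowed to finish
tmsh modify ltm pool solvethenetwork-web-pool members modify { 10.20.20.11:80 { session user-disabled } }

# Return it to rotation when maintenance is done (slow-ramp-time applies again)
tmsh modify ltm pool solvethenetwork-web-pool members modify { 10.20.20.11:80 { session user-enabled } }
```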

    Step 3: Create the Virtual Server

    The virtual server is what clients actually connect to. It's the IP and port you advertise in DNS. BIG-IP intercepts that traffic, applies your configured profiles and policies, and forwards it to the pool. This is the heart of the whole setup.

    tmsh create ltm virtual solvethenetwork-web-vs {
        description "HTTP Virtual Server for solvethenetwork.com"
        destination 10.10.10.100:80
        ip-protocol tcp
        pool solvethenetwork-web-pool
        source-address-translation {
            type automap
        }
        profiles {
            http { }
            tcp { }
        }
        vlans {
            external
        }
        vlans-enabled
    }

    A few things worth calling out. The destination is your VIP — 10.10.10.100 on port 80. The source-address-translation type automap setting is SNAT AutoMap, which rewrites the source IP of each client packet to the BIG-IP's self IP on the internal VLAN before it reaches the backend server. This is critical when your servers don't use the BIG-IP as their default gateway. Without SNAT, return traffic from the servers goes directly to the real client IP, bypassing the BIG-IP entirely, and the connection fails because the client never opened a session to the server's IP — it connected to the VIP.

    The vlans-enabled directive, combined with specifying the external VLAN, restricts this virtual server to that VLAN only. Without this explicit binding, BIG-IP listens on all VLANs by default, which creates unnecessary exposure in segmented environments and can cause routing confusion in multi-tenant deployments.
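
    You can confirm the binding took effect by listing just those properties of the virtual server:

```shell
# Show only the VLAN binding properties of the virtual server
tmsh list ltm virtual solvethenetwork-web-vs vlans vlans-enabled
```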

    Step 4: Add Cookie Persistence

    If your application requires sticky sessions — and many still do despite stateless being the ideal — attach a cookie persistence profile. BIG-IP injects a cookie into the response that identifies which pool member handled the request, and subsequent requests from that client get pinned to the same member.

    tmsh create ltm persistence cookie solvethenetwork-cookie-persist {
        defaults-from cookie
        cookie-name "BIPSESSION"
        expiration 0
    }
    
    tmsh modify ltm virtual solvethenetwork-web-vs persist replace-all-with { solvethenetwork-cookie-persist { default yes } }

    Setting expiration 0 makes it a session cookie — it disappears when the browser closes. If you want persistence to survive browser restarts (for shopping carts, for example), set an explicit expiration in seconds.
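
    To confirm persistence is actually pinning clients, BIG-IP can dump its live persistence table. After sending a few test requests:

```shell
# Show active persistence records; each entry maps a client to a pool member
tmsh show ltm persistence persist-records
```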

    Full Configuration Example

    Here's the complete sequence in order, ready to paste into an SSH session logged in as infrarunbook-admin. Run these from the TMSH interactive shell, or prefix each with tmsh from the bash shell. If you go the bash route, put each command on a single line, since bash treats every newline as the end of a command. The partition context is Common by default.

    # 1. Health Monitor
    tmsh create ltm monitor http solvethenetwork-http-monitor {
        defaults-from http
        interval 5
        timeout 16
        send "GET /health HTTP/1.1\r\nHost: solvethenetwork.com\r\nConnection: close\r\n\r\n"
        recv "200 OK"
        recv-disable "503"
    }
    
    # 2. Pool with three members
    tmsh create ltm pool solvethenetwork-web-pool {
        description "Web tier pool for solvethenetwork.com"
        load-balancing-mode round-robin
        monitor solvethenetwork-http-monitor
        members {
            10.20.20.11:80 {
                address 10.20.20.11
            }
            10.20.20.12:80 {
                address 10.20.20.12
            }
            10.20.20.13:80 {
                address 10.20.20.13
            }
        }
        min-active-members 1
        slow-ramp-time 30
    }
    
    # 3. Cookie Persistence Profile
    tmsh create ltm persistence cookie solvethenetwork-cookie-persist {
        defaults-from cookie
        cookie-name "BIPSESSION"
        expiration 0
    }
    
    # 4. Virtual Server
    tmsh create ltm virtual solvethenetwork-web-vs {
        description "HTTP Virtual Server for solvethenetwork.com"
        destination 10.10.10.100:80
        ip-protocol tcp
        pool solvethenetwork-web-pool
        source-address-translation {
            type automap
        }
        profiles {
            http { }
            tcp { }
        }
        persist {
            solvethenetwork-cookie-persist {
                default yes
            }
        }
        vlans {
            external
        }
        vlans-enabled
    }
    
    # 5. Commit to disk
    tmsh save sys config

    Always save after making changes. I've watched engineers build out entire configurations, get pulled into another incident, have the system reload unexpectedly, and lose everything because they never ran that last command. The tmsh save sys config call writes the running configuration to /config/bigip.conf. Make it a reflex.

    Verification Steps

    Check Pool Member Status

    The first thing to verify is whether your pool members are actually up and the health monitor is functioning as expected.

    tmsh show ltm pool solvethenetwork-web-pool members

    You want Availability: available and State: enabled for each member. If you see Availability: offline, the monitor is failing. Check the monitor's recv string against what the application actually returns. A subtle mismatch — extra whitespace, a different status code, an unexpected redirect — is the most common culprit. For detailed monitor failure information:

    tmsh show ltm pool solvethenetwork-web-pool detail

    Check Virtual Server Status

    tmsh show ltm virtual solvethenetwork-web-vs

    The virtual server availability rolls up from the pool. If the pool is down, the VS goes down. If the VS shows available but traffic isn't flowing, the issue is almost always network-level: check VLAN assignment, confirm ARP resolution for the VIP on your upstream router, and verify routing. The BIG-IP won't respond to ARP for a VIP unless the VS is enabled and the VIP is in the correct VLAN.

    Verify Traffic with Statistics

    tmsh show ltm virtual solvethenetwork-web-vs stats

    Make a test request from a client and watch the connection counter increment. If it doesn't move, traffic isn't reaching the virtual server — that's a network problem upstream of the BIG-IP. If the counter increments but the client is getting errors, the problem is in the BIG-IP-to-pool-member path. That distinction alone saves a lot of time when you're troubleshooting under pressure.
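
    A clean way to run this test is to zero the counters first, so any movement you see is unambiguously from your test traffic:

```shell
# Zero the virtual server counters, then make a test request and re-check stats
tmsh reset-stats ltm virtual solvethenetwork-web-vs
tmsh show ltm virtual solvethenetwork-web-vs stats
```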

    Test Pool Member Reachability Directly

    SSH into the BIG-IP as infrarunbook-admin, drop to bash, and test a backend member directly:

    curl -v -H "Host: solvethenetwork.com" http://10.20.20.11/health

    If this succeeds from the BIG-IP but traffic through the virtual server is failing, the backend itself is fine and the fault is in your VS or pool configuration. If this also fails, the backend server is unreachable from the BIG-IP's data plane — check routing between the internal self IP subnet (10.20.20.0/24) and the backend servers.

    Packet Capture

    When everything looks correct in TMSH but something still isn't working, capture packets on the wire:

    tcpdump -i 0.0 -nn host 10.10.10.100 and port 80

    The -i 0.0 interface is unique to BIG-IP — it captures on all interfaces simultaneously, letting you see both client-side and server-side traffic in a single stream. You can correlate the client SYN arriving on the external VLAN with the translated packet leaving on the internal VLAN, which immediately confirms whether SNAT is working and whether the pool member is responding.

    Common Mistakes

    SNAT Missing When Servers Don't Route Through the BIG-IP

    This is the single most common issue in new BIG-IP deployments. If your backend servers have a default gateway that isn't the BIG-IP's internal self IP, you must use SNAT. Without it, the server receives a TCP SYN with the real client source IP, sends its SYN-ACK directly to that client bypassing the BIG-IP, and the client rejects it — because the client never sent a SYN to the server's IP, it connected to the VIP. The three-way handshake never completes. Use source-address-translation { type automap } unless you have a specific architectural reason to use a dedicated SNAT pool.

    Health Monitor Timeout Set Incorrectly

    If your timeout is less than three times the interval, you'll get false positives — members bouncing up and down while they're perfectly healthy but occasionally slow to respond. Always use (interval × 3) + 1 as your minimum timeout. On the other end, setting an interval of 30 seconds gives unhealthy members a very long window to keep receiving traffic before they're pulled. Five seconds is the right default for most HTTP applications.

    No Default Route on the BIG-IP Data Plane

    The management routing domain on BIG-IP is separate from the data plane routing domain. You might have full management connectivity and still have broken data plane routing if there's no default route in the main route table. Confirm it:

    tmsh show net route

    You should see a 0.0.0.0/0 entry pointing to your upstream gateway. Without it, return traffic from SNAT connections going to client IPs outside your directly connected subnets will be blackholed.
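
    If the default route is missing, one command fixes it. The gateway address below is hypothetical; use your actual upstream router on the external VLAN:

```shell
# Create a data-plane default route (gateway IP is an assumed example)
tmsh create net route default_gateway network default gw 10.10.10.254
```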

    Virtual Server Listening on All VLANs

    By default, a newly created virtual server listens on all VLANs. In a properly segmented deployment, this means an internal-facing VS is reachable from external interfaces, which is a security exposure. Always explicitly bind virtual servers to the appropriate VLAN using vlans-enabled with a VLAN list. It takes ten seconds to add and it's the right thing to do every time.

    Saving Configuration Before Verifying It Works

    Save frequently during a build session, but don't do your final save until you've confirmed traffic is flowing correctly. If you save a broken configuration and need to roll back, you've overwritten your last known good state. Before any production change, take a UCS backup so you always have a clean restore point:

    tmsh save sys ucs /var/local/ucs/pre-change-$(date +%Y%m%d).ucs

    That gives you a timestamped snapshot you can restore in minutes if something goes sideways.
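
    Restoring from that snapshot is a single command. The filename below is illustrative; point it at whichever UCS archive you saved:

```shell
# Roll back to a previously saved UCS archive (example filename)
tmsh load sys ucs /var/local/ucs/pre-change-20260412.ucs
```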


    A health monitor, a pool with members, a virtual server with SNAT AutoMap and the right profiles — that's the complete foundation of F5 LTM. Every more advanced feature — iRules, SSL offload, WAF policies, traffic shaping, rate limiting — builds on top of this exact structure. Get the basics right and the rest of the platform opens up in a logical, predictable way.

    Frequently Asked Questions

    What is the difference between a node and a pool member on F5 BIG-IP?

    A node is the IP address object representing a backend server in BIG-IP's configuration. A pool member is that node combined with a specific port, assigned to a specific pool. The same node IP can be a member of multiple pools on different ports — for example, 10.20.20.11 as a node, with 10.20.20.11:80 in an HTTP pool and 10.20.20.11:8443 in a separate internal API pool.
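
    In TMSH terms, the relationship looks like this: two pools referencing the same IP on different ports, backed by one shared node object (the pool names here are hypothetical):

```shell
# Two pools referencing the same backend IP on different ports
tmsh create ltm pool web-pool members add { 10.20.20.11:80 }
tmsh create ltm pool api-pool members add { 10.20.20.11:8443 }

# One node object backs both pool members
tmsh list ltm node 10.20.20.11
```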

    Do I always need SNAT on an F5 BIG-IP virtual server?

    Not always, but you need it whenever your backend servers don't route return traffic back through the BIG-IP. If servers use the BIG-IP's internal self IP as their default gateway, return traffic flows through BIG-IP naturally and SNAT isn't required. In practice, most environments require SNAT AutoMap because servers typically have their own default gateway pointing upstream rather than to the BIG-IP.

    Why would a BIG-IP pool member show as offline even though the server is running?

    The most common reasons are a mismatch in the health monitor's recv string (the response doesn't contain what the monitor expects), the monitor is probing the wrong port, the server's firewall is blocking the health probe source IP (the BIG-IP's self IP), or the application is returning an unexpected status code or redirect. Run 'tmsh show ltm pool <name> detail' to see the specific failure reason, and test directly with curl from the BIG-IP bash shell to isolate the issue.

    What profiles should I attach to a basic HTTP virtual server on F5 BIG-IP?

    At minimum, attach the http and tcp profiles. The TCP profile handles connection management and optimization. The HTTP profile gives BIG-IP Layer 7 visibility, enabling features like header manipulation, X-Forwarded-For insertion, HTTP compression, and iRule HTTP events. For HTTPS, you would also attach a clientssl profile for SSL offload, and optionally a serverssl profile if the backend connection needs to be re-encrypted.
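
    For the HTTPS case, here is a sketch of the virtual server definition, assuming a clientssl profile named solvethenetwork-clientssl has already been created with the site's certificate and key:

```shell
tmsh create ltm virtual solvethenetwork-web-vs-https {
    destination 10.10.10.100:443
    ip-protocol tcp
    pool solvethenetwork-web-pool
    profiles {
        tcp { }
        http { }
        solvethenetwork-clientssl {
            context clientside
        }
    }
    source-address-translation {
        type automap
    }
}
```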

    How do I verify that traffic is actually passing through an F5 BIG-IP virtual server?

    Run 'tmsh show ltm virtual <vs-name> stats' and watch the connection counter while making a test request. If the counter increments, traffic is reaching the VS. Also check pool member stats with 'tmsh show ltm pool <pool-name> members' to confirm members are up. For deeper inspection, use 'tcpdump -i 0.0 -nn host <vip-ip> and port <port>' from the BIG-IP bash shell to capture actual packet flow across all interfaces simultaneously.
