InfraRunBook

    HAProxy Basic Load Balancing Configuration

    HAProxy
    Published: Apr 5, 2026
    Updated: Apr 5, 2026

    Learn how to configure HAProxy as a basic HTTP load balancer on Linux, covering round-robin scheduling, HTTP health checks, the stats page, and zero-downtime reloads with real-world configuration examples.


    Overview

    HAProxy (High Availability Proxy) is a battle-tested, open-source TCP and HTTP load balancer trusted in production environments worldwide. It provides fine-grained control over connection handling, health checks, session persistence, and routing logic — all with minimal CPU and memory overhead. Unlike cloud-native load balancers, HAProxy runs entirely on your own infrastructure, giving you full visibility and control over every connection.

    This guide walks through a complete, production-ready basic load balancing setup using HAProxy on a Linux server. By the end you will have a working HTTP load balancer that distributes requests across multiple backend web servers using round-robin scheduling, with active HTTP health checks ensuring only healthy nodes receive traffic.

    Prerequisites

    • A Linux server running Ubuntu 22.04 LTS or Rocky Linux 9 — hostname sw-infrarunbook-01, IP 192.168.10.10
    • Root or sudo access via the infrarunbook-admin account
    • Three backend web servers reachable over the network: 192.168.10.11, 192.168.10.12, 192.168.10.13
    • HAProxy 2.8 LTS or later installed on the load balancer host
    • Basic familiarity with Linux networking and systemd service management
    • Ports 80 and 8404 open in the firewall on the HAProxy host
    • Backend servers serving a health check endpoint at /healthz that returns HTTP 200
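
    If you need stand-in backends while building out the lab, the following is a minimal sketch of a /healthz responder. NODE_NAME and the X-Backend header are illustrative choices, not anything HAProxy requires; any web server exposing the endpoint works.

```python
# Minimal stand-in backend for lab testing: answers /healthz with HTTP 200
# and identifies itself in the body and an X-Backend header (the header
# name is illustrative, not something HAProxy expects).
from http.server import BaseHTTPRequestHandler, HTTPServer

NODE_NAME = "web-01"  # set per backend host

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status = 200 if self.path == "/healthz" else 404
        body = f"{NODE_NAME}\n".encode()
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.send_header("X-Backend", NODE_NAME)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet during load tests

def serve(port: int = 80):
    # Binding port 80 requires root; pick a high port for unprivileged tests.
    HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
```

    Run one copy per backend host with a distinct NODE_NAME, and the verification steps later in this guide will show which node answered each request.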

    Step 1: Install HAProxy

    On Ubuntu 22.04 the default apt repository ships an older HAProxy release. Use the official HAProxy maintainer PPA for the latest 2.8 LTS build:

    sudo apt-get install --no-install-recommends software-properties-common
    sudo add-apt-repository ppa:vbernat/haproxy-2.8
    sudo apt-get update
    sudo apt-get install haproxy=2.8.*

    On Rocky Linux 9 or RHEL 9, install from the distribution repository (note that the base repository may ship an older branch such as 2.4; verify the installed version against the 2.8 prerequisite and use a newer build if it falls short):

    sudo dnf install haproxy -y

    Confirm the installed version before continuing:

    haproxy -v

    Expected output:

    HAProxy version 2.8.x 2024/xx/xx - https://haproxy.org/

    Step 2: Understand the Configuration Structure

    The main HAProxy configuration file is located at /etc/haproxy/haproxy.cfg. It is divided into four primary sections that are evaluated top to bottom:

    • global — Process-wide settings: logging destination, OS user and group, max connections, SSL/TLS tuning, and the runtime API socket path.
    • defaults — Default values inherited by all frontend and backend sections unless explicitly overridden. Set your timeouts, logging format, and mode here.
    • frontend — Defines a listening socket and routes incoming connections to one or more backends. Think of this as the ingress point.
    • backend — Defines the pool of upstream servers, the load balancing algorithm, health check parameters, and per-server limits.

    Some configurations also use a listen block, which combines a frontend and backend into a single stanza. This is handy for simple TCP proxies but less flexible for complex HTTP routing.
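
    For illustration, a hypothetical listen stanza roughly equivalent to a minimal frontend/backend pair (the name and addresses are placeholders, and this is a sketch rather than the configuration built later in this guide):

```
listen web_combined
    bind *:80
    balance roundrobin
    option httpchk GET /healthz
    server web-01 192.168.10.11:80 check
    server web-02 192.168.10.12:80 check
```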

    Step 3: Back Up the Default Configuration

    Always preserve the default config before making any changes:

    sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

    Step 4: Write the Load Balancer Configuration

    Open the configuration file for editing:

    sudo nano /etc/haproxy/haproxy.cfg

    Replace the contents with the configuration in the next section. Each block is annotated so you understand why each directive is present.

    Full Configuration Example

    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
        log         /dev/log local0
        log         /dev/log local1 notice
        chroot      /var/lib/haproxy
        stats       socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats       timeout 30s
        user        haproxy
        group       haproxy
        daemon
    
        # SSL/TLS hardening (modern compatibility profile)
        ca-base     /etc/ssl/certs
        crt-base    /etc/ssl/private
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    
        # Maximum concurrent connections across all frontends
        maxconn     50000
    
    #---------------------------------------------------------------------
    # Defaults — inherited by all frontend/backend sections
    #---------------------------------------------------------------------
    defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option  forwardfor
        option  http-server-close
        timeout connect  5s
        timeout client   30s
        timeout server   30s
        errorfile 400  /etc/haproxy/errors/400.http
        errorfile 403  /etc/haproxy/errors/403.http
        errorfile 408  /etc/haproxy/errors/408.http
        errorfile 500  /etc/haproxy/errors/500.http
        errorfile 502  /etc/haproxy/errors/502.http
        errorfile 503  /etc/haproxy/errors/503.http
        errorfile 504  /etc/haproxy/errors/504.http
    
    #---------------------------------------------------------------------
    # Stats page — bind to internal management IP only
    #---------------------------------------------------------------------
    listen stats
        bind 192.168.10.10:8404
        stats enable
        stats uri /haproxy-stats
        stats realm HAProxy\ Statistics
        stats auth infrarunbook-admin:Ch4ng3M3N0w!
        stats refresh 10s
        stats show-node
        stats show-legends
        stats hide-version
    
    #---------------------------------------------------------------------
    # Frontend — HTTP ingress on all interfaces, port 80
    #---------------------------------------------------------------------
    frontend http_frontend
        bind *:80
        default_backend web_servers
    
        # Capture the Host header for access log enrichment
        http-request capture req.hdr(Host) len 64
    
        # Inject basic security response headers
        http-response set-header X-Content-Type-Options nosniff
        http-response set-header X-Frame-Options SAMEORIGIN
    
    #---------------------------------------------------------------------
    # Backend — three-node web server pool
    #---------------------------------------------------------------------
    backend web_servers
        balance     roundrobin
    
        # HTTP health check targeting the application health endpoint
        option      httpchk
        http-check  send meth GET uri /healthz ver HTTP/1.1 hdr Host solvethenetwork.com
        http-check  expect status 200
    
        # Health check tuning: probe every 5s, down after 3 failures, up after 2 successes
        # maxconn per server protects backends during traffic spikes
        default-server inter 5s fall 3 rise 2 maxconn 200
    
        server web-01 192.168.10.11:80 check
        server web-02 192.168.10.12:80 check
        server web-03 192.168.10.13:80 check

    Step 5: Validate the Configuration

    HAProxy provides a built-in syntax checker. Always run this before touching a live service — a config error will prevent the process from starting or reloading:

    sudo haproxy -c -f /etc/haproxy/haproxy.cfg

    Successful output:

    Configuration file is valid

    If the check reports an error, the message includes the section name and line number. Fix the issue and re-run the check before proceeding.

    Step 6: Enable and Start HAProxy

    sudo systemctl enable haproxy
    sudo systemctl restart haproxy
    sudo systemctl status haproxy

    Look for active (running) in the status output. If HAProxy fails to start, inspect the journal for the detailed error:

    sudo journalctl -u haproxy -n 50 --no-pager

    Step 7: Open Firewall Ports

    On Ubuntu with UFW:

    sudo ufw allow 80/tcp
    sudo ufw allow 8404/tcp comment "HAProxy stats"
    sudo ufw reload

    On Rocky Linux with firewalld:

    sudo firewall-cmd --permanent --add-port=80/tcp
    sudo firewall-cmd --permanent --add-port=8404/tcp
    sudo firewall-cmd --reload

    Verification Steps

    Confirm HAProxy Is Listening

    Run the following on sw-infrarunbook-01 and verify both ports appear:

    sudo ss -tlnp | grep haproxy

    Expected output:

    LISTEN  0  128  0.0.0.0:80           0.0.0.0:*  users:(("haproxy",pid=XXXX,fd=5))
    LISTEN  0  128  192.168.10.10:8404   0.0.0.0:*  users:(("haproxy",pid=XXXX,fd=6))

    Send a Test HTTP Request

    From a host on the 192.168.10.0/24 network:

    curl -I http://192.168.10.10/

    You should receive an HTTP 200 response. If you get a 502 Bad Gateway, the backend servers are not reachable or the health check is failing — check the stats page for details.

    Verify Round-Robin Distribution

    Send nine sequential requests and observe which backend responds. Note that curl's %{remote_ip} variable reports HAProxy's own address rather than the backend's, so identification has to come from the backends themselves, via a response body or a custom header naming the serving node. If your backends identify themselves that way, you will see the requests cycle across all three nodes:

    for i in $(seq 1 9); do
      curl -s http://192.168.10.10/
    done

    Access the Stats Page

    Open a browser and navigate to:

    http://192.168.10.10:8404/haproxy-stats

    Authenticate with infrarunbook-admin and the password configured in the stats auth directive. The dashboard shows real-time session counts, connection rates, health check status (green = UP, red = DOWN), error counters, and backend response times for every server in the pool.

    Query the Runtime API

    The HAProxy admin socket enables live inspection and control without a config reload. Install socat if not already present, then query server state:

    sudo apt-get install socat -y
    echo "show servers state" | sudo socat stdio /run/haproxy/admin.sock
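
    The same data can be pulled programmatically: the show stat command returns the stats-page data as CSV (the stats HTTP endpoint serves the same CSV if you append ;csv to its URI). Below is a sketch in Python; it assumes the socket path from the global section above and the standard CSV header columns pxname, svname, and status.

```python
# Query the HAProxy runtime API over the admin socket and summarize
# per-server health from the CSV output of "show stat".
import csv
import io
import socket

SOCK_PATH = "/run/haproxy/admin.sock"  # matches the global "stats socket" line

def run_cmd(cmd: str, sock_path: str = SOCK_PATH) -> str:
    """Send one runtime API command and return the raw text response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(cmd.encode() + b"\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

def server_states(raw_csv: str) -> dict:
    """Map 'backend/server' -> status (UP, DOWN, MAINT, ...)."""
    # "show stat" prefixes the CSV header line with "# "
    reader = csv.DictReader(io.StringIO(raw_csv.lstrip("# ")))
    return {
        f"{row['pxname']}/{row['svname']}": row["status"]
        for row in reader
        if row["svname"] not in ("FRONTEND", "BACKEND")
    }

# Usage (needs permission on the socket, e.g. run via sudo):
# print(server_states(run_cmd("show stat")))
```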

    Perform a Graceful Reload

    When updating the configuration on a live system, always use reload — never restart. A reload leverages SO_REUSEPORT to hand off listening sockets seamlessly, preserving all active connections:

    sudo systemctl reload haproxy

    Understanding Balance Algorithms

    The roundrobin algorithm distributes new connections sequentially across all active backend servers. It is the correct default for stateless HTTP workloads where all backends have equal capacity and similar response times.

    Other commonly used algorithms include:

    • leastconn — Routes each new connection to the server with the fewest active sessions. Best for long-lived connections such as databases, WebSockets, or LDAP.
    • source — Hashes the client source IP address to consistently send the same client to the same backend. Provides rudimentary session affinity without cookies but breaks if client IPs change (e.g., mobile users on CGNAT).
    • uri — Hashes the request URI path. Useful for reverse proxy caching where the same URL should always reach the same cache node.
    • random — Picks two servers at random and routes to the one with fewer connections (power-of-two-choices). Performs well with very large backend pools where leastconn becomes expensive.
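
    The selection logic behind two of these algorithms can be sketched as toy models (simplified: weights, health state, and concurrency are ignored, and this is not HAProxy's actual implementation):

```python
# Toy models of two balance algorithms: roundrobin cycles in order,
# while "random" (power-of-two-choices) samples two servers and picks
# the less loaded one.
import itertools
import random

servers = ["web-01", "web-02", "web-03"]

_rr = itertools.cycle(servers)
def pick_roundrobin() -> str:
    return next(_rr)

active = {s: 0 for s in servers}  # simulated in-flight connection counts
def pick_p2c() -> str:
    a, b = random.sample(servers, 2)
    choice = a if active[a] <= active[b] else b
    active[choice] += 1
    return choice

# Nine round-robin picks cycle evenly across the three nodes.
picks = [pick_roundrobin() for _ in range(9)]
```

    Power-of-two-choices gets most of the benefit of leastconn while only inspecting two servers per decision, which is why it scales well to large pools.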

    To switch algorithms, change the balance directive in the backend block, validate, and reload:

    balance leastconn

    Health Check Tuning

    The configuration above uses an HTTP health check targeting /healthz. This is strongly preferred over a bare TCP check because it validates that the application process is actually serving HTTP responses — not just that the TCP port is accepting connections.

    Key health check parameters on default-server:

    • inter 5s — Send a probe every 5 seconds.
    • fall 3 — Mark the server DOWN after three consecutive failed probes.
    • rise 2 — Mark the server UP again after two consecutive successful probes.
    • maxconn 200 — Queue connections at HAProxy rather than forwarding more than 200 simultaneous connections to a single backend.
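
    The fall/rise transitions behave like a small state machine; the sketch below models just those counters (simplified: HAProxy's real checker also handles probe timing via inter, slow-start, and agent checks):

```python
# Sketch of the fall/rise health-check transitions: a server flips DOWN
# after `fall` consecutive failed probes and back UP after `rise`
# consecutive successful probes.
class HealthState:
    def __init__(self, fall: int = 3, rise: int = 2):
        self.fall, self.rise = fall, rise
        self.up = True
        self.streak = 0  # consecutive probes disagreeing with current state

    def probe(self, ok: bool) -> bool:
        if ok == self.up:           # probe agrees with current state
            self.streak = 0
            return self.up
        self.streak += 1            # probe disagrees; count toward a flip
        threshold = self.fall if self.up else self.rise
        if self.streak >= threshold:
            self.up = not self.up
            self.streak = 0
        return self.up
```

    The asymmetry (fall 3 vs rise 2) is deliberate: a server should be slow to be declared dead, but a flapping server should not rejoin the pool on a single lucky probe either.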

    For backends that do not expose an HTTP health endpoint, fall back to a TCP-level check. With no option httpchk configured, the check keyword already performs a plain TCP connect probe; option tcp-check additionally allows scripted send/expect rules:

    backend web_servers
        balance roundrobin
        option  tcp-check
        default-server inter 5s fall 3 rise 2
        server web-01 192.168.10.11:80 check
        server web-02 192.168.10.12:80 check
        server web-03 192.168.10.13:80 check

    Draining a Server for Maintenance

    To remove a server from the pool without a config reload — for example, before applying OS patches — use the runtime API to set it to maintenance mode. HAProxy will stop sending new connections to it while existing sessions finish naturally:

    echo "set server web_servers/web-02 state maint" | sudo socat stdio /run/haproxy/admin.sock

    Once maintenance is complete, restore the server to the active pool:

    echo "set server web_servers/web-02 state ready" | sudo socat stdio /run/haproxy/admin.sock

    Common Mistakes

    1. Skipping Config Validation Before Reload

    Running systemctl reload haproxy without first running haproxy -c -f /etc/haproxy/haproxy.cfg risks discovering a broken config at the worst possible moment. HAProxy will refuse to load a config with a syntax error and keep the old process running, but you then find the problem at reload time, under pressure, instead of at edit time. Validate every time.

    2. Setting Timeouts Too Low for the Application

    A 30-second timeout client is suitable for short-lived API calls but will prematurely terminate large file downloads, server-sent event streams, or slow mobile connections. Match your timeout values to the longest legitimate request your application serves.
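
    Timeouts set in defaults can be overridden per section rather than raised globally. A hypothetical backend dedicated to large downloads might look like this (the backend name is illustrative; a matching timeout client override would go in the frontend that feeds it):

```
backend large_downloads
    timeout server 10m
    server web-01 192.168.10.11:80 check
```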

    3. Omitting option forwardfor

    Without this directive, every request your backend servers receive appears to come from HAProxy's IP address. This breaks GeoIP lookups, per-client rate limiting, security audit logs, and web application firewalls that rely on the real client IP. Always include option forwardfor in HTTP mode.
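
    On the backend side, X-Forwarded-For is a comma-separated list when a request has crossed several proxies, and the entry HAProxy appended is the right-most one. A small sketch of extracting it (the function name is mine; the trust caveat is the important part):

```python
# Extract the client address HAProxy appended to X-Forwarded-For.
# Each proxy in the chain appends one entry, so the right-most value
# was added by the proxy nearest the backend. Trust only entries
# appended by proxies you control; clients can forge earlier ones.
def client_ip(xff_header: str) -> str:
    return xff_header.split(",")[-1].strip()
```

    For example, client_ip("198.51.100.4, 203.0.113.7") returns "203.0.113.7", the address HAProxy itself observed.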

    4. Binding the Stats Page to All Interfaces

    Using bind *:8404 on the stats listener exposes your credentials, server topology, and connection metrics to anyone on any network interface. Always bind the stats page to an internal management IP and enforce a strong password.

    5. Using restart Instead of reload on a Live System

    A systemctl restart haproxy on a live load balancer drops all active connections during the brief process restart window. Use systemctl reload haproxy for all configuration changes in production. HAProxy's reload mechanism transfers listening sockets to the new process without interrupting established sessions.

    6. Not Setting maxconn on Backend Servers

    Without a per-server connection cap, a traffic spike can forward thousands of simultaneous connections to a backend that can only handle a few hundred. When per-server maxconn is set, HAProxy queues the excess connections internally (up to the global maxconn limit), protecting backends from being overwhelmed.

    7. Forgetting option http-server-close

    Without this option, HAProxy falls back to its default connection handling, and backend keep-alive reuse can cause subtle differences in how individual requests are processed and routed. http-server-close closes the server-side connection after each request while keeping the client-side keep-alive connection open, so each request is fully inspected and routed independently. That is a safe default for the vast majority of HTTP/1.1 deployments.


    Frequently Asked Questions

    Q: What is the difference between a HAProxy frontend and a backend?

    A: A frontend defines how HAProxy listens for incoming client connections — it specifies the bind address, port, protocol mode, and any ACL-based routing rules that determine which backend receives the traffic. A backend defines the pool of upstream servers that ultimately handle the request, along with the load balancing algorithm and health check settings. The frontend is the ingress point; the backend is the server farm. One frontend can route to multiple backends based on rules such as URL path or Host header.

    Q: How do I add a new backend server without causing downtime?

    A: Add the new server line to the appropriate backend block in /etc/haproxy/haproxy.cfg, run sudo haproxy -c -f /etc/haproxy/haproxy.cfg to validate, then run sudo systemctl reload haproxy. HAProxy performs a graceful reload — existing connections continue on the old process while the new process takes over the listening socket. The new server begins receiving traffic as soon as its initial health checks pass.

    Q: What does the check keyword do on a server line?

    A: The check keyword enables active health monitoring for that server. HAProxy periodically sends a probe — either a TCP connection or an HTTP request depending on the backend's health check configuration — to verify the server is alive and responding correctly. Servers that fail fall consecutive checks are removed from the rotation automatically. They are restored once they pass rise consecutive checks. Removing check from a server line disables health monitoring for that specific node, meaning it will always receive traffic regardless of its actual state.

    Q: What is option forwardfor and why is it required?

    A: option forwardfor instructs HAProxy to insert an X-Forwarded-For HTTP header containing the originating client IP address into every proxied request. Without it, backend servers see all traffic arriving from HAProxy's own IP, making it impossible to identify real clients for access logging, rate limiting, security controls, or geolocation. It is a required directive for any HTTP-mode deployment where backend servers need visibility into the true client source address.

    Q: Can HAProxy load balance non-HTTP TCP traffic?

    A: Yes. Set mode tcp in the defaults section or within a specific frontend and backend pair. In TCP mode HAProxy operates at Layer 4, forwarding raw byte streams without HTTP parsing. This is the correct approach for MySQL, PostgreSQL, Redis, SMTP, or any other TCP protocol. Use option tcp-check for health checks in this mode. Note that option forwardfor and HTTP-specific directives are not available in TCP mode.

    Q: What happens if all backend servers fail their health checks simultaneously?

    A: HAProxy returns an HTTP 503 Service Unavailable response to all incoming clients until at least one backend server recovers. You can mitigate this by configuring a backup server with the backup keyword — a backup server only receives traffic when all primary servers are marked DOWN:

    server web-backup 192.168.10.20:80 check backup

    The backup server is completely idle under normal conditions and takes over only during a full backend outage, making it suitable for a static maintenance page host.

    Q: What is the purpose of the stats socket directive?

    A: The stats socket directive exposes HAProxy's Runtime API through a Unix domain socket file. Using tools like socat or the dedicated hatop utility, operators can query real-time statistics, change individual server states (up, down, maintenance, drain), adjust server weights, and clear counters — all without a process restart. It is an essential operational tool for zero-downtime maintenance in any production deployment.

    Q: How does HAProxy handle session persistence for stateful applications?

    A: HAProxy supports cookie-based session persistence. When configured, HAProxy inserts a cookie into the HTTP response that identifies which backend server handled the first request. Subsequent requests from the same browser carrying that cookie are routed to the same server. Configure it by adding the following to your backend block:

    cookie SERVERID insert indirect nocache
    server web-01 192.168.10.11:80 check cookie web-01
    server web-02 192.168.10.12:80 check cookie web-02
    server web-03 192.168.10.13:80 check cookie web-03

    The indirect flag removes the cookie from requests forwarded to the backend so the application never sees it. nocache prevents caches from storing responses that contain the cookie.

    Q: How do I configure remote syslog for HAProxy logs?

    A: Replace the log /dev/log local0 directive in the global block with the IP address and UDP port of your remote syslog server:

    log 192.168.10.5:514 local0 notice

    HAProxy sends log events over UDP by default. Ensure your syslog server (rsyslog, syslog-ng, or similar) is configured to accept remote UDP input on port 514 and to route the local0 facility to the appropriate log file or forwarding destination. You can define more than one log target in the global block for redundancy.

    Q: What is maxconn and where should it be configured?

    A: The maxconn directive limits concurrent connections at different scopes. In the global block it sets the process-wide maximum — HAProxy will queue or reject connections beyond this limit. In a frontend block it caps connections accepted on that specific listener. In a backend block's default-server or individual server lines it caps the number of simultaneous connections forwarded to each upstream node. Connections exceeding the per-server cap are queued at HAProxy, smoothing out traffic spikes rather than flooding a single backend.
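
    The per-server queueing behavior can be sketched as a toy model (this is not HAProxy's scheduler; it only illustrates how a cap plus a queue smooths a spike):

```python
# Toy model of per-server maxconn: connections beyond the cap wait in
# a queue at the proxy instead of reaching the backend.
from collections import deque

class ServerSlot:
    def __init__(self, maxconn: int):
        self.maxconn = maxconn
        self.active = 0
        self.queue = deque()

    def connect(self, conn_id: int) -> str:
        if self.active < self.maxconn:
            self.active += 1
            return "forwarded"
        self.queue.append(conn_id)
        return "queued"

    def finish(self):
        # A finished connection frees a slot; a queued one then takes it.
        self.active -= 1
        if self.queue:
            self.queue.popleft()
            self.active += 1

slot = ServerSlot(maxconn=2)
results = [slot.connect(i) for i in range(3)]  # third connection queues
```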

    Q: How can I test configuration changes safely before applying them to production?

    A: Use the built-in syntax checker first:

    sudo haproxy -c -f /etc/haproxy/haproxy.cfg

    For more thorough testing, run a shadow HAProxy instance on an alternate port pointing to a staging backend pool. Use the -f flag to point at a test config file and -p to write its PID to a separate file so it does not conflict with the production process. This lets you validate ACL logic, routing rules, and health check behavior without affecting live traffic.

    Q: Why does HAProxy show a 502 Bad Gateway even when backend servers are running?

    A: A 502 typically means HAProxy reached the backend server but received an invalid or no HTTP response. Common causes include: the backend is listening on a different port than configured; a firewall between HAProxy and the backend is blocking the connection; the backend process is running but not yet ready to serve requests; or the health check path returns a non-2xx status code causing the server to be marked DOWN before you send a test request. Check the stats page at http://192.168.10.10:8404/haproxy-stats to see the current health check status and last error for each server, and run echo "show servers state" | sudo socat stdio /run/haproxy/admin.sock for detailed runtime state.
