InfraRunBook

    HAProxy Load Balancing Algorithms: Round-Robin, Leastconn, Source Hashing, URI Hashing, and Random

    HAProxy
    Published: Feb 16, 2026
    Updated: Feb 16, 2026

    Master every HAProxy load-balancing algorithm with production-ready configurations, verification commands, and a decision matrix. Covers roundrobin, static-rr, leastconn, source hashing, URI hashing, url_param, hdr(), rdp-cookie, and random — with complete examples for each.

    Introduction

    Choosing the right load-balancing algorithm is one of the highest-impact decisions you make when deploying HAProxy. A mismatch between your traffic pattern and the balancing strategy can cause hot servers, broken sessions, cache misses, and wasted capacity. This guide walks through every algorithm HAProxy supports, explains when to use each, provides copy-paste configuration blocks, and includes a decision matrix to speed up architecture reviews.

    All examples use HAProxy 2.8+ syntax (LTS branch) and are tested on Ubuntu 22.04 / Rocky Linux 9. Hostnames, VLANs, and domains follow InfraRunBook conventions.


    Prerequisites

    • HAProxy 2.6 or later installed (haproxy -v to verify)
    • At least two backend servers reachable from the HAProxy node
    • Basic familiarity with HAProxy frontend/backend structure
    • Root or sudo access on the HAProxy host

    Global and Defaults Skeleton

    Before diving into algorithms, here is the shared skeleton every example builds on:

    global
        log /dev/log local0
        log /dev/log local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        maxconn 50000
        tune.ssl.default-dh-param 2048
    
    defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option  forwardfor
        timeout connect 5s
        timeout client  30s
        timeout server  30s
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 503 /etc/haproxy/errors/503.http
    
    frontend ft_infrarunbook
        bind *:80
        bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        default_backend bk_infrarunbook_app

    Each section below replaces or extends the backend block only.


    1 — Round-Robin (roundrobin)

    How It Works

    Requests are distributed sequentially across all healthy servers. Each server's weight is respected dynamically — you can change weights at runtime through the stats socket without a reload. This is the default algorithm when no balance directive is set.

    Best For

    • Stateless APIs and micro-services
    • Content servers behind a CDN
    • Environments where all backends have similar capacity

    Configuration

    backend bk_infrarunbook_app
        balance roundrobin
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server web01-infrarunbook 10.20.30.11:8080 check weight 100 inter 3s fall 3 rise 2
        server web02-infrarunbook 10.20.30.12:8080 check weight 100 inter 3s fall 3 rise 2
        server web03-infrarunbook 10.20.30.13:8080 check weight 50  inter 3s fall 3 rise 2

    Here web03-infrarunbook receives roughly half the requests of the other two because its weight is 50 versus 100.
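
    The weighted rotation can be sketched with a smooth weighted round-robin loop in Python. This is a simplified model (HAProxy's internal scheduler differs in implementation), and the short server names stand in for the hosts above:

```python
# Smooth weighted round-robin sketch: servers with weights 100, 100, 50
# receive requests in a 2:2:1 rotation.

def smooth_wrr(servers, n):
    """servers: list of (name, weight) tuples; returns the next n picks."""
    current = {name: 0 for name, _ in servers}
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        for name, w in servers:
            current[name] += w          # every server earns its weight
        best = max(servers, key=lambda s: current[s[0]])[0]
        current[best] -= total          # the winner pays the total back
        picks.append(best)
    return picks

pool = [("web01", 100), ("web02", 100), ("web03", 50)]
picks = smooth_wrr(pool, 250)
counts = {name: picks.count(name) for name, _ in pool}
print(counts)  # {'web01': 100, 'web02': 100, 'web03': 50}
```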

    Runtime Weight Change

    echo "set weight bk_infrarunbook_app/web03-infrarunbook 100" | socat stdio /run/haproxy/admin.sock

    This takes effect immediately — no reload needed.

    Key Limit

    roundrobin supports a maximum of 4095 active servers per backend. If you exceed that, use static-rr.


    2 — Static Round-Robin (static-rr)

    How It Works

    Identical to roundrobin but weights are fixed at configuration load time. Runtime weight changes via the socket are not supported. In exchange, there is no upper limit on the number of servers.

    Best For

    • Very large server pools (thousands of backends)
    • Environments where weights never change between reloads

    Configuration

    backend bk_infrarunbook_static
        balance static-rr
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server node01-infrarunbook 10.20.30.11:8080 check weight 1
        server node02-infrarunbook 10.20.30.12:8080 check weight 1
        server node03-infrarunbook 10.20.30.13:8080 check weight 1
        server node04-infrarunbook 10.20.30.14:8080 check weight 1

    3 — Least Connections (leastconn)

    How It Works

    The server with the fewest active connections receives the next request. Weights act as multipliers — a server with weight 200 can hold twice as many connections before being considered "busier" than a server with weight 100. If two servers tie, round-robin breaks the tie.
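
    The selection rule can be sketched in a few lines of Python. This is a simplified model, not HAProxy's internal queue accounting; the server names and connection counts are hypothetical:

```python
# Weighted least-connections sketch: a server's "load" is its active
# connection count divided by its weight, so weight 200 tolerates twice
# the connections of weight 100 before looking equally busy.

def leastconn_pick(servers):
    """servers: dict of name -> {"conns": int, "weight": int}."""
    return min(servers, key=lambda n: servers[n]["conns"] / servers[n]["weight"])

pool = {
    "api01": {"conns": 40, "weight": 100},  # load 0.40
    "api02": {"conns": 35, "weight": 100},  # load 0.35
    "api03": {"conns": 60, "weight": 200},  # load 0.30, the least loaded
}
print(leastconn_pick(pool))  # api03 wins despite holding the most connections
```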

    Best For

    • Long-lived connections (WebSockets, gRPC streams, database proxying)
    • Backends with mixed response times (some requests are fast, others slow)
    • Session-heavy applications where request duration varies

    Configuration

    backend bk_infrarunbook_api
        balance leastconn
        option httpchk GET /ready HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server api01-infrarunbook 10.20.30.21:9090 check weight 100 maxconn 500 inter 2s fall 3 rise 2
        server api02-infrarunbook 10.20.30.22:9090 check weight 100 maxconn 500 inter 2s fall 3 rise 2
        server api03-infrarunbook 10.20.30.23:9090 check weight 200 maxconn 1000 inter 2s fall 3 rise 2

    api03-infrarunbook is a beefier machine (weight 200, maxconn 1000), so HAProxy allows it twice the concurrent connections before considering it equally loaded.

    Combining with Slow-Start

    When a server recovers from a failure, you usually do not want it flooded immediately. Use slowstart:

        server api01-infrarunbook 10.20.30.21:9090 check weight 100 maxconn 500 slowstart 30s

    Over 30 seconds the effective weight ramps from 0 to 100, preventing thundering-herd spikes on a freshly recovered node.
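
    The ramp can be modelled as a simple linear function of time since recovery (a simplified sketch; HAProxy's exact step granularity is an implementation detail):

```python
# Linear slowstart ramp: effective weight climbs from (near) zero back to
# the configured weight over the slowstart window.

def effective_weight(seconds_since_up, slowstart, configured):
    if seconds_since_up >= slowstart:
        return configured
    return max(1, int(configured * seconds_since_up / slowstart))

for t in (0, 10, 20, 30):
    print(f"t={t:>2}s  effective weight = {effective_weight(t, 30, 100)}")
```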


    4 — Source IP Hashing (source)

    How It Works

    A hash of the client's source IP selects the backend. As long as the server pool stays stable, the same client always hits the same server. This provides basic session persistence without cookies.

    Hash Types

    • hash-type map-based — deterministic mapping. Fast, but adding/removing a server remaps many clients.
    • hash-type consistent — consistent hashing (ketama). Adding/removing a server remaps only ~1/N of clients.
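
    The difference is easy to demonstrate with a small simulation. This is illustrative only: md5, the simplified ring, and the client IP range are arbitrary choices, not HAProxy's internals.

```python
# Remapping comparison when growing a pool from 3 to 4 servers.
import hashlib
from bisect import bisect

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def map_based(client, servers):
    return servers[h(client) % len(servers)]          # static lookup table

def build_ring(servers, vnodes=100):
    # each server occupies many points on a virtual ring
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def consistent(client, ring):
    keys = [k for k, _ in ring]
    return ring[bisect(keys, h(client)) % len(ring)][1]

clients = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]
old, new = ["s1", "s2", "s3"], ["s1", "s2", "s3", "s4"]
ring_old, ring_new = build_ring(old), build_ring(new)

moved_map = sum(map_based(c, old) != map_based(c, new) for c in clients)
moved_ring = sum(consistent(c, ring_old) != consistent(c, ring_new) for c in clients)
print(f"map-based remapped {moved_map / 10:.0f}% of clients")    # roughly 75%
print(f"consistent remapped {moved_ring / 10:.0f}% of clients")  # roughly 25%
```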

    Best For

    • TCP-mode (Layer 4) load balancing where cookies are unavailable
    • Legacy apps that store sessions on disk
    • Quick-and-dirty stickiness without application changes

    Configuration

    backend bk_infrarunbook_legacy
        balance source
        hash-type consistent
        option httpchk GET /status HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server legacy01-infrarunbook 10.20.30.31:8080 check weight 100
        server legacy02-infrarunbook 10.20.30.32:8080 check weight 100
        server legacy03-infrarunbook 10.20.30.33:8080 check weight 100

    Caveat — NAT and Proxies

    If all clients appear from the same NAT IP, every request goes to the same backend. In that scenario, prefer cookie-based persistence or balance hdr(X-Forwarded-For).


    5 — URI Hashing (uri)

    How It Works

    HAProxy hashes the request URI (left part by default, before the query string) and maps the hash to a server. Identical URIs always reach the same backend, which is perfect for maximising cache hit rates on upstream caches.

    Parameters

    • len <n> — hash only the first n characters of the URI
    • depth <n> — hash only the first n directory levels
    • whole — include the query string in the hash

    Best For

    • Reverse-proxy caching tiers (Varnish, Nginx cache, Squid)
    • Static asset delivery (images, CSS, JS)
    • API-GW patterns where each path maps to a different micro-service cache

    Configuration

    backend bk_infrarunbook_cache
        balance uri
        hash-type consistent
        option httpchk GET /cache-check HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server cache01-infrarunbook 10.20.30.41:6081 check weight 100
        server cache02-infrarunbook 10.20.30.42:6081 check weight 100
        server cache03-infrarunbook 10.20.30.43:6081 check weight 100

    URI Depth Example

    backend bk_infrarunbook_assets
        balance uri depth 2
        hash-type consistent
    
        server assets01-infrarunbook 10.20.30.51:8080 check
        server assets02-infrarunbook 10.20.30.52:8080 check

    With depth 2, /images/products/widget.png and /images/products/gadget.png hash identically (both resolve to /images/products).
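
    A quick Python sketch shows why depth truncation groups sibling files together (illustrative only; HAProxy's own URI parsing is more involved):

```python
# "depth 2" keeps only the first two directory levels, so all files under
# /images/products/ produce the same hash key.

def uri_hash_key(uri, depth=None):
    path = uri.split("?", 1)[0]              # default: drop the query string
    if depth is not None:
        parts = path.strip("/").split("/")
        path = "/" + "/".join(parts[:depth])
    return path

a = uri_hash_key("/images/products/widget.png", depth=2)
b = uri_hash_key("/images/products/gadget.png", depth=2)
print(a, b)  # /images/products /images/products
```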


    6 — URL Parameter Hashing (url_param)

    How It Works

    HAProxy extracts a named query-string (or POST body) parameter and hashes its value. This is useful when a session ID or user ID is passed as a URL parameter.

    Best For

    • Legacy applications that embed JSESSIONID or sid in the URL
    • API calls where a user_id parameter must always reach the same backend

    Configuration

    backend bk_infrarunbook_session
        balance url_param userid
        hash-type consistent
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server app01-infrarunbook 10.20.30.61:8080 check
        server app02-infrarunbook 10.20.30.62:8080 check
        server app03-infrarunbook 10.20.30.63:8080 check

    A request to https://solvethenetwork.com/api/data?userid=42&page=3 hashes the value 42 and always lands on the same server.
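
    Conceptually, the extraction and hashing look like the sketch below. md5 and the modulo mapping are stand-ins for HAProxy's configurable hash function and server map:

```python
# Extract the named query parameter and hash its value; requests that share
# userid=42 always map to the same server, whatever else is in the URL.
import hashlib
from urllib.parse import urlparse, parse_qs

def url_param_pick(url, param, servers):
    values = parse_qs(urlparse(url).query).get(param)
    if not values:
        return None           # HAProxy falls back to round-robin in this case
    digest = int(hashlib.md5(values[0].encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers = ["app01", "app02", "app03"]
a = url_param_pick("https://solvethenetwork.com/api/data?userid=42&page=3", "userid", servers)
b = url_param_pick("https://solvethenetwork.com/api/other?userid=42", "userid", servers)
print(a, b)  # the same server twice, because only userid=42 feeds the hash
```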

    POST Body Variant

    If the parameter is in the POST body, add check_post with a byte limit:

        balance url_param userid check_post 128

    HAProxy reads up to 128 bytes of the POST body to find userid.


    7 — Header-Based Hashing (hdr())

    How It Works

    HAProxy hashes the value of a specified HTTP header. If the header is missing, it falls back to round-robin.

    Best For

    • Multi-tenant platforms where X-Tenant-ID determines routing
    • Hashing on User-Agent for A/B testing
    • Sticky routing based on Authorization token prefix

    Configuration

    backend bk_infrarunbook_multitenant
        balance hdr(X-Tenant-ID)
        hash-type consistent
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server tenant01-infrarunbook 10.20.30.71:8080 check
        server tenant02-infrarunbook 10.20.30.72:8080 check
        server tenant03-infrarunbook 10.20.30.73:8080 check
        server tenant04-infrarunbook 10.20.30.74:8080 check

    Every request carrying X-Tenant-ID: infrarunbook-prod consistently lands on the same server.

    Header Name Case

    Header names are case-insensitive in HTTP/1.1, and HAProxy matches them case-insensitively. If your application sends mixed-case custom headers, writing the lookup in lower case keeps the configuration consistent:

        balance hdr(x-tenant-id)
        hash-type consistent

    8 — RDP Cookie Hashing (rdp-cookie)

    How It Works

    Extracts the RDP cookie (mstshash by default) from the initial RDP negotiation and hashes it. This is the go-to algorithm for Microsoft Remote Desktop Gateway / RDS farm load balancing.

    Best For

    • Microsoft RDS Session Host farms
    • Citrix XenApp / XenDesktop brokers

    Configuration (TCP Mode)

    frontend ft_infrarunbook_rdp
        bind *:3389
        mode tcp
        option tcplog
        default_backend bk_infrarunbook_rdsfarm
    
    backend bk_infrarunbook_rdsfarm
        mode tcp
        balance rdp-cookie
        persist rdp-cookie
        hash-type consistent
        timeout server 8h
        timeout connect 10s
    
        option tcp-check
        tcp-check connect port 3389
    
        server rdsh01-infrarunbook 10.20.30.81:3389 check inter 10s fall 3 rise 2
        server rdsh02-infrarunbook 10.20.30.82:3389 check inter 10s fall 3 rise 2

    The persist rdp-cookie directive ensures reconnections from the same user return to the same host.


    9 — Random (random / random(n))

    How It Works

    Introduced in HAProxy 1.9, random selects a server using a random draw weighted by server weights. The optional parameter random(n) uses the power-of-n-choices technique: HAProxy picks n servers at random, then sends the request to the one with the fewest connections.

    Why Power-of-Two-Choices Matters

    Research shows that random(2) achieves near-optimal load distribution — close to leastconn — while being simpler to reason about in consistent-hashing edge cases. HAProxy's documentation recommends random(2) as a good general-purpose algorithm.
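
    A quick simulation makes the effect concrete. It is an illustrative model of request placement that ignores connection completion, which HAProxy of course accounts for:

```python
# Pure random vs power-of-two-choices placement of 10,000 requests on 10
# servers. random(2) probes two servers and takes the less loaded one.
import random

random.seed(1)
N_SERVERS, N_REQUESTS = 10, 10_000

pure = [0] * N_SERVERS
p2c = [0] * N_SERVERS
for _ in range(N_REQUESTS):
    pure[random.randrange(N_SERVERS)] += 1
    a, b = random.randrange(N_SERVERS), random.randrange(N_SERVERS)
    p2c[a if p2c[a] <= p2c[b] else b] += 1

print("pure random max-min spread:", max(pure) - min(pure))
print("random(2)   max-min spread:", max(p2c) - min(p2c))
```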

    Best For

    • Large, homogeneous clusters where statistical uniformity suffices
    • DNS-discovered backends that change frequently
    • Environments wanting near-leastconn behaviour with lower lock contention

    Configuration

    backend bk_infrarunbook_microservices
        balance random(2)
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server svc01-infrarunbook 10.20.30.91:8080 check weight 100
        server svc02-infrarunbook 10.20.30.92:8080 check weight 100
        server svc03-infrarunbook 10.20.30.93:8080 check weight 100
        server svc04-infrarunbook 10.20.30.94:8080 check weight 100
        server svc05-infrarunbook 10.20.30.95:8080 check weight 100

    Algorithm Decision Matrix

    Use this table during design reviews:

    +----------------+----------------+----------------+------------+-------------------+
    | Algorithm      | Session Sticky | Cache Friendly | Long Lived | Runtime Weight Δ  |
    +----------------+----------------+----------------+------------+-------------------+
    | roundrobin     | No             | No             | No         | Yes               |
    | static-rr      | No             | No             | No         | No                |
    | leastconn      | No             | No             | Yes        | Yes               |
    | source         | Yes (IP)       | No             | Either     | No*               |
    | uri            | No             | Yes            | No         | No*               |
    | url_param      | Yes (param)    | No             | No         | No*               |
    | hdr()          | Yes (header)   | No             | No         | No*               |
    | rdp-cookie     | Yes (RDP)      | N/A            | Yes        | No*               |
    | random         | No             | No             | No         | Yes               |
    | random(2)      | No             | No             | Yes        | Yes               |
    +----------------+----------------+----------------+------------+-------------------+
    * hash-based algorithms ignore runtime weight changes with the default map-based hash-type; hash-type consistent honours them

    Combining Algorithms with Cookie Persistence

    Session stickiness via a load-balancing algorithm alone is fragile — a server removal remaps clients. For production session affinity, layer HAProxy cookie insertion on top of your chosen algorithm:

    backend bk_infrarunbook_webapp
        balance leastconn
        cookie IRBS insert indirect nocache httponly secure
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server web01-infrarunbook 10.20.30.11:8080 check cookie w01
        server web02-infrarunbook 10.20.30.12:8080 check cookie w02
        server web03-infrarunbook 10.20.30.13:8080 check cookie w03

    New visitors are balanced by leastconn. On the first response HAProxy inserts Set-Cookie: IRBS=w01; HttpOnly; Secure. Subsequent requests are pinned to that server regardless of algorithm.


    Verifying Algorithm Behaviour

    Enable the Stats Page

    frontend ft_infrarunbook_stats
        bind 10.20.30.1:9000
        mode http
        stats enable
        stats uri /haproxy-stats
        stats realm "InfraRunBook HAProxy Stats"
        stats auth infrarunbook-admin:Str0ngP@ss!2026
        stats refresh 5s
        stats show-legends

    Open http://10.20.30.1:9000/haproxy-stats and watch the Cur (current sessions), Tot (total sessions), and LastChk columns to confirm distribution.

    CLI Verification

    # Show per-server session counts
    echo "show stat" | socat stdio /run/haproxy/admin.sock | \
      awk -F',' '/bk_infrarunbook/ {printf "%-30s cur=%s tot=%s\n", $2, $5, $8}'

    Quick Load Test

    # Install hey (HTTP load generator)
    go install github.com/rakyll/hey@latest
    
    # 10,000 requests, 50 concurrency
    hey -n 10000 -c 50 -host solvethenetwork.com http://10.20.30.1:80/api/test

    After the run, check stats for even (or intentionally weighted) distribution.


    Hash-Type Deep Dive: map-based vs consistent

    map-based (default)

    Builds a static lookup table. Very fast (O(1) lookup). Downside: adding or removing a single server can remap the majority of existing connections. Use for pools that rarely change.

    consistent (Ketama)

    Each server occupies multiple points on a virtual ring. Adding/removing a server remaps only ~1/N of entries. Downside: slightly higher CPU per lookup. Use when backends scale dynamically (auto-scaling groups, Kubernetes pods).

    backend bk_infrarunbook_dynamic
        balance uri
        hash-type consistent sdbm
        # sdbm is the hash function — alternatives: djb2, wt6, crc32
    
        server-template srv-infrarunbook 1-20 app.solvethenetwork.com:8080 check resolvers infrarunbook-dns resolve-prefer ipv4

    The server-template directive combined with consistent hashing is ideal for service-discovery-driven environments.


    Performance Tuning Tips per Algorithm

    1. roundrobin / static-rr: Set maxconn per server to prevent queue build-up on a slow backend.
    2. leastconn: Always set slowstart 30s or more to prevent reconnection storms after a backend recovers.
    3. source: Use hash-type consistent in cloud environments where backends auto-scale.
    4. uri / url_param / hdr(): Tune hash-type consistent and consider hash-balance-factor 150 (HAProxy 2.2+) to cap the maximum imbalance at 150%.
    5. random(2): No special tuning needed — it is inherently resistant to hot spots.

    backend bk_infrarunbook_balanced_uri
        balance uri
        hash-type consistent
        hash-balance-factor 150
        # No server can receive more than 150% of the average load
    
        server cdn01-infrarunbook 10.20.30.41:8080 check
        server cdn02-infrarunbook 10.20.30.42:8080 check
        server cdn03-infrarunbook 10.20.30.43:8080 check
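
    The idea behind the cap can be sketched as bounded-load hashing. This is a simplified model with a trivial three-slot ring and an invented request mix; HAProxy works on the full consistent hash ring:

```python
# Bounded-load sketch: if the hashed server already sits at FACTOR times the
# average load, the request walks to the next server instead.
import hashlib

SERVERS = ["cdn01", "cdn02", "cdn03"]
FACTOR = 1.5  # hash-balance-factor 150

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def bounded_pick(uri, loads, assigned_so_far):
    cap = max(1.0, FACTOR * (assigned_so_far + 1) / len(SERVERS))
    start = h(uri) % len(SERVERS)
    for step in range(len(SERVERS)):       # walk the simplified ring
        s = SERVERS[(start + step) % len(SERVERS)]
        if loads[s] + 1 <= cap:
            return s
    return SERVERS[start]                  # all at capacity: keep the hash

loads = {s: 0 for s in SERVERS}
# one very hot URL plus some background traffic
reqs = ["/hot.png"] * 600 + [f"/asset-{i}.css" for i in range(400)]
for n, uri in enumerate(reqs):
    loads[bounded_pick(uri, loads, n)] += 1

print(loads)  # no server ends up far above 150% of the 1000/3 average
```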

    Complete Production Configuration Example

    Below is a full /etc/haproxy/haproxy.cfg demonstrating multiple backends, each with a different algorithm, fronted by ACL-based routing:

    global
        log /dev/log local0
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        user haproxy
        group haproxy
        daemon
        maxconn 100000
    
    resolvers infrarunbook-dns
        nameserver dns1 10.20.30.2:53
        nameserver dns2 10.20.30.3:53
        resolve_retries 3
        timeout resolve 1s
        timeout retry   1s
        hold valid      10s
    
    defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option  forwardfor
        timeout connect 5s
        timeout client  30s
        timeout server  30s
        default-server inter 3s fall 3 rise 2
    
    frontend ft_infrarunbook_main
        bind *:80
        bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem alpn h2,http/1.1
        http-request redirect scheme https unless { ssl_fc }
    
        # ACL routing
        acl is_api     path_beg /api/
        acl is_assets  path_beg /static/ /images/ /css/ /js/
        acl is_ws      hdr(Upgrade) -i websocket
    
        use_backend bk_infrarunbook_api    if is_api
        use_backend bk_infrarunbook_assets if is_assets
        use_backend bk_infrarunbook_ws     if is_ws
        default_backend bk_infrarunbook_web
    
    # --- BACKEND: Web (roundrobin + cookie) ---
    backend bk_infrarunbook_web
        balance roundrobin
        cookie IRBSRV insert indirect nocache httponly secure
        option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server web01-infrarunbook 10.20.30.11:8080 check cookie w01 weight 100
        server web02-infrarunbook 10.20.30.12:8080 check cookie w02 weight 100
    
    # --- BACKEND: API (leastconn) ---
    backend bk_infrarunbook_api
        balance leastconn
        option httpchk GET /api/health HTTP/1.1\r\nHost:\ solvethenetwork.com
        http-check expect status 200
    
        server api01-infrarunbook 10.20.30.21:9090 check maxconn 500 slowstart 30s
        server api02-infrarunbook 10.20.30.22:9090 check maxconn 500 slowstart 30s
        server api03-infrarunbook 10.20.30.23:9090 check maxconn 1000 weight 200 slowstart 30s
    
    # --- BACKEND: Assets (URI hash) ---
    backend bk_infrarunbook_assets
        balance uri depth 2
        hash-type consistent
        hash-balance-factor 150
    
        server assets01-infrarunbook 10.20.30.51:8080 check
        server assets02-infrarunbook 10.20.30.52:8080 check
    
    # --- BACKEND: WebSocket (leastconn, long timeout) ---
    backend bk_infrarunbook_ws
        balance leastconn
        timeout tunnel 1h
        timeout server 1h
    
        server ws01-infrarunbook 10.20.30.61:8080 check maxconn 2000
        server ws02-infrarunbook 10.20.30.62:8080 check maxconn 2000
    
    # --- Stats ---
    listen stats_infrarunbook
        bind 10.20.30.1:9000
        mode http
        stats enable
        stats uri /haproxy-stats
        stats auth infrarunbook-admin:Str0ngP@ss!2026
        stats refresh 5s

    Validate and Reload

    # Syntax check
    haproxy -c -f /etc/haproxy/haproxy.cfg
    
    # Reload without dropping connections
    systemctl reload haproxy
    
    # Verify
    systemctl status haproxy
    ss -tlnp | grep haproxy

    Troubleshooting Common Issues

    Uneven distribution with roundrobin

    Check if one server is marked DOWN — HAProxy skips it. Run echo "show servers state" | socat stdio /run/haproxy/admin.sock and look for state 2 (UP) vs 0 (DOWN).

    Source-hash sends all traffic to one server

    Your clients are likely behind a single NAT. Switch to balance hdr(X-Forwarded-For) or use cookie-based persistence.

    URI-hash hot spot on popular URLs

    Enable hash-balance-factor 150 to cap the maximum overload any single server can receive. Values between 125 and 200 work well.

    random(2) not available

    You are running HAProxy < 1.9. Upgrade or fall back to leastconn.


    Frequently Asked Questions

    Q1: What is the default load-balancing algorithm in HAProxy?

    The default is roundrobin. If you omit the balance directive entirely, HAProxy distributes requests in weighted round-robin order across all healthy servers in the backend.

    Q2: Can I change the algorithm at runtime without restarting HAProxy?

    No. The balance directive is part of the configuration and requires a reload (systemctl reload haproxy). However, you can change individual server weights at runtime with the stats socket when using roundrobin, leastconn, or random.

    Q3: What is the difference between roundrobin and static-rr?

    roundrobin allows dynamic weight changes via the runtime API and supports up to 4095 servers. static-rr does not support runtime weight changes but has no server count limit. Use static-rr only when you exceed 4095 backends per pool.

    Q4: When should I use leastconn instead of roundrobin?

    Use leastconn when requests have highly variable processing times — for example, WebSocket connections, file uploads, or database queries. leastconn prevents slow requests from piling up on one server while faster servers sit idle.

    Q5: How does hash-balance-factor prevent hot spots?

    Available since HAProxy 2.2, hash-balance-factor sets a percentage ceiling. A value of 150 means no server can receive more than 150% of the average load. If a hash would overload a server, HAProxy reassigns that request to the next server on the consistent hash ring.

    Q6: Is source hashing reliable for session persistence?

    It works when each client has a unique, stable IP address. It breaks down behind NATs, corporate proxies, or mobile networks where IPs change. For production session persistence, use cookie-based stickiness instead.

    Q7: What does random(2) mean exactly?

    random(2) implements the "power of two random choices" algorithm. HAProxy randomly picks two servers, then sends the request to whichever has fewer current connections. Research shows this achieves exponentially better load distribution than pure random selection.

    Q8: Can I use different algorithms for different backends in the same configuration?

    Yes. Each backend section has its own independent balance directive. You can route API traffic to a leastconn backend and static assets to a uri-hashed backend from the same frontend using ACLs and use_backend rules.

    Q9: How do I verify which server a specific request was sent to?

    Enable the httplog format and look at the server name in the log. Alternatively, add a response header: http-response set-header X-Served-By %s in the backend block. The %s variable expands to the server name.

    Q10: Does the algorithm choice affect HAProxy's CPU usage?

    Minimally. Hash-based algorithms with consistent type use slightly more CPU per request than roundrobin, but the difference is negligible even at hundreds of thousands of requests per second. The real performance factors are maxconn, TLS overhead, and logging — not the balancing algorithm.

    Q11: What happens when a server goes down in a consistent-hash setup?

    Only the clients that were mapped to the failed server get remapped — roughly 1/N of total traffic, where N is the number of servers. All other client-to-server mappings remain unchanged, preserving cache locality and session affinity.

    Q12: Can I combine uri hashing with cookie-based persistence?

    Yes. HAProxy evaluates cookie persistence first. If a valid cookie is present, the request goes to the pinned server regardless of algorithm. If no cookie is present, the uri hash selects the server and HAProxy inserts the cookie for subsequent requests.
