InfraRunBook

    Benefits of Using HAProxy for High Availability Systems

    HAProxy
    Published: Mar 31, 2026
    Updated: Mar 31, 2026

    HAProxy delivers enterprise-grade load balancing, SSL termination, health checks, and session persistence that form the backbone of resilient high availability architectures. Learn how to configure and tune HAProxy for production environments.


    Introduction to HAProxy in High Availability Environments

    High availability (HA) is a non-negotiable requirement for modern production infrastructure. Whether you operate a SaaS platform, a financial services application, or a public-facing web service, the ability to survive hardware failures, absorb traffic spikes, and route requests intelligently determines whether your SLA is met or missed. HAProxy — the High Availability Proxy — has become the industry standard for software-based load balancing and proxying on Linux, trusted by teams running everything from small clusters to multi-datacenter deployments.

    HAProxy operates at both Layer 4 (TCP) and Layer 7 (HTTP), giving infrastructure engineers granular control over how traffic is distributed, inspected, and forwarded. Its event-driven architecture (single-threaded per process in early releases, multithreaded since version 1.8) handles hundreds of thousands of concurrent connections with minimal memory and CPU overhead. This efficiency, combined with a rich feature set, makes HAProxy an ideal choice as the traffic entry point in any high availability design.

    This article walks through the core benefits and capabilities of HAProxy relevant to HA systems: load balancing algorithms, health checks, SSL/TLS termination, ACL-based routing, stick tables, rate limiting, the built-in stats interface, Keepalived integration for active-passive failover, WebSocket proxying, logging, and connection limit tuning.

    Load Balancing Algorithms

    The load balancing algorithm governs how HAProxy distributes new connections or requests across the servers in a backend pool. Choosing the right algorithm has a direct impact on backend utilization, response times, and failover behavior under partial failure.

    Round Robin

    Round robin distributes requests sequentially across all available backend servers, each taking an equal turn. It is the default and most commonly used algorithm, well suited to stateless workloads with backends of similar capacity. Optional server weights allow traffic to be skewed toward higher-capacity nodes without changing the algorithm.

    backend web_servers
        balance roundrobin
        server web01 10.10.1.11:80 check weight 2
        server web02 10.10.1.12:80 check weight 2
        server web03 10.10.1.13:80 check weight 1
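    The effect of these weights can be sketched with a short simulation. This is illustrative only: HAProxy's scheduler interleaves weighted servers more smoothly, but the long-run ratios are the same.

    ```python
    from itertools import cycle

    def weighted_round_robin(servers):
        # Expand each (name, weight) pair into `weight` slots and cycle
        # through them. Naive compared to HAProxy's smooth scheduling,
        # but it produces the same 2:2:1 ratio over time.
        pool = [name for name, weight in servers for _ in range(weight)]
        return cycle(pool)

    rr = weighted_round_robin([("web01", 2), ("web02", 2), ("web03", 1)])
    first_ten = [next(rr) for _ in range(10)]
    # Every 5 consecutive requests: 2 to web01, 2 to web02, 1 to web03.
    ```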

    Least Connections

    The leastconn algorithm routes each new request to the server with the fewest active connections at that moment. This is the correct choice for long-lived connections — database proxying, file uploads, streaming — where request duration varies significantly and round robin would unevenly pile work onto unlucky servers.

    backend db_servers
        balance leastconn
        server db01 10.10.2.11:5432 check
        server db02 10.10.2.12:5432 check

    Source Hash

    Source hash uses a hash of the client IP address to consistently send the same client to the same backend server across requests. This provides soft session affinity without requiring cookies or application-level session tracking, useful for Layer 4 TCP proxying or any protocol where HTTP headers cannot be injected. Adding hash-type consistent minimizes redistribution when servers are added or removed from the pool.

    backend api_servers
        balance source
        hash-type consistent
        server api01 10.10.3.11:8080 check
        server api02 10.10.3.12:8080 check
        server api03 10.10.3.13:8080 check
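    Why hash-type consistent matters can be shown with a toy hash ring. MD5 and 100 virtual nodes per server are arbitrary choices for this sketch, not HAProxy's internals; the point is that removing a server moves only the clients that hashed to it.

    ```python
    import hashlib
    from bisect import bisect

    def build_ring(servers, vnodes=100):
        # Place `vnodes` points per server on a hash ring, sorted by hash.
        return sorted(
            (int(hashlib.md5(f"{s}#{i}".encode()).hexdigest(), 16), s)
            for s in servers
            for i in range(vnodes)
        )

    def lookup(ring, client_ip):
        # A client maps to the first ring point at or after its own hash,
        # wrapping around to the start of the ring if necessary.
        h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        points = [p for p, _ in ring]
        return ring[bisect(points, h) % len(ring)][1]

    ring_full = build_ring(["api01", "api02", "api03"])
    ring_less = build_ring(["api01", "api03"])   # api02 removed from the pool

    clients = [f"203.0.113.{i}" for i in range(100)]
    moved = sum(lookup(ring_full, c) != lookup(ring_less, c) for c in clients)
    # Only clients whose hash landed on api02 move; everyone else keeps
    # their server, unlike a plain "hash mod n" scheme where most would move.
    ```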

    Health Checks: TCP and HTTP

    Accurate health checks are the foundation of high availability. Without them, HAProxy may continue routing traffic to failed or degraded backends, directly causing user-visible errors. HAProxy supports TCP-level checks for simple connectivity verification and HTTP-level checks for application-layer validation.

    TCP Health Checks

    A TCP check attempts a full TCP handshake to the backend on its configured port. If the connection succeeds, the server is considered healthy. This is appropriate for non-HTTP services such as Redis, SMTP relays, or custom TCP applications where an HTTP endpoint is not available.

    backend redis_cluster
        balance roundrobin
        option tcp-check
        server redis01 10.10.4.11:6379 check inter 2s rise 2 fall 3
        server redis02 10.10.4.12:6379 check inter 2s rise 2 fall 3
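    The essence of a TCP check is a successful handshake followed by an immediate close. A minimal sketch of that probe (HAProxy's real implementation adds the inter/rise/fall scheduling shown above):

    ```python
    import socket

    def tcp_health_check(host, port, timeout=2.0):
        # Healthy means the TCP handshake completes within the timeout;
        # the connection is closed immediately and no payload is exchanged.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False
    ```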

    HTTP Health Checks

    HTTP health checks issue a real HTTP request to a defined URI on the backend and validate the response status code. This is more thorough than TCP checks because it confirms the application is running and responsive, not just that the port is accepting connections. A backend process that is deadlocked or returning 500 errors will fail an HTTP check even if its TCP port remains open.

    backend app_servers
        balance leastconn
        option httpchk
        http-check send meth GET uri /health hdr Host solvethenetwork.com
        http-check expect status 200
        server app01 10.10.5.11:8080 check inter 5s rise 3 fall 2
        server app02 10.10.5.12:8080 check inter 5s rise 3 fall 2
        server app03 10.10.5.13:8080 check inter 5s rise 3 fall 2

    The inter parameter sets the polling interval. rise requires this many consecutive successes before a server transitions from down to up, preventing flapping. fall requires this many consecutive failures before a server is marked down and removed from rotation.
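    The rise/fall hysteresis amounts to a small state machine per server; this sketch mirrors the behavior described above:

    ```python
    class HealthState:
        # Per-server health with rise/fall hysteresis, as described above.
        def __init__(self, rise=3, fall=2, up=True):
            self.rise, self.fall = rise, fall
            self.up = up
            self.streak = 0      # consecutive results against current state

        def observe(self, check_ok):
            if check_ok == self.up:
                self.streak = 0  # result agrees with current state: reset
                return self.up
            self.streak += 1
            if self.up and self.streak >= self.fall:
                self.up, self.streak = False, 0   # down, out of rotation
            elif not self.up and self.streak >= self.rise:
                self.up, self.streak = True, 0    # back up again
            return self.up

    s = HealthState(rise=3, fall=2)
    results = [s.observe(ok) for ok in (True, False, True, False, False)]
    # A single failed probe is tolerated; only the two consecutive
    # failures at the end take the server out of rotation.
    ```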

    SSL/TLS Termination

    Centralizing SSL/TLS termination at the HAProxy layer reduces computational load on backend application servers, simplifies certificate lifecycle management, and makes it easier to enforce consistent cipher suites and protocol versions across your entire service fleet.

    frontend https_frontend
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        mode http
        option forwardfor
        http-request set-header X-Forwarded-Proto https
        default_backend web_servers
    
    frontend http_frontend
        bind 0.0.0.0:80
        mode http
        http-request redirect scheme https unless { ssl_fc }
    
    backend web_servers
        balance roundrobin
        server web01 10.10.1.11:80 check
        server web02 10.10.1.12:80 check

    The X-Forwarded-Proto header informs backend applications that the original client request arrived over HTTPS, which is important for generating correct redirect URLs and secure cookie flags. For compliance environments requiring TLS 1.2 minimum and specific cipher restrictions, HAProxy supports fine-grained TLS tuning directly on the bind directive.
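    On the application side, honoring X-Forwarded-Proto might look like the following sketch. The function names are illustrative, not any particular framework's API, and a real application should only trust the header when the request's peer is a known proxy:

    ```python
    def request_scheme(headers, trusted_proxy=True):
        # Trust the proxy-supplied scheme only when the request arrived
        # through a proxy we control; otherwise assume plain http.
        if trusted_proxy:
            return headers.get("X-Forwarded-Proto", "http").lower()
        return "http"

    def secure_cookie_flags(headers):
        # Secure cookie flags and redirect URLs hinge on the original scheme.
        return {"secure": request_scheme(headers) == "https"}

    flags = secure_cookie_flags({"X-Forwarded-Proto": "https"})
    ```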

    frontend https_frontend
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
        mode http

    ACL-Based Routing

    Access Control Lists allow HAProxy to make routing decisions based on virtually any attribute of an incoming request: the URL path, the Host header, arbitrary request headers, the client IP address, or HTTP method. This enables a single HAProxy instance to act as a smart reverse proxy that dispatches traffic to entirely different backend pools based on request characteristics.

    frontend http_frontend
        bind 0.0.0.0:80
        mode http
    
        acl is_api      path_beg /api/
        acl is_static   path_end .css .js .png .jpg .gif .ico .woff2
        acl is_admin    hdr(host) -i admin.solvethenetwork.com
    
        use_backend api_servers    if is_api
        use_backend static_servers if is_static
        use_backend admin_servers  if is_admin
        default_backend web_servers
    
    backend api_servers
        balance leastconn
        server api01 10.10.3.11:8080 check
        server api02 10.10.3.12:8080 check
    
    backend static_servers
        balance roundrobin
        server cdn01 10.10.6.11:80 check
        server cdn02 10.10.6.12:80 check
    
    backend admin_servers
        balance source
        server mgmt01 10.10.7.11:8443 check ssl verify none

    ACLs can be composed with logical operators to create multi-condition rules. They can also enforce security policies: blocking known-bad IP ranges, rejecting requests that lack required headers, or redirecting all traffic to a maintenance page during a deployment window.
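    A sketch of such composed rules (the blocklist file path, the X-Api-Key header, and the maintenance_page backend are hypothetical, named here only for illustration):

    ```
    frontend http_frontend
        bind 0.0.0.0:80
        mode http

        acl bad_ip      src -f /etc/haproxy/blocklist.txt
        acl is_api      path_beg /api/
        acl has_api_key req.hdr(X-Api-Key) -m found
        acl no_web_left nbsrv(web_servers) lt 1

        http-request deny if bad_ip
        http-request deny deny_status 403 if is_api !has_api_key
        use_backend maintenance_page if no_web_left
        default_backend web_servers
    ```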

    Stick Tables for Session Persistence

    Stick tables are HAProxy's in-memory key-value store. Their primary use case is session persistence: after the first request from a client, HAProxy records the mapping between the client identifier (IP address, session cookie, or custom header) and the chosen backend server. All subsequent requests matching the same identifier are routed to the same server for the lifetime of the sticky entry.

    backend app_servers
        balance roundrobin
        stick-table type ip size 100k expire 30m
        stick on src
        server app01 10.10.5.11:8080 check
        server app02 10.10.5.12:8080 check
        server app03 10.10.5.13:8080 check

    In active-active HA configurations where multiple HAProxy instances share the load, stick table data can be synchronized between peers using the HAProxy peers protocol. This ensures session persistence is maintained even when different HAProxy nodes handle requests from the same client.

    peers haproxy_peers
        peer sw-infrarunbook-01 10.10.0.11:1024
        peer sw-infrarunbook-02 10.10.0.12:1024
    
    backend app_servers
        stick-table type ip size 200k expire 10m peers haproxy_peers
        stick on src

    Rate Limiting with Stick Tables

    The same stick table infrastructure that powers session persistence also enables stateful per-client rate limiting. By tracking HTTP request rates and concurrent connection counts per source IP, HAProxy can throttle abusive clients and protect backends from brute-force attacks or runaway scrapers — entirely within the proxy layer, before requests touch application code.

    frontend https_frontend
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        mode http
    
        stick-table type ip size 100k expire 60s store http_req_rate(10s),conn_cur
        http-request track-sc0 src
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        http-request deny deny_status 429 if { sc_conn_cur(0) gt 20 }
    
        default_backend web_servers

    This rate limiter rejects any source IP that exceeds 100 requests in a 10-second sliding window or holds more than 20 simultaneous connections, returning an HTTP 429 response. Because all counters live in memory and are evaluated inline with every request, the overhead is negligible even at high throughput.
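    A simplified model clarifies the mechanics of per-source tracking. HAProxy's http_req_rate uses a constant-memory rate estimator rather than storing every timestamp, so this deque-based sketch is illustrative only:

    ```python
    from collections import deque

    class RateTracker:
        # Tracks request timestamps per source over a sliding window, the
        # moral equivalent of `store http_req_rate(10s)` plus the deny rule.
        def __init__(self, window=10.0, limit=100):
            self.window, self.limit = window, limit
            self.hits = {}                   # src ip -> deque of timestamps

        def allow(self, src, now):
            q = self.hits.setdefault(src, deque())
            while q and now - q[0] > self.window:
                q.popleft()                  # drop entries outside the window
            if len(q) >= self.limit:
                return False                 # HAProxy would answer 429 here
            q.append(now)
            return True

    rl = RateTracker(window=10.0, limit=3)
    verdicts = [rl.allow("198.51.100.7", now=t) for t in (0.0, 1.0, 2.0, 3.0)]
    # The fourth request inside the 10 s window is rejected.
    ```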

    HAProxy Stats Page

    HAProxy ships with a built-in HTTP statistics dashboard that provides real-time visibility into every frontend, backend, and server in the configuration. This is one of the most operationally useful features for on-call engineers who need to quickly assess cluster health without connecting to backend servers directly.

    listen stats
        bind 0.0.0.0:8404
        mode http
        stats enable
        stats uri /haproxy-stats
        stats realm HAProxy\ Statistics
        stats auth infrarunbook-admin:S3cur3St@ts!
        stats refresh 5s
        stats show-legends
        stats show-node sw-infrarunbook-01
        stats admin if TRUE

    The dashboard displays per-server metrics including request rates, response times, active connection counts, queue depths, bytes transferred, and health check results. With stats admin if TRUE enabled, operators can drain, disable, or re-enable individual backend servers directly from the browser during rolling deployments or incident triage — without touching the configuration file or reloading the process.
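    The same counters are available in machine-readable form from the runtime API: echo "show stat" | socat stdio /run/haproxy/admin.sock emits CSV. A minimal parser for that output (the sample below is heavily abbreviated; real output has dozens of columns per row):

    ```python
    def parse_show_stat(raw):
        # `show stat` prints a header line beginning with '# ' followed by
        # one CSV row per frontend, backend, and server.
        header, *rows = raw.strip().splitlines()
        fields = header.lstrip("# ").split(",")
        return [dict(zip(fields, line.split(","))) for line in rows if line]

    # Abbreviated sample for illustration (not real captured output).
    sample = (
        "# pxname,svname,scur,status\n"
        "web_pool,web01,42,UP\n"
        "web_pool,web02,0,DOWN\n"
    )
    down = [r["svname"] for r in parse_show_stat(sample) if r["status"] == "DOWN"]
    # -> ["web02"], ready to alert on or feed into a drain script.
    ```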

    Keepalived Integration for Active-Passive HA

    HAProxy itself becomes a single point of failure if only one instance is deployed. The standard solution pairs two HAProxy nodes with Keepalived, which implements VRRP (Virtual Router Redundancy Protocol) to manage a floating virtual IP (VIP). Both nodes run HAProxy identically. Keepalived monitors the HAProxy process and automatically transfers the VIP to the standby node if the active node fails.

    Primary node configuration on sw-infrarunbook-01 (10.10.0.11):

    vrrp_script chk_haproxy {
        script "killall -0 haproxy"
        interval 2
        weight -20
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 101
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass stn2024vrrp
        }
        virtual_ipaddress {
            10.10.0.10/24
        }
        track_script {
            chk_haproxy
        }
    }

    Standby node configuration on sw-infrarunbook-02 (10.10.0.12):

    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass stn2024vrrp
        }
        virtual_ipaddress {
            10.10.0.10/24
        }
    }

    DNS resolves solvethenetwork.com to 10.10.0.10. If the HAProxy process on the primary node crashes or the node becomes unreachable, Keepalived detects the failure within 2 seconds and promotes the standby, reassigning the VIP. Clients experience a brief TCP reset followed by a successful reconnect to the same IP address — typically under 3 seconds of total disruption.
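    The resulting election can be sketched numerically. This assumes the tracked script carries a negative weight (for example, weight -20), so a failing check lowers the active node's effective priority below the standby's:

    ```python
    def effective_priority(base, script_ok, weight=-20):
        # A negatively weighted vrrp_script subtracts |weight| from the base
        # priority while the check fails; a passing check leaves it untouched.
        return base if script_ok else base + weight

    def vip_owner(nodes):
        # VRRP elects the node advertising the highest effective priority.
        return max(nodes, key=lambda n: effective_priority(n["prio"], n["ok"]))["name"]

    pair = [
        {"name": "sw-infrarunbook-01", "prio": 101, "ok": True},
        {"name": "sw-infrarunbook-02", "prio": 100, "ok": True},
    ]
    owner_before = vip_owner(pair)   # 101 > 100: primary holds the VIP
    pair[0]["ok"] = False            # haproxy exits on the primary node
    owner_after = vip_owner(pair)    # 101 - 20 = 81 < 100: standby wins
    ```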

    WebSocket Proxying

    WebSocket connections begin as standard HTTP requests but upgrade to a persistent bidirectional TCP tunnel. HAProxy handles this transparently in HTTP mode by detecting the Upgrade: WebSocket header and switching the connection into tunnel mode for its remaining duration. The key configuration requirement is setting generous timeout values so long-lived WebSocket sessions are not prematurely torn down.

    frontend https_frontend
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        mode http
        option http-server-close
    
        acl is_websocket hdr(Upgrade) -i WebSocket
        use_backend websocket_servers if is_websocket
        default_backend web_servers
    
    backend websocket_servers
        balance source
        option forwardfor
        timeout connect  5s
        timeout server   60m
        timeout tunnel   1h
        server ws01 10.10.8.11:3000 check
        server ws02 10.10.8.12:3000 check

    Using balance source for the WebSocket backend ensures that reconnections from the same client IP land on the same backend node, which is often required for WebSocket applications that maintain per-connection state on the server side.

    Logging and Syslog Integration

    HAProxy generates detailed access logs that capture full request timing breakdowns, response codes, backend server selected, bytes transferred, and session state flags. Forwarding these logs to a centralized syslog server enables correlation with application and infrastructure events across the entire stack.

    global
        log 10.10.9.11:514 local0 info
        log 10.10.9.12:514 local1 warning
    
    defaults
        log global
        option httplog
        option dontlognull
        log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%TT %ST %B %tsc %ac/%fc/%bc/%sc/%rc %{+Q}r"

    The timing fields in the log format are invaluable for diagnosing latency issues. Tq is the time HAProxy waited to receive the full HTTP request from the client. Tw is the time spent in the request queue waiting for a server slot. Tc is the TCP connection time to the backend. Tr is the time the backend took to send its first response byte. TT is the total session time. This breakdown makes it possible to distinguish client-side network slowness, backend overload, and application processing delays from the proxy logs alone.
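    The timer field is trivially machine-parseable, which makes it easy to classify latency automatically in a log pipeline. A minimal sketch:

    ```python
    def split_timers(timer_field):
        # The httplog timer field is five '/'-separated integers;
        # -1 marks a phase that was never reached (e.g. an aborted request).
        names = ("Tq", "Tw", "Tc", "Tr", "TT")
        return dict(zip(names, (int(v) for v in timer_field.split("/"))))

    def slowest_phase(timers):
        # Ignore the total (TT) and unreached phases when assigning blame.
        phases = {k: v for k, v in timers.items() if k != "TT" and v >= 0}
        return max(phases, key=phases.get)

    t = split_timers("2/0/1/873/880")
    # Tr dominates: the backend took 873 ms to send its first byte, so
    # this is application latency rather than queueing or connect time.
    ```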

    Connection Limits and Timeout Tuning

    Precise control over connection counts and timeout values prevents resource exhaustion and enables graceful degradation under load. HAProxy enforces limits at the global, frontend, backend, and per-server levels, giving operators fine-grained control over how traffic is absorbed and queued.

    global
        maxconn 50000
        ulimit-n 200500
    
    defaults
        timeout connect      5s
        timeout client       30s
        timeout server       30s
        timeout http-request 10s
        timeout http-keep-alive 5s
        timeout queue        60s
        maxconn 10000
    
    backend app_servers
        maxconn 5000
        timeout server 45s
        server app01 10.10.5.11:8080 check maxconn 500
        server app02 10.10.5.12:8080 check maxconn 500
        server app03 10.10.5.13:8080 check maxconn 500

    When a backend server's per-server maxconn is reached, HAProxy queues additional requests in memory rather than sending them to the backend. This queue-based admission control absorbs burst traffic without overwhelming backend threads or exhausting database connection pools. The timeout queue value determines how long a request sits in queue before HAProxy returns a 503 to the client.
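    The queue-then-503 behavior can be illustrated with a toy discrete simulation: a handful of connection slots, a fixed service time, and queue waits capped by timeout queue. Deliberately simplified, but the admission decisions match the description above:

    ```python
    import heapq

    def simulate(arrivals, service_time, maxconn, timeout_queue):
        # Toy model of per-server maxconn: `maxconn` connection slots, each
        # busy for service_time once dispatched. A request queues until a
        # slot frees; waiting longer than timeout_queue yields a 503.
        free_at = [0.0] * maxconn     # time at which each slot becomes free
        heapq.heapify(free_at)
        outcomes = []
        for t in arrivals:
            slot = heapq.heappop(free_at)
            start = max(t, slot)      # wait in the queue for a free slot
            if start - t > timeout_queue:
                outcomes.append("503")    # timeout queue expired
                heapq.heappush(free_at, slot)
            else:
                outcomes.append("ok")
                heapq.heappush(free_at, start + service_time)
        return outcomes

    # 2 slots, 1 s of backend work per request, clients queue at most 1.5 s.
    out = simulate([0, 0, 0, 0, 0], service_time=1.0, maxconn=2, timeout_queue=1.5)
    # -> ["ok", "ok", "ok", "ok", "503"]: the fifth request would wait 2 s.
    ```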

    HTTP Compression

    HAProxy can apply gzip compression to HTTP responses on behalf of backend servers, reducing bandwidth between the proxy and clients without requiring any changes to application code. This is particularly effective for JSON API responses, HTML pages, and JavaScript assets that are served without pre-compression from the application tier.

    frontend https_frontend
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        mode http
        compression algo gzip
        compression type text/html text/plain text/css application/json application/javascript
    
    backend web_servers
        balance roundrobin
        server web01 10.10.1.11:80 check
        server web02 10.10.1.12:80 check

    A Production-Grade HAProxy Configuration

    The following configuration brings together the major features discussed in this article into a single production-ready HAProxy deployment for solvethenetwork.com, running on sw-infrarunbook-01 as the active node in a Keepalived-managed HA pair.

    global
        log 10.10.9.11:514 local0
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
        maxconn 50000
        ssl-default-bind-options ssl-min-ver TLSv1.2
        ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    
    defaults
        log global
        mode http
        option httplog
        option dontlognull
        option forwardfor
        option http-server-close
        timeout connect      5s
        timeout client       30s
        timeout server       30s
        timeout http-request 10s
        timeout queue        60s
        maxconn 10000
    
    frontend http_in
        bind 0.0.0.0:80
        http-request redirect scheme https unless { ssl_fc }
    
    frontend https_in
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
        stick-table type ip size 100k expire 60s store http_req_rate(10s)
        http-request track-sc0 src
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        acl is_api  path_beg /api/
        acl is_ws   hdr(Upgrade) -i WebSocket
        use_backend api_pool if is_api
        use_backend ws_pool  if is_ws
        default_backend web_pool
    
    backend web_pool
        balance roundrobin
        option httpchk GET /health
        http-check expect status 200
        server web01 10.10.1.11:80 check inter 5s rise 2 fall 3
        server web02 10.10.1.12:80 check inter 5s rise 2 fall 3
        server web03 10.10.1.13:80 check inter 5s rise 2 fall 3
    
    backend api_pool
        balance leastconn
        option httpchk GET /api/health
        http-check expect status 200
        server api01 10.10.3.11:8080 check inter 3s rise 2 fall 2
        server api02 10.10.3.12:8080 check inter 3s rise 2 fall 2
    
    backend ws_pool
        balance source
        timeout tunnel 1h
        server ws01 10.10.8.11:3000 check
        server ws02 10.10.8.12:3000 check
    
    listen stats
        bind 0.0.0.0:8404
        stats enable
        stats uri /haproxy-stats
        stats auth infrarunbook-admin:S3cur3St@ts!
        stats refresh 10s
        stats show-node sw-infrarunbook-01

    Frequently Asked Questions

    Q: What makes HAProxy suitable for high availability systems compared to other load balancers?

    A: HAProxy is purpose-built for proxying and load balancing with an event-driven architecture that handles hundreds of thousands of concurrent connections on commodity hardware. It offers sub-second health check intervals, fine-grained timeout and connection limit controls, native SSL/TLS termination, and stateful features like stick tables and rate limiting — all without requiring a separate licensing model or cloud dependency. These capabilities, combined with its maturity and predictable behavior under load, make it a trusted choice for HA designs at any scale.

    Q: What is the difference between the roundrobin, leastconn, and source balancing algorithms in HAProxy?

    A: Round robin distributes requests evenly in turn and is best for stateless workloads with homogeneous backends. Leastconn routes new requests to the server with the fewest active connections, making it ideal for long-lived or variable-duration requests like database queries or file uploads. Source uses a hash of the client IP to consistently send the same client to the same backend, providing soft session affinity without application-level state.

    Q: How quickly does HAProxy detect and remove a failed backend server?

    A: Failover speed depends on the health check interval and fall threshold. With inter 2s fall 2, a backend is removed from rotation after 4 seconds of consecutive failures. With inter 1s fall 1, removal can happen in under 2 seconds. The appropriate values depend on your SLA and how much false-positive sensitivity you can tolerate — too aggressive and you risk removing healthy servers during brief network blips.

    Q: Can HAProxy terminate SSL and also pass client certificate information to backend servers?

    A: Yes. HAProxy can extract client certificate fields after mTLS handshake and forward them to backends as HTTP headers using directives like http-request set-header X-SSL-Client-CN %{+Q}[ssl_c_s_dn(cn)]. This allows backend applications to make authorization decisions based on client certificate identity without performing TLS termination themselves.

    Q: What is the HAProxy stats socket and how is it used in automation?

    A: The stats socket is a Unix domain socket that exposes a command interface to the running HAProxy process. Tools and scripts can connect to it to query real-time metrics, change server states (enable, disable, drain), update weights, or clear stick table entries — all without reloading the configuration. It is the basis for zero-downtime deployment scripts and health-check-driven automation in CI/CD pipelines.

    Q: How does HAProxy handle a backend server that is in a draining state?

    A: When a server is set to the drain state (via the stats socket or stats page), HAProxy stops sending new connections to it but allows existing sessions to complete naturally. Once all active sessions close, the server can be safely taken offline for maintenance. This is the correct mechanism for rolling deployments that require graceful connection draining rather than hard cutover.

    Q: How does Keepalived determine when to transfer the VIP away from the active HAProxy node?

    A: Keepalived uses a vrrp_script that periodically runs a health check command against the local HAProxy process. The most common check is killall -0 haproxy, which returns success if the process is running and failure if it has exited. When the script fails, Keepalived reduces the VRRP priority of the active node below the standby's priority, which triggers a VRRP election and transfers the VIP to the healthy standby node.

    Q: Does HAProxy support HTTP/2?

    A: Yes, HAProxy supports HTTP/2 on the frontend (client-facing) side when using HTTPS, which allows browsers and API clients to benefit from multiplexing. By default, HAProxy communicates with backend servers over HTTP/1.1, which is compatible with virtually all application server software. Full end-to-end HTTP/2 including the backend side is supported in newer HAProxy versions with the proto h2 server option.

    Q: What is the correct way to configure HAProxy for WebSocket connections without premature timeouts?

    A: WebSocket backends require elevated timeout values because the connection persists indefinitely after the HTTP Upgrade. Set timeout tunnel to a value that accommodates your longest expected WebSocket session (commonly 1h or higher), and ensure timeout server is also generous. Using balance source is recommended so reconnects land on the same backend. Avoid option http-server-close at the backend level for WebSocket pools, as it conflicts with persistent tunnel mode.

    Q: How does HAProxy rate limiting differ from firewall-based rate limiting?

    A: HAProxy rate limiting is application-aware and operates at Layer 7. It can count requests per path, per header value, or per cookie in addition to per source IP. It returns proper HTTP status codes (such as 429 Too Many Requests) rather than silently dropping packets, which is friendlier to API clients. Because the counters live in HAProxy's memory and are evaluated inline with each request, there is no additional network hop or external service dependency, making it extremely low latency compared to firewall or WAF-based rate limiting.

    Q: Can HAProxy reload its configuration without dropping existing connections?

    A: Yes. HAProxy supports hitless (zero-downtime) reloads using the -sf (send SIGUSR1 to old process) mechanism. The new process starts and begins accepting new connections immediately while the old process continues to handle existing sessions until they naturally close. On Linux with socket transfer support, even the listening sockets are transferred seamlessly. This makes configuration changes and certificate renewals fully transparent to clients.

    Q: How should HAProxy logging be configured for production observability?

    A: Configure HAProxy to send logs to a local rsyslog or syslog-ng daemon over UDP on 127.0.0.1:514, then have that daemon forward to a centralized log aggregation platform. Use the log-format directive to capture full timing breakdowns (Tq, Tw, Tc, Tr, TT), the backend and server selected, the termination flags, and the request line. Enable option dontlognull to suppress health check noise. Index the logs in your SIEM or log platform with the backend name and server name as structured fields to enable per-server latency dashboards and error rate alerting.
