Introduction
When building a production-grade load balancing layer, two names dominate every infrastructure conversation: HAProxy and Nginx. Both are battle-tested, open source, and capable of handling millions of requests per day — but they were built with different philosophies and different primary use cases in mind. Understanding where each excels is not academic; it directly impacts how your traffic flows, how failures are detected, how sessions are maintained, and how much operational overhead your team absorbs.
This article provides a deep technical comparison of HAProxy and Nginx for load balancing workloads. We cover algorithms, health checks, SSL/TLS termination, ACL-based routing, stick tables, rate limiting, WebSocket proxying, and observability. All configuration examples use realistic infrastructure: the domain solvethenetwork.com, host sw-infrarunbook-01, user infrarunbook-admin, and RFC 1918 address space.
Architecture Philosophy
HAProxy was purpose-built as a load balancer and proxy. Its entire design centers on connection handling, health monitoring, and traffic distribution. Nginx, on the other hand, began life as a high-performance web server and later added reverse proxy and load balancing capabilities. This distinction matters in practice:
- HAProxy operates entirely in proxy mode — it does not serve static files, run application code, or handle content natively. It is a pure traffic router with no content-serving surface area.
- Nginx can serve static assets, terminate SSL, act as a cache, run Lua scripts via OpenResty, and load balance — all in a single process. This flexibility is both a strength and a source of operational complexity.
For teams that need a dedicated, high-performance load balancer with rich observability and granular control over backend health, HAProxy is typically the stronger choice. For teams that want a single process to handle both content serving and proxying, Nginx offers more versatility.
Load Balancing Algorithms
Both tools support the core distribution strategies, but HAProxy offers more built-in options without requiring upstream modules or a commercial license.
HAProxy Algorithms
HAProxy's balance directive supports: roundrobin, leastconn, source (source IP hash), uri, url_param, hdr (header hash), rdp-cookie, random, and first. A minimal HAProxy backend using least-connections looks like this:
backend app_servers
balance leastconn
option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
server app01 10.10.1.11:8080 check inter 3s rise 2 fall 3
server app02 10.10.1.12:8080 check inter 3s rise 2 fall 3
server app03 10.10.1.13:8080 check inter 3s rise 2 fall 3
Nginx Algorithms
Nginx open-source supports round-robin (the default), least_conn, ip_hash, and, since version 1.7.2, a generic hash directive that accepts arbitrary keys. The least_time method, which factors measured response latency into server selection, is available only in Nginx Plus. A least-connections upstream block in Nginx looks like this:
upstream app_servers {
least_conn;
server 10.10.1.11:8080;
server 10.10.1.12:8080;
server 10.10.1.13:8080;
keepalive 32;
}
The gap is still meaningful. HAProxy gives you URI-based hashing, header-based hashing, and RDP cookie persistence natively in the open-source edition. Open-source Nginx's generic hash directive narrows the gap for URI and header keys, but cookie-based persistence, RDP cookie handling, and latency-aware balancing require the commercial Plus license or third-party modules.
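For URI affinity specifically, the generic hash directive in open-source Nginx (available since version 1.7.2) covers part of this ground. A sketch against the same pool (the upstream name is illustrative); consistent switches to ketama hashing so removing a server remaps only that server's keys:

```nginx
upstream app_servers_by_uri {
    # Requests for the same URI land on the same backend.
    hash $request_uri consistent;
    server 10.10.1.11:8080;
    server 10.10.1.12:8080;
    server 10.10.1.13:8080;
}
```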
Health Checks
Health check sophistication is one of HAProxy's most significant advantages over Nginx open-source. The difference becomes critical in environments where backend availability changes frequently.
HAProxy Health Checks
HAProxy supports TCP checks, HTTP checks, custom payload checks, and agent checks — where the backend process itself reports its readiness state over a secondary port. Check intervals, rise and fall thresholds, and timeouts are configured independently per server:
backend api_pool
option httpchk GET /api/health HTTP/1.1\r\nHost:\ api.solvethenetwork.com
http-check expect status 200
default-server inter 2s fastinter 500ms downinter 5s rise 3 fall 2 slowstart 30s
server api01 10.10.2.21:443 check ssl verify none
server api02 10.10.2.22:443 check ssl verify none
The slowstart directive gradually ramps traffic to a server coming back online, preventing thundering-herd recovery failures. The fastinter option reduces the poll interval when a server is already known to be down, accelerating detection of recovery. These are active probes: HAProxy initiates them on a schedule regardless of whether real traffic is flowing.
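The agent check mentioned above expects the backend to answer each probe with a one-line status word on a secondary port. A minimal agent sketch (the port and the always-up policy are illustrative; it would pair with a server line such as server api01 10.10.2.21:443 check agent-check agent-port 9999):

```python
import socketserver

class AgentHandler(socketserver.StreamRequestHandler):
    """Answers each HAProxy agent probe with a one-line status word:
    'up', 'down', 'drain', 'maint', or a weight percentage such as '75%'."""
    def handle(self):
        # A real agent would inspect queue depth, CPU, or application
        # readiness here; this sketch always reports healthy.
        self.wfile.write(b"up\n")

def serve_agent(host="0.0.0.0", port=9999):
    # Blocks forever, answering probes on the agent port.
    with socketserver.TCPServer((host, port), AgentHandler) as srv:
        srv.serve_forever()
```

Because the agent is a separate process, it can report "drain" while the application still answers /api/health, letting you bleed traffic off a node before maintenance.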
Nginx Health Checks
Nginx open-source uses passive health checks only. It marks a server down after a configurable number of failed requests via max_fails and fail_timeout, meaning a real user request must fail before the server is considered unhealthy. Active health checks require Nginx Plus:
upstream app_servers {
server 10.10.1.11:8080 max_fails=3 fail_timeout=30s;
server 10.10.1.12:8080 max_fails=3 fail_timeout=30s;
}
# Active health checks below require Nginx Plus:
# health_check uri=/healthz interval=3s fails=2 passes=3;
For environments where proactive backend monitoring is required without paying for a commercial license, HAProxy is the clear winner on this dimension.
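The rise/fall semantics above amount to a small hysteresis state machine: a server changes state only after enough consecutive probe results agree. A sketch of that logic (the class name and defaults are illustrative, matching the rise 2 fall 3 server lines earlier):

```python
# HAProxy-style rise/fall health tracking: a server must pass `rise`
# consecutive checks to be marked up, and fail `fall` consecutive checks
# to be marked down. The probe result is fed in from outside so the
# state machine is independent of how the check is performed.

class HealthTracker:
    def __init__(self, rise=2, fall=3, initially_up=True):
        self.rise, self.fall = rise, fall
        self.up = initially_up
        self.streak = 0  # consecutive results disagreeing with current state

    def record(self, passed):
        """Feed one probe result; returns the server's resulting state."""
        if passed == self.up:
            self.streak = 0          # result agrees with current state
        else:
            self.streak += 1
            threshold = self.fall if self.up else self.rise
            if self.streak >= threshold:
                self.up = passed     # flip only after enough agreement
                self.streak = 0
        return self.up
```

With rise=2 and fall=3, a healthy server survives two consecutive failed probes and is marked down on the third; once down, it needs two consecutive passes to return, which slowstart then ramps gradually back into rotation.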
SSL/TLS Termination
Both tools terminate SSL/TLS at the load balancer layer, but they differ in feature depth and certificate management flexibility.
HAProxy SSL Termination
frontend https_frontend
bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem alpn h2,http/1.1
option forwardfor
http-request set-header X-Forwarded-Proto https
default_backend app_servers
frontend http_redirect
bind *:80
http-request redirect scheme https code 301
HAProxy supports SNI-based certificate selection across multiple certificates on a single IP via the crt-list directive, OCSP stapling, TLS 1.3, and mutual TLS (client certificate authentication) natively. The alpn h2,http/1.1 parameter enables HTTP/2 negotiation without additional configuration blocks.
Nginx SSL Termination
server {
listen 443 ssl http2;
server_name solvethenetwork.com;
ssl_certificate /etc/nginx/certs/solvethenetwork.com.crt;
ssl_certificate_key /etc/nginx/certs/solvethenetwork.com.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
location / {
proxy_pass http://app_servers;
}
}
Nginx integrates cleanly with Let's Encrypt via Certbot's nginx plugin and handles SSL termination well. If you are already using Nginx as your web server, terminating TLS in the same process is operationally simple and avoids an extra network hop.
ACL-Based Routing
HAProxy's ACL system is one of its most powerful distinguishing features. You can route traffic based on virtually any layer 4–7 attribute, combine conditions with logical operators, and reuse ACLs across multiple routing rules.
frontend http_in
bind *:80
acl is_api path_beg /api/
acl is_static path_end .jpg .png .css .js .woff2
acl is_websocket hdr(Upgrade) -i websocket
acl is_mobile req.hdr(User-Agent) -i -m sub Mobile
use_backend api_pool if is_api
use_backend static_pool if is_static
use_backend ws_pool if is_websocket
use_backend mobile_pool if is_mobile
default_backend app_servers
Nginx achieves similar routing via location blocks and map directives, which are powerful but less expressive for multi-condition logic. HAProxy ACLs can be negated, ANDed, and ORed inline without nesting block structures, giving you fine-grained control in a flat, readable configuration.
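For comparison, reproducing just the mobile-versus-default split from the ACLs above takes a map plus a variable-driven proxy_pass in Nginx (a sketch; it assumes mobile_pool and app_servers upstreams are defined elsewhere):

```nginx
map $http_user_agent $target_pool {
    default   app_servers;
    ~*mobile  mobile_pool;   # case-insensitive regex, like hdr ... -i -m sub
}

server {
    listen 80;
    location / {
        proxy_pass http://$target_pool;
    }
}
```

Each additional condition tends to require another map or another location block, which is where HAProxy's flat list of reusable ACLs stays easier to read.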
Stick Tables and Session Persistence
Session persistence ensures that requests from the same client always reach the same backend — critical for applications that store session state in process memory rather than a shared external store.
HAProxy Stick Tables
HAProxy's stick tables are an in-memory key-value store that survives graceful reloads (with a local peer configured, the old process hands its entries to the new one) and can be synchronized across a Keepalived HA pair using the peers section. Beyond stickiness, stick tables can track concurrent connections, connection rate, and request rate per key, enabling stateful rate limiting without an external datastore:
backend app_servers
balance roundrobin
cookie SERVERID insert indirect nocache
stick-table type string len 64 size 100k expire 30m store conn_cur,conn_rate(10s),http_req_rate(10s)
stick on req.cook(SERVERID)
server app01 10.10.1.11:8080 check cookie app01
server app02 10.10.1.12:8080 check cookie app02
server app03 10.10.1.13:8080 check cookie app03
Nginx Session Persistence
Nginx open-source supports only IP hash for sticky sessions. Cookie-based stickiness, which survives NAT traversal and mobile IP changes, requires Nginx Plus's sticky cookie directive. This is a significant operational gap for applications where IP hash is insufficient due to shared egress IPs or mobile clients.
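For reference, the Plus-only equivalent of HAProxy's cookie insertion is a single directive inside the upstream block (a sketch; requires an Nginx Plus subscription):

```nginx
upstream app_servers {
    server 10.10.1.11:8080;
    server 10.10.1.12:8080;
    # Nginx Plus only: insert a session cookie bound to the selected server.
    sticky cookie srv_id expires=1h path=/;
}
```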
Rate Limiting
HAProxy Rate Limiting with Stick Tables
frontend http_in
bind *:80
stick-table type ip size 200k expire 60s store http_req_rate(10s),conn_cur
http-request track-sc0 src
http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
http-request deny deny_status 429 if { sc_conn_cur(0) gt 50 }
default_backend app_servers
Nginx Rate Limiting
http {
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
server {
location /api/ {
limit_req zone=api_limit burst=50 nodelay;
limit_req_status 429;
limit_conn conn_limit 20;
proxy_pass http://app_servers;
}
}
}
Both tools handle rate limiting well at the IP level. HAProxy's stick table approach is more flexible: you can track rate per header value, per cookie, per URL parameter, or per any extracted field, not just source IP. Nginx's limit_req_zone is simpler to configure and sufficient for the majority of API gateway rate limiting use cases.
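To make the http_req_rate(10s) semantics concrete, the counter HAProxy keeps per key can be approximated as a sliding window over request timestamps. A simplified sketch (the class and the 100-request threshold mirror the frontend above but are illustrative; HAProxy itself uses a cheaper rotating-period approximation rather than storing every timestamp):

```python
import time
from collections import defaultdict, deque

class SlidingWindowRate:
    """Tracks request timestamps per key (e.g. client IP) and reports the
    count inside the last `window` seconds, roughly what a stick table's
    http_req_rate(10s) counter exposes."""
    def __init__(self, window=10.0):
        self.window = window
        self.hits = defaultdict(deque)

    def record(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        return len(q)  # current rate over the window

def should_deny(rate, limit=100):
    # Analogue of: http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    return rate > limit
```

Because the key is arbitrary, swapping the tracked field from source IP to an API-key header is a one-line change here, just as it is in a stick table.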
WebSocket Support
HAProxy WebSocket
frontend ws_frontend
bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
acl is_websocket hdr(Upgrade) -i websocket
use_backend ws_pool if is_websocket
default_backend app_servers
backend ws_pool
balance source
option http-server-close
timeout tunnel 3600s
server ws01 10.10.3.31:8080 check
server ws02 10.10.3.32:8080 check
Nginx WebSocket
location /ws/ {
proxy_pass http://ws_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
Both tools handle WebSocket connections reliably once configured. The critical settings in both cases are forwarding the Upgrade and Connection headers and setting a sufficiently long tunnel or read timeout to keep long-lived connections open without premature termination.
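One common refinement of the Nginx location above is to derive the Connection header from $http_upgrade with a map, so non-WebSocket requests passing through the same location are not forced into upgrade mode:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;   # client sent no Upgrade header: plain proxied request
}

location /ws/ {
    proxy_pass http://ws_servers;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 3600s;
}
```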
Stats Page and Observability
HAProxy ships with a built-in stats page that provides real-time visibility into every frontend, backend, and individual server — queue depths, error counts, session rates, health check status, and more. Enabling it requires only a few lines:
listen stats
bind *:8404
stats enable
stats uri /haproxy-stats
stats auth infrarunbook-admin:Ch@ng3M3Pl3ase
stats refresh 5s
stats show-legends
stats show-node sw-infrarunbook-01
stats hide-version
Nginx does not have an equivalent built-in dashboard. The ngx_http_stub_status_module exposes only a handful of aggregate counters. Per-upstream metrics require Nginx Plus's ngx_http_api_module or external instrumentation such as nginx-prometheus-exporter. HAProxy also exposes a Unix domain socket runtime API that allows live manipulation of backends, weights, and drain states without reloading the process.
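The runtime socket speaks a simple line-oriented protocol: commands such as set weight, disable server, and show stat are sent as newline-terminated text. A small helper sketch (the socket path is illustrative and must match the stats socket directive in your global section):

```python
import socket

HAPROXY_SOCK = "/run/haproxy/admin.sock"  # assumed path; match your `stats socket`

def format_cmd(command):
    """Runtime API commands are single newline-terminated UTF-8 lines."""
    return command.strip().encode("utf-8") + b"\n"

def runtime_cmd(command, sock_path=HAPROXY_SOCK):
    """Send one command over HAProxy's Unix runtime socket, return the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(format_cmd(command))
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks).decode("utf-8")

# Example usage against a live socket: drain app01 before maintenance,
# then restore it, with no reload in between:
#   runtime_cmd("set weight app_servers/app01 0")
#   runtime_cmd("set weight app_servers/app01 100")
```

Setting a server's weight to 0 stops new connections while existing sessions finish, which is what makes maintenance drains possible without touching the config file.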
Connection Limits and Kernel Tuning
HAProxy Global and Defaults
global
maxconn 100000
nbthread 4
cpu-map auto:1/1-4 0-3
tune.ssl.default-dh-param 2048
log 127.0.0.1 local0 info
defaults
mode http
timeout connect 5s
timeout client 30s
timeout server 30s
timeout queue 10s
maxconn 50000
option redispatch
retries 3Nginx Worker Tuning
worker_processes auto;
worker_rlimit_nofile 65535;
events {
worker_connections 16384;
use epoll;
multi_accept on;
}
http {
keepalive_timeout 65;
keepalive_requests 1000;
sendfile on;
tcp_nopush on;
}
HAProxy's nbthread and cpu-map directives allow precise CPU affinity without the multi-process model that Nginx uses. In HAProxy 2.x and later, all threads share the same memory for stick tables, counters, and queue state, eliminating the cross-process synchronization overhead that can affect Nginx under high connection churn.
Keepalived HA Pairing
In production, HAProxy is commonly deployed alongside Keepalived to provide a floating VIP for active/passive failover. The Keepalived check script verifies that the HAProxy process is still alive and lowers the node's priority if it is not:
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy"
interval 2
weight 10
}
vrrp_instance VI_LB {
state MASTER
interface eth0
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass Kv3p@ssw0rd
}
virtual_ipaddress {
10.10.0.100/24
}
track_script {
chk_haproxy
}
}
HAProxy's runtime socket allows zero-downtime reloads and dynamic backend management without process restarts. Combined with Keepalived VIP failover, you get both high availability at the network layer and live reconfiguration at the application layer, a pairing that Nginx requires more external tooling to replicate.
When to Choose HAProxy
- You need active health checks without a commercial license.
- Your application requires cookie-based session persistence.
- You need fine-grained ACL routing based on headers, cookies, or URL parameters.
- You want stateful rate limiting tracked per arbitrary extracted field.
- You need a rich real-time stats dashboard without additional tooling.
- You are load balancing TCP protocols — databases, MQTT, Redis, SMTP, or custom binary protocols.
- You want live backend weight adjustments and drain without a config reload.
When to Choose Nginx
- You need a single process to serve static files, cache responses, and proxy traffic.
- You want native integration with Let's Encrypt via Certbot.
- Your team already has deep Nginx expertise and consistent tooling across services.
- You need HTTP/2 server push, sub-requests, or Lua scripting via OpenResty.
- You are running Kubernetes and using the community Nginx Ingress Controller.
- Simple round-robin or IP hash distribution is sufficient for your workload.
