Introduction
When building a production-grade load balancing layer, two names dominate every infrastructure conversation: HAProxy and Nginx. Both are battle-tested, open source, and capable of handling millions of requests per day — but they were built with different philosophies and different primary use cases in mind. Understanding where each excels is not academic; it directly impacts how your traffic flows, how failures are detected, how sessions are maintained, and how much operational overhead your team absorbs.
This article provides a deep technical comparison of HAProxy and Nginx for load balancing workloads. We cover algorithms, health checks, SSL/TLS termination, ACL-based routing, stick tables, rate limiting, WebSocket proxying, and observability. All configuration examples use realistic infrastructure: the domain solvethenetwork.com, host sw-infrarunbook-01, user infrarunbook-admin, and RFC 1918 address space.
Architecture Philosophy
HAProxy was purpose-built as a load balancer and proxy. Its entire design centers on connection handling, health monitoring, and traffic distribution. Nginx, on the other hand, began life as a high-performance web server and later added reverse proxy and load balancing capabilities. This distinction matters in practice:
- HAProxy operates entirely in proxy mode — it does not serve static files, run application code, or handle content natively. It is a pure traffic router with no content-serving surface area.
- Nginx can serve static assets, terminate SSL, act as a cache, run Lua scripts via OpenResty, and load balance — all in a single process. This flexibility is both a strength and a source of operational complexity.
For teams that need a dedicated, high-performance load balancer with rich observability and granular control over backend health, HAProxy is typically the stronger choice. For teams that want a single process to handle both content serving and proxying, Nginx offers more versatility.
Load Balancing Algorithms
Both tools support the core distribution strategies, but HAProxy offers more built-in options without requiring upstream modules or a commercial license.
HAProxy Algorithms
HAProxy's `balance` directive supports `roundrobin`, `leastconn`, `source` (source IP hash), `uri`, `url_param`, `hdr` (header hash), `rdp-cookie`, `random`, and `first`. A minimal HAProxy backend using least-connections looks like this:

```
backend app_servers
    balance leastconn
    option httpchk GET /healthz HTTP/1.1\r\nHost:\ solvethenetwork.com
    server app01 10.10.1.11:8080 check inter 3s rise 2 fall 3
    server app02 10.10.1.12:8080 check inter 3s rise 2 fall 3
    server app03 10.10.1.13:8080 check inter 3s rise 2 fall 3
```

Nginx Algorithms
Nginx open-source supports round-robin (default), `least_conn`, `ip_hash`, and generic key hashing via the `hash` directive (available since 1.7.2). `least_time` — which factors in measured response latency — is only available in Nginx Plus. A least-connections upstream block in Nginx looks like this:

```
upstream app_servers {
    least_conn;
    server 10.10.1.11:8080;
    server 10.10.1.12:8080;
    server 10.10.1.13:8080;
    keepalive 32;
}
```

The gap here is still meaningful. HAProxy gives you URI-based hashing, header-based hashing, and RDP cookie persistence natively in the open-source edition. Nginx open-source can approximate URI or header hashing with the `hash` directive, but latency-aware balancing and cookie-based persistence require the commercial Plus license or third-party modules.
Health Checks
Health check sophistication is one of HAProxy's most significant advantages over Nginx open-source. The difference becomes critical in environments where backend availability changes frequently.
HAProxy Health Checks
HAProxy supports TCP checks, HTTP checks, custom payload checks, and agent checks — where the backend process itself reports its readiness state over a secondary port. Check intervals, rise and fall thresholds, and timeouts are configured independently per server:
```
backend api_pool
    option httpchk GET /api/health HTTP/1.1\r\nHost:\ api.solvethenetwork.com
    http-check expect status 200
    default-server inter 2s fastinter 500ms downinter 5s rise 3 fall 2 slowstart 30s
    server api01 10.10.2.21:443 check ssl verify none
    server api02 10.10.2.22:443 check ssl verify none
```

The `slowstart` directive gradually ramps traffic to a server coming back online, preventing thundering-herd recovery failures. The `fastinter` option shortens the poll interval while a server is transitioning between states, accelerating detection of both failure and recovery, while `downinter` sets the interval once a server is confirmed down. These are active probes — HAProxy initiates them on a schedule regardless of whether real traffic is flowing.
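Agent checks deserve a concrete illustration. In the sketch below, the agent port (9999) and interval are assumptions — the backend must run a small TCP listener on that port that replies with a status string:

```
backend api_pool
    # agent-check: HAProxy connects to agent-port on a schedule and reads a
    # status string ("up", "down", "drain", or a weight like "50%") emitted
    # by a process running on the backend host itself.
    # Port 9999 and the 5s interval are illustrative values.
    server api01 10.10.2.21:443 check ssl verify none agent-check agent-port 9999 agent-inter 5s
```

This lets the application itself throttle or drain its own traffic — for example, reporting a reduced weight while a cache warms up.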
Nginx Health Checks
Nginx open-source uses passive health checks only. It marks a server down after a configurable number of failed requests via `max_fails` and `fail_timeout` — meaning a real user request must fail before the server is considered unhealthy. Active health checks require Nginx Plus:

```
upstream app_servers {
    server 10.10.1.11:8080 max_fails=3 fail_timeout=30s;
    server 10.10.1.12:8080 max_fails=3 fail_timeout=30s;
}

# Active health checks below require Nginx Plus:
# health_check uri=/healthz interval=3s fails=2 passes=3;
```

For environments where proactive backend monitoring is required without paying for a commercial license, HAProxy is the clear winner on this dimension.
SSL/TLS Termination
Both tools terminate SSL/TLS at the load balancer layer, but they differ in feature depth and certificate management flexibility.
HAProxy SSL Termination
```
frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem alpn h2,http/1.1
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend app_servers

frontend http_redirect
    bind *:80
    http-request redirect scheme https code 301
```

HAProxy supports SNI-based certificate selection across multiple certificates on a single IP via the `crt-list` directive, OCSP stapling, TLS 1.3, and mutual TLS (client certificate authentication) natively. The `alpn h2,http/1.1` parameter enables HTTP/2 negotiation without additional configuration blocks.
Nginx SSL Termination
```
server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    ssl_certificate     /etc/nginx/certs/solvethenetwork.com.crt;
    ssl_certificate_key /etc/nginx/certs/solvethenetwork.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://app_servers;
    }
}
```

Nginx integrates smoothly with Let's Encrypt via Certbot and handles SSL termination very well. If you are already using Nginx as your web server, terminating TLS in the same process is operationally simple and avoids an extra network hop.
ACL-Based Routing
HAProxy's ACL system is one of its most powerful distinguishing features. You can route traffic based on virtually any layer 4–7 attribute, combine conditions with logical operators, and reuse ACLs across multiple routing rules.
```
frontend http_in
    bind *:80
    acl is_api       path_beg /api/
    acl is_static    path_end .jpg .png .css .js .woff2
    acl is_websocket hdr(Upgrade) -i websocket
    acl is_mobile    req.hdr(User-Agent) -i -m sub Mobile
    use_backend api_pool    if is_api
    use_backend static_pool if is_static
    use_backend ws_pool     if is_websocket
    use_backend mobile_pool if is_mobile
    default_backend app_servers
```

Nginx achieves similar routing via `location` blocks and `map` directives, which are powerful but less expressive for multi-condition logic. HAProxy ACLs can be negated, ANDed, and ORed inline without nesting block structures, giving you very fine-grained control in a flat, readable configuration.
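To make that boolean logic concrete, here is a hedged sketch reusing the ACLs above — the `is_internal` ACL, subnet, and backend names are illustrative:

```
    # Conditions on one line are implicitly ANDed; "!" negates; "or" combines.
    acl is_internal src 10.10.0.0/16
    use_backend admin_pool if is_api is_internal          # api AND internal
    http-request deny      if is_api !is_internal         # api AND NOT internal
    use_backend asset_pool if is_static or is_websocket   # static OR websocket
```

The same three rules in Nginx would require nested `location` blocks plus `map` or `if` constructs, which is where HAProxy's flat syntax pays off.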
Stick Tables and Session Persistence
Session persistence ensures that requests from the same client always reach the same backend — critical for applications that store session state in process memory rather than a shared external store.
HAProxy Stick Tables
HAProxy's stick tables are an in-memory key-value store that survive graceful reloads and can be synchronized across a Keepalived HA pair using the
peerssection. Beyond stickiness, stick tables can track concurrent connections, connection rate, and request rate per key — enabling stateful rate limiting without an external datastore:
```
backend app_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    stick-table type string len 64 size 100k expire 30m store conn_cur,conn_rate(10s),http_req_rate(10s)
    stick on req.cook(SERVERID)
    server app01 10.10.1.11:8080 check cookie app01
    server app02 10.10.1.12:8080 check cookie app02
    server app03 10.10.1.13:8080 check cookie app03
```

Nginx Session Persistence

Nginx open-source supports only IP hash for sticky sessions. Cookie-based stickiness — which survives NAT traversal and mobile IP changes — requires Nginx Plus's `sticky cookie` directive. This is a significant operational gap for applications where IP hash is insufficient due to shared egress IPs or mobile clients.
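One partial open-source workaround — assuming the application itself already issues a session cookie, hypothetically named `sessionid` here — is to hash on that cookie with the `hash` directive. This gives deterministic client-to-server mapping, but Nginx never inserts the cookie itself and the mapping carries no failover-aware state:

```
upstream app_servers {
    # Maps each value of the app-issued "sessionid" cookie to one server.
    # "consistent" (ketama hashing) limits remapping when the pool changes.
    hash $cookie_sessionid consistent;
    server 10.10.1.11:8080;
    server 10.10.1.12:8080;
}
```

If the application does not set such a cookie, this approach is unavailable and the gap versus HAProxy remains.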
Rate Limiting
HAProxy Rate Limiting with Stick Tables
```
frontend http_in
    bind *:80
    stick-table type ip size 200k expire 60s store http_req_rate(10s),conn_cur
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    http-request deny deny_status 429 if { sc_conn_cur(0) gt 50 }
    default_backend app_servers
```

Nginx Rate Limiting
```
http {
    limit_req_zone  $binary_remote_addr zone=api_limit:10m rate=100r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        location /api/ {
            limit_req zone=api_limit burst=50 nodelay;
            limit_req_status 429;
            limit_conn conn_limit 20;
            proxy_pass http://app_servers;
        }
    }
}
```

Both tools handle rate limiting well at the IP level. HAProxy's stick table approach is more flexible — you can track rate per header value, per cookie, per URL parameter, or per any extracted field, not just source IP. Nginx's `limit_req_zone` is simpler to configure and sufficient for the majority of API gateway rate limiting use cases.
WebSocket Support
HAProxy WebSocket
```
frontend ws_frontend
    bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
    acl is_websocket hdr(Upgrade) -i websocket
    use_backend ws_pool if is_websocket
    default_backend app_servers

backend ws_pool
    balance source
    option http-server-close
    timeout tunnel 3600s
    server ws01 10.10.3.31:8080 check
    server ws02 10.10.3.32:8080 check
```

Nginx WebSocket
```
location /ws/ {
    proxy_pass http://ws_servers;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
```

Both tools handle WebSocket connections reliably once configured. The critical settings in both cases are forwarding the `Upgrade` and `Connection` headers and setting a sufficiently long tunnel or read timeout to keep long-lived connections open without premature termination.
Stats Page and Observability
HAProxy ships with a built-in stats page that provides real-time visibility into every frontend, backend, and individual server — queue depths, error counts, session rates, health check status, and more. Enabling it requires only a few lines:
```
listen stats
    bind *:8404
    stats enable
    stats uri /haproxy-stats
    stats auth infrarunbook-admin:Ch@ng3M3Pl3ase
    stats refresh 5s
    stats show-legends
    stats show-node sw-infrarunbook-01
    stats hide-version
```

Nginx does not have an equivalent built-in dashboard. The `ngx_http_stub_status_module` exposes only a handful of aggregate counters (active, reading, writing, and waiting connections plus lifetime accept, handled, and request totals). Per-upstream metrics require Nginx Plus's `ngx_http_api_module`, or external instrumentation such as nginx-prometheus-exporter, which against the open-source edition can only scrape those same stub_status counters. HAProxy also exposes a Unix domain socket runtime API that allows live manipulation of backends, weights, and drain states without reloading the process.
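Enabling that runtime socket takes one line in the global section. The socket path below is an assumption; the commands shown in comments are real runtime API commands:

```
global
    # "level admin" permits state-changing commands, not just reads
    stats socket /run/haproxy/admin.sock mode 660 level admin

# Example interactions (e.g. via socat):
#   echo "show stat" | socat stdio /run/haproxy/admin.sock
#   echo "set server app_servers/app02 state drain" | socat stdio /run/haproxy/admin.sock
#   echo "set weight app_servers/app01 50" | socat stdio /run/haproxy/admin.sock
```

Draining a server this way stops new sessions while letting existing ones finish — useful for rolling deploys without touching the config file.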
Connection Limits and Kernel Tuning
HAProxy Global and Defaults
```
global
    maxconn 100000
    nbthread 4
    cpu-map auto:1/1-4 0-3
    tune.ssl.default-dh-param 2048
    log 127.0.0.1 local0 info

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout queue   10s
    maxconn 50000
    option redispatch
    retries 3
```

Nginx Worker Tuning
```
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 16384;
    use epoll;
    multi_accept on;
}

http {
    keepalive_timeout 65;
    keepalive_requests 1000;
    sendfile on;
    tcp_nopush on;
}
```

HAProxy's `nbthread` and `cpu-map` directives allow precise CPU affinity without the multi-process model that Nginx uses. In HAProxy 2.x and later, all threads share the same memory for stick tables, counters, and queue state — eliminating the cross-process synchronization overhead that can affect Nginx under high connection churn.
Keepalived HA Pairing
In production, HAProxy is commonly deployed alongside Keepalived to provide a floating VIP for active/passive failover. The Keepalived script monitors the HAProxy PID and demotes the node if the process dies:
```
vrrp_script chk_haproxy {
    # pidof exits non-zero when no haproxy process exists; keepalived does
    # not run check scripts through a shell, so avoid $() substitutions here
    script "/usr/bin/pidof haproxy"
    interval 2
    weight 10
}

vrrp_instance VI_LB {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Kv3p@ssw0rd
    }
    virtual_ipaddress {
        10.10.0.100/24
    }
    track_script {
        chk_haproxy
    }
}
```

HAProxy's runtime socket allows zero-downtime reloads and dynamic backend management without process restarts. Combined with Keepalived VIP failover, you get both high availability at the network layer and live reconfiguration at the application layer — a pairing that Nginx requires more external tooling to replicate.
When to Choose HAProxy
- You need active health checks without a commercial license.
- Your application requires cookie-based session persistence.
- You need fine-grained ACL routing based on headers, cookies, or URL parameters.
- You want stateful rate limiting tracked per arbitrary extracted field.
- You need a rich real-time stats dashboard without additional tooling.
- You are load balancing TCP protocols — databases, MQTT, Redis, SMTP, or custom binary protocols.
- You want live backend weight adjustments and drain without a config reload.
When to Choose Nginx
- You need a single process to serve static files, cache responses, and proxy traffic.
- You want native integration with Let's Encrypt via Certbot.
- Your team already has deep Nginx expertise and consistent tooling across services.
- You need HTTP/2 server push, sub-requests, or Lua scripting via OpenResty.
- You are running Kubernetes and using the community Nginx Ingress Controller.
- Simple round-robin or IP hash distribution is sufficient for your workload.
Frequently Asked Questions
Q: Is HAProxy faster than Nginx for pure load balancing?
A: In most published benchmarks, HAProxy handles more concurrent connections and achieves lower latency for pure proxying workloads. This is expected given that every line of HAProxy's codebase is optimized for connection handling and traffic routing, whereas Nginx carries the overhead of its web server subsystem even when used only as a proxy. For extremely high throughput scenarios — hundreds of thousands of concurrent connections — HAProxy typically edges ahead. At moderate loads both tools perform well beyond what most applications need.
Q: Can HAProxy serve static files like Nginx?
A: No. HAProxy has no content-serving capability. It cannot read from a filesystem and return file contents to a client. It can only proxy requests to backend servers. If you need to serve static assets, you need a web server — Nginx, Apache, or a CDN — behind HAProxy or instead of it at that layer.
Q: Does Nginx support active health checks in the open-source version?
A: No. Active health checks — where Nginx proactively polls a health endpoint on a schedule, independent of real traffic — are an Nginx Plus feature. The open-source version uses only passive checks, meaning a backend is marked unhealthy only after real user requests fail. HAProxy provides full active health check support, including HTTP, TCP, and agent checks, in the free open-source edition.
Q: What is the difference between HAProxy stick tables and Nginx ip_hash?
A: HAProxy stick tables are a persistent, in-memory key-value store that can track sessions by cookie, header, URL parameter, or source IP — and that survive graceful reloads and can sync across an HA pair. Nginx's ip_hash simply hashes the client IP address to select a backend and provides no persistence across reloads or failover events. Cookie-based stickiness in Nginx requires the commercial Plus license, while HAProxy provides it natively for free.
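Cross-node stick table replication uses a `peers` section. In this sketch the second hostname and both IPs are illustrative, and each peer name must match the local node name HAProxy runs under (set via `-L` or defaulting to the hostname):

```
peers lb_pair
    peer sw-infrarunbook-01 10.10.0.11:10000
    peer sw-infrarunbook-02 10.10.0.12:10000

backend app_servers
    # "peers lb_pair" attaches this table to the replication group, so
    # session mappings survive a VIP failover to the standby node
    stick-table type string len 64 size 100k expire 30m peers lb_pair
```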
Q: Can I use HAProxy and Nginx together in the same stack?
A: Yes, this is a common and well-supported pattern. HAProxy handles layer 4/7 load balancing, health checks, and ACL routing at the front. Nginx runs on each backend node as an application server or static file server. The two tools complement each other well: HAProxy provides what Nginx lacks as a load balancer, and Nginx provides what HAProxy lacks as a content server.
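A minimal sketch of that combined pattern — the static-asset port (8081) and backend names are assumptions — routes dynamic traffic to the app processes and static requests to the Nginx instances running on the same nodes:

```
frontend http_in
    bind *:80
    acl is_static path_end .jpg .png .css .js
    use_backend nginx_static if is_static
    default_backend app_servers

backend nginx_static
    # Nginx on each backend node serves files; HAProxy only routes
    balance roundrobin
    server web01 10.10.1.11:8081 check
    server web02 10.10.1.12:8081 check
```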
Q: Does HAProxy support HTTP/2?
A: Yes. Since HAProxy 1.8, HTTP/2 is supported via the `alpn h2,http/1.1` parameter on the `bind` directive. HAProxy 2.0 added full HTTP/2 support on both the frontend (client-facing) and backend (server-facing) sides. You can terminate HTTP/2 from clients and proxy to backends over HTTP/1.1 or HTTP/2 independently.
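A hedged sketch of end-to-end HTTP/2 — the backend port and the `verify none` shortcut are for illustration only; production setups should verify backend certificates:

```
frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem alpn h2,http/1.1
    default_backend app_h2

backend app_h2
    # negotiate HTTP/2 with the backend over TLS via ALPN (HAProxy 2.0+)
    server app01 10.10.1.11:8443 ssl verify none alpn h2 check
```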
Q: How does HAProxy handle zero-downtime reloads?
A: HAProxy uses a socketpair-based reload mechanism. When you send a reload signal, the new process binds to the same ports and the old process transfers its open connections and stick table state via a Unix socket. In-flight requests complete on the old process while new connections go to the new process. This results in a reload with no dropped connections. Nginx achieves similar behavior via its master/worker upgrade process, but HAProxy's approach is more granular and supports stick table state transfer natively.
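That listener handoff hinges on the runtime socket being declared with `expose-fd listeners`. A sketch, with paths assumed:

```
global
    # expose-fd listeners lets the new process inherit the listening
    # sockets from the old one over this socket during a reload
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners

# Reload invocation (e.g. from a systemd ExecReload): -x retrieves the
# sockets from the old process, -sf tells it to finish and exit:
#   haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid \
#           -x /run/haproxy/admin.sock -sf $(cat /run/haproxy.pid)
```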
Q: Which tool is better for Kubernetes ingress?
A: Nginx is the more common choice for Kubernetes ingress, primarily because the community Nginx Ingress Controller has a large ecosystem, extensive documentation, and broad support across managed Kubernetes platforms. HAProxy also has a Kubernetes Ingress Controller (haproxy-ingress) that offers superior ACL flexibility and active health checks, but the Nginx controller has more third-party integrations and a larger community. For standard workloads, either works. For complex routing requirements or when active health checks are critical, the HAProxy ingress controller is worth evaluating.
Q: How do I secure HAProxy's stats page in production?
A: Bind the stats listener to a management interface IP rather than a public-facing address, require HTTP Basic authentication with a strong password, and restrict access by source IP using ACLs. You can also run the stats page over HTTPS by adding an SSL certificate to the stats bind directive. Consider placing the stats port behind a firewall rule that allows access only from your monitoring server's IP (for example, 10.10.0.50 in your RFC 1918 management network).
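Putting those recommendations together — the management IP, monitoring-host address, and certificate path below are assumptions for your environment:

```
listen stats
    # bind only on the management interface, over TLS
    bind 10.10.0.10:8404 ssl crt /etc/haproxy/certs/solvethenetwork.com.pem
    stats enable
    stats uri /haproxy-stats
    stats auth infrarunbook-admin:Ch@ng3M3Pl3ase
    # allow only the monitoring server's IP
    acl monitoring_src src 10.10.0.50
    http-request deny unless monitoring_src
```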
Q: Can Nginx do cookie-based session persistence without Plus?
A: Not natively. Nginx open-source does not have a built-in cookie-based sticky session mechanism. The `sticky cookie` directive is exclusive to Nginx Plus. Some third-party modules (such as `nginx-sticky-module-ng`) attempt to add this capability, but they are not maintained by Nginx and introduce build and compatibility overhead. If cookie-based stickiness is a requirement and you do not have an Nginx Plus license, HAProxy is the better-supported and more reliable choice.
Q: Which tool is better for TCP (non-HTTP) load balancing?
A: HAProxy. It was designed from the ground up to proxy arbitrary TCP streams and has a dedicated `mode tcp` that provides full control over connection handling, health checks, and session persistence for any TCP protocol — including PostgreSQL, MySQL, Redis, MQTT, and custom binary protocols. Nginx's `stream` module provides TCP/UDP proxying, but it is less feature-rich than HAProxy's TCP mode and lacks HAProxy's granular health check and ACL capabilities at the transport layer.
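A hedged sketch of TCP-mode balancing for a Redis pool — the addresses are assumptions, and real deployments often extend the check to verify the master role rather than just liveness:

```
listen redis_pool
    mode tcp
    bind *:6379
    balance leastconn
    option tcp-check
    # verify the node answers PING before routing traffic to it
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server redis01 10.10.4.41:6379 check inter 2s
    server redis02 10.10.4.42:6379 check inter 2s
```

The `tcp-check send`/`expect` pair is what distinguishes this from a bare TCP connect check: HAProxy speaks enough of the protocol to confirm the service is actually answering.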
Q: What logging options does HAProxy provide compared to Nginx?
A: HAProxy logs to syslog (local Unix socket or remote UDP/TCP syslog server) and produces highly configurable log lines via the `log-format` directive. You can include backend name, server name, queue time, connect time, response time, termination state codes, and any captured header or cookie value in every log line. Nginx logs to access log files in a configurable format and can also send to syslog. Both support structured logging, but HAProxy's termination state codes (for example, `CD` for a client abort during data transfer, `sD` for a server-side timeout) provide deeper diagnostic signal for connection-level troubleshooting without requiring full packet capture.
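Each `%` variable below is a real HAProxy log variable; which fields you include is a matter of taste, so treat this as one reasonable sketch rather than a canonical format:

```
frontend http_in
    # client addr:port, request time, frontend, backend/server, the
    # timing breakdown (%TR/%Tw/%Tc/%Tr/%Ta), status, bytes sent,
    # and the two-character termination state (%ts)
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %ts"
```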
