Nginx has become one of the most widely deployed web servers and reverse proxies in modern infrastructure. Originally released in 2004 by Igor Sysoev to address the C10K problem, Nginx today powers a significant portion of the world's busiest sites. Its event-driven, non-blocking architecture allows it to handle tens of thousands of concurrent connections with minimal memory overhead. Unlike thread-per-connection servers, Nginx uses a small number of worker processes — typically one per CPU core — each running an epoll-based event loop capable of multiplexing thousands of connections simultaneously.
This article walks through the ten most impactful real-world use cases that infrastructure engineers implement in production today, backed by annotated configuration examples you can adapt immediately for your own environment.
1. Reverse Proxy and Upstream Load Balancing
The most common production role for Nginx is acting as a reverse proxy that distributes traffic across a pool of application servers. Fronting your backend fleet with Nginx decouples external traffic handling from application logic and gives you fine-grained control over routing strategy, connection pooling, and passive health checking.
The following configuration demonstrates a least_conn upstream with passive health checking. Both backend nodes run on RFC 1918 addresses and the proxy maintains persistent keepalive connections to reduce TCP handshake overhead at high request rates.
# /etc/nginx/conf.d/upstream.conf
# Host: sw-infrarunbook-01 | Owner: infrarunbook-admin
upstream app_backend {
    least_conn;
    server 10.0.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 32;
}
server {
    listen 80;
    server_name solvethenetwork.com;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
        proxy_buffer_size 16k;
        proxy_buffers 4 32k;
    }
}
Key directives explained:
- least_conn — routes each new request to the backend with the fewest active connections. This outperforms round-robin for workloads with variable response latency, such as mixed fast reads and slow writes.
- keepalive 32 — each Nginx worker maintains up to 32 idle keepalive connections per upstream group. Combined with proxy_http_version 1.1 and an empty Connection header, this eliminates per-request TCP handshake overhead at high concurrency.
- max_fails / fail_timeout — passive health checking: if attempts to reach a backend fail (by default, connection errors and timeouts) three times within 30 seconds, Nginx marks it down and stops routing to it for the next 30 seconds before retrying.
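To build intuition for that passive health-checking behavior, here is a minimal Python sketch of the max_fails / fail_timeout state machine. The Backend class and its field names are illustrative inventions for this article, not Nginx internals:

```python
import time

class Backend:
    """Tracks passive health state for one upstream server,
    mirroring Nginx's max_fails / fail_timeout semantics."""
    def __init__(self, addr, max_fails=3, fail_timeout=30.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0            # failures within the current window
        self.window_start = 0.0   # when the failure window opened
        self.down_until = 0.0     # marked down until this timestamp

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        # Reset the counter if the previous window has expired
        if now - self.window_start > self.fail_timeout:
            self.fails = 0
            self.window_start = now
        self.fails += 1
        if self.fails >= self.max_fails:
            # Mark down: skip this backend for fail_timeout seconds
            self.down_until = now + self.fail_timeout
            self.fails = 0

    def record_success(self):
        self.fails = 0

b = Backend("10.0.1.10:8080")
for _ in range(3):
    b.record_failure(now=100.0)
print(b.available(now=101.0))  # False: marked down after 3 failures
print(b.available(now=131.0))  # True: fail_timeout elapsed, retried
```

A load balancer loop would simply skip any backend whose available() returns False and fall back to the remaining pool.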
2. SSL/TLS Termination and Hardening
Terminating TLS at Nginx offloads CPU-intensive cryptographic operations from application servers and centralizes certificate management. Modern TLS hardening is non-negotiable for any public-facing service and straightforward to implement in Nginx.
server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    ssl_certificate /etc/nginx/ssl/solvethenetwork.com.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/solvethenetwork.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 10.0.0.1 valid=300s;
    resolver_timeout 5s;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
}
Disabling TLSv1.0 and TLSv1.1 satisfies PCI-DSS 3.2+ requirements. Setting ssl_session_tickets off prevents session ticket key compromise from undermining forward secrecy — a known weakness when ticket keys are long-lived or never rotated. OCSP stapling (ssl_stapling on) eliminates the client-side OCSP round-trip during the TLS handshake, reducing first-connection latency by 50–200 ms. The ssl_prefer_server_ciphers off directive allows modern TLS 1.3 clients to select their preferred cipher rather than forcing the server's ordering.
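For comparison, the same policy can be expressed with Python's standard ssl module, for instance on a backend service that must match the edge's posture. This is a sketch under stated assumptions: the cipher string is an illustrative OpenSSL expression in the spirit of the list above, not a literal copy of it:

```python
import ssl

# Build a server-side TLS context mirroring the hardening above:
# TLS 1.2+ only, no session tickets, ECDHE AEAD ciphers.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# Disable session tickets to preserve forward secrecy
ctx.options |= ssl.OP_NO_TICKET

# Restrict TLS 1.2 ciphers to ECDHE AEAD suites; TLS 1.3 suites are
# managed separately by OpenSSL and are already modern-only
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

print(ctx.minimum_version.name)  # TLSv1_2
```

Loading the certificate pair (ctx.load_cert_chain) is all that remains before wrapping a listening socket.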
3. Rate Limiting to Prevent Abuse
Production APIs routinely face credential-stuffing bots, scrapers, and accidental runaway API clients. Nginx's ngx_http_limit_req_module implements a leaky-bucket algorithm that smooths burst traffic and rejects excess requests before they ever reach the application tier, protecting backend resources at the edge.
http {
    # Shared memory zone keyed by client IP — 10 MB stores ~160,000 IPs
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:5m rate=1r/s;

    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;

        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;
            limit_req_log_level warn;
            proxy_pass http://app_backend;
        }

        location /auth/login {
            limit_req zone=login_limit burst=5;
            limit_req_status 429;
            proxy_pass http://app_backend;
        }
    }
}
burst=20 nodelay allows a client to send up to 20 requests instantly before the rate limit applies, accommodating legitimate traffic bursts such as a page load triggering multiple parallel API calls. Returning HTTP 429 instead of the default 503 is the correct RFC 6585 response and is better understood by API consumer retry logic. The login endpoint uses a much tighter 1 request/second limit without nodelay, meaning burst requests queue rather than being forwarded immediately, adding latency that makes brute-force attacks impractical.
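The accounting behind limit_req can be sketched in a few lines of Python. LeakyBucket and its fields are illustrative names; the real module tracks excess in millirequests inside shared memory, but the accept/reject behavior for rate=10r/s with burst=20 nodelay is approximated here:

```python
class LeakyBucket:
    """Sketch of Nginx's limit_req accounting (rate + burst, nodelay).
    `excess` counts requests above the steady-state rate."""
    def __init__(self, rate, burst):
        self.rate = rate      # allowed requests per second
        self.burst = burst    # extra requests tolerated instantly
        self.excess = 0.0
        self.last = 0.0

    def allow(self, now):
        # Drain the bucket at `rate` since the previous request
        elapsed = now - self.last
        self.excess = max(0.0, self.excess - elapsed * self.rate)
        self.last = now
        if self.excess > self.burst:
            return False      # rejected (429 via limit_req_status)
        self.excess += 1
        return True           # forwarded immediately (nodelay)

bucket = LeakyBucket(rate=10, burst=20)
# A burst of 22 simultaneous requests: 21 pass, the 22nd is rejected
results = [bucket.allow(now=0.0) for _ in range(22)]
print(results.count(True))  # 21
```

Without nodelay, the rejected-or-forwarded decision stays the same, but requests inside the burst window are delayed to pace them at the configured rate instead of being forwarded at once.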
4. Static Content Serving with Gzip Compression
Nginx excels at serving static assets directly from disk, bypassing application server overhead entirely. Combined with gzip compression and aggressive browser caching, this pattern dramatically reduces bandwidth consumption and improves Time to First Byte for asset-heavy frontends.
http {
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_proxied any;
    gzip_vary on;
    # text/html is always compressed when gzip is on and must not be
    # listed in gzip_types (listing it triggers a duplicate MIME type warning)
    gzip_types
        application/javascript
        application/json
        application/xml
        image/svg+xml
        text/css
        text/plain
        font/woff2;

    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
        root /var/www/solvethenetwork/public;

        location ~* \.(js|css|png|jpg|svg|woff2|ico)$ {
            expires 1y;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
The immutable cache directive tells browsers they will never need to revalidate a cached asset during the max-age window — valid only when filenames include a content hash (standard with Webpack, Vite, and similar build pipelines). Disabling access logs for static files reduces I/O significantly on high-traffic servers. gzip_comp_level 5 is a practical sweet spot: levels above 6 yield diminishing compression gains at increasing CPU cost.
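To see why levels above 6 rarely pay off, you can compare output sizes across levels with Python's zlib, the DEFLATE implementation behind gzip. The sample payload is invented to resemble repetitive JS/CSS text:

```python
import zlib

# Repetitive text, the typical profile of JS/CSS/HTML assets
payload = (b"function render(){return document.querySelector('#app');}\n"
           * 500)

# Compare compressed size at fast, balanced, and maximum levels
for level in (1, 5, 9):
    size = len(zlib.compress(payload, level))
    print(level, size, f"{size / len(payload):.1%}")
```

On payloads like this the size gap between level 5 and level 9 is typically small, while CPU time per request keeps climbing, which is why 5 is a sensible default at the edge.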
5. Caching Proxy for Backend Response Caching
Nginx's built-in proxy cache serves as a full HTTP cache in front of slow or expensive backend services, dramatically reducing upstream load for read-heavy workloads such as product catalogs, news feeds, and API responses that change infrequently.
proxy_cache_path /var/cache/nginx/solvethenetwork
    levels=1:2
    keys_zone=app_cache:20m
    max_size=4g
    inactive=60m
    use_temp_path=off;

server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    location /api/products/ {
        proxy_cache app_cache;
        proxy_cache_key "$scheme$host$request_uri";
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://app_backend;
    }
}
proxy_cache_use_stale is the most operationally important directive in this block. When the upstream returns a 5xx error or times out, Nginx continues serving the last valid cached response rather than propagating the error to clients — effectively acting as a circuit breaker. proxy_cache_lock prevents cache stampedes: when a cached entry expires, only one request is forwarded upstream while concurrent requests for the same resource wait and are served the freshly cached response, avoiding a thundering herd. The X-Cache-Status header surfaces cache hits and misses in responses, making cache behavior transparent during debugging.
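The interplay of serve-stale and the cache lock can be sketched in Python. StaleServingCache is a simplified single-process illustration, not Nginx's shared-memory implementation:

```python
import threading

class StaleServingCache:
    """Sketch of proxy_cache_use_stale + proxy_cache_lock: serve the
    last good response when the upstream fails, and let only one
    caller refresh an expired entry while others reuse the stale copy."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}                 # key -> (value, fetched_at)
        self.locks = {}                 # key -> per-key cache lock
        self.guard = threading.Lock()

    def get(self, key, fetch_upstream, now):
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0], "HIT"
        with self.guard:
            lock = self.locks.setdefault(key, threading.Lock())
        if not lock.acquire(blocking=False):
            if entry:
                return entry[0], "STALE"  # refresh already in flight
            lock.acquire()                # no stale copy: must wait
        try:
            try:
                value = fetch_upstream(key)
            except Exception:
                if entry:                 # upstream down: circuit-break
                    return entry[0], "STALE"
                raise
            self.store[key] = (value, now)
            return value, "MISS"
        finally:
            lock.release()

cache = StaleServingCache(ttl=600)
cache.get("/api/products/1", lambda k: "v1", now=0)

# Upstream starts failing: the cached body is served instead of a 502
def broken(_):
    raise ConnectionError

print(cache.get("/api/products/1", broken, now=700))  # ('v1', 'STALE')
```

The three status strings intentionally match the HIT/MISS/STALE values that $upstream_cache_status exposes in the X-Cache-Status header.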
6. WebSocket Proxying
WebSocket connections require special proxy handling. The initial HTTP Upgrade handshake must be forwarded correctly, and the resulting long-lived TCP connection must not be severed by proxy read timeouts that assume short-lived request/response cycles.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    location /ws/ {
        proxy_pass http://10.0.1.20:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}
The map block dynamically sets the Connection header: requests carrying an Upgrade header receive Connection: upgrade to initiate the protocol switch, while standard HTTP requests receive Connection: close. Without this map, HTTP/1.1 keepalive connections and WebSocket upgrades would conflict. Extending proxy_read_timeout and proxy_send_timeout to 3600 seconds prevents Nginx from closing idle WebSocket connections that are simply waiting for server-push events.
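The map's logic reduces to a two-line function, sketched here in Python for clarity (in Nginx, an absent Upgrade header makes $http_upgrade the empty string, which is what the '' pattern matches):

```python
def connection_header(http_upgrade):
    """Mirror the Nginx map: a non-empty Upgrade header value yields
    'upgrade'; anything else (including an absent header) yields 'close'."""
    return "upgrade" if http_upgrade else "close"

print(connection_header("websocket"))  # upgrade
print(connection_header(""))           # close
```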
7. Geo Module for IP-Based Access Control
The ngx_http_geo_module assigns Nginx variables based on the client IP address, enabling geography-aware routing, RFC 1918 range detection, and internal/external traffic differentiation without an external WAF or application-layer logic.
geo $is_internal_ip {
    default 0;
    10.0.0.0/8 1;
    172.16.0.0/12 1;
    192.168.0.0/16 1;
}

server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    # Admin panel restricted to RFC 1918 ranges
    location /admin/ {
        if ($is_internal_ip = 0) {
            return 403;
        }
        proxy_pass http://app_backend;
    }

    # Metrics endpoint for internal monitoring only
    location /metrics {
        if ($is_internal_ip = 0) {
            return 404;
        }
        proxy_pass http://10.0.1.10:9100;
    }
}
Returning 404 rather than 403 for sensitive internal endpoints like /metrics avoids confirming the endpoint's existence to external probers. For advanced country-level routing, the ngx_http_geoip2_module (compiled against MaxMind GeoIP2 databases) enables access control and routing decisions based on country code, supporting GDPR-compliant data residency enforcement without any application code changes.
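If you need the same RFC 1918 classification outside Nginx, for example in a log-analysis script, Python's standard ipaddress module reproduces the geo block exactly; the function name here is illustrative:

```python
import ipaddress

# The same three ranges as the geo block above
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr):
    """Equivalent of $is_internal_ip: 1 for RFC 1918 sources, 0 otherwise."""
    ip = ipaddress.ip_address(addr)
    return int(any(ip in net for net in RFC1918))

print(is_internal("10.0.1.10"))    # 1
print(is_internal("203.0.113.7"))  # 0
```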
8. Security Headers to Harden HTTP Responses
Browser security policies enforced via HTTP response headers form a critical defense layer against XSS, clickjacking, MIME-type sniffing, and information leakage. These directives should be applied globally in the http block or in each relevant server block.
server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;
    server_tokens off;

    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    # X-XSS-Protection is deprecated in current browsers; it is kept here
    # only for legacy clients, and some hardening guides now set it to "0"
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none';" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
}
The always flag ensures these headers are appended even on error responses (4xx, 5xx). This matters because attackers can deliberately trigger error pages to probe for security policy gaps. server_tokens off suppresses the Nginx version string in the Server response header, removing a low-effort fingerprinting vector that automated scanners exploit to identify unpatched versions. The Content-Security-Policy header with object-src 'none' blocks Flash and other plugin-based injection vectors entirely.
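A small script can verify that a deployment actually emits these headers, for example as a CI smoke test. The REQUIRED set and function below are illustrative and check an already-captured response-header dict rather than making a live request:

```python
# Headers (and exact values) we expect every response to carry;
# this subset is an example, extend it to match your policy
REQUIRED = {
    "X-Frame-Options": "SAMEORIGIN",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def missing_headers(response_headers):
    """Return hardening headers absent or mismatched in a response.
    Header-name comparison is case-insensitive, as in HTTP."""
    got = {k.lower(): v for k, v in response_headers.items()}
    return [name for name, want in REQUIRED.items()
            if got.get(name.lower()) != want]

print(missing_headers({"X-Frame-Options": "SAMEORIGIN"}))
# ['X-Content-Type-Options', 'Referrer-Policy']
```

Wiring this to a real endpoint only requires fetching the headers first (e.g. via urllib.request) and failing the pipeline when the returned list is non-empty.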
9. Custom JSON Log Formats for Structured Observability
The default Nginx combined log format is inadequate for production observability. Emitting structured JSON logs that capture upstream timing, cache status, and request identifiers enables direct ingestion into Elasticsearch, Loki, or Splunk without a parsing step.
log_format json_access escape=json
    '{'
        '"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"method":"$request_method",'
        '"uri":"$request_uri",'
        '"status":$status,'
        '"body_bytes_sent":$body_bytes_sent,'
        '"request_time":$request_time,'
        '"upstream_response_time":"$upstream_response_time",'
        '"upstream_addr":"$upstream_addr",'
        '"cache_status":"$upstream_cache_status",'
        '"http_referrer":"$http_referer",'
        '"http_user_agent":"$http_user_agent",'
        '"request_id":"$request_id"'
    '}';

access_log /var/log/nginx/solvethenetwork.access.log json_access buffer=32k flush=5s;
error_log /var/log/nginx/solvethenetwork.error.log warn;
The buffer=32k flush=5s parameters batch log writes to reduce write-syscall pressure at high request rates while guaranteeing logs reach disk within 5 seconds for near-real-time observability. The $request_id variable (available since Nginx 1.11.0) is a unique per-request identifier generated from 16 random bytes, rendered as 32 hexadecimal characters. Forward it to backends via proxy_set_header X-Request-ID $request_id to enable end-to-end distributed tracing across the full request path.
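Downstream consumers can parse each line with any JSON library, with no grok or regex step. The sample line below is fabricated to match the format above:

```python
import json

# One invented line in the json_access format defined above
line = ('{"time":"2024-05-01T12:00:00+00:00","remote_addr":"203.0.113.7",'
        '"method":"GET","uri":"/api/products/1","status":200,'
        '"body_bytes_sent":512,"request_time":0.043,'
        '"upstream_response_time":"0.041","upstream_addr":"10.0.1.10:8080",'
        '"cache_status":"MISS","http_referrer":"",'
        '"http_user_agent":"curl/8.0","request_id":"a3f1c9"}')

event = json.loads(line)
# Because status and request_time are emitted unquoted, they arrive
# as native numbers — no type-coercion step before indexing
print(event["status"], event["request_time"])  # 200 0.043
```

Note that escape=json is what makes this safe: user-controlled fields such as the User-Agent are escaped so they cannot break the JSON structure.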
10. Header Manipulation for Upstream Communication
Production deployments regularly need to inject, strip, or rewrite headers as requests transit the proxy layer — communicating client context to backends, hiding internal implementation details from clients, and meeting third-party integration requirements.
server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    location /api/ {
        # Forward client identity context to the backend
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;

        # Strip any client-supplied internal header before forwarding
        proxy_set_header X-Internal-Token "";

        # Remove technology disclosure headers from upstream responses
        proxy_hide_header X-Powered-By;
        proxy_hide_header X-Backend-Server;
        proxy_hide_header X-AspNet-Version;

        proxy_pass http://app_backend;
    }
}
Setting proxy_set_header X-Internal-Token "" (an empty value) explicitly removes any client-supplied header of that name before the request is forwarded, preventing clients from spoofing internal service authorization tokens. proxy_hide_header strips technology disclosure headers from upstream responses before they reach the client — a defense-in-depth measure that reduces the attack surface by preventing automated scanners from identifying backend technology stacks.
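The override-and-strip semantics can be mimicked in a few lines of Python. build_upstream_headers is an illustrative sketch of how proxy_set_header behaves; per the Nginx docs, a header field set to an empty value is simply not passed upstream:

```python
def build_upstream_headers(client_headers, overrides):
    """Sketch of proxy_set_header semantics: each override replaces the
    client-supplied value; an empty string removes the header entirely."""
    headers = {k.lower(): v for k, v in client_headers.items()}
    for name, value in overrides.items():
        key = name.lower()
        if value == "":
            headers.pop(key, None)   # header stripped before forwarding
        else:
            headers[key] = value
    return headers

client = {"X-Internal-Token": "spoofed", "Accept": "application/json"}
forwarded = build_upstream_headers(
    client, {"X-Internal-Token": "", "X-Real-IP": "203.0.113.7"})
print("x-internal-token" in forwarded)  # False
```

The key property is that the client's value never wins: whatever arrives in X-Internal-Token is discarded before the backend sees the request.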
Putting It All Together: A Production-Ready Configuration
In real deployments these patterns are layered into a single coherent configuration. Below is a representative production server block for sw-infrarunbook-01 running the solvethenetwork.com stack, combining TLS hardening, rate limiting, proxy caching, WebSocket support, and structured logging.
# /etc/nginx/sites-available/solvethenetwork.com.conf
# Host: sw-infrarunbook-01 | Owner: infrarunbook-admin
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream app_backend {
    least_conn;
    server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

limit_req_zone $binary_remote_addr zone=global_limit:20m rate=30r/s;
limit_req_zone $binary_remote_addr zone=login_limit:5m rate=1r/s;

proxy_cache_path /var/cache/nginx/solvethenetwork
    levels=1:2 keys_zone=app_cache:20m max_size=4g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name solvethenetwork.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name solvethenetwork.com;

    ssl_certificate /etc/nginx/ssl/solvethenetwork.com.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/solvethenetwork.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 10.0.0.1 valid=300s;
    server_tokens off;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # json_access must be defined at the http level (see section 9)
    access_log /var/log/nginx/solvethenetwork.access.log json_access buffer=32k flush=5s;
    error_log /var/log/nginx/solvethenetwork.error.log warn;

    root /var/www/solvethenetwork/public;

    location ~* \.(js|css|png|jpg|svg|woff2|ico)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    location /api/ {
        limit_req zone=global_limit burst=50 nodelay;
        limit_req_status 429;
        proxy_cache app_cache;
        proxy_cache_valid 200 5m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        # NB: add_header in a location suppresses inherited server-level
        # add_header directives; restate the security headers here if they
        # must also appear on /api/ responses
        add_header X-Cache-Status $upstream_cache_status;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Request-ID $request_id;
        proxy_hide_header X-Powered-By;
        proxy_pass http://app_backend;
    }

    location /auth/login {
        limit_req zone=login_limit burst=5;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }

    location /ws/ {
        proxy_pass http://10.0.1.20:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
Operational note: After every configuration change on sw-infrarunbook-01, validate with nginx -t before applying. Use nginx -s reload rather than a full service restart to preserve in-flight connections. In the solvethenetwork.com deployment pipeline, the reload is gated behind a syntax validation step in CI to prevent broken configurations from reaching production. Store your Nginx configs in version control and use configuration management (Ansible, Chef, or Puppet) to enforce consistency across multiple edge nodes.
