
    Top Real-World Use Cases of Nginx in Production

    Nginx
    Published: Apr 1, 2026
    Updated: Apr 1, 2026

    A deep-dive into how Nginx is deployed in production environments, covering reverse proxying, upstream load balancing, SSL hardening, rate limiting, caching, WebSocket proxying, and security headers with annotated config examples.

    Nginx has become one of the most widely deployed web servers and reverse proxies in modern infrastructure. Originally released in 2004 by Igor Sysoev to address the C10K problem, Nginx today powers a significant portion of the world's busiest sites. Its event-driven, non-blocking architecture allows it to handle tens of thousands of concurrent connections with minimal memory overhead. Unlike thread-per-connection servers, Nginx uses a small number of worker processes — typically one per CPU core — each running an epoll-based event loop capable of multiplexing thousands of connections simultaneously.

    This article walks through the ten most impactful real-world use cases that infrastructure engineers implement in production today, backed by annotated configuration examples you can adapt immediately for your own environment.


    1. Reverse Proxy and Upstream Load Balancing

    The most common production role for Nginx is acting as a reverse proxy that distributes traffic across a pool of application servers. Fronting your backend fleet with Nginx decouples external traffic handling from application logic and gives you fine-grained control over routing strategy, connection pooling, and passive health checking.

    The following configuration demonstrates a least_conn upstream with passive health checking. Both backend nodes run on RFC 1918 addresses and the proxy maintains persistent keepalive connections to reduce TCP handshake overhead at high request rates.

    # /etc/nginx/conf.d/upstream.conf
    # Host: sw-infrarunbook-01 | Owner: infrarunbook-admin
    
    upstream app_backend {
        least_conn;
        server 10.0.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:8080 weight=1 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }
    
    server {
        listen 80;
        server_name solvethenetwork.com;
    
        location / {
            proxy_pass          http://app_backend;
            proxy_http_version  1.1;
            proxy_set_header    Connection        "";
            proxy_set_header    Host              $host;
            proxy_set_header    X-Real-IP         $remote_addr;
            proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header    X-Forwarded-Proto $scheme;
            proxy_connect_timeout 5s;
            proxy_read_timeout    60s;
            proxy_buffer_size     16k;
            proxy_buffers         4 32k;
        }
    }

    Key directives explained:

    • least_conn — routes each new request to the backend with the fewest active connections. This outperforms round-robin for workloads with variable response latency, such as mixed fast reads and slow writes.
    • keepalive 32 — each Nginx worker maintains up to 32 idle keepalive connections per upstream group. Combined with proxy_http_version 1.1 and an empty Connection header, this eliminates per-request TCP overhead at high concurrency.
    • max_fails / fail_timeout — passive health checking: if a backend times out or returns a server error three times within 30 seconds, Nginx marks it down and stops routing to it for the next 30 seconds before retrying.
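
    Modeled in Python, the least_conn choice plus max_fails/fail_timeout accounting works roughly like this. This is a simplified sketch for intuition only; the Backend class and helper names are hypothetical, not Nginx internals:

```python
class Backend:
    def __init__(self, addr, weight=1, max_fails=3, fail_timeout=30.0):
        self.addr = addr
        self.weight = weight
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.active = 0          # in-flight requests
        self.fails = 0           # failures accumulated in the window
        self.down_until = 0.0    # marked-down deadline (epoch seconds)

    def available(self, now):
        return now >= self.down_until

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            # Passive health check: stop routing for fail_timeout seconds.
            self.down_until = now + self.fail_timeout
            self.fails = 0

def pick_least_conn(backends, now):
    """Route to the available backend with the fewest active
    connections, scaled by weight (rough model of least_conn)."""
    candidates = [b for b in backends if b.available(now)]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b.active / b.weight)

pool = [Backend("10.0.1.10:8080", weight=3), Backend("10.0.1.11:8080")]
pool[0].active = 2   # effective load 2/3
pool[1].active = 1   # effective load 1/1
print(pick_least_conn(pool, now=0).addr)  # → 10.0.1.10:8080
```

    Even with more in-flight requests, the weight=3 backend wins because its per-weight load is lower, which matches how weighted least_conn balances heterogeneous hardware.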

    2. SSL/TLS Termination and Hardening

    Terminating TLS at Nginx offloads CPU-intensive cryptographic operations from application servers and centralizes certificate management. Modern TLS hardening is non-negotiable for any public-facing service and straightforward to implement in Nginx.

    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        ssl_certificate     /etc/nginx/ssl/solvethenetwork.com.fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/solvethenetwork.com.key;
    
        ssl_protocols       TLSv1.2 TLSv1.3;
        ssl_ciphers         ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
        ssl_prefer_server_ciphers off;
    
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
    
        ssl_stapling        on;
        ssl_stapling_verify on;
        resolver            10.0.0.1 valid=300s;
        resolver_timeout    5s;
    
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    }

    Disabling TLSv1.0 and TLSv1.1 satisfies PCI-DSS 3.2+ requirements. Setting ssl_session_tickets off prevents a compromised session ticket key from undermining forward secrecy — a known weakness when ticket keys are long-lived or never rotated. OCSP stapling (ssl_stapling on) eliminates the client-side OCSP round-trip during the TLS handshake, typically shaving 50–200 ms off first-connection latency. The ssl_prefer_server_ciphers off directive lets the client choose its preferred cipher from the allowed list, which benefits clients without AES hardware acceleration that would rather negotiate ChaCha20-Poly1305.
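
    To sanity-check the protocol floor from the client side, Python's stdlib ssl module can build a context with the same policy. This is a quick verification sketch, not part of the Nginx configuration itself:

```python
import ssl

# Build a client context matching the server policy above:
# TLS 1.2 minimum, TLS 1.3 allowed.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# A handshake against the hardened server will never negotiate
# TLSv1.0/1.1; a client forced to an older maximum would fail
# to connect instead of silently downgrading.
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

    Pointing such a context at the server with ssl.SSLContext.wrap_socket is a cheap smoke test that the ssl_protocols directive took effect after a reload.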


    3. Rate Limiting to Prevent Abuse

    Production APIs routinely face credential-stuffing bots, scrapers, and accidental runaway API clients. Nginx's ngx_http_limit_req_module implements a leaky-bucket algorithm that smooths burst traffic and rejects excess requests before they ever reach the application tier, protecting backend resources at the edge.

    http {
        # Shared memory zone keyed by client IP — 10 MB stores ~160,000 IPs
        limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
        limit_req_zone $binary_remote_addr zone=login_limit:5m  rate=1r/s;
    
        server {
            listen 443 ssl http2;
            server_name solvethenetwork.com;
    
            location /api/ {
                limit_req           zone=api_limit burst=20 nodelay;
                limit_req_status    429;
                limit_req_log_level warn;
                proxy_pass          http://app_backend;
            }
    
            location /auth/login {
                limit_req        zone=login_limit burst=5;
                limit_req_status 429;
                proxy_pass       http://app_backend;
            }
        }
    }

    burst=20 nodelay allows a client to send up to 20 requests instantly before the rate limit applies, accommodating legitimate traffic bursts such as a page load triggering multiple parallel API calls. Returning HTTP 429 instead of the default 503 is the correct RFC 6585 response and is better understood by API consumer retry logic. The login endpoint uses a much tighter 1 request/second limit without nodelay, meaning burst requests queue rather than being immediately forwarded, adding latency that makes brute-force attacks impractical.
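
    The leaky-bucket semantics of limit_req (a burst-sized queue draining at the configured rate, rejection beyond it) can be modeled in a few lines of Python. This is a rough sketch for intuition; the names are hypothetical, and Nginx's real implementation uses fixed-point millisecond accounting:

```python
class LimitReq:
    """Rough model of ngx_http_limit_req_module's leaky bucket."""
    def __init__(self, rate, burst=0, nodelay=False):
        self.rate = rate          # allowed requests per second
        self.burst = burst        # extra requests the bucket can hold
        self.nodelay = nodelay
        self.excess = 0.0         # current "water level"
        self.last = None          # timestamp of the previous request

    def check(self, now):
        """Classify one request as 'pass', 'delay', or 'reject'."""
        if self.last is not None:
            # The bucket leaks at `rate` requests per second.
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        self.excess += 1
        if self.excess > self.burst + 1:
            self.excess -= 1      # rejected requests don't occupy the bucket
            return "reject"       # maps to limit_req_status (429 above)
        if self.excess > 1 and not self.nodelay:
            return "delay"        # queued until it conforms to the rate
        return "pass"

lim = LimitReq(rate=10, burst=20, nodelay=True)
results = [lim.check(0.0) for _ in range(25)]
print(results.count("pass"), results.count("reject"))  # → 21 4
```

    With nodelay, one conforming request plus the full burst of 20 pass instantly and the remaining 4 are rejected, mirroring what a client hammering /api/ would observe.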


    4. Static Content Serving with Gzip Compression

    Nginx excels at serving static assets directly from disk, bypassing application server overhead entirely. Combined with gzip compression and aggressive browser caching, this pattern dramatically reduces bandwidth consumption and improves Time to First Byte for asset-heavy frontends.

    http {
        gzip              on;
        gzip_comp_level   5;
        gzip_min_length   256;
        gzip_proxied      any;
        gzip_vary         on;
        gzip_types
            application/javascript
            application/json
            application/xml
            image/svg+xml
            text/css
            text/plain;
        # Note: text/html is always compressed when gzip is on and must not
        # be listed here (Nginx logs a "duplicate MIME type" warning).
        # WOFF2 fonts are already compressed, so gzipping them wastes CPU.
    
        server {
            listen 443 ssl http2;
            server_name solvethenetwork.com;
    
            root /var/www/solvethenetwork/public;
    
            location ~* \.(js|css|png|jpg|svg|woff2|ico)$ {
                expires     1y;
                add_header  Cache-Control "public, immutable";
                access_log  off;
            }
    
            location / {
                try_files $uri $uri/ /index.html;
            }
        }
    }

    The immutable cache directive tells browsers they will never need to revalidate a cached asset during the max-age window — valid only when filenames include a content hash (standard with Webpack, Vite, and similar build pipelines). Disabling access logs for static files reduces I/O significantly on high-traffic servers. gzip_comp_level 5 is a practical sweet spot: levels above 6 yield diminishing compression gains at increasing CPU cost.
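
    The compression tradeoff behind gzip_comp_level is easy to measure with Python's stdlib gzip module; exact sizes depend on the payload and zlib build, so this is illustrative only:

```python
import gzip

# A repetitive HTML-like payload, the best case for gzip.
payload = (b"<html><body>"
           + b"<div class='row'>hello</div>" * 2000
           + b"</body></html>")

for level in (1, 5, 9):
    size = len(gzip.compress(payload, compresslevel=level))
    print(f"level {level}: {size} bytes from {len(payload)}")

# Higher levels shrink output further but with diminishing returns,
# which is why gzip_comp_level 5 is a sensible production default.
```

    Running this on your own representative responses is a quick way to pick a level before burning CPU on every request edge-wide.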


    5. Caching Proxy for Backend Response Caching

    Nginx's built-in proxy cache serves as a full HTTP cache in front of slow or expensive backend services, dramatically reducing upstream load for read-heavy workloads such as product catalogs, news feeds, and API responses that change infrequently.

    proxy_cache_path /var/cache/nginx/solvethenetwork
        levels=1:2
        keys_zone=app_cache:20m
        max_size=4g
        inactive=60m
        use_temp_path=off;
    
    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        location /api/products/ {
            proxy_cache              app_cache;
            proxy_cache_key          "$scheme$host$request_uri";
            proxy_cache_valid        200 10m;
            proxy_cache_valid        404  1m;
            proxy_cache_use_stale    error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_lock         on;
            proxy_cache_lock_timeout 5s;
            add_header               X-Cache-Status $upstream_cache_status;
            proxy_pass               http://app_backend;
        }
    }

    proxy_cache_use_stale is the most operationally important directive in this block. When the upstream returns a 5xx error or times out, Nginx continues serving the last valid cached response rather than propagating the error to clients — effectively acting as a circuit breaker. proxy_cache_lock prevents cache stampedes: when a cached entry expires, only one request is forwarded upstream while all concurrent requests for the same resource wait and are served the freshly cached response, avoiding a thundering herd. The X-Cache-Status header surfaces cache hits and misses in responses, making cache behavior transparent during debugging.
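
    The serve-stale behavior amounts to the following pattern. This is a minimal sketch: the StaleCache class and fetch callables are hypothetical, and proxy_cache_lock is not modeled here:

```python
import time

class StaleCache:
    """Minimal sketch of proxy_cache_use_stale semantics."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}               # key -> (expires_at, value)

    def get(self, key, fetch, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry and now < entry[0]:
            return entry[1]           # fresh HIT
        try:
            value = fetch()           # expired or missing: go upstream
        except Exception:
            if entry:
                return entry[1]       # upstream failed: serve STALE
            raise                     # nothing cached: propagate the error
        self.store[key] = (now + self.ttl, value)
        return value

cache = StaleCache(ttl=600)

def upstream_ok():
    return {"products": [1, 2, 3]}

def upstream_down():
    raise ConnectionError("502 Bad Gateway")

print(cache.get("/api/products/", upstream_ok, now=0))      # fresh fetch
print(cache.get("/api/products/", upstream_down, now=700))  # stale served
```

    The second call arrives after the 600-second TTL with the upstream down, yet the client still receives the last good payload instead of a 502.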


    6. WebSocket Proxying

    WebSocket connections require special proxy handling. The initial HTTP Upgrade handshake must be forwarded correctly, and the resulting long-lived TCP connection must not be severed by proxy read timeouts that assume short-lived request/response cycles.

    map $http_upgrade $connection_upgrade {
        default  upgrade;
        ''       close;
    }
    
    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        location /ws/ {
            proxy_pass              http://10.0.1.20:3000;
            proxy_http_version      1.1;
            proxy_set_header        Upgrade    $http_upgrade;
            proxy_set_header        Connection $connection_upgrade;
            proxy_set_header        Host       $host;
            proxy_set_header        X-Real-IP  $remote_addr;
            proxy_read_timeout      3600s;
            proxy_send_timeout      3600s;
        }
    }

    The map block dynamically sets the Connection header: requests carrying an Upgrade header receive Connection: upgrade to initiate the protocol switch, while standard HTTP requests receive Connection: close. Without this map, HTTP/1.1 keepalive connections and WebSocket upgrades would conflict. Extending proxy_read_timeout and proxy_send_timeout to 3600 seconds prevents Nginx from closing idle WebSocket connections that are simply waiting for server-push events.
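
    The map's conditional is tiny but easy to get backwards; in Python terms it is just this (a hypothetical helper mirroring the map above):

```python
def connection_header(http_upgrade: str) -> str:
    """Mirror of the nginx map: $http_upgrade non-empty means the
    client asked for a protocol switch, so answer 'upgrade';
    an absent/empty Upgrade header falls through to 'close'."""
    return "upgrade" if http_upgrade else "close"

print(connection_header("websocket"))  # → upgrade
print(connection_header(""))           # → close
```

    The default arm of the map handles any non-empty Upgrade value, and the '' arm handles plain HTTP requests, exactly as this function does.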


    7. Geo Module for IP-Based Access Control

    The ngx_http_geo_module assigns Nginx variables based on client IP address, enabling geography-aware routing, RFC 1918 range detection, and internal/external traffic differentiation without an external WAF or application-layer logic.

    geo $is_internal_ip {
        default        0;
        10.0.0.0/8     1;
        172.16.0.0/12  1;
        192.168.0.0/16 1;
    }
    
    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        # Admin panel restricted to RFC 1918 ranges
        location /admin/ {
            if ($is_internal_ip = 0) {
                return 403;
            }
            proxy_pass http://app_backend;
        }
    
        # Metrics endpoint for internal monitoring only
        location /metrics {
            if ($is_internal_ip = 0) {
                return 404;
            }
            proxy_pass http://10.0.1.10:9100;
        }
    }

    Returning 404 rather than 403 for sensitive internal endpoints like /metrics avoids confirming the existence of the endpoint to external probers. For advanced country-level routing, the ngx_http_geoip2_module (a third-party module compiled against MaxMind GeoIP2 databases) enables access control and routing decisions based on country code, supporting GDPR-compliant data residency enforcement without any application code changes.
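
    The geo block's RFC 1918 matching can be cross-checked with Python's stdlib ipaddress module (the is_internal helper is illustrative):

```python
import ipaddress

# The same three ranges as the geo block above.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_internal("10.0.1.10"))    # → True
print(is_internal("203.0.113.7"))  # → False
```

    A script like this is handy for verifying edge cases such as 172.31.255.255 (inside the /12) versus 172.32.0.1 (outside it) before shipping the geo block.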


    8. Security Headers to Harden HTTP Responses

    Browser security policies enforced via HTTP response headers form a critical defense layer against XSS, clickjacking, MIME-type sniffing, and information leakage. These directives should be applied globally in the http block or per relevant server block.

    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        server_tokens off;
    
        add_header X-Frame-Options            "SAMEORIGIN"                              always;
        add_header X-Content-Type-Options     "nosniff"                                 always;
        add_header X-XSS-Protection           "1; mode=block"                           always;
        add_header Referrer-Policy            "strict-origin-when-cross-origin"         always;
        add_header Permissions-Policy         "geolocation=(), microphone=(), camera=()" always;
        add_header Content-Security-Policy    "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none';" always;
        add_header Strict-Transport-Security  "max-age=63072000; includeSubDomains; preload" always;
    }

    The always flag ensures these headers are appended even on error responses (4xx, 5xx). This matters because attackers can deliberately trigger error pages to probe security policy gaps. server_tokens off suppresses the Nginx version string from the Server response header, removing a low-effort fingerprinting vector that automated scanners exploit to identify unpatched versions. The Content-Security-Policy header with object-src 'none' blocks Flash and other plugin-based injection vectors entirely. Note that X-XSS-Protection is effectively deprecated: modern browsers have removed the XSS auditor it controlled, and current guidance is to rely on a strict Content-Security-Policy instead (many configurations now set the header to "0" or omit it).
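
    A quick audit that a response actually carries this policy can be scripted against a dict of headers. This is a sketch: in practice the headers would come from an HTTP client, and the REQUIRED set below is this article's list, not a formal standard:

```python
REQUIRED = {
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Content-Security-Policy",
    "Strict-Transport-Security",
}

def missing_headers(response_headers: dict) -> set:
    # Header names are case-insensitive, so normalize before comparing.
    present = {k.title() for k in response_headers}
    return {h for h in REQUIRED if h.title() not in present}

resp = {
    "x-frame-options": "SAMEORIGIN",
    "x-content-type-options": "nosniff",
    "strict-transport-security": "max-age=63072000",
}
print(sorted(missing_headers(resp)))
# → ['Content-Security-Policy', 'Referrer-Policy']
```

    Wiring a check like this into CI catches the common failure mode where an add_header in a child block silently discards the parent block's headers.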


    9. Custom JSON Log Formats for Structured Observability

    The default Nginx combined log format is inadequate for production observability. Emitting structured JSON logs that capture upstream timing, cache status, and request identifiers enables direct ingestion into Elasticsearch, Loki, or Splunk without a parsing step.

    log_format json_access escape=json
        '{'
            '"time":"$time_iso8601",'
            '"remote_addr":"$remote_addr",'
            '"method":"$request_method",'
            '"uri":"$request_uri",'
            '"status":$status,'
            '"body_bytes_sent":$body_bytes_sent,'
            '"request_time":$request_time,'
            '"upstream_response_time":"$upstream_response_time",'
            '"upstream_addr":"$upstream_addr",'
            '"cache_status":"$upstream_cache_status",'
            '"http_referrer":"$http_referer",'
            '"http_user_agent":"$http_user_agent",'
            '"request_id":"$request_id"'
        '}';
    
    access_log /var/log/nginx/solvethenetwork.access.log json_access buffer=32k flush=5s;
    error_log  /var/log/nginx/solvethenetwork.error.log  warn;

    The buffer=32k flush=5s parameters batch log writes to reduce disk I/O pressure at high request rates while guaranteeing logs reach disk within 5 seconds for near-real-time observability. The $request_id variable (available since Nginx 1.11.0) is a unique identifier generated per request from 16 random bytes, rendered as 32 hexadecimal characters. Forward it to backends via proxy_set_header X-Request-ID $request_id to enable end-to-end distributed tracing across the full request path.
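
    A payoff of the JSON format is that downstream tooling needs nothing beyond json.loads; the sample log line below is illustrative:

```python
import json

# One line as emitted by the json_access format above (sample values).
line = ('{"time":"2026-04-01T12:00:00+00:00","remote_addr":"203.0.113.7",'
        '"method":"GET","uri":"/api/products/","status":200,'
        '"body_bytes_sent":512,"request_time":0.043,'
        '"upstream_response_time":"0.041","cache_status":"HIT",'
        '"request_id":"7f6d8c2a9b1e4f0dbe5a3c9d1e2f4a6b"}')

entry = json.loads(line)
slow = entry["request_time"] > 0.5   # numeric field: no regex parsing needed
print(entry["status"], entry["cache_status"], slow)  # → 200 HIT False
```

    Because status, body_bytes_sent, and request_time are emitted unquoted, they arrive as native numbers, so threshold queries in Loki or Elasticsearch need no type coercion.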


    10. Header Manipulation for Upstream Communication

    Production deployments regularly need to inject, strip, or rewrite headers as requests transit the proxy layer — communicating client context to backends, hiding internal implementation details from clients, and meeting third-party integration requirements.

    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        location /api/ {
            # Forward client identity context to the backend
            proxy_set_header  X-Real-IP           $remote_addr;
            proxy_set_header  X-Forwarded-For     $proxy_add_x_forwarded_for;
            proxy_set_header  X-Forwarded-Proto   $scheme;
            proxy_set_header  X-Request-ID        $request_id;
    
            # Strip any client-supplied internal header before forwarding
            proxy_set_header  X-Internal-Token    "";
    
            # Remove technology disclosure headers from upstream responses
            proxy_hide_header X-Powered-By;
            proxy_hide_header X-Backend-Server;
            proxy_hide_header X-AspNet-Version;
    
            proxy_pass http://app_backend;
        }
    }

    Setting proxy_set_header X-Internal-Token "" (empty string) explicitly removes any client-supplied header with that name before the request is forwarded, preventing clients from spoofing internal service authorization tokens. proxy_hide_header strips technology disclosure headers from upstream responses before they reach the client — a defense-in-depth measure that reduces the attack surface by preventing automated scanners from identifying backend technology stacks.
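
    In application-code terms, the strip-then-forward idea looks like this (a hypothetical helper for illustration; Nginx does this natively with proxy_set_header and proxy_hide_header):

```python
STRIP_INBOUND = {"x-internal-token"}          # clients must not set these
STRIP_OUTBOUND = {"x-powered-by", "x-backend-server", "x-aspnet-version"}

def sanitize(headers: dict, blocked: set) -> dict:
    """Drop any header whose name (case-insensitive) is blocked."""
    return {k: v for k, v in headers.items() if k.lower() not in blocked}

inbound = {"Host": "solvethenetwork.com", "X-Internal-Token": "spoofed"}
print(sanitize(inbound, STRIP_INBOUND))
# → {'Host': 'solvethenetwork.com'}
```

    The same function applied with STRIP_OUTBOUND against upstream responses mirrors what proxy_hide_header does on the return path.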


    Putting It All Together: A Production-Ready Configuration

    In real deployments these patterns are layered into a single coherent configuration. Below is a representative production server block for sw-infrarunbook-01 running the solvethenetwork.com stack, combining TLS hardening, rate limiting, proxy caching, WebSocket support, and structured logging.

    # /etc/nginx/sites-available/solvethenetwork.com.conf
    # Host: sw-infrarunbook-01 | Owner: infrarunbook-admin
    
    map $http_upgrade $connection_upgrade {
        default  upgrade;
        ''       close;
    }
    
    upstream app_backend {
        least_conn;
        server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }
    
    limit_req_zone $binary_remote_addr zone=global_limit:20m rate=30r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:5m  rate=1r/s;
    
    proxy_cache_path /var/cache/nginx/solvethenetwork
        levels=1:2 keys_zone=app_cache:20m max_size=4g inactive=60m use_temp_path=off;
    
    server {
        listen 80;
        server_name solvethenetwork.com;
        return 301 https://$host$request_uri;
    }
    
    server {
        listen 443 ssl http2;
        server_name solvethenetwork.com;
    
        ssl_certificate     /etc/nginx/ssl/solvethenetwork.com.fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/solvethenetwork.com.key;
        ssl_protocols       TLSv1.2 TLSv1.3;
        ssl_ciphers         ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_stapling        on;
        ssl_stapling_verify on;
        resolver            10.0.0.1 valid=300s;
    
        server_tokens off;
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        add_header X-Frame-Options           "SAMEORIGIN"   always;
        add_header X-Content-Type-Options    "nosniff"      always;
        add_header Referrer-Policy           "strict-origin-when-cross-origin" always;
    
        access_log /var/log/nginx/solvethenetwork.access.log json_access buffer=32k flush=5s;
        error_log  /var/log/nginx/solvethenetwork.error.log warn;
    
        root /var/www/solvethenetwork/public;
    
        location ~* \.(js|css|png|jpg|svg|woff2|ico)$ {
            expires    1y;
            add_header Cache-Control "public, immutable";
            access_log off;
        }
    
        location /api/ {
            limit_req           zone=global_limit burst=50 nodelay;
            limit_req_status    429;
            proxy_cache         app_cache;
            proxy_cache_valid   200 5m;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_lock    on;
            add_header          X-Cache-Status $upstream_cache_status;
            proxy_set_header    X-Real-IP        $remote_addr;
            proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_set_header    X-Request-ID     $request_id;
            proxy_hide_header   X-Powered-By;
            proxy_pass          http://app_backend;
        }
    
        location /auth/login {
            limit_req        zone=login_limit burst=5;
            limit_req_status 429;
            proxy_pass       http://app_backend;
        }
    
        location /ws/ {
            proxy_pass         http://10.0.1.20:3000;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade    $http_upgrade;
            proxy_set_header   Connection $connection_upgrade;
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }
    
        location / {
            try_files $uri $uri/ /index.html;
        }
    }

    Operational note: After every configuration change on sw-infrarunbook-01, validate with nginx -t before applying. Use nginx -s reload rather than a full service restart to preserve in-flight connections. In the solvethenetwork.com deployment pipeline, the reload is gated behind a syntax validation step in CI to prevent broken configurations from reaching production. Store your Nginx configs in version control and use configuration management (Ansible, Chef, or Puppet) to enforce consistency across multiple edge nodes.

    Frequently Asked Questions

    What is the difference between Nginx as a web server and as a reverse proxy?

    When acting as a web server, Nginx serves files directly from disk in response to client requests. When acting as a reverse proxy, Nginx forwards client requests to one or more backend servers (such as Node.js, Gunicorn, or Tomcat processes), receives the response, and relays it back to the client. In production, Nginx almost always operates as both simultaneously — serving static assets directly and proxying dynamic requests to application backends.

    How do I safely reload an Nginx configuration without dropping connections?

    Run <code>nginx -t</code> to validate the configuration first. If the test passes, run <code>nginx -s reload</code> (or <code>systemctl reload nginx</code> on systemd systems). Nginx will fork new worker processes with the updated configuration while old workers finish processing in-flight requests before exiting. Active connections are never dropped. A full <code>systemctl restart nginx</code> should be reserved for situations where a reload is insufficient, such as changing the number of worker processes or binding to new ports.

    What is the difference between proxy_read_timeout and proxy_connect_timeout?

    <code>proxy_connect_timeout</code> sets the maximum time Nginx will wait to establish a TCP connection to the upstream server. <code>proxy_read_timeout</code> sets the maximum time Nginx will wait between successive data reads from the upstream after the connection is established. For most APIs, a short <code>proxy_connect_timeout</code> (2–5 seconds) is appropriate to detect dead backends quickly, while <code>proxy_read_timeout</code> should be set to the maximum expected response time of slow operations such as database queries or file exports.

    How does Nginx handle upstream failover with max_fails and fail_timeout?

    Nginx implements passive health checking via the <code>max_fails</code> and <code>fail_timeout</code> parameters on each server entry in an upstream block. If a backend returns the number of errors specified by <code>max_fails</code> within the <code>fail_timeout</code> window, Nginx marks it as unavailable and stops routing requests to it for the duration of <code>fail_timeout</code>. After that window expires, Nginx routes a live client request to the backend again; if it succeeds, the backend is restored to rotation (open-source Nginx sends no separate synthetic probe). For active health checking that does not consume a real client request, the <code>health_check</code> directive is available in Nginx Plus.

    Why should ssl_session_tickets be disabled in a hardened TLS configuration?

    TLS session tickets allow the server to offload session state to the client by encrypting it with a server-side ticket key. If that ticket key is compromised — or is never rotated — an attacker can decrypt all past sessions that used tickets, undermining forward secrecy. Disabling session tickets forces the use of the session cache (stored in server memory), where expiry and rotation are controlled server-side. If you must use session tickets, rotate ticket keys at least every 24 hours across all Nginx nodes.

    What is the purpose of the keepalive directive in an upstream block?

    The <code>keepalive</code> directive in an upstream block sets the maximum number of idle keepalive connections that each Nginx worker process maintains to each upstream group. Without it, Nginx opens a new TCP connection for every proxied request. With <code>keepalive 32</code>, workers reuse existing connections, eliminating the TCP and TLS handshake overhead for subsequent requests. This is particularly impactful when Nginx is proxying to upstream HTTPS services or when request rates are high. It requires <code>proxy_http_version 1.1</code> and clearing the <code>Connection</code> header to function correctly.

    How do I prevent a thundering herd against the upstream when a popular cache entry expires?

    Enable <code>proxy_cache_lock on</code>. This directive ensures that when a cached entry expires and multiple concurrent requests arrive for the same resource, only one request is forwarded to the upstream while the others wait. Once the upstream responds and the entry is re-cached, all waiting requests are served from the cache. Without this, a cache expiry under load can trigger a thundering herd where hundreds of requests simultaneously hit the upstream, potentially overloading it.

    Can Nginx replace a dedicated load balancer like HAProxy?

    For HTTP and HTTPS workloads, Nginx is a fully capable replacement for HAProxy and is often preferred because it combines load balancing, TLS termination, static file serving, and caching in a single process. HAProxy has an edge for TCP-level (Layer 4) load balancing, more granular health checking options, and more detailed per-backend statistics. In practice, the choice depends on requirements: if you need advanced TCP proxying, ACL-based routing, or real-time stats via a socket interface, HAProxy is strong. For HTTP-centric infrastructure with caching and SSL termination, Nginx is an excellent and operationally simpler choice.

    How do I debug which rate limit zone is blocking a request?

    Set <code>limit_req_log_level warn</code> (or <code>error</code>) for the relevant zone and monitor <code>/var/log/nginx/error.log</code>. When a request is delayed or rejected, Nginx logs the zone name and client IP. The <code>$limit_req_status</code> variable (available in open-source Nginx since 1.17.6) records whether a request was PASSED, DELAYED, or REJECTED; it can be added to a custom log format or, in non-production environments, exposed via <code>add_header X-RateLimit-Status $limit_req_status</code> for debugging. You can also enable <code>limit_req_dry_run on</code> (available since Nginx 1.17.1) to log zone behavior without actually blocking requests.

    What does server_tokens off actually hide in an Nginx response?

    <code>server_tokens off</code> removes the Nginx version number from two places: the <code>Server</code> HTTP response header (which would otherwise read something like <code>nginx/1.24.0</code>) and Nginx-generated error pages. It does not hide that Nginx is being used — the <code>Server: nginx</code> header still appears. Stock open-source Nginx has no directive to remove or rewrite the <code>Server</code> header entirely; to do so, use the <code>more_set_headers</code> directive from the third-party <code>headers-more-nginx-module</code>, or rebuild Nginx with the banner strings in <code>src/core/nginx.h</code> patched, to set <code>Server</code> to an empty string or a custom value.
