InfraRunBook

    Nginx CORS Error Troubleshooting

    Nginx
    Published: Apr 12, 2026
    Updated: Apr 13, 2026

    A practical guide to diagnosing and fixing Nginx CORS errors, covering missing headers, unhandled preflight OPTIONS requests, wildcard credential conflicts, header exposure issues, and Nginx...


    Symptoms

    You open the browser console and see something like this:

    Access to fetch at 'https://api.solvethenetwork.com/v1/data' from origin
    'https://app.solvethenetwork.com' has been blocked by CORS policy: No
    'Access-Control-Allow-Origin' header is present on the requested resource.

    Or the preflight variant:

    Access to XMLHttpRequest at 'https://api.solvethenetwork.com/v1/users' from origin
    'https://app.solvethenetwork.com' has been blocked by CORS policy: Response to
    preflight request doesn't pass access control check: It does not have HTTP ok status.

    The frontend breaks. Requests fail silently or throw network errors. Meanwhile, the backend logs show nothing wrong — the upstream returned a 200 and has no idea there's a problem. API calls work perfectly in Postman and curl but the browser refuses to complete them. That's the CORS trap, and it's caught almost every engineer at least once.

    CORS (Cross-Origin Resource Sharing) is enforced entirely by the browser. Your server can respond with a 200, but if the response is missing the right headers, the browser will block JavaScript from reading it. Nginx sits between your app and the world, and when it's misconfigured — or when it's actively interfering with headers your upstream already set — you'll hit these errors every time.


    Root Cause 1: Access-Control-Allow-Origin Is Missing

    This is the most common cause. The browser makes a cross-origin request and the response doesn't include the Access-Control-Allow-Origin header at all. No header, no access — the browser drops the response immediately.

    Why it happens: You've set up Nginx as a reverse proxy to an upstream backend running on 192.168.10.20:8080, but neither Nginx nor the upstream application adds the CORS header. If the upstream is an internal microservice that was only ever called from server-side code, it likely never needed CORS headers before. When you start calling it from a browser frontend hosted on a different origin, nobody has thought to add the headers yet.

    How to identify it: Check the actual response headers using curl from sw-infrarunbook-01:

    curl -sI -X GET https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com"

    If the response looks like this with no Access-Control-Allow-Origin in sight, that's your problem:

    HTTP/2 200
    server: nginx
    content-type: application/json
    content-length: 348
    date: Sat, 12 Apr 2026 10:45:22 GMT
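
    This check is easy to script. A minimal sketch (the check_cors_header name is ours, not a standard tool) that reads captured response headers on stdin and reports whether the header is present:

```shell
# Sketch: reads raw response headers on stdin and reports whether an
# Access-Control-Allow-Origin header is present (case-insensitive).
check_cors_header() {
  if grep -qi '^access-control-allow-origin:'; then
    echo "CORS header present"
  else
    echo "CORS header MISSING"
  fi
}

# In practice you would pipe curl into it:
#   curl -sI https://api.solvethenetwork.com/v1/data \
#     -H "Origin: https://app.solvethenetwork.com" | check_cors_header
printf 'HTTP/2 200\nserver: nginx\ncontent-type: application/json\n' | check_cors_header
# prints: CORS header MISSING
```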

    How to fix it: Add the CORS header in your Nginx server block. A minimal fix looks like this:

    server {
        listen 443 ssl;
        server_name api.solvethenetwork.com;
    
        location /v1/ {
            add_header 'Access-Control-Allow-Origin' 'https://app.solvethenetwork.com' always;
            proxy_pass http://192.168.10.20:8080;
        }
    }

    The always parameter is critical. Without it, Nginx only adds the header to a fixed list of success and redirect status codes (200, 201, 204, 206, 301, 302, 303, 304, 307, 308). Error responses — 4xx, 5xx — won't carry the CORS header, and the browser will present a confusing "blocked by CORS policy" error instead of the actual API error message underneath. In my experience this causes hours of wasted debugging because the developer thinks they have a CORS misconfiguration when they actually have a 401 or a 500 that never surfaced.


    Root Cause 2: Preflight OPTIONS Request Not Handled

    Before sending a non-simple request — anything with a custom header like Authorization, or a method like PUT or DELETE — the browser fires a preflight. This is an HTTP OPTIONS request sent automatically. If Nginx doesn't handle it correctly, the preflight fails and the actual request never gets sent.

    Why it happens: Most backend frameworks and application routes don't register OPTIONS handlers by default. When the preflight hits Nginx and gets proxied to the upstream, the upstream returns a 404 or 405. Nginx passes that through faithfully, the browser sees a non-2xx preflight response, and everything stops right there.

    How to identify it: In the browser's Network tab, look for an OPTIONS request just before the failing one. If it's returning 404 or 405, that's the issue. You can also test it directly from the command line:

    curl -sI -X OPTIONS https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com" \
      -H "Access-Control-Request-Method: POST" \
      -H "Access-Control-Request-Headers: Authorization, Content-Type"

    A failing preflight response looks like this:

    HTTP/2 404
    server: nginx
    content-type: text/html
    date: Sat, 12 Apr 2026 10:47:10 GMT

    A healthy one should return 204 or 200 and include the appropriate CORS response headers.

    How to fix it: Handle OPTIONS explicitly in Nginx before the request ever reaches the upstream. Return a 204 directly from Nginx with the right CORS response headers:

    location /v1/ {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' 'https://app.solvethenetwork.com' always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, X-Requested-With' always;
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Length' 0;
            return 204;
        }
    
        add_header 'Access-Control-Allow-Origin' 'https://app.solvethenetwork.com' always;
        proxy_pass http://192.168.10.20:8080;
    }

    The Access-Control-Max-Age header tells the browser how long in seconds it can cache the preflight result. Setting it to 1728000 (twenty days) significantly reduces the number of preflight round-trips for frequent API consumers. Don't forget to also add the main CORS headers outside the if block — they need to be present on the actual response too, not just the preflight.
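
    As a quick sanity check of that value — 86400 seconds per day — the conversion is one line of shell arithmetic. Be aware that browsers also clamp Access-Control-Max-Age internally (Chromium to 7200 seconds, Firefox to 86400), so very large values buy less than they appear to:

```shell
# 1728000 seconds divided by 86400 seconds-per-day = days of preflight caching
echo $((1728000 / 86400))    # prints: 20
# And back again: 20 days expressed in seconds
echo $((20 * 24 * 60 * 60))  # prints: 1728000
```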


    Root Cause 3: Credentials With Wildcard Origin

    This one bites people who try to "just make CORS work" by setting Access-Control-Allow-Origin: * and then wonder why authenticated requests still fail.

    Why it happens: When JavaScript sends a request with credentials: 'include' (or withCredentials: true in XHR), the browser enforces a stricter rule: the response cannot use * as the allowed origin. It must reflect the exact origin of the requesting page. Additionally, the response must include Access-Control-Allow-Credentials: true. Miss either of these and the browser will block the response even if the status code is 200.

    How to identify it: The browser console is explicit about this one:

    Access to fetch at 'https://api.solvethenetwork.com/v1/profile' from origin
    'https://app.solvethenetwork.com' has been blocked by CORS policy: The value of
    the 'Access-Control-Allow-Origin' header in the response must not be the wildcard
    '*' when the request's credentials mode is 'include'.

    Confirm the headers with curl:

    curl -sI https://api.solvethenetwork.com/v1/profile \
      -H "Origin: https://app.solvethenetwork.com" \
      -H "Cookie: session=abc123"

    If you see Access-Control-Allow-Origin: * in the response, that's your bug.
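
    Both rules can be verified in one pass. A sketch (check_credentialed_cors is a hypothetical helper, not a standard tool) that reads captured response headers on stdin and flags the wildcard problem as well as a missing credentials header:

```shell
# Sketch: validate credentialed-CORS headers from captured response headers.
check_credentialed_cors() {
  headers=$(cat)
  origin=$(printf '%s\n' "$headers" | grep -i '^access-control-allow-origin:' | head -n1)
  case "$origin" in
    *'*'*) echo "FAIL: wildcard origin with credentials"; return 1 ;;
    '')    echo "FAIL: no Access-Control-Allow-Origin";   return 1 ;;
  esac
  if printf '%s\n' "$headers" | grep -qi '^access-control-allow-credentials: *true'; then
    echo "OK: exact origin + credentials allowed"
  else
    echo "FAIL: Access-Control-Allow-Credentials missing"
    return 1
  fi
}

printf 'access-control-allow-origin: *\n' | check_credentialed_cors || true
# prints: FAIL: wildcard origin with credentials
```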

    How to fix it: Use a map block in Nginx to maintain a whitelist of allowed origins and reflect only those back in the response:

    map $http_origin $cors_origin {
        default                              '';
        'https://app.solvethenetwork.com'    'https://app.solvethenetwork.com';
        'https://admin.solvethenetwork.com'  'https://admin.solvethenetwork.com';
    }
    
    server {
        listen 443 ssl;
        server_name api.solvethenetwork.com;
    
        location /v1/ {
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' $cors_origin always;
                add_header 'Access-Control-Allow-Credentials' 'true' always;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
                add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type' always;
                add_header 'Access-Control-Max-Age' 86400;
                add_header 'Content-Length' 0;
                return 204;
            }
    
            add_header 'Access-Control-Allow-Origin' $cors_origin always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            proxy_pass http://192.168.10.20:8080;
        }
    }

    Unknown origins get an empty string, which effectively blocks CORS for unrecognized callers. Never use a catch-all regex to populate $cors_origin if you're also enabling credentials — that recreates the wildcard problem in a less obvious way. I've seen regex patterns in map blocks that matched any origin and still had Access-Control-Allow-Credentials: true set. That's worse than using * because at least * is legible.
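
    If you do need a pattern rather than a fixed list — say, to admit every first-party subdomain — keep the regex fully anchored so it can't match attacker-controlled origins. A sketch (the subdomain pattern here is an assumption for illustration, not something from the config above):

```nginx
# Anchored regex: matches https://solvethenetwork.com and any single-level
# subdomain, and nothing else. Both the ^ and $ anchors are load-bearing —
# without them, https://solvethenetwork.com.evil.example would match too.
map $http_origin $cors_origin {
    default                                           '';
    '~^https://([a-z0-9-]+\.)?solvethenetwork\.com$'  $http_origin;
}
```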


    Root Cause 4: Wrong Headers Exposed — Or Not Exposed at All

    Your app sends a CORS request, the response comes back with a 200, but JavaScript can't read certain response headers it needs — like a custom X-Request-ID or X-Rate-Limit-Remaining. The call succeeds, but the frontend gets null back from response.headers.get('X-Request-ID') even though the header is sitting right there in the raw HTTP response.

    Why it happens: By default, browsers only expose a small set of safe response headers to JavaScript code: Cache-Control, Content-Language, Content-Length, Content-Type, Expires, Last-Modified, and Pragma. Any other response header your backend sends is invisible to the frontend unless the server explicitly lists it in Access-Control-Expose-Headers. The browser receives the headers fine — it just won't let JavaScript touch them.

    How to identify it: Confirm the header exists in the raw response:

    curl -sI https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com"

    You might see:

    HTTP/2 200
    x-request-id: f3a2d1c9-4b87-4e01-a562-df23c8e99101
    x-rate-limit-remaining: 87
    access-control-allow-origin: https://app.solvethenetwork.com
    content-type: application/json

    The headers are there, but without Access-Control-Expose-Headers, the browser hides them from JavaScript. If your frontend code is reading these headers and getting null, this is the cause.

    How to fix it: Add the Access-Control-Expose-Headers directive to your Nginx location block, listing every non-standard response header your frontend needs to read:

    location /v1/ {
        add_header 'Access-Control-Allow-Origin' 'https://app.solvethenetwork.com' always;
        add_header 'Access-Control-Expose-Headers' 'X-Request-ID, X-Rate-Limit-Remaining, X-Total-Count' always;
        proxy_pass http://192.168.10.20:8080;
    }

    This is a whitelist — you have to name each header explicitly. Keep the list tight. Exposing headers you don't need is unnecessary surface area, and adding them carelessly is how internal implementation details end up readable by third-party JavaScript in the browser.
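
    You can verify the fix from the command line too. A sketch (check_exposed is a hypothetical helper) that reads captured response headers on stdin and checks whether a given header name is listed in Access-Control-Expose-Headers:

```shell
# Sketch: is the header named in $1 listed in Access-Control-Expose-Headers?
check_exposed() {
  if grep -i '^access-control-expose-headers:' | grep -qi "$1"; then
    echo "$1 is exposed to JavaScript"
  else
    echo "$1 is NOT exposed"
  fi
}

printf 'access-control-expose-headers: X-Request-ID, X-Total-Count\n' \
  | check_exposed "X-Request-ID"
# prints: X-Request-ID is exposed to JavaScript
```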


    Root Cause 5: Nginx Stripping Headers Set by the Upstream

    This one is subtle and tends to take longer to find. Your backend application correctly sets all the CORS headers. Everything looks right in the upstream logs. But by the time the response leaves Nginx, the CORS headers are gone. The browser gets a clean response with no CORS headers at all.

    Why it happens: There are a few mechanisms by which Nginx strips upstream response headers. The first is an explicit proxy_hide_header directive — either set intentionally somewhere or inherited from a parent context you forgot about. The second, and more insidious, is the add_header inheritance behavior: in Nginx, child-level add_header directives completely override any add_header directives in a parent context. They don't append — they replace. So if you have security headers set at the server level and a location block that adds its own add_header directives, only the location block's headers are sent. Whatever the parent set is silently dropped.

    How to identify it: First, check what the upstream is returning by bypassing Nginx entirely. SSH into sw-infrarunbook-01 and hit the backend directly:

    curl -sI http://192.168.10.20:8080/v1/data \
      -H "Origin: https://app.solvethenetwork.com"

    If the upstream response includes the CORS headers:

    HTTP/1.1 200 OK
    access-control-allow-origin: https://app.solvethenetwork.com
    access-control-allow-credentials: true
    content-type: application/json

    But the same request through the Nginx proxy returns:

    HTTP/2 200
    server: nginx
    content-type: application/json

    Nginx is eating those headers. Next, scan your Nginx config for proxy_hide_header directives:

    grep -rn "proxy_hide_header" /etc/nginx/

    Then check for conflicting add_header entries across multiple config levels:

    grep -rn "add_header" /etc/nginx/ | grep -i "access-control"
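
    The two greps combine naturally into one audit function — a sketch, where CONF_DIR defaults to /etc/nginx but can be overridden for testing:

```shell
# Sketch: scan an nginx config tree for directives that commonly cause
# stripped or conflicting CORS headers.
CONF_DIR="${CONF_DIR:-/etc/nginx}"

audit_cors_conf() {
  echo "== proxy_hide_header directives =="
  grep -rn 'proxy_hide_header' "$CONF_DIR" 2>/dev/null || echo "(none)"
  echo "== CORS-related add_header directives =="
  grep -rni 'add_header.*access-control' "$CONF_DIR" 2>/dev/null || echo "(none)"
}
```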

    How to fix it: Remove any proxy_hide_header Access-Control-Allow-Origin; lines that shouldn't be there. For the inheritance problem, consolidate all add_header directives for a given location into a single include file rather than mixing server-level and location-level declarations:

    # /etc/nginx/conf.d/cors_headers.conf
    add_header 'Access-Control-Allow-Origin' $cors_origin always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Expose-Headers' 'X-Request-ID, X-Rate-Limit-Remaining' always;
    add_header 'X-Content-Type-Options' 'nosniff' always;
    add_header 'X-Frame-Options' 'DENY' always;
    location /v1/ {
        include /etc/nginx/conf.d/cors_headers.conf;
        proxy_pass http://192.168.10.20:8080;
    }

    Everything for that location is declared in one place, and the override behavior can't surprise you later. If instead you want Nginx to pass the upstream's CORS headers through unchanged without re-adding them, note that Nginx forwards custom upstream response headers by default — no extra directive is needed. The proxy_pass_header directive only re-enables the small set of headers Nginx hides on its own (Date, Server, X-Accel-* and the like), so for stripped CORS headers the reliable fix is deleting the offending proxy_hide_header lines, not adding proxy_pass_header. Be aware, too, that declaring any proxy_hide_header or proxy_pass_header inside a location stops inheritance of the parent level's hide list entirely, which can mask or reintroduce the problem in confusing ways:

    location /v1/ {
        # No add_header and no proxy_hide_header for CORS here:
        # the upstream's CORS headers pass through untouched.
        proxy_pass http://192.168.10.20:8080;
    }

    Root Cause 6: Duplicate CORS Headers Causing Browser Rejection

    This one confuses people because the browser console says CORS is blocked, but when you check the raw response headers, Access-Control-Allow-Origin is clearly present. The problem is that it's present twice.

    Why it happens: Both your upstream application and Nginx are setting Access-Control-Allow-Origin independently. The browser receives two values for the same header and fails the CORS check. The spec requires exactly one valid origin value; duplicates are combined into a comma-separated string that matches no origin. Two headers — even two identical ones — is a malformed response as far as the browser is concerned.

    How to identify it:

    curl -v https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com" 2>&1 | grep -i "access-control-allow-origin"

    If you see:

    < access-control-allow-origin: https://app.solvethenetwork.com
    < access-control-allow-origin: https://app.solvethenetwork.com

    That's a duplicate header and that's your bug.
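
    Counting is scriptable too. A sketch (count_acao is a hypothetical helper) that tallies Access-Control-Allow-Origin lines in curl -v output piped to stdin:

```shell
# Sketch: count Access-Control-Allow-Origin headers in `curl -v` output.
# Exactly one is healthy; two or more means a malformed CORS response.
count_acao() {
  n=$(grep -ci '^< *access-control-allow-origin:' || true)
  if [ "$n" -gt 1 ]; then
    echo "DUPLICATE: $n copies"
  else
    echo "OK: $n copy"
  fi
}

printf '< access-control-allow-origin: https://app.solvethenetwork.com\n< access-control-allow-origin: https://app.solvethenetwork.com\n' | count_acao
# prints: DUPLICATE: 2 copies
```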

    How to fix it: Pick one layer — either the upstream application or Nginx — and let only that layer set the CORS headers. If Nginx is the authority, strip the upstream's CORS headers before adding your own:

    location /v1/ {
        proxy_hide_header Access-Control-Allow-Origin;
        proxy_hide_header Access-Control-Allow-Credentials;
        proxy_hide_header Access-Control-Allow-Methods;
        proxy_hide_header Access-Control-Allow-Headers;
    
        add_header 'Access-Control-Allow-Origin' 'https://app.solvethenetwork.com' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        proxy_pass http://192.168.10.20:8080;
    }

    Root Cause 7: HTTP vs HTTPS Origin Mismatch

    The browser includes the scheme in the origin value. https://app.solvethenetwork.com and http://app.solvethenetwork.com are completely different origins. If your Nginx config is hardcoded to allow the HTTPS variant but the browser is sending a request from the HTTP version — maybe because of a missing redirect, a local dev environment, or a mixed-content scenario — the origin won't match and CORS will be denied.

    How to identify it: In the browser's Network tab, select the failing request and look at the Request Headers section for the Origin header value. Compare it character-for-character to what your Nginx config or map block allows. Then test both explicitly:

    # Test HTTPS origin
    curl -sI https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com"
    
    # Test HTTP origin
    curl -sI https://api.solvethenetwork.com/v1/data \
      -H "Origin: http://app.solvethenetwork.com"

    If only the HTTPS version returns a valid Access-Control-Allow-Origin header, you've confirmed the mismatch.

    How to fix it: Either add the HTTP variant to your allowed origins map (if you genuinely need to support it), or enforce HTTPS everywhere so the mismatch never occurs. Enforcing HTTPS is almost always the right call:

    server {
        listen 80;
        server_name app.solvethenetwork.com api.solvethenetwork.com;
        return 301 https://$host$request_uri;
    }

    Prevention

    CORS errors in production are almost always avoidable. The patterns that prevent them require discipline during initial setup rather than scrambling after the fact.

    Keep CORS configuration in one place. Use a dedicated include file — something like /etc/nginx/conf.d/cors.conf — and pull it into every location block that needs it. Don't scatter add_header directives across server blocks, location blocks, and parent contexts. The inheritance behavior in Nginx will eventually create a header gap you don't expect, usually right before a production deployment.

    Maintain an explicit origin allowlist. Don't use Access-Control-Allow-Origin: * for authenticated endpoints. Use a map block to define known origins and only reflect those back. When a new consumer needs access, adding it to the allowlist is an intentional, visible change in version control rather than an invisible side effect of a deploy.

    Test preflight in your deployment pipeline. Add a curl-based smoke test that fires an OPTIONS request after each deploy. Something like this, run from sw-infrarunbook-01 as infrarunbook-admin:

    #!/bin/sh
    RESPONSE=$(curl -sI -X OPTIONS https://api.solvethenetwork.com/v1/data \
      -H "Origin: https://app.solvethenetwork.com" \
      -H "Access-Control-Request-Method: POST" \
      -H "Access-Control-Request-Headers: Authorization")
    
    echo "$RESPONSE" | grep -qi "access-control-allow-origin" || {
      echo "CORS preflight check failed"
      exit 1
    }
    echo "CORS preflight check passed"

    Validate headers after every Nginx config change. Run nginx -t before reloading, but also run a real header check with curl. Syntax validity doesn't mean your headers are correct — it just means the config parsed. A quick curl against the endpoint takes thirty seconds and catches problems before users do.

    Decide ownership early and document it. Either Nginx handles CORS or the upstream application handles CORS — not both. Write that decision down somewhere your team will find it. When CORS breaks six months from now (and someone will change something), the first question will be "who's responsible for these headers?" and having a clear, documented answer saves significant debugging time.

    Monitor for CORS errors at the browser level. If you're collecting frontend error telemetry, log CORS failures explicitly. A sudden spike usually means someone changed an origin, deployed a new service, or modified Nginx config without updating the allowlist. Catching it in monitoring beats getting a bug report from a user who thinks the whole site is broken.



    Frequently Asked Questions

    Why do CORS errors only appear in the browser and not in curl?

    CORS is a browser security mechanism. Curl and Postman don't enforce it — they send and receive headers without restriction. Only browsers implement the CORS policy, which is why a request that succeeds in curl can still be blocked when made from JavaScript running in a browser tab.

    What does the 'always' parameter do in Nginx add_header?

    Without 'always', Nginx only appends the header to successful responses (2xx and 3xx). With 'always', the header is added to all responses including 4xx and 5xx errors. For CORS headers this is critical — if your API returns a 401 or 500, the browser needs Access-Control-Allow-Origin present on that error response or it will block JavaScript from reading the error body.

    Can I use Access-Control-Allow-Origin: * when sending cookies or Authorization headers?

    No. The browser spec explicitly forbids it. When a request includes credentials (cookies, HTTP auth, or client certificates), the Access-Control-Allow-Origin response header must reflect the exact requesting origin — not a wildcard. You must also include Access-Control-Allow-Credentials: true. Use a map block in Nginx to maintain a whitelist of allowed origins and reflect them dynamically.

    Why do my CORS headers disappear when I add a new add_header directive to a location block?

    Nginx's add_header inheritance works by replacement, not by appending. If a parent context (like the server block) sets headers with add_header and the location block also uses add_header, the location block's directives completely replace the parent's. The fix is to consolidate all add_header directives for a location into a single include file so there's no mixing between context levels.

    How do I allow multiple origins in Nginx without using a wildcard?

    Use a map block to create a whitelist of allowed origins. The map checks the incoming $http_origin against known values and sets a variable (e.g. $cors_origin) to either the matched origin or an empty string. Then use that variable in your add_header directive. Unknown origins get an empty string, which means no CORS header is sent for them.

    What is Access-Control-Expose-Headers and when do I need it?

    By default, browsers only expose a small set of standard response headers to JavaScript (like Content-Type and Cache-Control). If your API returns custom headers such as X-Request-ID or X-Rate-Limit-Remaining and your frontend code tries to read them, they'll return null unless you list them in Access-Control-Expose-Headers. Add this header to your Nginx location block with a comma-separated list of the custom headers your frontend needs to access.
