Symptoms
You've configured Nginx to serve static files, deployed your config, reloaded the daemon — and now every request for a static asset comes back with a 404. The browser shows Nginx's default error page, your CDN is pulling nothing but failures, and users are complaining about missing stylesheets, broken images, and JavaScript that won't load.
When you hit the URL directly with curl, you see something like this:
$ curl -I https://solvethenetwork.com/assets/app.css
HTTP/1.1 404 Not Found
Server: nginx/1.24.0
Date: Thu, 17 Apr 2026 10:22:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 153
Connection: keep-alive
The Nginx error log at `/var/log/nginx/error.log` will almost always give you the first real clue. Check it immediately before touching anything else:
2026/04/17 10:22:14 [error] 12345#0: *1 open() "/usr/share/nginx/html/assets/app.css" failed (2: No such file or directory), client: 10.0.1.50, server: solvethenetwork.com, request: "GET /assets/app.css HTTP/1.1", host: "solvethenetwork.com"
That path Nginx tried to open — `/usr/share/nginx/html/assets/app.css` — is the single most important piece of information in the entire debugging process. It tells you exactly where Nginx is looking for the file. Everything that follows is a matter of reconciling what Nginx expects versus what's actually on disk.
Root Cause 1: Root Path Is Wrong
This is the most common cause, and it's embarrassing how often it bites experienced engineers. The `root` directive in your server block or location block is pointing at a directory that either doesn't exist or doesn't contain the files you think it does.
Why does this happen? Usually because the deployment script put files somewhere different from where the Nginx config expects them. Or someone edited the config on one server without updating the others. Or the default Nginx package root (`/usr/share/nginx/html`) was left in place when the actual web root is somewhere like `/srv/www/solvethenetwork.com`. Nginx doesn't warn you about this — it just looks in the wrong place and returns 404.
To identify it, read the error log carefully. The path Nginx tried to open is constructed by appending the request URI to the `root` value. If the error log shows it tried `/usr/share/nginx/html/assets/app.css` but your files live at `/srv/www/solvethenetwork.com/assets/app.css`, the root is wrong. Verify what's on disk and what the config says:
infrarunbook-admin@sw-infrarunbook-01:~$ ls /srv/www/solvethenetwork.com/assets/
app.css app.js logo.png
infrarunbook-admin@sw-infrarunbook-01:~$ grep -rn "root" /etc/nginx/sites-enabled/solvethenetwork.com
/etc/nginx/sites-enabled/solvethenetwork.com:12: root /usr/share/nginx/html;
The fix is to update the `root` directive to point at the correct path:
server {
listen 80;
server_name solvethenetwork.com;
root /srv/www/solvethenetwork.com;
location / {
try_files $uri $uri/ =404;
}
}
Always test the config before reloading — syntax errors are caught here, even if path mismatches aren't:
infrarunbook-admin@sw-infrarunbook-01:~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
infrarunbook-admin@sw-infrarunbook-01:~$ sudo systemctl reload nginx
Root Cause 2: Alias vs Root Confusion
This one trips people up constantly, and I've seen it create hours of debugging that could have been resolved in minutes. The `root` and `alias` directives look similar but behave very differently when used inside location blocks. Getting them mixed up produces 404s that feel completely inexplicable until you understand the path-building difference.
With `root`, Nginx appends the full request URI to the root path. So if root is `/srv/www/solvethenetwork.com` and the URI is `/assets/app.css`, Nginx looks for `/srv/www/solvethenetwork.com/assets/app.css`. With `alias`, Nginx replaces the part of the URI matched by the location prefix with the alias path. So if the location is `/assets/` and the alias is `/srv/www/solvethenetwork.com/static/`, a request for `/assets/app.css` maps to `/srv/www/solvethenetwork.com/static/app.css`.
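The two path-building rules can be sketched as plain string operations. This is an illustration only, reusing the article's example paths; it is not anything Nginx itself exposes:

```shell
#!/bin/sh
# Illustration: mimic nginx's path construction with shell string operations.
# Paths and URIs are the article's examples.
uri="/assets/app.css"

# root: the full request URI is appended to the root value.
root="/srv/www/solvethenetwork.com"
echo "root:  ${root}${uri}"

# alias: the matched location prefix is stripped, then the alias is prepended.
location="/assets/"
alias_path="/srv/www/solvethenetwork.com/static/"
echo "alias: ${alias_path}${uri#"$location"}"
```

Running it prints `/srv/www/solvethenetwork.com/assets/app.css` for the root case and `/srv/www/solvethenetwork.com/static/app.css` for the alias case, matching the mappings described above.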
The bug typically looks like this in the config:
location /assets/ {
root /srv/www/solvethenetwork.com/static/;
}
That config makes Nginx look for `/srv/www/solvethenetwork.com/static/assets/app.css` — the `/assets/` from the URI gets appended to the root path. If your files actually live at `/srv/www/solvethenetwork.com/static/app.css` with no extra `assets/` subdirectory, every single request 404s. You catch this by comparing the path in the error log with your directory structure:
2026/04/17 11:03:41 [error] 12346#0: *2 open() "/srv/www/solvethenetwork.com/static/assets/app.css" failed (2: No such file or directory)
infrarunbook-admin@sw-infrarunbook-01:~$ ls /srv/www/solvethenetwork.com/static/
app.css app.js logo.png
The directory has `app.css` at the top level of `static/`, but Nginx is looking one level deeper inside `static/assets/`. The `root` directive is doubling up the location prefix. Use `alias` instead:
location /assets/ {
alias /srv/www/solvethenetwork.com/static/;
}
Now a request for `/assets/app.css` maps cleanly to `/srv/www/solvethenetwork.com/static/app.css`. One important gotcha with alias: make sure both the location pattern and the alias path either both end with a trailing slash or neither does. A trailing slash mismatch between the location and the alias path creates its own category of 404 surprises.
Root Cause 3: File Permissions
The file exists on disk. The path in the error log matches exactly where the file lives. And you're still getting failures — sometimes as a 403, or as a 404 when a try_files chain swallows the permission error. Permission issues produce a different errno in the error log than a missing file — look for errno 13 rather than errno 2:
2026/04/17 11:45:09 [error] 12347#0: *3 open() "/srv/www/solvethenetwork.com/assets/app.css" failed (13: Permission denied), client: 10.0.1.50, server: solvethenetwork.com, request: "GET /assets/app.css HTTP/1.1", host: "solvethenetwork.com"
errno 13 is Permission denied. Nginx's worker process runs as a specific system user — typically `www-data` on Debian and Ubuntu, or `nginx` on RHEL and CentOS. That user needs read permission on every file it serves, plus execute (traverse) permission on every directory in the path leading to that file. A directory that's mode 700 stops Nginx cold even if the file inside is readable.
Diagnose it by checking ownership and permissions directly:
infrarunbook-admin@sw-infrarunbook-01:~$ ls -la /srv/www/solvethenetwork.com/assets/
total 24
drwx------ 2 infrarunbook-admin infrarunbook-admin 4096 Apr 17 09:00 .
drwxr-xr-x 5 infrarunbook-admin infrarunbook-admin 4096 Apr 17 09:00 ..
-rw------- 1 infrarunbook-admin infrarunbook-admin 8192 Apr 17 09:00 app.css
The directory is mode `700` and the file is mode `600` — readable only by the owner. The `www-data` user can't get in. Verify which user Nginx workers actually run as before fixing anything:
infrarunbook-admin@sw-infrarunbook-01:~$ grep "^user" /etc/nginx/nginx.conf
user www-data;
infrarunbook-admin@sw-infrarunbook-01:~$ ps aux | grep "nginx: worker"
www-data 12348 0.0 0.1 55672 2312 ? S 09:00 0:00 nginx: worker process
Fix the permissions. Web root directories need the execute bit so Nginx can traverse them, and files need the read bit set:
infrarunbook-admin@sw-infrarunbook-01:~$ sudo find /srv/www/solvethenetwork.com -type d -exec chmod 755 {} \;
infrarunbook-admin@sw-infrarunbook-01:~$ sudo find /srv/www/solvethenetwork.com -type f -exec chmod 644 {} \;
In my experience, this issue comes up most often right after a deployment where files are rsync'd or extracted from a tarball that was created by a restricted user. The archive preserves the original restrictive permissions. Make permission normalization an explicit step in your deployment process, not an afterthought you remember when things break at 2am.
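As a safe way to rehearse that normalization step, here is a self-contained sketch against a throwaway directory; nothing touches a real web root. Note that `stat -c` is the GNU coreutils form, so the flag differs on BSD-derived systems:

```shell
#!/bin/sh
# Recreate the post-tarball state (700 dir, 600 file), then normalize it
# the same way the runbook does. mktemp keeps it safe to run anywhere.
site=$(mktemp -d)
mkdir -p "$site/assets"
echo "body{}" > "$site/assets/app.css"
chmod 700 "$site/assets"             # directory: owner-only, blocks traversal
chmod 600 "$site/assets/app.css"     # file: owner-only read

# Normalize: directories need the execute (traverse) bit, files need read.
find "$site" -type d -exec chmod 755 {} \;
find "$site" -type f -exec chmod 644 {} \;

dperm=$(stat -c '%a' "$site/assets")
fperm=$(stat -c '%a' "$site/assets/app.css")
echo "dir=$dperm file=$fperm"
rm -rf "$site"
```

On a GNU/Linux system this prints `dir=755 file=644`, confirming the normalization took effect.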
Root Cause 4: try_files Misconfigured
The `try_files` directive is powerful and punishing when misconfigured. It instructs Nginx to try a sequence of paths and serve the first one that exists. Get it wrong and Nginx will 404 on files that are clearly present on disk, making the problem feel completely baffling.
The `$uri` variable in `try_files` always resolves relative to the active `root` or `alias` for the current block. This means a try_files misconfiguration often masks a root or alias problem — Nginx faithfully tries the path you told it to try, but that path is wrong. One common mistake is using try_files with a named location fallback that no longer works as intended:
location /assets/ {
root /srv/www/solvethenetwork.com;
try_files $uri @backend;
}
location @backend {
proxy_pass http://10.0.1.20:8080;
}
That config looks reasonable. But if someone removes the application backend later and changes `@backend` to `return 404`, static files that genuinely exist stop being served. The error log just shows the file couldn't be opened via the try_files chain, not that it fell through to the named location. Another variant that causes trouble is the directory fallback mixed with a PHP handler in a location block that has no PHP configuration:
# If there's no PHP fastcgi block here, this loops or errors on missing files
location /assets/ {
try_files $uri $uri/ /index.php$is_args$args;
}
For pure static file serving with no application backend, keep try_files explicit and minimal:
location /assets/ {
root /srv/www/solvethenetwork.com;
try_files $uri =404;
}
This tries the file at the resolved path. If it doesn't exist, it returns 404 directly without any further indirection. To debug which try_files chain is actually executing, use `nginx -T` to dump the full resolved config with all includes expanded:
infrarunbook-admin@sw-infrarunbook-01:~$ sudo nginx -T 2>/dev/null | grep -A 15 "location /assets"
Then trace which location block actually matches your request URI. If you have overlapping location blocks, the more specific one wins — an exact match `= /assets/app.css` beats a prefix match `/assets/`, which beats a general `/`. Understanding location block priority is essential when try_files behavior seems inconsistent.
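As a hypothetical illustration of that ordering (the URI reuses the article's example; the root paths are made up for contrast):

```nginx
# For GET /assets/app.css, these overlapping blocks resolve in this order:
location = /assets/app.css { root /srv/www/override; }   # exact match: wins
location /assets/          { root /srv/www/site; }       # longest prefix: next
location /                 { root /srv/www/fallback; }   # catch-all: last
```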
Root Cause 5: Case Sensitivity on Linux
Linux filesystems — ext4, XFS, Btrfs — are case-sensitive by default. Windows and macOS usually aren't. This creates a persistent class of bugs when web assets are developed on a Mac or Windows machine and deployed to a Linux server. A file saved as `Logo.PNG` won't be found when the HTML references `logo.png`, and the error log gives you no special indication that case is the culprit.
The error just looks like a missing file:
2026/04/17 13:05:17 [error] 12350#0: *5 open() "/srv/www/solvethenetwork.com/assets/Logo.PNG" failed (2: No such file or directory)
But when you look at what's actually on disk:
infrarunbook-admin@sw-infrarunbook-01:~$ ls /srv/www/solvethenetwork.com/assets/
app.css app.js logo.png
There it is. `logo.png` exists on disk but the request came in for `Logo.PNG`. The filesystem treats them as different names. This surfaces most often when migrating from shared Windows hosting — which runs on case-insensitive filesystems — to a proper Linux server, or when a frontend developer on macOS pushes assets that work perfectly locally but break the moment they hit staging.
Find the mismatched filenames on disk with uppercase extensions:
infrarunbook-admin@sw-infrarunbook-01:~$ find /srv/www/solvethenetwork.com/assets/ \( -name "*.PNG" -o -name "*.JPG" -o -name "*.CSS" -o -name "*.JS" \)
And search your HTML and CSS source files for references that don't match what's on disk:
infrarunbook-admin@sw-infrarunbook-01:~$ grep -ri "Logo.PNG" /srv/www/solvethenetwork.com/
./templates/index.html:47: <img src="/assets/Logo.PNG" alt="logo">
The fix is normalization. Either rename the files to match the references, or update the references to match the files. Lowercase everything is my strong preference — it's unambiguous, works on every filesystem, and is trivial to enforce. Add a CI lint step that rejects filenames with uppercase extensions before they ever reach the server. That one pipeline check eliminates this entire class of bug permanently.
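One possible shape for that lint step, sketched against a throwaway directory so it is self-contained; a real version would walk your asset tree and exit nonzero when anything matches:

```shell
#!/bin/sh
# Sketch of a CI lint step: flag any filename whose extension contains an
# uppercase letter. A temp directory stands in for the real asset tree.
assets=$(mktemp -d)
touch "$assets/logo.png" "$assets/app.css" "$assets/Logo.PNG"

# Match a dot followed by an extension containing at least one A-Z character.
bad=$(find "$assets" -type f | grep -E '\.[^./]*[A-Z][^./]*$')
if [ -n "$bad" ]; then
    echo "uppercase extension: $(basename "$bad")"
    # In a real pipeline: exit 1 here to fail the build.
fi
rm -rf "$assets"
```

Here only `Logo.PNG` is flagged; `logo.png` and `app.css` pass the check.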
Root Cause 6: SELinux or AppArmor Blocking Access
If you're running RHEL, Rocky Linux, AlmaLinux, or any distribution with SELinux in enforcing mode, filesystem permissions can look perfectly correct to every Unix tool you run — and Nginx can still be denied. SELinux operates as an independent second permission layer, and it trips up a lot of people who set up their web roots on distros without it and then migrate.
The error log shows errno 13 Permission denied, identical to a standard Unix permission problem. The difference surfaces when you check the Unix permissions and they look perfectly fine:
infrarunbook-admin@sw-infrarunbook-01:~$ ls -laZ /srv/www/solvethenetwork.com/assets/app.css
-rw-r--r--. 1 infrarunbook-admin infrarunbook-admin unconfined_u:object_r:user_home_t:s0 8192 Apr 17 09:00 app.css
The SELinux context is `user_home_t`. Nginx expects files it serves to have the `httpd_sys_content_t` context. Files created in home directories or moved from non-web locations inherit the wrong context. Check the audit log for denials directly:
infrarunbook-admin@sw-infrarunbook-01:~$ sudo ausearch -c nginx -m avc --start recent
type=AVC msg=audit(1713351917.123:456): avc: denied { read } for pid=12350 comm="nginx" name="app.css" dev="sda1" ino=123456 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0
That AVC denial is your confirmation. Relabel the web root with the correct SELinux context and restore it:
infrarunbook-admin@sw-infrarunbook-01:~$ sudo semanage fcontext -a -t httpd_sys_content_t "/srv/www/solvethenetwork.com(/.*)?"
infrarunbook-admin@sw-infrarunbook-01:~$ sudo restorecon -Rv /srv/www/solvethenetwork.com/
restorecon reset /srv/www/solvethenetwork.com/assets/app.css context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0
On Ubuntu systems with AppArmor, the symptom is similar but the diagnostic path is different. Check `/var/log/syslog` for lines containing `apparmor="DENIED"` and an operation referencing nginx. Adjust the Nginx AppArmor profile at `/etc/apparmor.d/usr.sbin.nginx` to allow read access to your web root path, then reload the profile with `apparmor_parser -r`.
Root Cause 7: Symlinks Not Resolved Correctly
Some deployment patterns use symlinks to manage versioned releases — the web root is a symlink pointing to the current release directory, and swapping releases means updating a single symlink. This pattern is clean and fast, but it creates a category of 404s that doesn't exist with regular directories.
When the symlink target has incorrect permissions, or when Nginx's security configuration restricts symlink following, you get 404s even though the files exist under the symlink target. Check whether the web root is a symlink and verify the target is accessible:
infrarunbook-admin@sw-infrarunbook-01:~$ ls -la /srv/www/solvethenetwork.com
lrwxrwxrwx 1 infrarunbook-admin infrarunbook-admin 42 Apr 17 09:00 /srv/www/solvethenetwork.com -> /srv/releases/solvethenetwork.com-v2.4.1
infrarunbook-admin@sw-infrarunbook-01:~$ ls -la /srv/releases/solvethenetwork.com-v2.4.1/assets/
total 24
drwxr-xr-x 2 infrarunbook-admin infrarunbook-admin 4096 Apr 17 09:00 .
-rw-r--r-- 1 infrarunbook-admin infrarunbook-admin 8192 Apr 17 09:00 app.css
If the symlink resolves correctly from the shell but Nginx still 404s, check whether `disable_symlinks` is enabled somewhere in your config:
infrarunbook-admin@sw-infrarunbook-01:~$ grep -rn "disable_symlinks" /etc/nginx/
/etc/nginx/nginx.conf:18: disable_symlinks on;
The `disable_symlinks on` directive instructs Nginx to return 404 for any path that involves a symlink, as a security measure against certain symlink attacks. Set it to `off` — the default — if you legitimately need symlink support, or use `if_not_owner` to allow symlinks only when the symlink owner matches the file owner. Don't leave it on unless you specifically understand the security implication you're protecting against.
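If you want symlink protection without breaking release-style deploys, the owner-match form is the usual compromise. A sketch, valid at http, server, or location level:

```nginx
# Allow a symlink only when the link and the file it points at share an
# owner. This permits deploy-managed "current release" links while still
# blocking symlink tricks planted by other users.
disable_symlinks if_not_owner;
```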
Prevention
Most of these failures are entirely preventable with a small amount of discipline applied consistently. A few practices that actually make a difference in production:
Add a post-deploy smoke test that curls a known static asset. `nginx -t` catches syntax errors but won't catch path mismatches — it doesn't verify that the paths you specified actually exist and are readable. A single curl check at the end of your deploy script catches this immediately:
curl -sf https://solvethenetwork.com/assets/app.css -o /dev/null || { echo "Static file check failed"; exit 1; }
Standardize your web root path across all environments. If staging uses `/srv/www/solvethenetwork.com` and production uses a different path, you will eventually deploy a config that works in one and silently fails in the other. Manage web root paths through configuration management — Ansible, Puppet, Chef — and use identical paths everywhere. The cost of divergence compounds over time.
Enforce lowercase filenames in your build pipeline. A lint step that rejects filenames with uppercase extensions or mixed-case paths catches the case sensitivity problem before it reaches any server. This is a one-time addition that permanently eliminates an entire class of bug.
Normalize permissions as part of every deployment, not as a recovery step. After extracting archives or rsyncing files, always run permission normalization before reloading Nginx. The cost is a second or two on the deploy script. Forgetting costs you a late-night incident.
Comment your alias directives. When you use `alias` instead of `root` in a location block, add a one-line comment explaining why — specifically what path transformation it's doing. The next engineer maintaining this config (which might be you, six months from now) will be grateful. Alias vs root confusion is much less likely when the intent is documented inline.
Monitor the error log for open() failures continuously. A grep on `open() failed` entries piped into a daily summary alert catches permission drift and misconfiguration before users report it at scale. Note that the path is a quoted field in the middle of the log line, not the last field, so extract it explicitly. This one-liner shows you the top failing paths:
grep "open() failed" /var/log/nginx/error.log | sed -n 's/.*open() "\([^"]*\)".*/\1/p' | sort | uniq -c | sort -rn | head -20
Static file 404s in Nginx are almost always one of these seven causes. Start with the error log every time — it tells you the exact path Nginx tried to open, which narrows the problem space immediately. From there it's systematic: does the path exist on disk, does Nginx have permission to read it, and does the config actually point where you think it does. Work through each cause methodically and you'll have it resolved in minutes rather than hours.
