InfraRunBook

    WAF False Positive Troubleshooting

    Security
    Published: Apr 16, 2026
    Updated: Apr 16, 2026

    A hands-on guide to diagnosing and fixing WAF false positives in ModSecurity and OWASP CRS — covering SQL injection rule triggers, file upload blocks, overly broad signatures, missing exceptions, and paranoia level tuning.


    Symptoms

    Your monitoring dashboard looks fine. The backend is healthy. But users are calling because they can't submit the contact form, or your API client is suddenly throwing 403s on requests that worked perfectly yesterday. You check the application logs and see nothing — no errors, no exceptions. Then you look at the WAF logs and there it is: a wall of blocked requests, each one legitimate.

    Common symptoms include: legitimate POST requests returning 403 or 400 responses with a generic block page; form submissions silently failing with no feedback to the user; file uploads rejected at the WAF before they reach your storage layer; API clients receiving unexpected blocks on previously working endpoints; and login flows broken for a subset of users because their browser sends an unusual header or payload encoding. In some cases the WAF returns a custom error page. In others the connection just drops. Either way, nothing in your application logs explains it because the request never reached your app.

    False positives are the tax you pay for running a WAF. ModSecurity with the OWASP Core Rule Set is powerful, but it's also opinionated. When it fires on a legitimate request, the user sees a generic block page and you're left reverse-engineering a rule ID at 2 AM. Here's how to work through the most common causes efficiently, without disabling protections you actually need.

    Reading the WAF Audit Log First

    Before diving into specific causes, establish your diagnostic baseline. On sw-infrarunbook-01, the ModSecurity audit log lives at /var/log/modsec_audit.log. Start here whenever a block is reported:

    sudo tail -f /var/log/modsec_audit.log

    A blocked request entry looks like this:

    [16/Apr/2026:09:14:32 +0000] [sw-infrarunbook-01/sid#7f4b2c] [rid#7f4a1d]
    [192.168.10.45] [403] 0 952 "/api/v1/users/search"
    --boundary--
    Message: Warning. Pattern match at ARGS:query.
    [file "/etc/modsecurity/crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"]
    [line "452"] [id "942100"]
    [msg "SQL Injection Attack Detected via libinjection"]
    [data "Matched Data: select found within ARGS:query: please select the best option"]
    [severity "CRITICAL"]
    [ver "OWASP_CRS/3.3.5"]

    The id, msg, and data fields are your primary diagnostic signals. The data field shows exactly what triggered the rule and what value was matched. Capture the rule ID, then look it up in the CRS rules directory — that tells you the rule's intent and whether the match makes any sense for your payload. With that context established, let's work through the most common causes.

    Cause 1: Signature Too Broad

    Some WAF signatures are written with a wide net by design. The goal is to catch attack variants including obfuscated ones, which means the regex deliberately matches a broader class of strings than strictly malicious ones. The problem is that legitimate application data sometimes falls inside that net. I've seen this most often with search fields, rich text editors, technical documentation forms, and any API payload that regularly contains code snippets or natural language with technical terms.

    A broad signature fires because its pattern is structural rather than intent-based. CRS rule 942150, for example, matches constructs like OR 1=1 — which also appears in natural language like "show option A OR 1 of these categories." The WAF has no idea you're talking about categories. It sees OR followed by a digit and flags it.

    To identify whether a signature is genuinely too broad, pull the specific rule text and inspect what the regex actually matches:

    grep -r "id:942150" /etc/modsecurity/crs/rules/
    /etc/modsecurity/crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf:
    SecRule ARGS "@rx (?i:\bor\b.{0,10}\b\d+\b)" \
        "id:942150,phase:2,block,capture,t:none,t:urlDecodeUni,\
        t:htmlEntityDecode,msg:'SQL Injection Attack Detected',..."

    If the regex matches clearly benign content, you have a breadth problem. The fix is targeted — don't disable the rule globally across your entire application. Scope the exclusion to the specific argument using SecRuleUpdateTargetById:

    # /etc/modsecurity/crs/rules/RESPONSE-999-EXCLUSIONS-BEFORE-CRS.conf
    # Exclude search_query arg from rule 942150 - triggers on natural language OR
    # Ticket: INFRA-2847 | Owner: infrarunbook-admin | Date: 2026-04-16
    SecRuleUpdateTargetById 942150 "!ARGS:search_query"

    If you want to be more surgical and limit the exclusion to one endpoint so you're not widening the gap across the whole application:

    SecRule REQUEST_URI "@beginsWith /api/v1/search" \
        "id:1000001,phase:1,pass,nolog,\
        ctl:ruleRemoveTargetById=942150;ARGS:q"

    Reload and verify the formerly blocked request now succeeds:

    nginx -t && nginx -s reload
    curl -s -o /dev/null -w "%{http_code}" \
      "https://solvethenetwork.com/api/v1/search?q=option+A+OR+1+of+these"
    # Expected: 200

    Cause 2: Request Body Triggering SQL Injection Rule

    This is the most frequent false positive I encounter in production. A developer builds a legitimate form — a product description field, a comment box, a full-text search — and a user enters something that looks like SQL to the WAF. The WAF has no context. It sees SELECT and it blocks.

    The reason this happens is that CRS REQUEST-942 ruleset rules scan request body parameters (the ARGS variable) during phase 2. Any POST body parameter is in scope. When a user types "please select the best option from the list" or pastes a code snippet containing WHERE id = 5, those strings match the pattern. LibInjection, the SQL parser behind rules like 942100, is particularly aggressive — it fingerprints token sequences, not just keywords, which means even innocent sentences can accumulate enough token weight to trigger a match.

    To identify this, look at the data field in the audit log and note the argument name that fired:

    grep "942100\|942200\|942260" /var/log/modsec_audit.log | tail -20
    [16/Apr/2026:10:22:07 +0000]
    [id "942100"] [msg "SQL Injection Attack Detected via libinjection"]
    [data "Matched Data: select found within ARGS:description:
    please select the best option from the dropdown"]
    [severity "CRITICAL"]

    The argument is description and the matched value is "please select the best option from the dropdown." Not an attack. The fix is to exclude this specific argument from the SQL injection rule family. If your application already validates and sanitizes this field server-side, this is safe to do:

    SecRuleUpdateTargetById 942100 "!ARGS:description"
    SecRuleUpdateTargetById 942200 "!ARGS:description"
    SecRuleUpdateTargetById 942260 "!ARGS:description"

    For a tighter scope tied to a specific API route so the exclusion doesn't apply site-wide:

    SecRule REQUEST_URI "@beginsWith /api/v1/products" \
        "id:1000002,phase:2,pass,nolog,\
        ctl:ruleRemoveTargetById=942100;ARGS:description,\
        ctl:ruleRemoveTargetById=942200;ARGS:description"

    Reload and verify:

    nginx -t && nginx -s reload
    curl -s -o /dev/null -w "%{http_code}" \
      -X POST https://solvethenetwork.com/api/v1/products \
      -H "Content-Type: application/json" \
      -d '{"description": "please select the best option from the dropdown"}'
    # Expected: 200

    Cause 3: File Upload Triggering Malware Rule

    File uploads are a minefield for WAF false positives. The CRS includes rules that inspect uploaded file names and content for signatures of malicious scripts: PHP webshells, embedded script tags, and executable file headers. The problem is that legitimate uploads regularly contain strings that are structurally identical to these signatures.

    In my experience, this happens with PDFs containing embedded JavaScript for interactive form fields, ZIP archives where the internal filename index includes .php, Word documents with macro-related metadata strings, and source code submission portals where users upload their own scripts. The WAF inspects the raw bytes and pattern-matches on structure — context is irrelevant to it. A PDF named report_export.php.pdf looks like a PHP file upload attempt.

    Rules in the REQUEST-933 (PHP injection) and REQUEST-934 (Node.js injection) families are the usual culprits. Check the audit log for those rule IDs:

    grep '\[id "93[34]' /var/log/modsec_audit.log | tail -10
    [16/Apr/2026:11:45:33 +0000]
    [id "933120"] [msg "PHP Injection Attack: PHP Script File Upload Detected"]
    [data "Matched Data: .php found within FILES_NAMES:upload:
    report_export.php.pdf"]
    [severity "CRITICAL"]

    The file is report_export.php.pdf — a PDF with .php in the name. The rule matched on the substring in the filename, not the actual file content. The contents are a valid PDF; the name triggered it.

    For filename-based false positives, scope the exclusion specifically to the FILES_NAMES variable:

    SecRuleUpdateTargetById 933120 "!FILES_NAMES:upload"

    For content-based false positives where the file body itself triggers a rule, be more surgical and tie the exclusion to the specific upload endpoint:

    SecRule REQUEST_URI "@beginsWith /api/v1/documents/upload" \
        "id:1000003,phase:1,pass,nolog,\
        ctl:ruleRemoveById=933100-933999"

    A critical caveat here: when you exclude file content inspection in the WAF, security responsibility shifts entirely to your application layer. Before adding this exclusion, confirm that your application independently validates MIME type, file extension, and file content via magic byte inspection. If it doesn't, fix the application first. A WAF exclusion without application-layer file validation is a genuine gap — the WAF exclusion and the application validation need to exist together, not as alternatives.
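    As a sketch of what the content half of that application-layer check might look like (a hypothetical helper, not part of ModSecurity or your existing upload handler): verify the upload really is what it claims by its magic bytes, independent of the filename. A real PDF begins with the four bytes %PDF.

```shell
# is_pdf FILE — succeed only if FILE begins with the PDF magic bytes "%PDF".
# Hypothetical application-side helper; pair it with MIME-type and extension
# checks (e.g. via file --mime-type) in the upload handler itself.
is_pdf() {
  [ "$(head -c 4 -- "$1")" = "%PDF" ]
}

# Example: a file named report_export.php.pdf passes only if its content
# is actually PDF, regardless of the .php substring in its name.
printf '%%PDF-1.7\n...' > /tmp/report_export.php.pdf
if is_pdf /tmp/report_export.php.pdf; then
  echo "content is PDF"
else
  echo "reject: not a PDF"
fi
```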

    Cause 4: Exception Not Added

    This one is less glamorous than the others but extremely common in teams that rotate engineers or grow quickly. Someone identifies a false positive during an incident, makes a quick fix — maybe they flip the WAF to detection-only mode, or comment out a rule in the primary config — and the ticket gets closed as resolved. Later, the WAF config gets rebuilt from scratch, a new CRS version gets deployed via an Ansible run that overwrites local changes, or the server gets reprovisioned. The fix is gone. The false positive comes back. The engineer who picks it up next has no idea it was resolved once before.

    The tell-tale sign is a block that a teammate vaguely remembers dealing with. Start by checking whether a proper, durable exclusion actually exists:

    grep -r "SecRuleRemoveById\|SecRuleUpdateTargetById\|ctl:ruleRemove" \
      /etc/modsecurity/ | grep -v "^#"

    Then cross-reference the rule IDs currently appearing in logs against what you expect to have excluded:

    grep '\[id "' /var/log/modsec_audit.log | \
      grep -oE '"[0-9]{6}"' | tr -d '"' | \
      sort | uniq -c | sort -rn | head -20
         47 942100
         31 942200
         12 941100
          8 933120

    If rule 942100 is your top firer and there's no exclusion for it anywhere in the running config, it was either never properly added, or it lived in a config file that didn't survive the last deployment.
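    That cross-reference can be made mechanical. A bash sketch (hypothetical helper name) that takes the audit log and a file containing your concatenated exclusion configs, and prints the rule IDs that fire with no exclusion anywhere:

```shell
#!/bin/bash
# unexcluded_ids LOG CONF — print six-digit rule IDs that appear in the
# audit log but are not mentioned anywhere in CONF. Hypothetical helper;
# CONF can be e.g. the output of `cat /etc/modsecurity/crs/rules/*999*.conf`
# written to a temp file. Crude by design: any six-digit run in CONF counts.
unexcluded_ids() {
  comm -23 \
    <(grep -oE '\[id "[0-9]{6}"' "$1" | grep -oE '[0-9]{6}' | sort -u) \
    <(grep -oE '[0-9]{6}' "$2" | sort -u)
}
```

    Anything this prints is a firing rule with no durable exclusion — either never added, or lost to a deployment.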

    The correct fix is to build exceptions into a dedicated, version-controlled exclusion file. Create it once and keep it in Git alongside your other WAF configuration. Never scatter exclusions through the main ruleset or add them inline in the CRS rule files — those get overwritten on every upgrade.

    # /etc/modsecurity/crs/rules/RESPONSE-999-EXCLUSIONS-BEFORE-CRS.conf
    
    # Exclusion: Product description field - natural language triggers SQLi rules
    # Ticket: INFRA-2847 | Added: 2026-03-10 | Owner: infrarunbook-admin
    SecRuleUpdateTargetById 942100 "!ARGS:description"
    SecRuleUpdateTargetById 942200 "!ARGS:description"
    
    # Exclusion: Document upload - filename may contain .php substring
    # Ticket: INFRA-2901 | Added: 2026-04-01 | Owner: infrarunbook-admin
    SecRule REQUEST_URI "@beginsWith /api/v1/documents/upload" \
        "id:1000003,phase:1,pass,nolog,ctl:ruleRemoveById=933120"

    Add a CI step that replays your known false-positive requests against a staging WAF after every deployment. A simple curl-based test script that asserts 200 responses from these endpoints catches regressions before users do. Treat WAF exclusions like any other configuration — they need version control, peer review, and test coverage to survive the lifecycle of the system.

    Cause 5: Paranoia Level Too High

    The OWASP CRS uses a paranoia level system (PL1 through PL4) to control rule aggressiveness. PL1 activates high-confidence rules with minimal false positives — it's the default for a reason. PL4 enables everything, including rules explicitly tagged by the CRS maintainers as likely to produce false positives in normal environments. The trap is that bumping the paranoia level is a one-line config change, easy to make during a security incident, and equally easy to forget to revert afterward.

    At PL3 and PL4, rules start firing on: URL paths containing common words that happen to resemble injection patterns, headers with unusual casing, response bodies that include debug output or stack traces, and any request that accumulates multiple marginal sub-scores that push it over the anomaly threshold. What looks like useful defense during an active attack becomes a firehose of false positives when normal application traffic returns.

    Check your current paranoia level immediately when diagnosing a sudden increase in blocks:

    grep -A 10 "tx.paranoia_level" /etc/modsecurity/crs/crs-setup.conf | grep -v "^#"
    SecAction \
      "id:900000,\
       phase:1,\
       nolog,\
       pass,\
       t:none,\
       setvar:tx.paranoia_level=4"

    PL4. There's your problem. Before you change it, understand the blast radius. Count how many active rules are only enabled because of the elevated paranoia level:

    grep -rh "paranoia-level/[34]" /etc/modsecurity/crs/rules/ | wc -l
    247

    247 rules are active only because of the elevated paranoia level. To reduce false positives while maintaining meaningful protection, drop to PL2 globally and apply higher paranoia selectively to genuinely sensitive paths:

    # In /etc/modsecurity/crs/crs-setup.conf — drop global level to PL2
    SecAction \
      "id:900000,\
       phase:1,\
       nolog,\
       pass,\
       t:none,\
       setvar:tx.paranoia_level=2"
    # Apply PL3 only to the admin interface where stricter rules are warranted
    SecRule REQUEST_URI "@beginsWith /admin" \
        "id:1000010,phase:1,pass,nolog,setvar:tx.paranoia_level=3"

    After changing the paranoia level, reload and monitor the anomaly score distribution to confirm the change had the expected effect:

    grep "Inbound Anomaly Score" /var/log/modsec_audit.log | \
      grep -oiE "score:? [0-9]+" | grep -oE "[0-9]+" | \
      sort -n | uniq -c

    You want to see legitimate traffic shift to lower scores. Real attack traffic should still accumulate high scores because actual attacks hit multiple rules regardless of paranoia level. If you drop from PL4 to PL2 and your score distribution shifts dramatically left while your blocked request count falls sharply, that's the false positive problem clearing. Keep watching for a day or two to confirm you haven't inadvertently dropped coverage on something you care about.
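    If you want a single number to track across the change, a median of those scores is easy to compute — a small awk sketch that reads one score per line on stdin:

```shell
# median_score — read one numeric score per line on stdin, print the median.
# Small awk sketch for tracking the anomaly-score distribution over time;
# feed it the extracted scores from the audit log.
median_score() {
  sort -n | awk '
    { a[NR] = $1 }
    END {
      if (NR == 0) exit 1
      if (NR % 2) print a[(NR + 1) / 2]
      else        print (a[NR/2] + a[NR/2 + 1]) / 2
    }'
}
```

    A falling median after the paranoia-level change is the distribution shifting left, as described above.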

    Cause 6: Anomaly Score Threshold Set Too Low

    Related to paranoia level but distinct: even at PL1, if tx.inbound_anomaly_score_threshold is set aggressively low, a single marginal rule match blocks the request. The default is 5. A CRITICAL rule hit scores exactly 5 — meaning one CRITICAL match equals a block with no tolerance whatsoever for any marginal ambiguity in the request.

    Check the current threshold:

    grep "inbound_anomaly_score_threshold" /etc/modsecurity/crs/crs-setup.conf
    SecAction \
      "id:900110,\
       phase:1,\
       nolog,\
       pass,\
       t:none,\
       setvar:tx.inbound_anomaly_score_threshold=3"

    A threshold of 3 means a single WARNING-level hit (score: 3) blocks the request. That's extremely aggressive for a general-purpose web application. For most production environments, a threshold of 5–10 is appropriate depending on your risk tolerance. At 10, even a CRITICAL plus a WARNING (5+3=8) stays under the threshold; a block requires two CRITICAL hits or an equivalent combination of multiple rule matches — which actual attacks consistently produce and legitimate edge cases rarely do. This dramatically reduces false positives from single edge-case matches.
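    For reference, the per-severity scores used in that arithmetic are the CRS defaults, assigned in a single SecAction in crs-setup.conf (CRS 3.x stock values; the block ships commented out and these defaults apply unless you override them):

```
SecAction \
  "id:900100,\
   phase:1,\
   nolog,\
   pass,\
   t:none,\
   setvar:tx.critical_anomaly_score=5,\
   setvar:tx.error_anomaly_score=4,\
   setvar:tx.warning_anomaly_score=3,\
   setvar:tx.notice_anomaly_score=2"
```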

    # Raise to 10 for better false positive tolerance
    setvar:tx.inbound_anomaly_score_threshold=10

    Don't treat this as a substitute for proper rule exclusions — a higher threshold delays some blocks but doesn't eliminate the underlying cause. Combine threshold tuning with targeted exclusions for your known false positives. The threshold is a safety valve, not the primary tuning mechanism.

    Cause 7: Missing Content-Type Declaration

    When a client sends a request with a missing or non-standard Content-Type header, ModSecurity may fail to parse the body correctly and fall back to form-encoded parsing. APIs sending JSON without the proper header are particularly susceptible — ModSecurity then misinterprets the JSON structure as argument values, and those raw JSON fragments frequently trigger injection rules because the key-value pairs land in ARGS as un-parsed strings.

    Look for this pattern in the audit log: the matched data looks like raw JSON fragments being treated as argument values:

    grep "Matched Data" /var/log/modsec_audit.log | grep -F "{" | tail -5
    [data "Matched Data: {\"user\":\"infrarunbook-admin\",\"query\":\"SELECT\"
    found within ARGS:body"]
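    To gauge how widespread the misparse is, you can tally which arguments are matching on raw JSON fragments — a shell sketch over the audit-log format shown above:

```shell
# json_misparse_args LOG — count, per ARGS target, how many "Matched Data"
# entries contain a raw JSON object fragment. A high count against a
# catch-all target like ARGS:body suggests the JSON body processor is not
# engaged for that traffic.
json_misparse_args() {
  grep 'Matched Data' "$1" | grep -F '{' |
    sed -n 's/.*found within \(ARGS:[^":]*\).*/\1/p' |
    sort | uniq -c | sort -rn
}
```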

    The entire JSON body is being treated as a single argument value. Enable JSON body processing for requests with that content type, and ModSecurity will parse the structure properly:

    SecRule REQUEST_HEADERS:Content-Type "@beginsWith application/json" \
        "id:200001,phase:1,t:none,nolog,pass,\
        ctl:requestBodyProcessor=JSON"

    With proper JSON parsing enabled, ModSecurity inspects JSON key-value pairs individually rather than treating the whole body as a monolithic string. This alone resolves a significant portion of false positives from JSON API traffic that wasn't being parsed correctly.

    Prevention

    The underlying problem with WAF false positives isn't the WAF — it's the operational practices around it. The WAF is doing exactly what it's configured to do. The gap is almost always between deployment decisions and the documentation that should have captured them.

    Treat your exclusion ruleset as code. Every exclusion belongs in a version-controlled file with a ticket reference, a date, an owner, and a reason. When someone reads that file six months from now, they need to understand why each exclusion exists without tracking down the original engineer. On sw-infrarunbook-01, keep your exclusion file at /etc/modsecurity/crs/rules/RESPONSE-999-EXCLUSIONS-BEFORE-CRS.conf and commit it to the same repository as your infrastructure configs. When CRS is upgraded, that file travels with it — your exclusions survive.

    Run ModSecurity in DetectionOnly mode before every CRS upgrade or major config change. This lets you observe what would be blocked without actually blocking it:

    # Enable detection-only temporarily for pre-deployment validation
    sed -i 's/SecRuleEngine On/SecRuleEngine DetectionOnly/' \
      /etc/modsecurity/modsecurity.conf
    nginx -t && nginx -s reload
    
    # Run your regression test suite and review the audit log output
    # Then re-enable enforcement
    sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' \
      /etc/modsecurity/modsecurity.conf
    nginx -t && nginx -s reload

    Build a regression test suite for your known false positives and wire it into CI. A script that curls your previously-blocked-but-now-allowed endpoints and asserts 200 responses catches regressions before users do. Here's a minimal example:

    #!/bin/bash
    # waf-regression-test.sh — run against staging WAF after every deployment
    
    BASE="https://solvethenetwork.com"
    FAIL=0
    
    check() {
      local desc="$1" url="$2" expected="$3"
      local code
      code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
      if [ "$code" != "$expected" ]; then
        echo "FAIL: $desc - got $code, expected $expected"
        FAIL=1
      else
        echo "PASS: $desc"
      fi
    }
    
    check_post() {
      local desc="$1" url="$2" body="$3" expected="$4"
      local code
      code=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$url" \
        -H 'Content-Type: application/json' -d "$body")
      if [ "$code" != "$expected" ]; then
        echo "FAIL: $desc - got $code, expected $expected"
        FAIL=1
      else
        echo "PASS: $desc"
      fi
    }
    
    check "Search with natural language OR" \
      "$BASE/api/v1/search?q=option+A+OR+1+of+these" "200"
    
    check_post "Product description POST with SELECT" \
      "$BASE/api/v1/products" \
      '{"description": "please select the best option"}' "200"
    
    [ $FAIL -eq 0 ] && echo "All WAF regression tests passed." || exit 1

    Monitor your WAF metrics continuously. Graph the anomaly score distribution and blocked request rate over time. A sudden spike after a deployment almost always means a new false positive introduced by a changed request format or a new CRS rule. A slow, sustained climb in blocked unique source IPs is usually real attack traffic. Knowing which situation you're in quickly is what separates a WAF that protects you from one that just frustrates your users and burns your team's time.
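    A no-dependencies way to get that over-time view from the audit log itself is to bucket block entries by hour using the leading timestamp (format as in the samples above):

```shell
# blocks_per_hour LOG — count audit-log entries per hour, keyed by the
# leading "[dd/Mon/yyyy:hh" timestamp prefix. A sudden jump in one bucket
# right after a deploy points at a new false positive; a slow climb across
# buckets is more likely real attack traffic.
blocks_per_hour() {
  grep -oE '^\[[0-9]{2}/[A-Za-z]{3}/[0-9]{4}:[0-9]{2}' "$1" |
    sort | uniq -c
}
```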

    Finally, document the paranoia level and anomaly score threshold alongside their rationale. "We run PL2 because PL3 generates 40+ false positives per day on the search API" is information that should live in a comment in crs-setup.conf or in your runbook — not only in someone's memory. When an incident occurs and someone wants to crank the paranoia level, they should understand the trade-off before making that change. The comment from three months ago saves a false-positive incident at 3 AM.

    Frequently Asked Questions

    What is the fastest way to identify which WAF rule is blocking a request?

    Check /var/log/modsec_audit.log and look for the [id], [msg], and [data] fields in the block entry. The data field shows the exact matched value and argument name. Use that rule ID to grep the CRS rules directory and understand the rule's intent before deciding on a fix.

    How do I exclude a specific form field from a ModSecurity rule without disabling the rule entirely?

    Use SecRuleUpdateTargetById in your exclusion file. For example: SecRuleUpdateTargetById 942100 "!ARGS:description" removes the description argument from rule 942100's scope while keeping the rule active for all other arguments across the application.

    What paranoia level should I run in production?

    PL2 is the right starting point for most production environments. PL1 is the absolute minimum and misses some useful coverage. PL3 and PL4 are appropriate for specific high-sensitivity paths like admin interfaces, but applying them globally almost always generates unmanageable false positives on normal application traffic.

    Why does raising the anomaly score threshold reduce false positives?

    ModSecurity accumulates a score from every rule that matches a request. If the total score exceeds the threshold, the request is blocked. With a low threshold (3–5), a single marginal match blocks the request. Raising it to 10 means multiple rules must fire — which real attacks consistently trigger, but legitimate edge-case traffic typically does not.

    How do I prevent WAF exclusions from being lost during CRS upgrades?

    Keep all exclusions in a dedicated file — RESPONSE-999-EXCLUSIONS-BEFORE-CRS.conf — that lives outside the CRS rule directory and is managed in version control. Never edit CRS rule files directly. Include the exclusion file in your deployment pipeline so it's applied after every CRS upgrade, and annotate each exclusion with a ticket reference and owner.
