InfraRunBook

    Secrets Leakage in Application Troubleshooting

    Security
    Published: Apr 18, 2026
    Updated: Apr 18, 2026

    Secrets leakage during application troubleshooting is one of the most common and damaging security failures in production environments. This guide covers five root causes — from exposed env vars to baked-in container secrets — with real commands, outputs, and fixes.


    Symptoms

    You're in the middle of a late-night incident. Someone pastes a `curl` command into Slack to reproduce an API failure, and half a second later you realize it contains a live API key — now sitting in your company's chat history indefinitely. Or maybe a security scanner fires an alert that your GitHub repo has a database password in it. Or AWS sends an automated email saying they found your access key in a public commit and already revoked it on your behalf.

    Secrets leakage during troubleshooting is one of those problems that feels theoretical until it happens to you — and then you realize it's been happening in slow motion for months. The symptoms vary widely:

    • API keys or database passwords surfacing in Splunk, Grafana Loki, or Elasticsearch full-text searches
    • A third-party service automatically revoking a credential it detected in your repository via its own scanning pipeline
    • Unexpected 401 Unauthorized or 403 Forbidden responses after a credential was silently rotated by a provider reacting to exposure
    • Security audit findings noting that environment variables are readable from process listings by unprivileged users
    • Application crash stack traces with database connection strings — including passwords — embedded inline
    • Docker image layers containing plaintext secrets discoverable with
      docker history --no-trunc

    Each of these points back to one of a handful of root causes. Let's go through each one in the order they most commonly get engineers into trouble.


    Root Cause 1: Secret in Environment Variable Exposed

    Why It Happens

    Environment variables are the recommended mechanism for injecting secrets into applications, and that recommendation is largely correct — they're better than hardcoding values in source. The problem is that environment variables are far more visible than most engineers expect. On Linux, any running process exposes its environment through `/proc/<pid>/environ`. Debugging tools, orchestrators, and observability agents regularly read this file. In containerized environments, tools like `kubectl describe pod` will dump environment variable names (though not values), and an `exec` into a pod followed by a naive `env` call will print every value to a session that may be logged by your audit backend. And if someone passes a secret as a command-line argument rather than a true env var, it shows up in `ps` output for every user on the system.

    How to Identify It

    Start by checking whether your application's environment is readable from outside the process:

    cat /proc/$(pgrep -f myapp)/environ | tr '\0' '\n' | grep -iE "secret|password|key|token"

    If the permissions on that file allow reads by users other than the process owner, you have a problem. You might see output like this:

    DATABASE_PASSWORD=s3cr3t-prod-pass
    STRIPE_SECRET_KEY=sk_live_abc123xyz
    AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

    Also check whether any process is passing secrets as command-line arguments:

    ps auxe | grep -iE "password=|secret=|token=|api_key="

    In Kubernetes environments, audit what an exec session would expose:

    kubectl exec -it myapp-pod -- env | grep -iE "secret|password|key|token"

    That output, if captured in your audit logs, contains the full secret values.
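
    Auditing `/proc/<pid>/environ` across many processes is easy to script. A minimal Python sketch of the idea — the helper names here are illustrative, not from any standard tool — that parses the NUL-separated format and flags credential-looking variable names:

```python
from pathlib import Path

# Substrings that suggest a variable holds a credential (illustrative list)
SENSITIVE_HINTS = ("SECRET", "PASSWORD", "TOKEN", "KEY")

def parse_environ(raw: bytes) -> dict:
    """Parse the NUL-separated KEY=VALUE format used by /proc/<pid>/environ."""
    entries = [e for e in raw.split(b"\x00") if e and b"=" in e]
    return dict(e.decode("utf-8", errors="replace").split("=", 1) for e in entries)

def flag_sensitive(env: dict) -> list:
    """Return variable names whose spelling suggests they hold a credential."""
    return sorted(k for k in env if any(h in k.upper() for h in SENSITIVE_HINTS))

def audit_pid(pid: int) -> list:
    """Read a live process's environment; requires read access to its /proc entry."""
    return flag_sensitive(parse_environ(Path(f"/proc/{pid}/environ").read_bytes()))
```

    Running `audit_pid` for every PID you own gives you a quick inventory of which processes would leak what if their environment were dumped.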

    How to Fix It

    The immediate fix is to stop passing secrets as command-line arguments — full stop. If you're using positional arguments anywhere, move to environment injection. For the longer-term fix, stop relying on plain environment variables and move to a secrets manager. HashiCorp Vault, AWS Secrets Manager, and the Kubernetes External Secrets Operator all support injecting secrets at runtime without baking them into the process environment permanently. When you must use environment variables, ensure `/proc/<pid>/environ` permissions are as restrictive as possible — the file should be readable only by the process owner and root. Review your Kubernetes audit policy to confirm that `exec` events into pods are captured, so you at least know when a secret was exposed even if you can't prevent it entirely.


    Root Cause 2: Secret Committed to Git

    Why It Happens

    In my experience, this almost always happens during a "quick fix" moment. Someone is debugging a connectivity issue locally, temporarily hardcodes a database password to verify the connection string works, and commits without reviewing the diff first. It also happens when a `.env` file gets added to the repository because `.gitignore` wasn't configured before the first `git add .`, or when a CI/CD pipeline configuration file includes a literal token that was supposed to be a variable reference. The critical thing engineers underestimate is that even if you delete the file and push again, the secret is still in the git history. Every clone of that repo carries it.

    How to Identify It

    The blunt approach is searching the full history:

    git log -p --all | grep -E "(password|secret|api_key|token|private_key)\s*=\s*['\"]?[A-Za-z0-9+/=_-]{8,}"

    For something more thorough, use `trufflehog`, which understands secret formats and can verify whether found credentials are still active:

    trufflehog git file:///opt/repos/solvethenetwork-app --only-verified

    Sample output when a live credential is found:

    Found verified result
    Detector Type: AWS
    Raw result: AKIAIOSFODNN7EXAMPLE
    Commit: a3f2c1d9e4b5f6a7b8c9
    File: config/database.yml
    Line: 14
    Author: infrarunbook-admin
    Date: 2026-03-02

    To check whether a `.env` file was ever committed, even if it's now in `.gitignore`:

    git log --all --full-history -- "**/.env"
    git log --all --full-history -- ".env"

    If those commands return commits, the file existed in history regardless of its current ignored status.

    How to Fix It

    Rotate the compromised secret first. Before touching the repository, assume the secret is burned and start the rotation process with whatever service owns it. Then rewrite the git history to remove it. The modern tool for this is `git filter-repo`:

    # Remove an entire file from all history
    git filter-repo --path config/database.yml --invert-paths
    
    # Or scrub a specific string across all files in all commits
    git filter-repo --replace-text <(echo 'AKIAIOSFODNN7EXAMPLE==>REMOVED')

    After rewriting, force-push to all remotes and notify every team member who has a clone to re-clone fresh — their local history still contains the secret. Add pre-commit scanning to prevent recurrence:

    pip install detect-secrets
    detect-secrets scan > .secrets.baseline
    detect-secrets audit .secrets.baseline

    Then wire it into `.pre-commit-config.yaml`:

    repos:
      - repo: https://github.com/Yelp/detect-secrets
        rev: v1.4.0
        hooks:
          - id: detect-secrets
            args: ['--baseline', '.secrets.baseline']

    Run `pre-commit install` and the scan executes before every commit, blocking anything that matches a known credential pattern.
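
    Under the hood, these scanners are pattern matchers over text and diffs. A toy illustration of the idea in Python — the patterns below are a tiny, hypothetical subset of what detect-secrets or gitleaks actually ship, which also include entropy checks and provider-specific verification:

```python
import re

# Illustrative rules only; production scanners carry hundreds of these.
PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe-live-key": re.compile(r"\bsk_live_[A-Za-z0-9]{8,}\b"),
    "generic-assignment": re.compile(
        r"(?i)\b(password|secret|token|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_text(text: str) -> list:
    """Return (line_number, rule_name) for every line matching a rule."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

    A pre-commit hook built on this shape scans only the staged diff, which is why it stays fast enough to run on every commit.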


    Root Cause 3: Log Statement Printing Secret

    Why It Happens

    This is the quietest leakage vector and frequently the longest-running. A developer adds debug logging to trace through an authentication flow. The log statement serializes the entire request object. That request object includes an `Authorization` header or a JSON body with a `password` field. The log goes to stdout, stdout goes to your log shipper, the log shipper sends it to your aggregation platform, and now the secret is indexed and searchable by anyone with query access to your logging system — possibly for months, depending on your retention policy.

    I've seen this happen most often in two patterns: a middleware layer that logs full HTTP request and response objects "just for debugging" and never gets cleaned up, and structured loggers configured to serialize entire Go structs or Python dicts that happen to contain credential fields.

    How to Identify It

    Search your raw log files for patterns that shouldn't be there:

    grep -rE "Authorization|Bearer [A-Za-z0-9._-]{20,}|password|api[_-]?key" /var/log/myapp/ | head -30

    A finding looks like this:

    2026-04-18T03:12:44Z DEBUG request={"method":"POST","url":"/api/v1/login","headers":{"Authorization":"Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyMTIzIn0.abc123","Content-Type":"application/json"},"body":{"username":"infrarunbook-admin","password":"Tr0ub4dor&3"}}

    Also check for database connection strings leaking in exception stack traces:

    grep -rE "jdbc:|postgresql://|mysql://|mongodb://" /var/log/myapp/*.log

    If you're using Loki, query directly:

    {app="myapp"} |= "Authorization" | logfmt | line_format "{{.msg}}"

    How to Fix It

    Remove the offending log statements immediately. For Python applications using `structlog`, implement a processor that redacts sensitive fields before they reach any output handler:

    import structlog
    
    def redact_sensitive(logger, method, event_dict):
        sensitive_keys = {"password", "secret", "token", "authorization", "api_key", "private_key"}
        for key in list(event_dict.keys()):
            if key.lower() in sensitive_keys:
                event_dict[key] = "[REDACTED]"
        return event_dict
    
    structlog.configure(
        processors=[
            redact_sensitive,
            structlog.dev.ConsoleRenderer()
        ]
    )
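
    One caveat: a processor like that only inspects top-level keys, and the serialized-request pattern described above nests the credential several levels deep. Redaction needs to recurse. A sketch of a standalone helper (the name and key list are assumptions):

```python
SENSITIVE_KEYS = frozenset(
    {"password", "secret", "token", "authorization", "api_key", "private_key"})

def redact_nested(value):
    """Recursively replace values of sensitive keys in nested dicts and lists."""
    if isinstance(value, dict):
        return {
            k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else redact_nested(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact_nested(item) for item in value]
    return value
```

    Applying `redact_nested` to the whole event dict inside the processor catches an embedded request object that a flat key check would miss.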

    For logs that already exist in your aggregation platform, purge the affected time window after rotating the compromised credentials. In Loki, the delete API requires that log deletion be enabled in your configuration (`retention_enabled: true` and `allow_deletes: true` in the compactor block):

    curl -X POST "http://loki.solvethenetwork.com:3100/loki/api/v1/delete" \
      --data-urlencode 'query={app="myapp"}' \
      --data-urlencode 'start=2026-04-01T00:00:00Z' \
      --data-urlencode 'end=2026-04-18T06:00:00Z'

    In Elasticsearch, use delete-by-query against the affected index pattern:

    POST /logstash-2026.04.*/_delete_by_query
    {
      "query": {
        "match_phrase": {
          "message": "Authorization"
        }
      }
    }

    Root Cause 4: Secret in URL Query Parameter

    Why It Happens

    Passing an API key as a query parameter — something like `?api_key=abc123` or `?token=xyz` — is a pattern from the early era of REST APIs and still appears in some legacy third-party integrations and internally-built tooling. The problem is that URLs are logged by almost everything by default. Web server access logs, load balancer logs, CDN logs, APM traces, browser history, and Referer headers on outbound links all capture the full URL including query parameters. Every hop in the request path that touches that URL potentially stores your secret.

    What makes this worse is that engineers often don't realize their tracing tools are capturing outbound HTTP call URLs. An application making a downstream call to a vendor API with a key in the query string will have that full URL captured in Jaeger, Tempo, or Datadog APM without any additional configuration needed — it just happens automatically.

    How to Identify It

    Check your web server and reverse proxy access logs:

    grep -E "\?.*api[_-]?key=|\?.*token=|\?.*secret=|\?.*password=" /var/log/nginx/access.log | head -20

    A representative bad entry in a standard nginx access log:

    10.0.1.15 - infrarunbook-admin [18/Apr/2026:03:45:12 +0000] "GET /api/v2/reports?api_key=sk_live_abc123xyz&format=json HTTP/1.1" 200 1452 "-" "curl/7.88.1"

    That entry is in your log rotation and likely retained for 30, 60, or 90 days depending on your policy. Also check for keys leaking into Referer headers, and run the same searches against your HAProxy and API gateway logs:

    grep -iE "Referer.*api_key|Referer.*token" /var/log/nginx/access.log

    And query your APM backend for traces containing the offending parameter pattern. In Jaeger's HTTP API:

    curl "http://jaeger.solvethenetwork.com:16686/api/traces?service=myapp&tags=%7B%22http.url%22%3A%22api_key%22%7D"

    How to Fix It

    Move secrets from query parameters to request headers. Replace this:

    GET /api/v2/reports?api_key=sk_live_abc123xyz&format=json HTTP/1.1
    Host: api.solvethenetwork.com

    With this:

    GET /api/v2/reports?format=json HTTP/1.1
    Host: api.solvethenetwork.com
    Authorization: Bearer sk_live_abc123xyz

    Or if a vendor requires a custom header:

    X-API-Key: sk_live_abc123xyz

    Headers aren't immune from logging — they absolutely can appear in debug logs — but they won't end up in Referer headers or browser history, and most access log formats don't capture headers by default. If you need to log requests for debugging, configure your reverse proxy to explicitly exclude sensitive headers. In nginx:

    log_format sanitized '$remote_addr - $remote_user [$time_local] '
        '"$uri" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent"';
    
    access_log /var/log/nginx/access.log sanitized;

    Note that this format logs `$uri` (path only) rather than `$request_uri` (path plus query string), which prevents query parameters from being captured entirely. Rotate the exposed key and audit all consuming services to update their integration.
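
    If you can't remove query-parameter auth immediately — a vendor API may force it on you — at least sanitize URLs inside your own application before they reach any log line. A standard-library Python sketch; the parameter list is an assumption you should extend for your APIs:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Query parameter names treated as credential-bearing (illustrative list)
SENSITIVE_PARAMS = {"api_key", "apikey", "token", "secret", "password"}

def sanitize_url(url: str) -> str:
    """Mask the values of credential-bearing query parameters before logging."""
    parts = urlsplit(url)
    query = [
        (k, "REDACTED" if k.lower() in SENSITIVE_PARAMS else v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
    ]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

    Routing every outbound-call log statement through a helper like this keeps the key out of your own logs even when the wire request still carries it.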


    Root Cause 5: Container Image Containing Secret

    Why It Happens

    Docker image builds are often treated like compilation: you run a command, you get an artifact, you push it to a registry. What's easy to forget is that every `RUN`, `COPY`, and `ADD` instruction in a Dockerfile creates a new immutable layer, and those layers are independently inspectable. If you copy a `.env` file into the image during the build to use its values, then remove it in a subsequent `RUN rm -f /app/.env` instruction, the file still exists verbatim in the layer created by the `COPY` instruction. Anyone with pull access to that image can extract that layer and read the file.

    Build arguments are another common culprit. Passing a secret via `--build-arg SECRET_KEY=xyz` embeds it in the image's build metadata, where `docker history --no-trunc` will surface it, regardless of whether it was ever written to the filesystem.

    How to Identify It

    Pull the image and inspect every layer with `docker history`:

    docker history --no-trunc sw-infrarunbook-01/myapp:latest

    Output revealing the problem:

    IMAGE          CREATED        CREATED BY                                            SIZE
    sha256:a1b2c3  2 hours ago    /bin/sh -c rm -f /app/.env                            0B
    sha256:d4e5f6  2 hours ago    /bin/sh -c pip install -r requirements.txt             48MB
    sha256:g7h8i9  2 hours ago    /bin/sh -c #(nop) COPY .env /app/.env                  312B
    sha256:k1l2m3  3 hours ago    /bin/sh -c #(nop) FROM python:3.12-slim                0B

    The `rm -f` layer is 0 bytes. The `COPY` layer is 312 bytes. That data is still there. Extract it:

    docker save sw-infrarunbook-01/myapp:latest -o myapp.tar
    mkdir myapp-layers && tar -xf myapp.tar -C myapp-layers
    # Find and extract the layer that matches sha256:g7h8i9
    tar -xf myapp-layers/g7h8i9.tar app/.env
    cat app/.env

    Result:

    DATABASE_URL=postgresql://infrarunbook-admin:SuperSecret123@10.0.1.20:5432/proddb
    STRIPE_SECRET_KEY=sk_live_realkey_here
    INTERNAL_API_TOKEN=eyJhbGciOiJIUzI1NiJ9.abc123
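
    The extraction steps above can be automated. A rough Python sketch that walks one extracted layer tarball and flags secret-looking filenames — the pattern list is illustrative, and Trivy (shown below) does this far more thoroughly:

```python
import fnmatch
import tarfile

# Filename patterns that commonly carry credentials (illustrative only)
SUSPECT_PATTERNS = (".env", "*.env", "*.pem", "*.key", "id_rsa*", "*credentials*.json")

def suspicious_members(layer_tar: str) -> list:
    """List member paths in a layer tarball whose basename looks secret-bearing."""
    with tarfile.open(layer_tar) as tf:
        names = tf.getnames()
    return sorted(
        n for n in names
        if any(fnmatch.fnmatch(n.rsplit("/", 1)[-1], p) for p in SUSPECT_PATTERNS)
    )
```

    Looping this over every layer tarball in the `docker save` output gives you a per-layer map of where the secret material lives.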

    Secrets that were promoted into `ENV` instructions persist in image metadata, which you can check directly:

    docker inspect sw-infrarunbook-01/myapp:latest | jq '.[0].Config.Env'

    Use Trivy to automate the scan:

    trivy image --scanners secret sw-infrarunbook-01/myapp:latest

    Sample output:

    myapp:latest (debian 12.5)
    
    Total: 2 (SECRET: 2)
    
    +----------+-------------+----------+----------------------------+----------------------------+
    | Target   | Type        | Severity | Secret Type                | Match                      |
    +----------+-------------+----------+----------------------------+----------------------------+
    | app/.env | Secret      | CRITICAL | Generic Database Password  | DATABASE_URL=postgresql:// |
    | app/.env | Secret      | CRITICAL | Stripe Secret Key          | sk_live_realkey_here       |
    +----------+-------------+----------+----------------------------+----------------------------+

    How to Fix It

    Use Docker BuildKit's secret mount syntax to inject secrets at build time without writing them to any layer. First, add the BuildKit syntax directive to your Dockerfile:

    # syntax=docker/dockerfile:1
    FROM python:3.12-slim
    
    WORKDIR /app
    COPY requirements.txt .
    
    # Secret is mounted as a tmpfs at /run/secrets/db_url — never written to any layer
    RUN --mount=type=secret,id=db_url \
        export DATABASE_URL=$(cat /run/secrets/db_url) && \
        python setup.py configure
    
    COPY . .
    CMD ["python", "app.py"]

    Build it by passing the secret from an environment variable:

    DOCKER_BUILDKIT=1 docker build \
      --secret id=db_url,env=DATABASE_URL \
      -t sw-infrarunbook-01/myapp:latest .

    The secret is available during the build step as a read-only tmpfs file but is not committed to any layer. Verify it's gone:

    docker history --no-trunc sw-infrarunbook-01/myapp:latest | grep -i secret
    # Should return nothing

    Also add a comprehensive `.dockerignore` to prevent accidental inclusion:

    .env
    .env.*
    *.pem
    *.key
    *.p12
    .secrets
    secrets/
    credentials.json

    For runtime secrets, use your orchestrator's native injection mechanism — Kubernetes `secretRef` in the pod spec, ECS task definition secret references pointing at Secrets Manager, or Vault Agent sidecar injection. Never bake credentials into the image itself.

    After cleaning up, rebuild the image from scratch, retag it, push the clean version, and delete the compromised tags from your registry. In a Docker Registry v2 API-compatible registry:

    # Get the digest for the compromised tag
    DIGEST=$(curl -s -I -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
      https://registry.solvethenetwork.com/v2/myapp/manifests/compromised-tag \
      | grep Docker-Content-Digest | awk '{print $2}' | tr -d '\r')
    
    # Delete it
    curl -X DELETE \
      https://registry.solvethenetwork.com/v2/myapp/manifests/$DIGEST

    Prevention

    Prevention is where you stop playing defense one incident at a time and build systems that make leakage structurally harder. The goal is to catch secrets before they reach any persistent store — not to detect them afterward.

    Start with pre-commit scanning. Install `pre-commit` and wire in either `detect-secrets` or `gitleaks` as a hook. The scan runs in milliseconds before every commit and will block anything matching a known credential format. This is the single highest-leverage control because it eliminates the git history problem at the source.

    Add a second gate in CI. Pre-commit hooks can be skipped locally with `--no-verify`; your CI pipeline can't be bypassed. Run `gitleaks` against every push and pull request diff:

    gitleaks detect \
      --source . \
      --report-format json \
      --report-path /tmp/gitleaks-report.json \
      --exit-code 1

    The `--exit-code 1` flag causes the pipeline to fail hard if a secret is found. Treat that failure the same way you'd treat a failing test — nothing merges until it's resolved.

    For container images, integrate Trivy into your image build pipeline and make it a gate before the push step:

    trivy image \
      --exit-code 1 \
      --severity CRITICAL \
      --scanners secret \
      sw-infrarunbook-01/myapp:latest

    For runtime secrets, adopt a secrets manager and enforce its use. HashiCorp Vault with dynamic database credentials is one of the most effective controls available — your application gets a short-lived username and password with a TTL of minutes or hours. Even if that credential leaks into a log, it's likely already expired before anyone can act on it. The audit log in Vault also tells you exactly when and where a secret was retrieved, which is invaluable during incident response.

    Implement structured logging with an explicit allowlist rather than a blocklist. Don't try to filter out all the bad field names — define exactly which fields are permitted to appear in log output, and strip everything else. This is more robust because you don't need to predict every possible field name that could contain a credential.
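
    Concretely, an allowlist processor is only a few lines. A sketch in the same spirit as the structlog example earlier — the permitted field names here are assumptions; adapt them to your log schema:

```python
# Fields explicitly permitted to reach log output (assumed schema)
ALLOWED_FIELDS = frozenset(
    {"timestamp", "level", "event", "request_id", "method", "path", "status"})

def allowlist_processor(logger, method_name, event_dict):
    """Drop every field that is not explicitly permitted to be logged."""
    return {k: v for k, v in event_dict.items() if k in ALLOWED_FIELDS}
```

    Because anything unlisted is dropped, a new credential-bearing field added by a future code change is silently excluded instead of silently leaked.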

    Configure your reverse proxies and load balancers to log URIs without query strings by default, and to never log `Authorization`, `X-API-Key`, or `Cookie` headers. Apply this at the infrastructure layer so application teams don't need to think about it.

    Finally, rotate secrets on a schedule regardless of whether a leak is suspected. Automation that rotates database passwords, API keys, and service tokens on a defined cadence — weekly, monthly, quarterly depending on sensitivity — bounds the blast radius of any individual exposure. When a leak does happen, and one will, the question shifts from "is this secret still valid?" to "how recently was it last rotated?" That's a much better position to be in.
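
    Whatever automation you use, the scheduling logic reduces to comparing a secret's last-rotation time against a per-sensitivity cadence. A minimal sketch — the tiers and periods are assumptions, not a standard:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed cadence tiers -- tune to your own risk model
ROTATION_PERIODS = {
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

def rotation_due(last_rotated: datetime, sensitivity: str,
                 now: Optional[datetime] = None) -> bool:
    """True when a secret's age meets or exceeds its tier's rotation period."""
    now = now or datetime.now(timezone.utc)
    return now - last_rotated >= ROTATION_PERIODS[sensitivity]
```

    A nightly job that walks your secret inventory and calls something like this, rotating whatever comes back true, is all the scheduler most teams need.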

    Frequently Asked Questions

    How do I check if a secret was ever committed to a git repository, even if it has since been deleted?

    Use `git log --all --full-history -- path/to/file` to see if a file ever existed in the repo's history, and `git log -p --all | grep -iE 'password|secret|api_key'` to search all historical diffs for credential patterns. Tools like trufflehog and gitleaks can scan the full commit history automatically and identify verified live credentials.

    Can secrets in Docker image layers be removed without rebuilding the image?

    No. Once a secret is baked into a layer, it cannot be removed without rebuilding the image from scratch using a clean Dockerfile that never writes the secret to any layer. Use Docker BuildKit's `--mount=type=secret` syntax to inject secrets at build time without committing them to any layer, then rebuild and push a clean image.

    Why are secrets in URL query parameters more dangerous than secrets in HTTP headers?

    URLs including query parameters are logged by virtually every component in a request path — web servers, load balancers, CDNs, APM tracing tools, and browsers — by default. They also appear in HTTP Referer headers on outbound links. HTTP headers are far less likely to be captured by default logging configurations and don't propagate as Referer values.

    What should I do immediately after discovering a secret has been exposed in a log or git history?

    Rotate the secret first, before doing anything else — assume it is compromised. Then identify the scope of exposure: who had access to the logs or repository, and for how long. After rotation is confirmed, clean up the source (rewrite git history, purge log entries), and run a postmortem to identify the control that failed so it can be prevented.

    How does Docker BuildKit's secret mount differ from using build arguments for secrets?

    Build arguments passed via `--build-arg` are stored in the image metadata and visible via `docker inspect`, even if they were only used during a RUN step and never written to disk. BuildKit secret mounts are injected as a tmpfs filesystem visible only during the specific RUN instruction that requests them — they are never written to any layer or embedded in image metadata.
