InfraRunBook

    Envoy Access Log Not Writing

    Envoy
    Published: Apr 18, 2026
    Updated: Apr 18, 2026

    Step-by-step troubleshooting guide for Envoy access logs that stop writing or never start — covers path errors, permission issues, missing filter config, format string mistakes, and disk-full failures.


    Symptoms

    You've deployed Envoy — maybe as a sidecar, maybe as a standalone edge proxy in front of your backend services — and the access log file is completely silent. Requests are flowing through. You can see upstream metrics climbing on the admin endpoint at :9901/stats. But /var/log/envoy/access.log stays at zero bytes, or the file doesn't exist at all.

    Sometimes the file gets created at startup and then nothing ever lands in it. Other times it's absent entirely. You tail it, you watch it, curl a few requests through, and still nothing. No errors from Envoy itself, no obvious crash, just dead silence from a log sink that should be chatty.

    In Kubernetes environments this can be even more confusing — the pod looks healthy, the service responds, but the log path is on a hostPath volume that appears mounted yet never accumulates entries. This guide covers every common reason this happens, how to identify each one quickly, and how to fix it without guessing.
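    Before working through the root causes below one by one, a quick local triage narrows the field. This is a sketch, not an official tool; LOG_DIR is an assumption, so point it at whatever path your config actually names:

    ```shell
    # Quick triage for a silent access log. LOG_DIR is an assumption --
    # set it to the path your envoy.yaml actually configures.
    LOG_DIR="${LOG_DIR:-/var/log/envoy}"
    LOG_FILE="$LOG_DIR/access.log"

    # 1. Does the directory exist at all? (Root Cause 1)
    [ -d "$LOG_DIR" ] && echo "dir exists" || echo "MISSING: $LOG_DIR"

    # 2. Can the current user write into it? (Root Cause 2)
    [ -w "$LOG_DIR" ] && echo "dir writable" || echo "dir NOT writable by $(id -un)"

    # 3. Is there free space on the underlying filesystem? (Root Cause 5)
    df -h "$LOG_DIR" 2>/dev/null || df -h "$(dirname "$LOG_DIR")"

    # 4. Has the file been written in the last minute? (Root Causes 3, 4, 6)
    [ -f "$LOG_FILE" ] && find "$LOG_FILE" -mmin -1 | grep -q . \
      && echo "file written recently" || echo "no recent writes to $LOG_FILE"
    ```

    If checks 1 through 3 pass but check 4 fails while traffic is flowing, skip straight to the configuration-side causes (missing filter config, format errors, log filters).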


    Root Cause 1: Access Log Path Wrong

    Why it happens: Envoy's access log destination is configured inside the bootstrap file or pushed via xDS, and a simple typo in the path is more common than it sounds. A path like /var/log/enovy/access.log instead of /var/log/envoy/access.log will cause Envoy to attempt writes to a nonexistent directory. In older Envoy releases this failure was silent — the log just never materialized and the process kept running. Newer builds do surface this at startup, but only if you're watching stderr closely.

    How to identify it: Start by confirming the directory actually exists on the host.

    ls -la /var/log/envoy/

    If you get back:

    ls: cannot access '/var/log/envoy/': No such file or directory

    That's your answer. To confirm what path Envoy thinks it's writing to, pull the live config dump:

    curl -s http://127.0.0.1:9901/config_dump | python3 -m json.tool | grep -A 8 "access_log"

    You might see something like this buried inside the listener config:

    "access_log": [
      {
        "name": "envoy.access_loggers.file",
        "typed_config": {
          "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog",
          "path": "/var/log/enovy/access.log"
        }
      }
    ]

    There it is — enovy instead of envoy. Also check startup stderr for any path-related errors:

    journalctl -u envoy --no-pager | grep -iE "access_log|path|error" | head -30

    In a container environment:

    docker logs envoy-proxy 2>&1 | grep -iE "access|path|error" | head -30

    How to fix it: Create the missing directory and assign ownership, then correct the config.

    mkdir -p /var/log/envoy
    chown envoy:envoy /var/log/envoy

    In your envoy.yaml, find and correct the path. Before:

    access_log:
      - name: envoy.access_loggers.file
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /var/log/enovy/access.log

    After:

    access_log:
      - name: envoy.access_loggers.file
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /var/log/envoy/access.log

    Restart Envoy and verify a log line appears immediately. Note that Envoy does not reload a static bootstrap config on SIGHUP; a change to envoy.yaml requires a full restart (or a hot restart via the restarter wrapper):

    systemctl restart envoy
    curl -s http://127.0.0.1:8080/
    tail -5 /var/log/envoy/access.log

    Root Cause 2: Permission Denied

    Why it happens: Envoy runs as a non-root user — commonly envoy (UID 1337 in Istio setups) or a dedicated service account — and the log directory was created or locked down by root without granting write access to the service user. I've seen this happen repeatedly after security hardening passes where someone sets /var/log subdirectories to 700 root:root and doesn't account for which services need write access.

    How to identify it: Check what user Envoy is running as:

    ps aux | grep envoy
    envoy   1423  0.4  1.2 348920 24576 ?  Ssl  14:21   0:05 /usr/local/bin/envoy -c /etc/envoy/envoy.yaml

    Now check the directory permissions:

    ls -la /var/log/ | grep envoy
    drwxr-x--- 2 root root 4096 Apr 19 14:00 envoy

    Mode 750, owned by root:root. The envoy user is neither the owner nor in the root group, so it cannot write to — or even enter — the directory. You can confirm this definitively with:

    sudo -u envoy touch /var/log/envoy/test.log
    touch: cannot touch '/var/log/envoy/test.log': Permission denied

    If Envoy surfaces this at startup, the critical log line looks like:

    [2026-04-19 14:21:33.412][1][critical][main] [source/server/server.cc:888]
      error initializing access log file '/var/log/envoy/access.log': Permission denied

    How to fix it: The cleanest fix is correcting ownership on the log directory:

    chown envoy:envoy /var/log/envoy
    chmod 755 /var/log/envoy

    If the log file itself was previously created by root while the process was running as root:

    chown envoy:envoy /var/log/envoy/access.log

    In Kubernetes with a hostPath volume, the node-level directory must be writable by the UID set in securityContext.runAsUser. An init container handles this cleanly (chmod 777 is the blunt fix shown here; a chown to the runAsUser UID is tighter if you know it):

    initContainers:
    - name: fix-log-perms
      image: busybox:1.36
      command: ["sh", "-c", "mkdir -p /var/log/envoy && chmod 777 /var/log/envoy"]
      volumeMounts:
      - name: envoy-logs
        mountPath: /var/log/envoy

    After fixing permissions, verify the write test passes before reloading Envoy:

    sudo -u envoy touch /var/log/envoy/writetest && echo "Write OK" && rm /var/log/envoy/writetest
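    If you want to see the failure mode without touching the real path, the permission mechanics are easy to reproduce in a scratch directory (run this as a non-root user; root bypasses mode bits entirely):

    ```shell
    # Reproduce the failure: a directory without the write bit rejects file
    # creation, exactly like a locked-down /var/log/envoy.
    DIR=$(mktemp -d)

    chmod 500 "$DIR"     # r-x only: listable and enterable, but not writable
    touch "$DIR/access.log" 2>/dev/null && echo "write ok" || echo "write denied"

    chmod 755 "$DIR"     # restore the owner write bit: the same write now succeeds
    touch "$DIR/access.log" && echo "write ok"

    rm -rf "$DIR"
    ```

    This is the same check the `sudo -u envoy touch` test above performs against the real directory.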

    Root Cause 3: Filter Not Configured

    Why it happens: This is the most common root cause I encounter, and it catches engineers who are transitioning to Envoy from other proxies where logging is a global setting. In Envoy, access logging is not global. It lives inside the HttpConnectionManager (HCM) filter chain on each individual listener, or inside the tcp_proxy filter for TCP listeners. If you define a route, configure a cluster, and wire up the listener but forget to add the access_log stanza inside the HCM block, you'll get exactly zero log entries. No errors, no warnings — the proxy works perfectly, requests route correctly, nothing gets logged.

    How to identify it: Pull the config dump and look at what's inside each HCM block:

    curl -s http://127.0.0.1:9901/config_dump | python3 -m json.tool > /tmp/envoy-config.json
    grep -n "http_connection_manager\|access_log" /tmp/envoy-config.json | head -40

    If you see http_connection_manager sections with no access_log key underneath them, that's the problem. A listener without logging configured looks like this:

    {
      "name": "envoy.filters.network.http_connection_manager",
      "typed_config": {
        "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
        "stat_prefix": "ingress_http",
        "route_config": { "..." },
        "http_filters": ["..."]
      }
    }

    No access_log key. Compare that to a correctly configured HCM:

    {
      "name": "envoy.filters.network.http_connection_manager",
      "typed_config": {
        "@type": "...",
        "stat_prefix": "ingress_http",
        "access_log": [
          {
            "name": "envoy.access_loggers.file",
            "typed_config": {
              "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog",
              "path": "/var/log/envoy/access.log"
            }
          }
        ],
        "route_config": { "..." },
        "http_filters": ["..."]
      }
    }

    How to fix it: Add the access_log stanza directly inside the HCM typed_config block in your YAML. Here's the complete minimal correct structure:

    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /var/log/envoy/access.log
        route_config:
          name: local_route
          virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
                - match: { prefix: "/" }
                  route: { cluster: backend_cluster }
        http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

    After updating the config, restart Envoy (a SIGHUP will not pick up a static config change), send a test request, and confirm the log file gets written:

    systemctl restart envoy
    curl -s http://192.168.1.100:8080/health
    tail -1 /var/log/envoy/access.log

    Root Cause 4: Format String Error

    Why it happens: Envoy supports custom access log formats through the log_format block using either text_format_source or json_format. A malformed command operator — for example %REQ(:AUTHORITY% instead of %REQ(:AUTHORITY)%, with the closing parenthesis missing — will usually cause Envoy to reject the config at startup. But subtler format errors are more insidious. An invalid operator in a json_format block may pass YAML syntax validation yet fail at Envoy's internal config parsing stage, causing the access logger to be skipped entirely without a clean error message bubbling up to the surface.

    In my experience, the JSON format variant is where engineers run into the most trouble. People copy operator strings from blog posts written against Envoy v2 and paste them into v3 configs where the operator names have changed, or they build format strings dynamically in Helm templates and introduce rendering artifacts like unclosed brackets.

    How to identify it: Check startup output first:

    journalctl -u envoy --no-pager -n 100 | grep -iE "error|invalid|format|access_log"

    A format parse error at startup looks like:

    [2026-04-19 14:35:10.112][1][critical][main] [source/extensions/access_loggers/common/access_log_impl.cc:42]
      error: invalid access log format string: unexpected end of format string
      at position 22 in '%REQ(:AUTHORITY%'

    Always validate your config file before deploying:

    envoy --mode validate -c /etc/envoy/envoy.yaml 2>&1

    Output on a bad format string:

    error initializing configuration '/etc/envoy/envoy.yaml':
      Field 'json_format' has invalid operator '%START_TIME(%Y-%m-%dT%H:%M:%S%'
      - unmatched parenthesis

    If validation passes, grep for commonly misspelled or nonexistent operator names:

    grep -n "RESPONS_CODE\|UPSTREAM_CLUSER\|DURATION_MS" /etc/envoy/envoy.yaml
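    For a cheap pre-deploy lint on unbalanced operators, a grep heuristic catches the common case. This is an approximation, not a substitute for envoy --mode validate; %START_TIME(...)% is excluded because its strftime argument legitimately contains % characters:

    ```shell
    # Heuristic lint: flag %OPERATOR(...% tokens whose closing ")%" never
    # arrives on the same line. START_TIME is excluded (its argument embeds %).
    CONFIG="${CONFIG:-/etc/envoy/envoy.yaml}"
    if grep -nE '%[A-Z_]+\([^)]*%' "$CONFIG" | grep -v START_TIME; then
      echo "possible unclosed operator(s) above"
    else
      echo "no obvious unclosed operators"
    fi
    ```

    A broken token like %REQ(:AUTHORITY% matches the pattern (a % appears before the closing parenthesis), while a correct %REQ(:METHOD)% does not.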

    How to fix it: The most common mistakes are missing closing parentheses in %START_TIME(...)%, misspelled operator names like %RESPONS_CODE% (missing the E), and v2 operator names used in v3 configs. A correct minimal text format string:

    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
      path: /var/log/envoy/access.log
      log_format:
        text_format_source:
          inline_string: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %BYTES_SENT% \"%REQ(:AUTHORITY)%\" %DURATION%\n"

    A correct JSON format block:

    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
      path: /var/log/envoy/access.log
      log_format:
        json_format:
          timestamp: "%START_TIME%"
          method: "%REQ(:METHOD)%"
          path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
          response_code: "%RESPONSE_CODE%"
          duration_ms: "%DURATION%"
          upstream_host: "%UPSTREAM_HOST%"

    Always validate before applying:

    envoy --mode validate -c /etc/envoy/envoy.yaml && echo "Config is valid"

    Root Cause 5: Disk Full

    Why it happens: When the filesystem holding /var/log reaches 100% capacity, Envoy can't write new log entries. The process keeps running, requests proxy without issue, but every write syscall to the log file returns ENOSPC and the entry is silently dropped. There's no Envoy-level alerting on this condition — the proxy doesn't crash, it doesn't emit a warning to stderr, the log file just stops accumulating new lines. In long-running environments without proper log rotation this is surprisingly common, especially when access log verbosity is high and request volume is heavy.

    How to identify it:

    df -h /var/log/envoy/
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1        20G   20G     0 100% /

    Confirm a write from the Envoy user fails:

    sudo -u envoy dd if=/dev/zero of=/var/log/envoy/writetest bs=1k count=1
    dd: error writing '/var/log/envoy/writetest': No space left on device
    0+0 records in
    0+0 records out

    Find what's consuming the space:

    du -sh /var/log/* | sort -rh | head -20
    ls -lhS /var/log/envoy/

    How to fix it: First, free up space immediately so logging can resume. Don't delete the log file while Envoy holds it open — truncate it instead:

    # Truncate the current log file safely (Envoy keeps the file descriptor open)
    > /var/log/envoy/access.log
    
    # Compress any rotated logs that are still uncompressed
    gzip /var/log/envoy/access.log.1 2>/dev/null
    
    # Remove old compressed logs if safe to do so
    find /var/log/envoy/ -name "*.gz" -mtime +14 -delete
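    The reason for truncating rather than deleting is worth seeing once: truncation reuses the same inode, so the writer's open file descriptor keeps pointing at a file you can see, whereas rm leaves the process appending to an invisible unlinked inode. A scratch-file demonstration:

    ```shell
    # Demonstrate that "> file" truncates in place: the inode (and therefore
    # any open file descriptor pointing at it) survives the truncation.
    F=$(mktemp)
    echo "old log data" >> "$F"
    BEFORE=$(ls -i "$F" | awk '{print $1}')

    : > "$F"                              # truncate, don't delete
    AFTER=$(ls -i "$F" | awk '{print $1}')

    [ "$BEFORE" = "$AFTER" ] && echo "same inode: writer's fd still valid"
    [ -s "$F" ] || echo "file is now empty"
    rm -f "$F"
    ```

    With rm, the disk space wouldn't even be reclaimed until Envoy reopened the path (e.g. via SIGUSR1), which is exactly the trap during a disk-full incident.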

    Then configure logrotate to prevent this from happening again. Create /etc/logrotate.d/envoy:

    /var/log/envoy/access.log {
        daily
        rotate 7
        compress
        delaycompress
        missingok
        notifempty
        sharedscripts
        postrotate
            kill -USR1 $(pgrep envoy) 2>/dev/null || true
        endscript
    }

    Test the logrotate config:

    logrotate -d /etc/logrotate.d/envoy

    For Kubernetes, ship logs off-node with Fluent Bit or Fluentd as a DaemonSet, and set an ephemeral storage limit on the pod so runaway log growth can't fill the node disk.


    Root Cause 6: Access Log Filter Blocking All Entries

    Why it happens: Envoy access log configurations support runtime filter conditions — you can restrict logging to only certain HTTP status codes, minimum durations, or a random sampling fraction. These filters are a powerful way to reduce log volume in high-traffic environments. But if someone added a filter with an overly aggressive condition — or introduced a typo that makes the filter unmatchable — you end up with a perfectly configured access logger that never emits a single line.

    How to identify it: Look for a filter key inside the access_log stanza in the config dump:

    curl -s http://127.0.0.1:9901/config_dump | python3 -m json.tool | grep -B2 -A20 '"access_log"'

    A misconfigured filter that will never match looks like this:

    "access_log": [
      {
        "name": "envoy.access_loggers.file",
        "filter": {
          "status_code_filter": {
            "comparison": {
              "op": "GE",
              "value": {
                "default_value": 9999,
                "runtime_key": "access_log.min_status_code"
              }
            }
          }
        },
        "typed_config": { "..." }
      }
    ]

    A status_code_filter with GE 9999 will never match any real HTTP status code. Nothing gets logged. Similarly, a runtime_fraction filter with numerator: 0 will sample zero percent of requests.

    How to fix it: If you want to log everything, remove the filter block entirely. If you want selective logging, correct the filter. To log only 5xx responses:

    access_log:
      - name: envoy.access_loggers.file
        filter:
          status_code_filter:
            comparison:
              op: GE
              value:
                default_value: 500
                runtime_key: access_log.min_status_code
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /var/log/envoy/access.log

    To log all requests without filtering, the stanza has no filter key at all — just name and typed_config.


    Root Cause 7: Extension Not Registered or Wrong API Version

    Why it happens: Envoy's v3 API requires the correct @type URL for every typed_config block. If you copy a config from an older blog post targeting the v2 API, or use a custom-built Envoy image with the file access logger extension disabled, the logger simply won't initialize. This is less common than the other causes but I have seen it in environments running stripped-down Envoy builds for binary size or CVE-surface reasons.

    How to identify it: Check the Envoy version, then look at startup stderr — when an extension is missing, the logger fails loudly at initialization rather than at request time:

    envoy --version
    journalctl -u envoy --no-pager | grep -i "registered implementation"

    If the file access logger extension wasn't compiled in, startup shows:

    [critical] Didn't find a registered implementation for type:
      'envoy.extensions.access_loggers.file.v3.FileAccessLog'

    Also check that you're using the correct v3 @type string:

    grep -n "@type" /etc/envoy/envoy.yaml | grep access

    The correct v3 type URL is:

    type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog

    Not the old v2 form, which used the string name envoy.file_access_log rather than a typed URL.

    How to fix it: Update the @type to the correct v3 URL shown above. If the extension genuinely isn't compiled in, switch to an official Envoy image, which includes all standard extensions:

    docker pull envoyproxy/envoy:v1.29-latest
    docker run --rm envoyproxy/envoy:v1.29-latest envoy --version

    Prevention

    Most of these failures are preventable with a few consistent practices. The single most effective habit is running envoy --mode validate against every config change before deploying it anywhere — this catches format string errors, wrong type URLs, and structural mistakes that would otherwise only surface at runtime.

    envoy --mode validate -c /etc/envoy/envoy.yaml && echo "Config OK"

    For path and permission issues, add a smoke test to your deployment pipeline that sends one request immediately after Envoy starts and checks that the log file has grown:

    BEFORE=$(stat -c %s /var/log/envoy/access.log 2>/dev/null || echo 0)
    curl -s http://192.168.1.100:8080/healthz > /dev/null
    sleep 1
    AFTER=$(stat -c %s /var/log/envoy/access.log 2>/dev/null || echo 0)
    [ "$AFTER" -gt "$BEFORE" ] && echo "Logging OK" || echo "WARNING: log file did not grow"

    Set up logrotate from day one — not after the first disk-full incident. Mount the log directory on a dedicated partition with a fixed size and a df-based alert at 80% capacity. In Kubernetes, set ephemeral storage limits and run a log-shipping DaemonSet so logs leave the node before they fill it.

    Monitor Envoy's file-writer stats via the admin API: filesystem.write_completed should climb with traffic, and filesystem.write_failed should stay at zero. If requests are flowing but write_completed is flat, or write_failed is climbing, something is wrong with the log sink:

    curl -s http://127.0.0.1:9901/stats | grep filesystem
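    The /stats output is plain "name: value" text, one counter per line, which makes it easy to wire into a shell alert. A minimal sketch (SAMPLE stands in for live curl output):

    ```shell
    # Parse file-writer counters out of /stats text output. SAMPLE mimics the
    # "name: value" format; in production, replace it with:
    #   curl -s http://127.0.0.1:9901/stats | grep '^filesystem\.'
    SAMPLE='filesystem.flushed_by_timer: 412
    filesystem.write_completed: 98231
    filesystem.write_failed: 17'

    FAILED=$(printf '%s\n' "$SAMPLE" | awk -F': ' '$1 == "filesystem.write_failed" {print $2}')
    echo "write_failed=$FAILED"
    if [ "$FAILED" -gt 0 ]; then
      echo "ALERT: access log writes are failing"
    fi
    ```

    Run on an interval (cron, a sidecar, or your metrics agent's exec check), this catches the disk-full and permission failure modes that Envoy itself never complains about.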

    Finally, keep access log configs explicit. Don't inherit log settings from a shared base config if you don't fully understand what's in it, and always confirm after each config change that log entries are actually appearing before you declare the deployment successful. Two minutes of verification after a deploy saves hours of debugging later.

    Frequently Asked Questions

    Why is my Envoy access log file empty even though requests are going through?

    The most common reasons are that the access_log stanza is missing from the HttpConnectionManager filter (Envoy does not log globally — it must be configured per listener), or an access log filter condition is blocking all entries. Pull the config dump from the admin endpoint at :9901/config_dump and confirm the access_log block exists inside the HCM typed_config and has no overly restrictive filter.

    How do I check what access log path Envoy is actually using at runtime?

    Run: curl -s http://127.0.0.1:9901/config_dump | python3 -m json.tool | grep -A 8 "access_log" — this shows the live configuration including the path field under the FileAccessLog typed_config, which may differ from what's in your static config file if xDS is pushing updates.

    Can Envoy silently fail to write access logs without crashing?

    Yes. Envoy does not crash or surface a persistent error when access log writes fail due to disk full, permission denied, or a missing directory — it continues proxying traffic normally. Always monitor log file growth independently of Envoy's own health checks.

    How do I validate an Envoy config before deploying it to production?

    Run: envoy --mode validate -c /etc/envoy/envoy.yaml — this performs full config parsing including format string validation, type URL checks, and structural validation without starting the proxy. Any format string errors or wrong @type URLs will be reported with the exact field and position.

    What is the correct @type URL for the Envoy file access logger in v3 API?

    The correct v3 @type is: type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog — using the older v2 name string envoy.file_access_log in a typed_config block will cause the extension to fail to initialize in current Envoy builds.
