InfraRunBook

    Linux Cron Job Not Running

    Linux
    Published: Apr 12, 2026
    Updated: Apr 12, 2026

    Cron jobs that fail silently are one of the most frustrating Linux troubleshooting scenarios. This guide covers every common root cause — from syntax errors to swallowed redirects — with real commands and fixes.

    Symptoms

    You set up a cron job. You tested the script manually and it ran perfectly. But when the scheduled time arrives, nothing happens. No output, no errors, no sign anything executed at all. Or maybe the job ran once and then mysteriously stopped after a system change. Sound familiar?

    Common symptoms include:

    • The script runs fine manually but never fires on schedule
    • Cron mail or syslog shows nothing around the expected run time
    • Log files the script should be writing to remain untouched
    • Application state that depends on the cron job is stale or incorrect
    • The job appears in crontab -l but leaves no evidence of execution
    • A job that previously worked stopped running after a package update or config change

    I've seen this problem derail production deployments, cause missed backups, and trigger middle-of-the-night incident calls. The frustrating part is that the failure is almost always silent. Cron doesn't yell at you — it just quietly does nothing. Let's work through the most common causes systematically.


    Root Cause 1: Cron Daemon Not Running

    This is the one engineers check last but should always check first. If the cron daemon isn't running, nothing gets scheduled — full stop. On modern systemd-based distributions the service is typically crond on RHEL/CentOS/Rocky systems and cron on Debian/Ubuntu.

    Check daemon status immediately:

    systemctl status cron
    # or on RHEL-family systems:
    systemctl status crond

    A healthy daemon looks like this:

    ● cron.service - Regular background program processing daemon
         Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2026-04-12 08:00:01 UTC; 4h 12min ago
           Docs: man:cron(8)
       Main PID: 842 (cron)
          Tasks: 1 (limit: 4915)
         Memory: 1.1M
            CPU: 12ms
         CGroup: /system.slice/cron.service
                 └─842 /usr/sbin/cron -f

    If you see inactive (dead) or failed instead, that's your answer. In my experience this happens most often after a hasty system update where the package maintainer scripts stopped the service and it never came back up, or after someone ran a broad cleanup script that killed more than intended.

    Fix it and make sure it survives future reboots:

    systemctl start cron
    systemctl enable cron

    If the daemon keeps crashing on start, check the journal before trying again:

    journalctl -u cron -n 50 --no-pager

    Look for permission errors on spool directories, corrupted crontab files, or missing binaries. The journal will tell you exactly what went wrong.


    Root Cause 2: Crontab Syntax Error

    Cron is notoriously unforgiving about syntax. A single malformed line doesn't just fail that one job — depending on the implementation, it can prevent the entire crontab from loading. And cron won't alert you loudly when this happens.

    The five time fields are minute hour day-of-month month day-of-week, followed by the command. Getting these wrong is surprisingly easy, especially with special characters like / for step values or - for ranges. Here are broken entries I've encountered on real servers:

    # BROKEN - missing the minute field entirely
    * 2 * * /opt/scripts/backup.sh
    
    # BROKEN - semicolons used instead of spaces between fields
    0;2;*;*;* /opt/scripts/backup.sh
    
    # BROKEN - day-of-week value 8 is invalid (valid range is 0-7)
    0 2 * * 8 /opt/scripts/backup.sh
    
    # BROKEN - no newline at end of file (last line silently dropped)
    0 3 * * 5 /opt/scripts/weekly_report.sh

    To catch syntax errors, check syslog immediately after editing your crontab:

    grep CRON /var/log/syslog | tail -20
    # or on RHEL systems:
    grep CRON /var/log/cron | tail -20

    When cron encounters a syntax error, you'll see something like this:

    Apr 12 10:15:03 sw-infrarunbook-01 cron[842]: (infrarunbook-admin) ERROR (Syntax error, this crontab file will be ignored)

    That phrase — "this crontab file will be ignored" — means every single job in that crontab is dead, not just the malformed line. The corrected versions:

    # FIXED - all five time fields present
    0 2 * * * /opt/scripts/backup.sh
    
    # FIXED - valid day-of-week (0 and 7 both mean Sunday)
    0 2 * * 5 /opt/scripts/backup.sh
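
    If you want an automated first-pass check before you rely on syslog, a short awk filter can flag entries that don't have at least six whitespace-separated fields (five time fields plus a command). This is only a heuristic sketch: it skips comments, variable assignments, and @reboot-style shortcuts, and it does not validate individual field values.

```shell
# crontab_lint: heuristic check for user-crontab lines read from stdin.
# Flags entries with fewer than 6 fields (5 time fields + command).
# It does not validate field values; it's a quick first-pass filter.
crontab_lint() {
  awk '
    /^[[:space:]]*(#|$)/       { next }  # skip comments and blank lines
    /^[[:space:]]*[A-Za-z_]+=/ { next }  # skip VAR=value assignments
    /^[[:space:]]*@/           { next }  # skip @reboot / @daily shortcuts
    NF < 6 { printf "line %d: only %d fields: %s\n", NR, NF, $0 }
  '
}

# Usage (inspect your installed crontab):
# crontab -l | crontab_lint
```

    Anything it flags is worth a second look, but a clean result does not guarantee valid syntax, so still check syslog after installing the crontab.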

    One subtle trap: crontab files must end with a newline character. Some editors omit the trailing newline, and cron silently drops the last line of the file. After editing with crontab -e, always verify all lines are present with crontab -l. You can also expose hidden characters with:

    crontab -l | cat -A

    Lines ending in $ are normal. A missing final $ means the file lacks the trailing newline and your last job may not load.


    Root Cause 3: Script Permission Issue

    You tested the script as root and it worked fine. But cron runs each job as the user who owns that crontab entry, and that user may not have execute permission on the script, or read/traverse permission on the directories leading to it.

    The CMD line will appear in syslog — meaning cron attempted the job — but the script exits immediately with a permissions error that goes unreported:

    Apr 12 10:30:01 sw-infrarunbook-01 CRON[12345]: (infrarunbook-admin) CMD (/opt/scripts/backup.sh)
    Apr 12 10:30:01 sw-infrarunbook-01 CRON[12346]: (infrarunbook-admin) END (/opt/scripts/backup.sh)

    The job started and ended in under a second with nothing to show for it. Check permissions explicitly:

    ls -la /opt/scripts/backup.sh
    -rw-r--r-- 1 root root 1247 Apr 10 09:22 /opt/scripts/backup.sh

    No execute bit. Root wrote the script and never made it executable, and infrarunbook-admin only has read access — not execute. Fix it:

    chmod 755 /opt/scripts/backup.sh
    # or more conservatively if only the owner should execute:
    chmod 750 /opt/scripts/backup.sh

    Don't forget to check directory permissions too. If the script lives in a directory the cron user can't traverse, you'll get the same failure even with correct file permissions:

    ls -ld /opt/scripts/
    drwx------ 2 root root 4096 Apr 10 09:00 /opt/scripts/

    That directory is accessible only to root. Fix the directory too:

    chmod 755 /opt/scripts/

    The fastest way to simulate exactly what cron will do is to switch to the cron user and run the command in a clean shell:

    su - infrarunbook-admin -c '/opt/scripts/backup.sh'

    If it fails here, it'll fail in cron. Fix the problem in this context first and cron will follow.


    Root Cause 4: Environment Variables Missing

    This is the one that bites even experienced engineers repeatedly. Cron does not source your shell profile. It doesn't load ~/.bashrc, ~/.bash_profile, or /etc/profile. The environment cron provides is intentionally minimal — a nearly empty shell with almost nothing inherited from your interactive session.

    The default PATH cron uses looks like this:

    /usr/bin:/bin

    That's it. No /usr/local/bin, no /usr/sbin, none of the custom entries you've added to your profile over the years. So when your script calls python3 installed at /usr/local/bin/python3, cron can't find it. When it calls aws at /usr/local/bin/aws or node at /usr/local/nvm/versions/node/v20.0.0/bin/node, those fail too — and they fail silently unless you've fixed the redirect situation described in the next section.

    Beyond PATH, I've also seen breakage from missing HOME, missing LANG causing locale errors in scripts that process text, and absent application-specific variables like database connection strings that tools expect to find in the environment.

    To identify what environment cron actually provides, add a temporary introspection job:

    * * * * * env > /tmp/cron_env.txt 2>&1

    Wait a minute, then compare that file against a normal interactive session:

    cat /tmp/cron_env.txt
    HOME=/home/infrarunbook-admin
    LOGNAME=infrarunbook-admin
    PATH=/usr/bin:/bin
    SHELL=/bin/sh
    USER=infrarunbook-admin

    The cleanest long-term fix is to use fully qualified absolute paths in your scripts for every external command:

    #!/bin/bash
    /usr/bin/find /var/backups -mtime +30 -delete
    /usr/local/bin/aws s3 sync /var/backups s3://backups-infrarunbook/
    /usr/bin/logger "Backup cleanup complete"

    Alternatively, declare PATH explicitly at the top of your crontab file, which applies to all jobs in that file:

    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    MAILTO=infrarunbook-admin@solvethenetwork.com
    
    0 2 * * * /opt/scripts/backup.sh

    For scripts that need application-specific variables like database credentials or API tokens, source a dedicated environment file at the top of the script itself:

    #!/bin/bash
    set -a
    source /etc/app/env.conf
    set +a
    
    # rest of script follows

    The set -a flag marks every variable set or modified for automatic export, so everything sourced from the config file becomes available to child processes the script spawns.


    Root Cause 5: Redirect Swallowing Errors

    This is the silent killer of cron debugging. The job runs, something goes wrong, but your output redirection makes all the error messages disappear before you can see them. The most destructive pattern looks like this:

    0 2 * * * /opt/scripts/backup.sh > /dev/null 2>&1

    In that entry, > /dev/null discards stdout, and 2>&1 then sends stderr to the same discarded stream (redirections are processed left to right). Your script blows up spectacularly, cron faithfully runs it on schedule, and all output vanishes into the void. You never know it failed. I've seen this single line mask broken scripts that hadn't worked in months.

    Another variant that causes confusion:

    0 2 * * * /opt/scripts/backup.sh >> /var/log/backup.log

    This captures stdout but stderr still goes to cron's mail mechanism — or nowhere, if mail isn't configured. Scripts that write error messages to stderr (which is correct behavior) won't show those messages in the log file. You open the log, see nothing, and assume the script ran cleanly.

    To expose what's really happening, redirect both streams to a timestamped log file:

    0 2 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1

    If you have moreutils installed, use ts to add timestamps to each output line — invaluable when diagnosing jobs that hang or run longer than expected:

    0 2 * * * /opt/scripts/backup.sh 2>&1 | ts '[%Y-%m-%d %H:%M:%S]' >> /var/log/backup.log

    Check the log after the next run:

    tail -50 /var/log/backup.log

    You might find error messages that were always being generated but silently discarded. I've uncovered gems like these by fixing redirects on supposedly broken cron jobs:

    [2026-04-12 02:00:01] /opt/scripts/backup.sh: line 14: pg_dump: command not found
    [2026-04-12 02:00:01] /opt/scripts/backup.sh: line 22: /mnt/nas/backups: Permission denied
    [2026-04-12 02:00:03] rsync: [sender] stat "/var/app/data" failed: No such file or directory (2)

    Each of those is a completely different underlying root cause — missing PATH, wrong permissions on a mount point, a nonexistent source directory — all invisible because of a single redirect. Fix the redirect problem first. Once you can see what's happening, everything else becomes diagnosable.

    If you genuinely want quiet jobs that only alert on failure, the right pattern notifies you on non-zero exit rather than discarding everything unconditionally:

    0 2 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 || echo "backup.sh failed at $(date)" | mail -s "Cron failure on sw-infrarunbook-01" infrarunbook-admin@solvethenetwork.com

    Root Cause 6: Wrong User Context and System Crontab Format

    System-wide cron jobs placed in /etc/cron.d/ or added to /etc/crontab require an explicit username field between the time specification and the command. User crontabs installed via crontab -e do not include that field. Mixing these formats is a reliable way to produce a job that appears correct but never runs.

    System crontab format — note the username field:

    # /etc/cron.d/backup
    # minute hour dom month dow USER command
    0 2 * * * root /opt/scripts/backup.sh

    User crontab format (via crontab -e) — no username field:

    # minute hour dom month dow command
    0 2 * * * /opt/scripts/backup.sh

    If you drop a user crontab entry into /etc/cron.d/ without adding the username field, cron will interpret the first token after the time fields (here the script path itself) as the username, leaving no command at all. The resulting error in syslog looks like:

    Apr 12 10:15:04 sw-infrarunbook-01 cron[842]: (/opt/scripts/backup.sh) USER (not found)

    The fix is to add the correct username to the entry in /etc/cron.d/:

    0 2 * * * infrarunbook-admin /opt/scripts/backup.sh

    Root Cause 7: Timing and Timezone Mismatch

    Cron interprets time fields in the system's local timezone by default. If your server runs UTC — which is the right choice for servers — but you scheduled a job thinking in local time, the job will appear not to run when you expect it.

    Check the system timezone:

    timedatectl
                   Local time: Sun 2026-04-12 10:45:00 UTC
               Universal time: Sun 2026-04-12 10:45:00 UTC
                     RTC time: Sun 2026-04-12 10:45:00
                    Time zone: UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: active
              RTC in local TZ: no

    If the server is UTC and you need a job to run at 09:00 US Eastern (UTC-5 in winter, UTC-4 in summer), you have to schedule it at 14:00 or 13:00 UTC respectively. Some cron implementations support the CRON_TZ variable directly in the crontab to make this explicit:

    CRON_TZ=America/New_York
    0 9 * * 1-5 /opt/scripts/morning_report.sh

    CRON_TZ support varies by implementation: cronie (the default on RHEL/Rocky Linux) supports it, while the classic Vixie-derived cron shipped on Debian/Ubuntu generally does not, so check man 5 crontab on your system before relying on it. Where supported, CRON_TZ also makes the intent self-documenting: a future engineer reading the crontab knows exactly what timezone the job targets without having to reverse-engineer the math.
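
    If your cron doesn't support CRON_TZ, you can at least compute the offset reliably instead of doing DST math in your head. GNU date (coreutils; BSD date uses different syntax) accepts a TZ override inside the date string:

```shell
# Ask GNU date what 09:00 America/New_York is in UTC today.
# The answer shifts between 13:00 and 14:00 across DST transitions.
TZ=UTC date -d 'TZ="America/New_York" 09:00' '+%H:%M %Z'
```

    Run this when you set up the job, and recheck it around DST transitions if the local wall-clock time is what matters.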


    Root Cause 8: Script Exits Early or Stale Lockfile

    Sometimes the job starts and exits before completing any useful work. A script that opens with set -e will abort on the first non-zero exit code from any command — including commands whose failure you don't care about. A grep that finds no matches returns exit code 1. A mkdir on a directory that already exists returns a non-zero code. Either will silently terminate your entire script if you're not handling exit codes explicitly.
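
    Both pitfalls are easy to demonstrate and guard against in a few lines. This sketch uses temp files it creates itself, so the paths are illustrative rather than anything from a real deployment:

```shell
#!/bin/sh
set -e  # abort on the first unguarded non-zero exit code

log=$(mktemp)
printf 'INFO start\nERROR disk full\nINFO done\n' > "$log"

# grep exits 1 when it finds nothing; under set -e that would kill the
# script. The || true guard turns "no matches" into a non-fatal result.
errors=$(grep -c 'ERROR' "$log" || true)

# mkdir on an existing directory exits non-zero; mkdir -p does not,
# so repeated runs are safe.
workdir="${TMPDIR:-/tmp}/cron-sete-demo"
mkdir -p "$workdir"
mkdir -p "$workdir"  # second call still exits 0

echo "error lines: $errors"
rm -f "$log"
rmdir "$workdir"
```

    The general rule: under set -e, every command whose non-zero exit is an expected outcome rather than a failure needs an explicit guard.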

    Another common variant is lockfile contention. If your script creates a lockfile to prevent overlapping runs and a previous instance crashed without cleaning up, every subsequent run will abort at the lockfile check. The script technically "ran" — it just exited immediately.

    Look for stale lockfiles:

    find /var/run /tmp -name "*.lock" -user infrarunbook-admin -ls

    If the lockfile contains a PID, verify whether that process is actually running:

    cat /var/run/backup.lock
    # output: 19832
    
    ps -p 19832
    # output: no process found — this is a stale lock

    Remove the stale lockfile and the next scheduled run will proceed normally:

    rm /var/run/backup.lock

    Going forward, implement proper PID-based lockfile validation in the script itself so stale locks are automatically detected and cleared rather than blocking indefinitely.
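
    A sketch of that validation, with an illustrative lock path (a temp directory here so the example runs unprivileged; production jobs often lock under /var/run). kill -0 probes whether the recorded PID is still alive without sending any signal:

```shell
#!/bin/sh
# Self-clearing PID lockfile sketch; the lock path is illustrative.
LOCK="${TMPDIR:-/tmp}/backup.lock"

if [ -f "$LOCK" ]; then
    oldpid=$(cat "$LOCK")
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "previous run (PID $oldpid) still active, exiting" >&2
        exit 0
    fi
    echo "clearing stale lock left by PID $oldpid" >&2
    rm -f "$LOCK"
fi

echo "$$" > "$LOCK"
trap 'rm -f "$LOCK"' EXIT   # remove the lock however the script exits

# ... backup work goes here ...
```

    Note that the check-then-create sequence above has a small race window. Where util-linux is available, flock handles the locking atomically and needs no cleanup logic at all: 0 2 * * * flock -n /var/run/backup.lock /opt/scripts/backup.sh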


    Prevention

    Most cron failures are preventable with a handful of disciplined habits applied consistently. The single biggest quality-of-life improvement is enabling cron mail. Set MAILTO at the top of every crontab to an address that actually gets read:

    MAILTO=infrarunbook-admin@solvethenetwork.com
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    
    0 2 * * * /opt/scripts/backup.sh

    With MAILTO set, any stdout or stderr output from a cron job gets emailed to that address. It's old-school but effective. You stop flying blind immediately.

    Always test new cron jobs by running the exact command as the cron user in a clean environment that mimics what cron provides:

    su - infrarunbook-admin
    env -i HOME=/home/infrarunbook-admin PATH=/usr/bin:/bin SHELL=/bin/sh /opt/scripts/backup.sh

    The env -i flag strips out your current environment, giving you something very close to what cron actually provides. Anything that breaks in this test will break in cron. Fix it here first.

    Use absolute paths everywhere in your scripts — not just for the binaries you call, but for config files, log files, temp files, and every path assumption your script makes. Cron's working directory is the cron user's home directory, not the directory where the script lives. A script that references ./config.ini will behave differently under cron than it does when you run it from /opt/scripts/.
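
    One portable way to remove the working-directory assumption entirely is to have the script resolve its own location from $0 at startup. This is a sketch; config.ini is the hypothetical relative file from above:

```shell
#!/bin/sh
# Resolve the directory this script lives in, regardless of where
# cron (or a human) invoked it from.
SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)

# Relative assets now become absolute:
CONFIG="$SCRIPT_DIR/config.ini"   # instead of ./config.ini
echo "config path: $CONFIG"
```

    The CDPATH= prefix guards against a user CDPATH setting making cd print an unexpected path into the command substitution.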

    Add structured logging to your scripts rather than relying entirely on cron's mail mechanism. Using logger writes directly to syslog and puts your cron activity alongside other system events, which makes correlating failures with other things happening on the system much easier:

    #!/bin/bash
    logger -t backup-job "Starting nightly backup"
    
    # ... script work ...
    
    logger -t backup-job "Backup completed — $(du -sh /var/backups | cut -f1) written"

    Then your job history is queryable with standard tools:

    journalctl -t backup-job --since "24 hours ago"

    Finally, for any job where success actually matters — backups, report generation, data pipeline steps — consider adding a dead man's switch check. Have the script make a simple HTTP GET call to an internal monitoring endpoint at the end of a successful run. If that call doesn't arrive within the expected window, your monitoring system alerts you. This gives you positive confirmation that the job ran and completed successfully, not just that cron attempted it. The difference between those two things is exactly what most of these failure modes exploit.
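
    A sketch of that pattern as a wrapper function. The monitoring URL and job path are hypothetical placeholders; hosted heartbeat services typically give you a unique per-job URL to GET after each successful run:

```shell
#!/bin/sh
# run_with_heartbeat: run a job, and ping a monitoring URL only if it
# succeeded. Both the job path and the endpoint URL are illustrative.
run_with_heartbeat() {
    job_cmd=$1
    ping_url=$2
    if "$job_cmd"; then
        # -fsS: fail on HTTP errors, quiet except on real errors
        curl -fsS --max-time 10 "$ping_url" > /dev/null \
            || echo "job succeeded but heartbeat ping failed" >&2
    else
        echo "job failed; no heartbeat sent, monitoring should alert" >&2
        return 1
    fi
}

# Example call (hypothetical paths):
# run_with_heartbeat /opt/scripts/backup.sh \
#     "https://monitor.internal/ping/backup-nightly"
```

    Because the ping only fires after a zero exit, a missed heartbeat means either the job failed or cron never ran it, and both are exactly the cases you want to hear about.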

    Frequently Asked Questions

    Why does my cron job work when I run it manually but not on schedule?

    The most common reason is a missing environment variable or PATH difference. Cron runs with a minimal environment — typically PATH=/usr/bin:/bin — and does not source your shell profile. Use absolute paths for all commands in your script, or set PATH explicitly at the top of your crontab file. To confirm, run the script as the cron user with a stripped environment: env -i HOME=/home/youruser PATH=/usr/bin:/bin /path/to/script.sh

    How do I see why a cron job failed?

    First, make sure output isn't being discarded — remove any > /dev/null 2>&1 from the cron entry and redirect to a log file instead: >> /var/log/myjob.log 2>&1. Also check syslog for cron-related entries: grep CRON /var/log/syslog. If MAILTO is set in your crontab, cron will email you any output from failed jobs. Set MAILTO=youremail@solvethenetwork.com at the top of your crontab.

    How do I check if the cron daemon is running?

    Run systemctl status cron on Debian/Ubuntu systems, or systemctl status crond on RHEL/CentOS/Rocky Linux. Look for 'active (running)' in the output. If it's stopped, start it with systemctl start cron and enable it to survive reboots with systemctl enable cron.

    What does 'this crontab file will be ignored' mean in syslog?

    It means cron found a syntax error in your crontab and is refusing to load any of its jobs — not just the malformed line. Run crontab -l to review your entries and look for missing fields, invalid characters, or a missing trailing newline at the end of the file. Fix the syntax error, save, and cron will automatically reload the corrected file.

    Can I use environment variables in a crontab file?

    Yes. You can define variables at the top of a crontab file and they will apply to all jobs in that file. Common examples include PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin and MAILTO=admin@solvethenetwork.com. Some cron implementations also support CRON_TZ to set the timezone for time field interpretation, which is useful when your server runs UTC but you want to schedule jobs in local time.
