Linux Job Scheduling

How I learned to stop worrying and love the Cron.
Runtime Environment—Advanced crontab Format

This is the area that confuses cron users the most. They take commands they run every day from their interactive shells, put them in their crontab, and then the jobs don't work or behave differently than expected.

For example, if you write a program called “fardels” and put it in $HOME/bin, then add $HOME/bin to your PATH, cron might send you mail like this:

/bin/sh: fardels: command not found

The PATH cron uses is not necessarily the same as the one your interactive shell uses.
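
One quick workaround, assuming fardels really does live in $HOME/bin, is to spell out the path in the crontab entry itself; cron sets HOME, and /bin/sh will expand the variable when it runs the command (the schedule here is just for illustration):

0 5 * * * $HOME/bin/fardels

The more general fix, setting PATH in the crontab itself, is described below.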

It is important to understand that the environment in which cron jobs run is not the environment in which you work every day.

First of all, none of your normal environment variables are initialized as they are in your login shell. The following environment variables are set up by the cron dæmon:

SHELL=/bin/sh
LOGNAME  set from the /etc/passwd entry for the crontab's UID.
HOME  set from the /etc/passwd entry for the crontab's UID.
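
If you want to see exactly what your jobs get, one simple trick is a throwaway entry that dumps the environment to a file once a minute (the output path is arbitrary):

* * * * * env > /tmp/cron-env.txt

Let it fire once, inspect the file, and then remove the entry.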

We've been holding out on you. There's another kind of entry allowed in your crontab file. Lines of the form name=value set environment variables that will be in effect when jobs are run from the crontab. You may set any environment variable except LOGNAME.

An important one to note is MAILTO. If MAILTO is undefined, the output of jobs will be mailed to the user who owns the crontab. If MAILTO is defined but empty, mailed output is suppressed. Otherwise, you may specify an e-mail address to which to send the output of cron jobs.
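
Putting these pieces together, a crontab might look something like this sketch, where the user name, paths and schedule are made up for illustration:

PATH=/bin:/usr/bin:/home/jane/bin
MAILTO=jane
15 6 * * 1-5 fardels

With PATH set this way, the fardels job from the earlier example runs at 6:15 a.m. every weekday, and its output is mailed to jane.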

Finally, any percent sign in the command portion of a job entry, unless it is escaped with a backslash, is treated as a newline. Any data following the first percent sign is passed to the job as standard input, so you can use this to invoke an interactive program on a scheduled basis.
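
A hypothetical entry using this feature might mail a fixed reminder every Monday morning; everything after the first percent sign becomes the message body on standard input, and the remaining percent signs become newlines (the user name and text are made up):

30 8 * * 1 mail -s "Reminder" jane%Status reports are due by noon.%Please file yours in the usual place.%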

Permissions

The ability to have and use a crontab is controlled in a manner very similar to the at subsystem. Two files, /etc/cron.allow and /etc/cron.deny, determine who can use crontab. Just as in the case of at, cron.allow is checked first: if it exists, only the users listed there may have cron jobs. If it does not exist, the cron.deny file is read, and all users except those listed there may have cron jobs.

If neither file exists (and this is quite unlike “at”), all users may have crontabs.
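
For instance, an /etc/cron.allow restricting crontab use to two accounts would contain nothing but one user name per line (the names are hypothetical):

alice
bob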

The cron Dæmon

There is hardly anything to document here. The cron dæmon (which is called either cron or crond) takes no arguments and does not respond to any signals in a special way. It examines the /var/spool/cron directory at start-up for files with names matching user names in /etc/passwd. These files are read into memory. Once per minute, cron wakes up and walks through its list of jobs, executing any that are scheduled for that minute.

Each minute, it also checks to see if the /var/spool/cron directory has changed since it was last read, and it rereads any modifications, thus updating the schedule automatically.
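
If you are curious, you can look at the spool directory yourself as root; there is one file per user, named after its owner. The location below is typical, although some distributions keep the files in /var/spool/cron/crontabs instead:

ls -l /var/spool/cron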

System crontab

I've led you through a merry dance so far. I've got you thinking that only users have crontabs, and that all scheduled jobs run as the crontab's owning user. That's almost true. Cron also has a way to specify crontabs at a “system” level. In addition to checking /var/spool/cron, the cron dæmon also looks for an /etc/crontab and an /etc/cron.d directory.

The /etc/crontab file and the files in /etc/cron.d are “system crontabs”. These have a slightly different format from that discussed so far.

The key difference is the insertion of a field between the “day of week” field and the command field. This is the “run as user” field. Thus:

02 4 * * * root run-parts /etc/cron.daily

will run “run-parts /etc/cron.daily” as root at 2 minutes past 4 a.m. every single day.
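
To make this concrete, an /etc/crontab on a Red Hat-style system looks something like the following sketch; the exact variables, schedules and directories vary from distribution to distribution:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

Each entry runs run-parts as root, which in turn executes every script in the named directory, so a package only has to drop a script into /etc/cron.daily to get a daily job.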

Final Notes

There you have it. While Linux does not ship with a mature and complete batch process management tool, the combination of at and cron permits considerable flexibility and power.

Bear in mind that we have covered the Linux versions of these tools as shipped with most current distributions. While just about every UNIX system on the market has these tools, some things vary.

Expect at queues to be different. Not all crons support names or ranges. Most do not support lists of ranges or the increment feature. No other cron with which I am familiar supports setting environment variables in the crontab. I don't think any other at supports “teatime” as a time specification.

This boils down to a basic piece of advice. Always check the local documentation. If in doubt, experiment.

Resources

email: mschwarz@sherbtel.net

Michael Schwarz (mschwarz@sherbtel.net) is a consultant with Interim Technology Consulting in Minneapolis, Minnesota. He has 15 years of experience writing UNIX software and heads up the open-source SASi project. He has been using Linux since he downloaded the TAMU release in 1994, and keeps the SASi project at http://alienmystery.planetmercury.net/.
