Linux Job Scheduling
These mean what you would expect: “at teatime tomorrow” will run the commands at 4 p.m. the following day. Note that if you specify a time that has already passed (as in “at noon today” when it is 3 p.m.), the job will be run at once. You do not get an error. At first you might think this a bad thing, but look at it this way: what if the system had been down since 10 a.m. and was only being restarted now at 3 p.m.? Would you want a critical job skipped, or would you want it to run as soon as possible? The at system takes the conservative view and assumes you will want the job run.

These time specifications may optionally be followed by a date specification. Date specifications come in a number of forms, including:

    <month_name> <day> [<year>]

where month_name is “jan” or “feb”, etc., and day is a day number. The year is optional and should, of course, be a four-digit year. Two other forms are:

    MM/DD/YYYY
    YYYY-MM-DD

Don't listen to what the “man at” page tells you! At least in Red Hat 6.1, it is wrong, and I suspect it is wrong in certain other releases as well; I'm willing to bet this is because the documentation has not caught up with Y2K fixes to this subsystem. The at shipped with Red Hat 6.1 handles dates in the two formats above. It appears to handle two-digit years correctly, turning values less than 50 into 20xx and those greater than 50 into 19xx. I did not test to find the exact pivot point, and I do not recommend that you bother to, either. If you use two-digit years at this point, be prepared to pay a price! Depending on your version of at to treat two-digit years a certain way is foolish. Use four-digit years. Haven't we learned our lesson? (If you worked with computers from 1995 to 1999, you felt the pain as work came to an almost complete halt while we pored over every system with a microscope, looking for date flaws in our designs. Don't create a Y2.1K problem. PLEASE!!!)
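The two-digit pivot rule described above can be sketched as a small shell function. This is purely an illustration of the rule as stated, not at's actual code (the exact behavior at the pivot value itself is deliberately left untested here, just as in the text):

```shell
#!/bin/sh
# Illustrative sketch of the described two-digit-year pivot.  NOT at's
# real code; behavior exactly at the pivot value is unspecified above.
expand_year() {
    if [ "$1" -lt 50 ]; then
        echo $((2000 + $1))   # values below 50 become 20xx
    else
        echo $((1900 + $1))   # values above 50 become 19xx
    fi
}
expand_year 1    # -> 2001
expand_year 99   # -> 1999
```

Which is exactly why you should spell out four-digit years and never depend on a function like this existing in your version of at.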
Another way you can modify a time specification is to apply a relative time to it. The format of a relative time specification is + <count> <time units>, where “count” is simply a number and “time units” is one of “minutes”, “hours”, “days” or “weeks”.
So, you can say:
at 7pm + 2 weeks
and the programs will be scheduled for two weeks from today at 7 p.m. local time.
One of the most common forms is this:
at now + x units
to specify a program or programs to be run so many units from now. One thing I often use this for is shutting down my home machine's dial-up connection from work. I dial in before I leave for work, and then I kill the connection before my wife gets home (I'm too cheap to buy a second line). I use ssh to log in from work, and I like to close all my windows cleanly, so I frequently do something like this:
    # ps fax | grep wvdial
      599 ?        S      0:00 \_ wvdial
      875 pts/2    S      0:00      \_ grep wvdial
    # at now + 10 minutes
    at> kill 599
    at> warning: commands will be executed using /bin/sh
    job 9 at 2000-04-17 16:30
    # exit
    $ exit

I then have ten minutes to disconnect cleanly from my home system before my phone connection gets dropped.
Note that the plain old Bourne shell is used for all commands run by at. (Also note: I had to type Ctrl-D, the *nix EOF character, to close the interactive at session. More on this in the section on the at command line.) This is just one factor affecting the behavior of at-scheduled commands. Here are some other facts to bear in mind.

The present working directory, environment variables (with three exceptions, see below), the current userid and the umask that were in effect when the at command was issued are retained and will be used when the commands are executed. The three exceptions are the environment variables TERM, DISPLAY and “_” (which usually contains the last command executed in the shell).

The output of the commands is mailed to the user who issued the at command. If the at command is issued in an su shell (meaning, if you “became” another user), the output mail will be sent to the login user, but the programs will run under the su user.
The ability to use at is controlled by two files: /etc/at.deny and /etc/at.allow.
The /etc/at.allow file is checked first. If it exists, only user names in this file are allowed to run at. If the /etc/at.allow file does not exist, then the /etc/at.deny file is checked. All user names not mentioned in that file may run at.
If neither file exists, only the superuser may run at.
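This decision procedure can be sketched as a small shell function. This is a hypothetical reimplementation for illustration, not the real at source; may_use_at and its file arguments are made-up names:

```shell
#!/bin/sh
# Sketch of at's access-control check (hypothetical reimplementation,
# NOT the real at source).  Usage: may_use_at <user> <allow_file> <deny_file>
may_use_at() {
    user=$1 allow=$2 deny=$3
    if [ -f "$allow" ]; then
        grep -qx "$user" "$allow"     # at.allow exists: listed users only
    elif [ -f "$deny" ]; then
        ! grep -qx "$user" "$deny"    # at.deny exists: everyone not listed
    else
        [ "$user" = root ]            # neither file: superuser only
    fi
}
```

Calling it with /etc/at.allow and /etc/at.deny mirrors the three rules described above.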