Linux Job Scheduling
The at command runs either the commands passed on standard input (passed in through a pipe, or typed at the “at>” prompts as in the example above), or it runs the commands specified in the file named by the -f parameter.
The general form of the at command line is:
at [-V] [-q <queue>] [-f <file>] [-mld] <TIME>
where “queue” is a queue name. Queue names are letters, a-z or A-Z. See the section called “Queues” for more details.
“file” is the name of a file containing commands to run.
“TIME” is a time specification as discussed in detail above.
The remaining switches are -m (send mail to the user when the job is complete, even if no output was produced); -l (an alias for atq. See the atq section below); -d (an alias for atrm. See the atrm section below).
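Putting these pieces together, a hypothetical invocation might look like the following session (the file name backup.sh, the queue choice, and the reported job number are all illustrative):

```
mars:19:~$ at -q b -f backup.sh -m 3am tomorrow
job 4 at 2000-06-20 03:00
```

This queues the commands in backup.sh on queue “b” for 3:00 am tomorrow and mails you when the job completes, whether or not it produced output.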
The atq command lists jobs queued by the current user (unless run as superuser, in which case pending jobs for all users are listed).
Here's a sample:
mars:20:~$ atq
5       2000-06-20 15:00 a
6       2000-07-04 15:00 a
10      2000-04-24 14:33 f
mars:21:~$
The first column is the job number, followed by the scheduled run time, followed by the queue. In this case, two jobs are in queue “a” and one in queue “f”. See the section on queues for more information.
You can use the -q switch to look at jobs only in a particular queue.
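Using the sample jobs shown above, restricting the listing to queue “f” would look like this:

```
mars:22:~$ atq -q f
10      2000-04-24 14:33 f
```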
The atrm command is used to delete jobs from the queue. For example, consider the queue shown in the atq example above. The following session illustrates the use of atrm:
mars:21:~$ atrm 6
mars:22:~$ atq
5       2000-06-20 15:00 a
10      2000-04-24 14:33 f
mars:23:~$
You may list any number of job numbers on the command line.
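Continuing the sample session, the two remaining jobs could be removed in a single command (job numbers taken from the listing above):

```
mars:23:~$ atrm 5 10
mars:24:~$ atq
mars:25:~$
```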
The batch command is a variation of at that, rather than scheduling a job for a time in the future, submits a job now; that job will not start until the system's load average falls below 0.8.

What is load average? The simplest way to think of it is as the number of processes waiting to run. Most of the time, programs are idle: waiting for hardware, for input, or for the kernel to complete a request. When a program actually has something to do, it is in a runnable state. If the system is not busy, the kernel generally gives control to such a program right away. When some other program is in the middle of running, the program that has just become runnable must wait. The instantaneous system load is the number of runnable processes that are not running, and the load average is an average of this instantaneous load over a short period of time.

Thus, a system below a load average of 1.0 has some idle time. A system that hovers near 1.0 is fully busy, at its theoretical maximum capacity. A system over 1.0 has no idle time, and processes are waiting for a chance to run. Note that this does not necessarily mean the system becomes perceptibly slower to users, but it does mean the maximum capacity of the system has been reached, and programs run slower than they would on a less busy system.
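If you want to see the figure batch consults, you can read it straight from the kernel. The sketch below is Linux-specific (it reads /proc/loadavg), and the 0.8 threshold simply mirrors batch's default:

```shell
#!/bin/sh
# The first three fields of /proc/loadavg are the 1-, 5- and 15-minute
# load averages; the 1-minute figure is the most current.
loadavg=$(cut -d' ' -f1 /proc/loadavg)
echo "1-minute load average: $loadavg"

# Shell arithmetic is integer-only, so use awk for the float comparison.
if awk -v l="$loadavg" 'BEGIN { exit !(l < 0.8) }'; then
    echo "Below 0.8: a batch job submitted now would start right away."
else
    echo "At or above 0.8: a batch job would wait for the load to drop."
fi
```

The uptime command reports the same three load-average figures in human-readable form.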
The batch command schedules a job for “right now”, but will delay the start of the job until there is idle time (load average less than 0.8) on the system. Note that this test is for starting the job. Once it is started, it will run to completion, no matter how busy the system becomes during the run.
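A minimal batch session might look like this (the script name and reported job number are illustrative; <EOT> is typed as Ctrl-D):

```
mars:25:~$ batch
at> ./process-logs.sh
at> <EOT>
job 12 at 2000-06-20 14:35
```

The reported time is the submission time; the job itself starts whenever the load average next permits.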
Note that this section is quite Linux-specific. Other UNIX operating systems I have used have queues, but they differ from the ones documented here; AIX, for example, does not work this way. Always consult your local documentation.
Queues are a way of grouping jobs together in separate lists. They are named a-z and A-Z. By default, the at command puts jobs on queue “a”, whereas batch puts jobs on queue “b”.
Queue names with “greater” values run at higher “niceness”. Nice values are the way Linux (and other UNIX systems) set job priorities. The default nice level of a job is 0, which means “normal”. Jobs can have nice values from -20 (highest possible priority) to +19 (lowest possible priority). Only the superuser can give jobs a negative nice value. We won't say any more about nice here, as a discussion of the kernel scheduler is well beyond our scope. Just know that jobs in the “z” queue run at a lower priority (and thus run slower, with less impact on other running jobs) than jobs in the “a” queue.
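Submitting a low-priority job, then, is just a matter of naming the queue. In this hypothetical session (script name and job number illustrative), the job runs at midnight as usual, but at a higher nice value than one queued on “a”:

```
mars:26:~$ at -q z -f reindex.sh midnight
job 13 at 2000-06-21 00:00
```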
Jobs that are running will be in the “=” queue, which is reserved for running jobs.
Queue names are case sensitive! Remember, there are a-z queues and A-Z queues. The A-Z queues are special: if you use at to put a job on a queue named with a capital letter, the job is treated as if it had been submitted to batch at its scheduled run time, rather than run unconditionally by at.
In other words, putting a job on an uppercase queue is like combining at and batch. When the job runs, it runs immediately if the load average is below 0.8, otherwise it waits until the load average falls below that point. In no case will the job start before its scheduled time.
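A hypothetical submission to an uppercase queue looks just like an ordinary at job (names and numbers illustrative):

```
mars:27:~$ at -q Z -f reindex.sh midnight
job 14 at 2000-06-21 00:00
```

At midnight the job behaves as if it had been handed to batch: it starts immediately if the load average is below 0.8, and otherwise waits for the load to fall.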
Phew! All of that and we still haven't looked at the dæmon that takes care of all this! I hope you are beginning to see that “at”, while not a complete batch processing system, certainly provides a great deal of capability.