Linux Job Scheduling
The at command runs either the commands passed on standard input (piped in, or typed at the “at>” prompts as in the example above), or the commands in the file named by the -f parameter.
The general form of the at command line is:
at [-V] [-q <queue>] [-f <file>] [-mld] <TIME>
where “queue” is a queue name. Queue names are letters, a-z or A-Z. See the section called “Queues” for more details.
“file” is the name of a file containing commands to run.
“TIME” is a time specification as discussed in detail above.
The remaining switches are -m (send mail to the user when the job is complete, even if no output was produced); -l (an alias for atq. See the atq section below); -d (an alias for atrm. See the atrm section below).
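As a hedged sketch of the forms above (the script path is hypothetical, and the submission is guarded so it degrades gracefully on systems where at is not installed):

```shell
#!/bin/sh
# Hypothetical job to schedule; adjust the path to taste.
JOB="sh /home/user/bin/backup.sh"

if command -v at >/dev/null 2>&1; then
    # 1. Commands on standard input, with mail on completion (-m):
    printf '%s\n' "$JOB" | at -m 2:30 AM 2>/dev/null || true
    # 2. Equivalently, commands read from a file (-f):
    #    at -m -f /home/user/bin/backup.sh 2:30 AM
fi
echo "queued: $JOB"
```

Either way, at reports the assigned job number and run time on submission.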
The atq command lists jobs queued by the current user (unless run as superuser, in which case pending jobs for all users are listed).
Here's a sample:
mars:20:~$ atq
5   2000-06-20 15:00 a
6   2000-07-04 15:00 a
10  2000-04-24 14:33 f
mars:21:~$
The first column is the job number, followed by the scheduled run time, followed by the queue. In this case, two jobs are in queue “a” and one in queue “f”. See the section on queues for more information.
You can use the -q switch to look at jobs only in a particular queue.
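For example, a minimal sketch listing only the jobs waiting in queue “f” (guarded so it is harmless on systems without at installed):

```shell
#!/bin/sh
# Show pending jobs in a single queue rather than all of them.
QUEUE=f
if command -v atq >/dev/null 2>&1; then
    atq -q "$QUEUE" 2>/dev/null || true
fi
echo "listed queue $QUEUE"
```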
The atrm command is used to delete jobs from the queue. For example, consider the queue shown in the atq example above. The following session illustrates the use of atrm:
mars:21:~$ atrm 6
mars:22:~$ atq
5   2000-06-20 15:00 a
10  2000-04-24 14:33 f
mars:23:~$
You may list any number of job numbers on the command line.
The batch command is a variation of at that, rather than scheduling a job for a time in the future, submits a job now; the job will not start, however, until the system's load average falls below 0.8.

What is load average? The simplest way to think of it is the number of processes waiting to run. Most of the time, programs are idle: waiting for hardware, for input, or for the kernel to complete a request. When a program actually has something to do, it is in a runnable state. If the system is not busy, the kernel generally gives control to such a program right away. If some other program is in the middle of running, the program that has just become runnable must wait. The instantaneous system load is the number of runnable processes that are not running, and the load average is an average of this instantaneous load over a short period of time.

Thus, a system whose load average is below 1.0 has some idle time. A system at or hovering near 1.0 is fully busy, at its theoretical maximum capacity. A system over 1.0 has no idle time, and processes are waiting for a chance to run. This does not necessarily mean the system becomes perceptibly slower to users, but it does mean the maximum capacity of the system has been reached, and programs run slower than they would on a less busy system.
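On Linux, the load averages described above can be read directly from /proc/loadavg; a minimal sketch:

```shell
#!/bin/sh
# Read the 1-, 5- and 15-minute load averages on Linux.
# /proc/loadavg holds them as the first three fields of a single line.
if [ -r /proc/loadavg ]; then
    read one five fifteen rest < /proc/loadavg
    echo "load averages: 1min=$one 5min=$five 15min=$fifteen"
else
    uptime    # portable fallback; load averages appear at the end of the line
fi
```

This is the same figure batch consults before releasing a job.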
The batch command schedules a job for “right now”, but will delay the start of the job until there is idle time (load average less than 0.8) on the system. Note that this test is for starting the job. Once it is started, it will run to completion, no matter how busy the system becomes during the run.
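A hedged sketch of handing a long-running job to batch (the build command is hypothetical, and the submission is guarded for systems without at installed):

```shell
#!/bin/sh
# Hypothetical build job; batch will hold it until load average < 0.8.
JOB="make -C /home/user/project"
if command -v batch >/dev/null 2>&1; then
    printf '%s\n' "$JOB" | batch 2>/dev/null || true
fi
echo "deferred: $JOB"
```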
Note that this section is quite Linux-specific. Other UNIX operating systems I have used have queues, but they are different from those documented here. Always consult local documentation. AIX doesn't work this way, for example.
Queues are a way of grouping jobs together in separate lists. They are named a-z and A-Z. The at command by default puts jobs on queue “a”, whereas batch puts jobs on queue “b” by default.
Queue names with “greater” values run at higher “niceness”. Nice values are how Linux (and other UNIX systems) set job priorities. The default nice level of a job is 0, which means “normal”. Jobs can have nice values from -20 (highest possible priority) to +19 (lowest possible priority), and only the superuser can give jobs a negative nice value. We won't say any more about nice here, as a discussion of the kernel scheduler is well beyond our scope. Just know that jobs in the “z” queue run at a lower priority (and thus slower, with less impact on other running jobs) than jobs in the “a” queue.
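You can see what niceness means outside of at with the nice command itself; a job in a late-letter queue behaves roughly like this sketch, which runs a command at the lowest priority:

```shell
#!/bin/sh
# Run a command at nice 19: it yields the CPU to everything else,
# much as a job in the "z" queue would.
nice -n 19 sh -c 'echo "running at the lowest priority (nice 19)"'
```

Raising your own niceness needs no special privileges; only lowering it (negative values) requires the superuser.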
Jobs that are running will be in the “=” queue, which is reserved for running jobs.
Queue names are case sensitive! Remember, there are a-z queues and A-Z queues, and the A-Z queues are special. If you use at to put a job on a queue with a capital letter, then at the scheduled run time the job is treated as if it had been submitted with batch rather than at.
In other words, putting a job on an uppercase queue is like combining at and batch. When the job runs, it runs immediately if the load average is below 0.8, otherwise it waits until the load average falls below that point. In no case will the job start before its scheduled time.
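A hedged sketch of this combined behavior (the script path is hypothetical, and the submission is guarded for systems without at installed):

```shell
#!/bin/sh
# Queue "B": the job waits until 3:00 AM, and then additionally waits
# for the load average to fall below 0.8 before starting.
JOB="sh /home/user/bin/report.sh"
if command -v at >/dev/null 2>&1; then
    printf '%s\n' "$JOB" | at -q B 3:00 AM 2>/dev/null || true
fi
echo "queued on B: $JOB"
```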
Phew! All of that and we still haven't looked at the dæmon that takes care of all this! I hope you are beginning to see that “at”, while not a complete batch processing system, certainly provides a great deal of capability.