Kill: The Command to End All Commands
Linux is a powerful operating system. With its demand-paged memory management and swap file facility, it lets you start as many processes as you choose. Of course, that number is subject to overall system memory capacity (physical memory plus swap) and your CPU's ability to perform all the tasks you have requested. Starting processes is easy, and when things slow to a crawl, stopping them is just as easy.
The Linux kill command is one of two commands that will meet your need when you grow tired of waiting for a process to terminate. With it you can, in the words of my 1992 Linux Programmer's Manual, terminate a process with extreme prejudice. All you need to know is a number called the process ID, or PID. Note that kill doesn't always terminate another process. In essence, kill sends a signal to a specified process. If that signal is not caught and handled by the process (and not all signals can be caught), the process is terminated, and all of the resources it was using are released for use by other running processes.
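In its simplest form, kill takes nothing but a PID on the command line. A minimal sketch, assuming the process you want to stop happens to have PID 7002 (a number chosen only for illustration):

    kill 7002        # send the default signal, SIGTERM, to process 7002
    kill -9 7002     # last resort: SIGKILL (signal 9), which cannot be caught or ignored

The first form politely asks the process to terminate; signals and their names and numbers are covered in more detail below.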
What are processes, PIDs and signals? How are they discovered?
Recall that Linux is a multi-tasking operating system. When Linux boots, it starts a program called init, which in turn starts other programs. Many of these are background tasks like update, which periodically flushes data to the disk. Another example is getty, which watches a serial port for some sign of activity. A more visible example is the shell you use to perform useful work. It runs in the foreground, which means that it waits for your keystrokes. Each copy of each program running on your system is called a process.
Just as the US government passes out Social Security Numbers (we use Social Insurance Numbers here in Canada) to uniquely identify each individual, Linux assigns each process a unique number as an identifier. This number is called the process ID or PID.
When a process is started, it is given the next available PID, and when it terminates, its PID is released for eventual re-use. To determine the PID of any process belonging to you, enter ps at the prompt. The ps command will print, for each of your processes, a line containing the process's PID, its controlling terminal and state, the amount of CPU time it has used and the command with which it was started. The output from ps looks like:
 PID TT STAT  TIME COMMAND
6651 p0 S     0:01 -ksh
6661 p1 S     0:00 -ksh
6738 p2 S     0:00 -ksh
6746 p2 S     0:00 -ksh
6747 p2 S     0:00 wheel
7002 p0 S     0:01 elm
7193 p1 R     0:00 ps
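When several processes are running, piping ps through grep is a quick way to pick out the one you care about. A small sketch based on the listing above (the program name elm and the PID 7002 are taken from that example output):

    ps | grep elm      # locate the elm process; the first column of its line is the PID
    kill 7002          # then signal it by that PID

Running ps again afterward confirms whether the process has actually gone away.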
Signals are a form of process communication. Because they can come from another process, the kernel or the process itself, they might be better thought of as events that occur as a program runs. A crude example might be the bell most of us remember from our early days in school; when the bell rang, we reacted by switching from playful children to industrious students.
The signals we will use below are the termination signal SIGTERM, the interrupt signal SIGINT and the kill signal SIGKILL. These signals usually occur because another process sent them. You probably already use one of them; typing Ctrl-C sends the interrupt signal SIGINT to your current foreground process. Other signals, such as SIGPIPE, which is sent to a process writing to a broken pipe, usually come from the kernel. There are about 30 signals, all of which can be referred to by number or by name, but the numbers vary between platforms and some signals are unavailable on some platforms. The complete list of signals can be found on the signal(7) manual page; enter man 7 signal to see it, or enter kill -l for a short version of the list.
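To see the mapping between names and numbers on your own machine, and to send a specific signal rather than the default, you can experiment along these lines (PID 7002 is again only a placeholder):

    kill -l            # list the signal names your system knows about
    kill -TERM 7002    # send SIGTERM by name (the same as a plain kill 7002)
    kill -15 7002      # the same signal by number; 15 is SIGTERM on x86 Linux
    kill -INT 7002     # send SIGINT, just as Ctrl-C does for a foreground process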
Each signal has a default action, and for almost all of them that default is to terminate the process. For most signals, a program may specify another action (this is called catching or handling the signal) or may specify that nothing happen, which is called ignoring the signal. The signal SIGKILL cannot be caught or ignored; it always terminates the process.
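The shell's built-in trap command is an easy way to watch this behaviour. The following is a minimal sketch (the script name and messages are invented for the example): it catches SIGTERM and SIGINT and reports them instead of dying, but nothing it does can protect it from SIGKILL:

    #!/bin/sh
    # catcher.sh - report catchable signals instead of terminating
    trap 'echo "caught SIGTERM, still running"' TERM
    trap 'echo "caught SIGINT, still running"' INT
    echo "my PID is $$"
    while true; do
        sleep 1
    done

Run it in one window, then from another try kill PID and kill -INT PID; the script prints its message and carries on. Only kill -KILL PID (or the equivalent kill -9 PID) ends it.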
For example, suppose you use cat to list a large text file without first determining the size of the file. Instead of watching hundreds, perhaps thousands, of lines scroll by too quickly to read, you send the cat process the interrupt signal by pressing Ctrl-C. Because cat was not programmed to catch SIGINT, the default action applies and the cat process is terminated immediately.