A Process Smorgasbord
François, vite! Our guests will be here any moment. Quoi? You say you already have prepared everything? Excellent, François! I see you have brought up a healthy supply of the 1998 Barossa Valley Shiraz. It will pair nicely with tonight's menu, don't you think? Qu'avez-vous dit? Ah, the theme of this issue...it is System Administration, mon ami, and tonight I have decided to look into process management. But of course, François, even something as basic as processes can be the center of exciting dishes.
What did you say, François? Ah, our guests are here! Welcome, mes amis, to Chez Marcel, home of fine Linux cooking, tantalizing atmosphere and incredible wine. Your tables and the wine are ready. Please sit and my faithful waiter will fill your glasses. I must tell you that François is being unusually efficient tonight. I don't know what has gotten into him. One less thing to manage, non?
As everyone in this restaurant knows, everything running on your Linux machine is a process; every shell, every open connection to the Internet, every game—everything. In some cases, programs will spawn multiple processes of their own. These are child processes. Technically, every process is a child process of some parent except one—that is, init, the master process. Children can spawn more children, and they can spawn more. You can use ps to list them all, but in time, monitoring all this procreation can be quite exhausting. Mon Dieu, mes amis, I think I need a sip of my wine right now.
To obtain a quick-and-dirty view of what process is the child of what other process, you can type the pstree command. Note the first few lines of output below and init's position:
init-+-apmd
     |-atd
     |-bdflush
     |-cardmgr
     |-crond
     |-gpm
     |-kalarmd
     |-kapmd
     |-kappdock-+-wmWeather
     |          `-wmmultipop3
     |-kdeinit-+-artsd
     |         |-autorun
     |         |-kdeinit
     |         |-kdeinit---2*[bash]
     |         |-kdeinit---bash
     |         |-kdeinit-+-bash---lavaps
While this looks neat, it is somewhat lacking in information. You can get the same effect (but with more information) by using your old friend, the ps command. The f option displays a forest view through which you can see the process trees. A little joke, mes amis.
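The forest view can be requested in a couple of ways; a quick sketch (the exact columns shown will vary with your ps version and system):

```shell
# Forest view of your own processes (BSD-style options):
ps xf

# Forest view of every process on the system:
ps axf

# The same idea with standard-syntax options:
ps -ef --forest
```

The indentation and connector characters in the output draw the same parent-child tree that pstree shows, but each line carries the full ps columns as well.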
Transforming the sea of processes into something that readily catches the eye is exactly what George MacDonald had in mind when he created Treeps. This program is an interactive, graphical process monitor with an on-the-fly, color-coded display, thereby making it easier to nail down individual tasks. This one is certainly worth the download, mes amis. Get the source at the following URL: www.orbit2orbit.com/gmd/tps/treepsfm.html.
To build Treeps, extract the source (with tar -xzvf treeps-1.2.1.tar.gz), then run the ./Setup script from the installation directory. After the prebuild configuration takes place, you'll be told to do a make install. You then run the program by typing treeps &.
The initial view is of your own processes as launched from init, and this is where the fun starts. Moving your mouse pointer over a process displays some basic information, the equivalent of what you would get from a ps x. Right-click and a pop-up menu will give you the options of renicing the process, viewing its man page and so on.
From the bar of buttons on the top, click the various selections. Aside from a view of your own processes, you can choose to see the dæmon processes or simply everything. If you click the information button (the one with an “i” on it), your mouse pointer changes to an “i” as well. Click any process, and an information window appears with more details about the running process than you would have thought were there. From that window, you can drill down even further. Click the File/Dir button, and you can see every file open by that process. For the truly curious, the Mem maps button displays where in memory every chunk of code resides.
The features are numerous to say the least, but the color-coding is what really caught my eye. While the program is running, turn on the “color map viewer” by clicking on the color-bar button. By cycling through the various options, you can highlight processes based on user ID, group ID, total CPU time, current CPU load, process status (sleeping, running, zombie, etc.), resident memory, image size and more.
Under the Program menu, there's another little treat called System Info, which brings up the System Information App Launcher. From this button-laden window, you quickly can view tons of information about your system, from your routing table to loaded modules, kernel level, PCI devices, uptime, disk partitions, runlevels and so on.
Viewing things from a different perspective can give you a new appreciation for even the familiar. Indeed, it can be a mind-expanding experience, non? As whimsical as the next item on our menu appears to be, I found it a lot of fun to watch and work with. Whether it makes a great process monitor or not will depend on your feelings toward lava lamps. Written by John Heidemann, LavaPS was inspired by the idea of calm computing from “The Coming Age of Calm Technology” by Mark Weiser and John Seely Brown.
The idea here is that processes are represented as fluid blobs in a lava lamp. The larger the blob, the greater its memory usage. The faster it moves, the greater its CPU usage. Like any decent process monitor, it allows for identification of processes, renicing and killing. Start by visiting the lavaps web site (www.isi.edu/~johnh/SOFTWARE/LAVAPS/index.html) and picking up a copy of the package.
For the Red Hat users out there, prepackaged RPMs are available. For the others, never fear—building LavaPS is simply another example of the famous (dare I say “classic”) extract and build five-step:
tar -xzvf lavaps-1.20.tar.gz
cd lavaps-1.20
./configure
make
su -c "make install"
To start your lava lamp, type lavaps &. You'll see a small lava lamp appear on your desktop. Right-click, and a menu pops up offering a number of options. The proc menu tells you the process ID and the name of the process. It also allows you to send various signals (such as kill) to the process, from forceful termination to temporary suspension.
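Those menu entries map onto the familiar kill command and its signals; a rough shell equivalent (PID 12345 here is only a placeholder for a real process ID):

```shell
kill -TERM 12345   # polite termination: ask the process to exit
kill -STOP 12345   # temporary suspension: freeze the blob in place
kill -CONT 12345   # resume a previously stopped process
kill -KILL 12345   # forceful termination; cannot be caught or ignored
```

Suspending and resuming with STOP/CONT is handy when a blob is hogging the lamp but you are not quite ready to extinguish it.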
Running LavaPS to monitor and administer processes certainly sets an otherworldly kind of mood. The one thing I did not like is that the default lamp was actually fairly small on my 1024 × 768 display. Overriding this requires that you set X resources. This is done easily by modifying the $HOME/.lavapsrc configuration file. In mine, the only thing I changed was the geometry. Here's what my .lavapsrc file looks like:
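(The listing itself did not survive this reprint; here is a minimal sketch of a geometry-only .lavapsrc. The resource name and the size value are assumptions on my part — check the X resources documented with LavaPS for the exact spelling.)

```
! $HOME/.lavapsrc -- override only the lamp's default size
lavaps*geometry: 300x600
```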
Speaking of otherworldly, mes amis, the strangest excursion into the secret life of your processes is probably highlighted by an unusual game of Doom, the classic 3-D shooter from ID Software. Back in 1997, ID Software released the source code to Doom, and many ports followed. One of them was XDoom, a UNIX X Window System version on which David Koppenhofer's psDooM is based. As psDooM was inspired by XDoom, David was inspired by Dennis Chao, and Dennis was inspired by Vernor Vinge. If you are curious, check out the link to Dennis' “Doom as a tool for system administration” (see Resources).
Anyhow, the idea behind psDooM is to provide a strange alternative to process management. The monsters roaming the halls have red process IDs floating above their heads along with the last seven characters of the command name.
Source tarballs are available from the psDooM web site, but the easiest way to install psDooM is to pick up the precompiled binaries. Installation is fairly simple: run the install.sh script:
tar -xzvf psdoom-2000.05.03-bin.tar.gz
cd psdoom-bin
su -c "./install.sh"
An IWAD is required to run psDooM, specifically Doom 1, Doom 2 or Ultimate Doom. The shareware Doom 1 IWAD also will work. If you don't happen to have your own Doom WAD, you can download a copy from the www.doomworld.com site. I visited the site and picked up a copy of the file:
unzip shareware_doom_iwad.zip
su -c "cp DOOM1.WAD /usr/local/games/psdoom/doom1.wad"

That's it. Now, we are ready to run psDooM:
cd /usr/local/games/psdoom
./psdoom -2

Notice the -2 option above. By default, you'll find the screen quite small. This option doubles the size of your default screen. If you've never played Doom before, I must warn you, mes amis, that it can get a little violent. Monsters and bad boys roam the halls, and your life hangs in the balance. Wounding a process monster is equivalent to renicing that process (renice +5). Keep shooting, and you will kill the process. System permissions are honoured, however. You can shoot down a monster process that belongs to another user (or the system), but it will be resurrected. Only your own processes will stay dead.
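Outside the game, the wound-and-kill routine looks like this at the shell (PID 12345 is a stand-in for a real process ID):

```shell
renice +5 -p 12345   # one wound: set the process's nice value to 5
kill 12345           # the kill shot: send SIGTERM to end the process
kill -9 12345        # for truly stubborn monsters: SIGKILL
```

As in the game, an unprivileged user can only lower a process's priority, never raise it back, and can only kill processes he owns.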
Perhaps you might want to consider not doing this as root ever, and certainly not on the corporate server.
Once again, mes amis, it looks as though the clock has been racing toward closing time. It has been wonderful having you here at Chez Marcel. I certainly hope you enjoyed your exploration of process administration. I must admit I am still a little shaken from my psDooM experience. Perhaps a little more wine to soothe the nerves. François, if you would be so kind as to refill our guests' wineglasses and, of course, mine. Until next month. A votre santé! Bon appétit!
Marcel Gagné lives in Mississauga, Ontario. He is the author of Linux System Administration: A User's Guide (ISBN 0-201-71934-7), published by Addison-Wesley (and is currently at work on his next book). He can be reached via e-mail at firstname.lastname@example.org.