- LJ Index, October 2010
- Controlling Your Processes
- diff -u: What's New in Kernel Development
- Drobo FS: the Good, the Bad and the Ugly
- Make Your Android Follow Whatever Three Laws You Decide
- Non-Linux FOSS
- Google TV: Are You Awesome, or Absurd?
- LJ Store's Featured Product of the Month: Root Superhero
- They Said It
- Linux Journal Insider Podcast
LJ Index, October 2010
1. Number of “companies” that contributed patches to kernel 2.6.12 (released in June 2005): 82
2. Number of individuals that contributed patches to kernel 2.6.12: 359
3. Number of patches contributed to kernel 2.6.12: 1,725
4. Number of “companies” that contributed patches to kernel 2.6.24 (released in January 2008): 190
5. Number of individuals that contributed patches to kernel 2.6.24: 977
6. Number of patches contributed to kernel 2.6.24: 9,831
7. Number of “companies” that contributed patches to kernel 2.6.34 (released in May 2010): 188
8. Number of individuals that contributed patches to kernel 2.6.34: 1,175
9. Number of patches contributed to kernel 2.6.34: 9,443
10. Percent of kernel 2.6.34 patches contributed by hobbyists/consultants/academics/unknowns: 27.93
11. Percent of kernel 2.6.34 patches contributed by Red Hat: 9.98
12. Percent of kernel 2.6.34 patches contributed by Intel: 5.29
13. Percent of kernel 2.6.34 patches contributed by Novell: 4.34
14. Percent of kernel 2.6.34 patches contributed by IBM: 3.94
15. Percent of kernel patches since 2005 contributed by hobbyists/consultants/academics/unknowns: 38.84
16. Percent of kernel patches since 2005 contributed by Red Hat: 12.52
17. Percent of kernel patches since 2005 contributed by Novell: 7.32
18. Percent of kernel patches since 2005 contributed by IBM: 7.15
19. Percent of kernel patches since 2005 contributed by Intel: 6.71
20. Number of Platinum members ($500,000) of the Linux Foundation: 6
Controlling Your Processes
To use a stage metaphor, all the processes you want to run on your machine are like actors, and you are the director. You control when and how they run. But, how can you do this? Well, let's look at the possibilities.
The first step is to run the executable. Normally, when you run a program, all of its input and output is connected to the console. You see the program's output and can type input at the keyboard. If you add an & to the end of the command, this connection to the console is severed. Your program then runs in the background, and you can continue working on the command line. When you run an executable, the shell actually creates a child process and runs your executable within it. Sometimes, though, you don't want that. Let's say you've decided no shell out there is good enough, so you're going to write your own. While you're testing it, you want to run it as your shell, but you probably don't want it as your login shell until you've hammered out all the bugs. You can run your new shell from the command line with the exec function:
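A quick way to see exec's replace-the-process behavior, using echo as a stand-in for your hypothetical new shell and a throwaway subshell so you don't log yourself out:

```shell
# exec replaces the current shell process with the named program,
# so commands after the exec line never run.  echo stands in here
# for your new shell so the demo is safe to run:
sh -c 'exec echo "now running as the new program"; echo "never printed"'
```

In real use, you would run something like exec ./myshell (a hypothetical path to your shell under test) from your current shell; when the new shell exits, the session ends, just as if your login shell had exited.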
This tells the shell to replace itself with your new shell program. To your new shell, it will look like it's your login shell—very cool. You also can use this to load menu programs in restricted systems. That way, if your users kill off the menu program, they will be logged out, just like killing off your login shell. This might be useful in some cases.
Now that your program is running, what can you do with it? If you need to pause your program temporarily (maybe to look up some other information or run some other program), you can do so by typing Ctrl-z (Ctrl and z at the same time). This pauses your program and places it in the background. You can do this over and over again, collecting a list of paused and “backgrounded” jobs. To find out what jobs are sitting in the background, use the jobs shell function. This prints out a list of all background jobs, with output that looks like this:
[1]+  Stopped                 man bash
If you also want to get the process IDs for those jobs, use the -l option:
[1]+ 26711 Stopped            man bash
By default, jobs gives you both paused and running background processes. If you want to see only the paused jobs, use the -s option. If you want to see only the running background jobs, use the -r option. Once you've finished your sidebar of work, how do you get back to your paused and backgrounded program? The shell has a function called fg that lets you put a program back into the foreground. If you simply execute fg, the last process backgrounded is pulled into the foreground. If you want to pick a particular job to put in the foreground, use the % option. So if you want to foreground job number 1, execute fg %1. What if you want your backgrounded jobs to continue working? When you use Ctrl-z to put a job in the background, it also is paused. To get it to continue running in the background, use the bg shell function (on a job that already has been paused). This is equivalent to running your program with an & at the end of it. It will stay disconnected from the console but continue running while in the background.
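The whole cycle can be sketched in a few lines (in a script, job control first has to be switched on with set -m; in an interactive shell it already is):

```shell
set -m        # enable job control (on by default in interactive shells)
sleep 2 &     # start a job directly in the background
jobs          # list background jobs, e.g.: [1]+  Running   sleep 2 &
jobs -l       # the same list, with process IDs included
fg %1         # bring job 1 back to the foreground and wait for it
```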
Once a program is backgrounded and continues running, is there any way to communicate with it? Yes, there is—the signal system. You can send signals to your program with the kill procid command, where procid is the process ID of the program to which you are sending the signal. Your program can be written to intercept these signals and act on them, depending on which signals have been sent. You can specify a signal either by number or by symbolic name. Some of the signals available are:
1: SIGHUP — terminal line hangup
3: SIGQUIT — quit program
9: SIGKILL — kill program
15: SIGTERM — software termination signal
10: SIGUSR1 — user-defined signal 1
12: SIGUSR2 — user-defined signal 2
If you simply execute kill, the default signal sent is a SIGTERM. This signal tells the program to shut down, as if you had quit the program. Sometimes your program may not want to quit, and sometimes programs simply will not go away. In those cases, use kill -9 procid or kill -s SIGKILL procid to send a kill signal. This usually kills the offending process (with extreme prejudice).
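The usual escalation path looks like this (sleep stands in for a program you want to stop):

```shell
sleep 60 &        # a stand-in for the program you want to stop
pid=$!            # $! holds the PID of the last background job
kill "$pid"       # polite: sends SIGTERM, the default signal
# If the process ignores SIGTERM, escalate:
# kill -9 "$pid"  # equivalent to: kill -s SIGKILL "$pid"
```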
Now that you can control when and where your program runs, what's next? You may want to control the use of resources by your program. The shell has a function called ulimit that can be used to do this. This function changes the limits on certain resources available to the shell, as well as any programs started from the shell. The command ulimit -a prints out all the resources and their current limits. The resource limits you can change depend on your particular system. As an example (this crops up when trying to run larger Java programs), say you need to increase the stack size for your program to 10000KB. You would do this with the command ulimit -s 10000. You also can set limits for other resources like the amount of CPU time in seconds (-t), maximum amount of virtual memory in KB (-v), or the maximum size of a core file in 512-byte blocks (-c).
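In shell terms, using the stack-size example from the text:

```shell
ulimit -a        # print every resource and its current limit
ulimit -s 10000  # set the stack size to 10000KB for this shell
ulimit -s        # confirm the new soft limit
ulimit -t 60     # cap CPU time for child processes at 60 seconds
```

Note that a regular user can lower a limit at will but can raise a soft limit only as far as the hard limit allows, so the second line may fail on systems with a restrictive hard limit.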
The last resource you may want to control is what proportion of the system your program uses. By default, all your programs are treated equally when it comes to deciding how often your programs are scheduled to run on the CPU. You can change this with the nice command. Regular users can use nice to alter the priority of their programs down from 0 to 19. So, if you're going to run some process in the background but don't want it to interfere with what you're running in the foreground, run it by executing the following:
nice -n 10 my_program
This runs your program with a priority of 10, rather than the default of 0. You also can change the priority of an already-running process with the renice program. If you have a background process that seems to be taking a lot of your CPU, you can change it with:
renice -n 19 -p 27666
This lowers the priority of process 27666 all the way down to 19. Regular users can use nice or renice only to lower the priority of processes. The root user can increase the priority, all the way up to -20. This is handy when you have processes that really need as much CPU time as possible. If you look at the output from top, you can see that something like pulseaudio might have a negative niceness value. You don't want your audio skipping when watching movies.
The other part of the system that needs to be scheduled is access to IO, especially the hard drives. You can do this with the ionice command. By default, programs are scheduled using the best-effort scheduling algorithm, with a priority equal to (niceness + 20) / 5. This priority for the best effort is a value between 0 and 7. If you are running some program in the background and don't want it to interfere with your foreground programs, set the scheduling algorithm to “idle” with:
ionice -c 3 my_program
If you want to change the IO niceness for a program that already is running, simply use the -p procid option. The highest possible priority is called real time, and it can be between 0 and 7. So if you have a process that needs to have first dibs on IO, run it with the command:
ionice -c 1 -n 0 my_command
Just like the negative values for the nice command, using this real-time scheduling algorithm is available only to the root user. The best a regular user can do is:
ionice -c 2 -n 0 my_command
That is the best-effort scheduling algorithm with a priority of 0.
Now that you know how to control how your programs use the resources on your machine, you can change how interactive your system feels.
diff -u: What's New in Kernel Development
Linux hibernation may be getting faster soon, or maybe just eventually. Nigel Cunningham came up with an entirely new approach to shutting down each part of the system, such that it all could be stored on disk and brought back up again quickly. Unfortunately, Pavel Machek and Rafael J. Wysocki, the two co-maintainers of the current hibernation code, found his approach overly complex and so difficult to implement that it likely never could be completed. Nigel had more faith in his idea, though. He felt that exactly those places Pavel and Rafael had found to be overly complex actually were the relatively simple portions to do. There was no agreement by the end of the discussion thread, so it's not clear whether Nigel will go ahead with his idea.
Some filesystems, notably FAT, have trouble slicing and dicing files into smaller pieces without having a lot of extra room available on the disk to copy the data. But logically, it shouldn't be necessary to copy any data if the data isn't changing. Nikanth Karthikesan wanted to split up files even when the disk was virtually full, so he wrote a few system calls, sys_split() and sys_join(), to alert the system to the fact that no copying would be necessary. There was some debate over the quality of Nikanth's code, but David Pottage also pointed out that this type of feature could turn video editing from a many-hour task to a many-minute task, in certain key cases. He remarked, “Video files are very big, so a simple edit of removing a few minutes here and there in an hour-long HD recording will involve copying many gigabytes from one file to another.” In general, developers need a pretty strong reason to add new system calls, so it's not yet clear whether Nikanth's code will be included, even if he addresses the various technical issues that were raised in the discussion.
One thing that can happen on any running system is that RAM bits can flip as the result of high-energy particles passing through the chip. This happens in space, but also on the ground. Brian Gordon recently asked about ways of fixing those Single Event Upsets (SEUs). Andi Kleen and others suggested using ECC (Error Correction Codes) RAM, which could compensate for a single bit flip and could detect more than one bit flip. But Brian was interested in regular systems that were built on a budget and didn't have access to high-priced error-correcting RAM. Unfortunately, Andi said that this would be a very difficult feature to implement. Brian had talked about some kind of system that would use checksums and redundancy to maintain memory integrity, but Andi felt that even if that could be implemented in the kernel, it probably would require the user-space application to be aware of the situation as well. So that wouldn't be a very general-purpose solution after all. Brian may keep researching this, but it seemed like he really wanted to find a general solution that wouldn't require rewriting any user applications.
Drobo FS: the Good, the Bad and the Ugly
Those of us familiar with the original Drobo, which was an external RAID device that housed standard SATA drives, always were disappointed with the speed and lack of network connectivity the awesome-named device sported. When Data Robotics announced the Drobo FS, a faster and network-connected big brother to the original Drobo, I decided it was time to get the little beastie in order to replace the full-size Linux tower in my house that was running software RAID on a handful of internal drives. The Drobo FS offers some great features:
NAS functionality at gigabit speeds, with support for SMB and other protocols.
Apple Time Machine compatibility, for seamless backups for any Apple computers on your network.
DroboApps, which are plugins that run on the embedded Linux operating system. These vary from a BitTorrent client to an NFS server.
Simple expandability by hot swapping a smaller hard drive with a bigger one.
The good news is that the Drobo FS (I got mine with five 2TB hard drives) was easy to set up, and it proved to be decently fast on the network. Although the speeds I saw on my home network weren't something I'd expect from an enterprise-class device, I really didn't consider the Drobo FS an enterprise-level device, so I was happy with the 20MB/sec transfer rates. Sure, it could be faster, but for bulk storage, it works well.
Unfortunately, although I was excited about DroboApps, in practice, they're not as well integrated as I would like. Sure, they do the job, but configuration is inconsistent, and for the most part, it's done on config files stored in SMB shares. For many DroboApps, restarting the unit is the only way to activate changes. Also, the Drobo Dashboard is Windows/Mac-only, so for anything but the simplest of setups, one of those operating systems is required for configuration.
Worst of all was the filesystem corruption I experienced a week after firing up the Drobo FS. My unit lost power when a circuit breaker in my house tripped, and upon reboot, it wouldn't work at all. To their credit, Data Robotics' technical support responded to my problem on a Sunday (I reported the problem on Saturday), and a quick fsck got my Drobo FS back to working. Unfortunately, in order to start fsck, I had to use an undocumented command inside the Windows Dashboard program.
Even with its shortcomings, I think the Drobo FS has the potential to be a powerful and reliable NAS for the home or small business. Perhaps my filesystem corruption was the exception rather than the rule. Either way, if you're looking for a way to store vast quantities of data in a device that is simple to use and grow, the Drobo FS is worth a look. I'd recommend it even considering the problems I've had during the past few weeks. But be sure to buy a UPS with it too, in case you happen to lose power!
Make Your Android Follow Whatever Three Laws You Decide
A while back, I thought I'd write a long tutorial on how to root an Android phone and install a custom-compiled ROM on it. This is a useful and fun activity, because it can land you a phone running a more modern version of Android than it officially supports. Of course, it also voids any warranty on your device, so it's not without risk.
It turns out, a full tutorial on Droid modding isn't really required. Assuming your phone has been hacked, a quick Google search will give you directions to root your device (the simplest and least exciting part of hacking an Android phone). After that, installing Rom Manager from the Marketplace will let you flash a wide variety of custom ROMs onto your phone. I could walk you through the process, but it's really not terribly difficult. As with all hacking and warranty-voiding activities, be aware that, although unlikely, it is possible to ruin your phone and need to revert to cans and string for communication. Don't say I didn't warn you.
Oh, and if you're looking for an inexpensive, yet widely supported device for hacking, the old Motorola Droid is inexpensive and most likely still available. It's not the newest phone in the Android world, but mine is happily running Froyo (Android 2.2) even though at the time of this writing, it hasn't been released for the Droid. Happy hacking!
Non-Linux FOSS
With open source, it's “release early and release often”, so things change. With proprietary software, it's “wait till their wallets have recovered and then release” (or something like that), so things can become a little stale feeling. If your Windows desktop feels that way, or if it just doesn't suit you, get yourself a new look and feel with Emerge Desktop.
Emerge Desktop is a replacement “shell” for Windows (not a shell like bash, but a shell like KDE or GNOME—that is, the desktop environment). On Windows, this normally is provided by Windows Explorer, which, for convenience, is the name of both the window manager and the file manager on Windows. But, you don't have to use Windows Explorer. You can install an alternate window manager, and that's what Emerge Desktop is.
Among other things, Emerge Desktop provides a system tray (the place where all those little icons appear on the taskbar), a desktop right-click menu for accessing all your programs (which replaces the Start button), a taskbar and virtual desktops. There's also a clock that doubles as a place to enter commands to run.
Emerge Desktop features are provided as individual applets (the system tray, the taskbar and so on) that can be enabled or disabled optionally and that also can be run independently of the Emerge Desktop and used with another desktop shell if desired. Applets communicate with each other via the emergeCore applet.
Emerge Desktop is written in C++ and uses the MinGW compiler. It's available for both 32- and 64-bit Windows systems. The latest release of Emerge Desktop at the time of this writing is 0.5 (released July 2010). The source code for Emerge Desktop is licensed under the GPL.
Google TV: Are You Awesome, or Absurd?
Google has planted itself firmly into our lives, at times treading the line between evil empire and freedom fanatic. Whether you search the Internet with its Web site, call your mom from its mobile phone OS, or share links with Google Buzz (does anyone really use Buzz?), most likely, you use Google every day. Google wants you to use its stuff at night as well—more specifically, when you watch television. The new Google TV platform is a software environment, much like Android is a platform for mobile phones. The question remains whether Google will consolidate all the different desires users have for their viewing experience, or merely offer “one more thing” we need to attach to an HDMI port.
I've used Roku, XBMC, MythTV, Boxee, Popcorn Hour, GeeXboX, ASUS O!Play, Freevo and probably that many again that I can't remember. Sadly, every one of them falls short in one area or another. Whether it's an inability to play streaming media, an incompatibility with local media on my server or a horrible user interface, I'm always stuck with two or three devices I need to switch between in order to fulfill my family's multimedia demands.
Hopefully, Google TV will fix that. Hopefully, the API is open enough that features can be added without taking away from the user interface. Hopefully, the software platform will be flexible enough to work on multiple hardware platforms. Hopefully, Google TV doesn't end up being evil. We'll be sure to keep an eye on the big G's latest infiltration into your home, and hopefully, we'll be able to report nothing but good news. Until then, we'll need to keep buying television sets with lots of HDMI ports.
LJ Store's Featured Product of the Month: Root Superhero
Kyle “Hack and /” Rankin (the model of this shirt) refers to it as his Root Superhero T-shirt. You too can be Root Superhero!
Reviewers of the shirt have made such bold statements as: “Who doesn't want to be like Kyle Rankin?”, “OMGPONIES!” and “Why does Kyle look suspiciously like Chris O'Donnell of NCIS: Los Angeles fame (who also played Robin)?”
Get yours today for just $14.95 at www.linuxjournalstore.com.
They Said It
Well informed people know it is impossible to transmit the voice over wires and that were it possible to do so, the thing would be of no practical value.
—Boston Post, 1865
I have not failed. I've just found 10,000 ways that won't work.
—Thomas Edison
There is no reason for any individual to have a computer in their home.
—Ken Olson (President of Digital Equipment Corporation) at the Convention of the World Future Society in Boston, 1977
We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.
—Carl Sagan
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the universe is winning.
—Rick Cook
There are two ways of constructing a software design; one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
—C. A. R. Hoare
Linux Journal Insider Podcast
Before each new issue hits newsstands, listen to Shawn Powers and Kyle Rankin as they give you a special behind-the-scenes look at the month's topics and discuss featured articles. You'll hear their unique perspectives on all that's new and interesting at Linux Journal. Listen to the podcast to go in depth with the technologies they're most excited about and projects they're working on. They'll give you useful information and additional commentary related to each new issue, providing a completely new dimension to your enjoyment of Linux Journal. Kyle and Shawn always inform as well as entertain, so be sure to check out each episode and subscribe using your favorite podcast player. You can listen on-line at LinuxJournal.com or download an MP3 to take with you: www.linuxjournal.com/podcast/lj-insider.