The devfs filesystem work by Richard Gooch may be coming out of the kernel. At the end of December 2002, Adam J. Richter announced a patch to replace devfs with a new mechanism based on RamFS. The new system attempted to mimic devfs' behavior in many ways, though Adam did not intend to include all of the devfs functionality in the RamFS system. He wanted his implementation to be, in part, a cleanup of the devfs interface, so that features used by only a few systems might be replaced with other methods. As a result of this restructuring, he managed to reduce the size of the code to one-fourth of what it had been. The devfs system always has been controversial, and Linus Torvalds' decision to include it in the official tree was even more so. Folks like Alexander Viro and others have firmly refused to use it on the grounds that it simply wasn't coded well enough. Shortly before Adam announced his own work, Alexander had begun an invasive cleanup and restructuring of the devfs code. Richard, having struggled for years to produce devfs and make it available in timely patches, seems to have vanished entirely from the kernel mailing list.
The sysfs filesystem is intended to be a replacement for /proc and other methods of exposing kernel data to user space. It began as a tool for driver writers, but its use was broadened in 2002 to all parts of the kernel. Since then, there has been an ongoing effort to migrate a variety of other interfaces to sysfs. In January 2003, /proc/cpufreq came under the knife when Dominik Brodowski marked that interface deprecated in favor of a new sysfs interface in the cpufreq core code. Patrick Mochel also had a hand in this, making sure Dominik's work matched up with all the latest sysfs features. Later that month, Stanley Wang sent some code to Greg Kroah-Hartman to replace pcihpfs with a sysfs interface. In this case, however, sysfs was not up to the task as the needed hot-plugging code was not yet fully in place. No problem. Greg coded up the needed sysfs feature and sent it to Patrick.
One day in January 2003, Alan Cox happened to mention that the tty code in the 2.5 tree was badly broken and had been for a while, primarily as a result of locking changes in the kernel preemption code. This came as a surprise to many people, and some wondered why this was the first they'd heard of it, especially because the 2.5 tree was already in feature-freeze, headed for 2.6 or 3.0. Greg Kroah-Hartman looked at the problem and was horrified. He said it was not going to be easy to fix and was most likely something for the next development tree. But Alan said this wasn't an option, because the tty code was broken already and had to be fixed before the next stable series.
Traditionally, the Linux kernel has been compilable only with the GNU C compiler, and even then it often has been necessary to use a particular version of the compiler to compile particular versions of the kernel. The kernel always has depended on GCC extensions, and the relationship between kernel and compiler has been intertwined for years, like an old married couple. Therefore, various people were shocked to learn that the kernel also could be compiled with Intel's C++ compiler, icc. Apparently, Intel has had this as a goal for quite some time, and they've even submitted patches to Linus with the sole purpose of enabling their compiler to handle the kernel source tree.
It's always nice to learn that the feature you desire already has been implemented. According to the documentation (at least as of late January 2003), the only filesystem with quota support was ext2. However, apparently work has been going on behind the scenes, because ReiserFS, ext3, UFS and UDF now support quotas.
This utility scans any portion (or all) of the filesystem tree and provides fairly detailed statistics regarding the files on that system. If you happen to be running Debian or a Debian-based system, such as Knoppix, you can receive even more information on the associated dpkg files. This program uses the access times rather than creation or modification times to tell you how “old” or stale a file is. Chances are, files not accessed during the past five years are either historical archives or cruft. Requires: Perl.
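The same access-time idea can be sketched with the standard find tool (this is only an illustration of the concept, not the utility itself; the five-year cutoff of 1825 days is an assumption drawn from the text):

```shell
# List files under a tree whose last *access* time (atime) is more
# than roughly five years old (1825 days).  -atime +1825 matches
# files last read before that cutoff.
stale_files() {
    find "$1" -type f -atime +1825 -print
}

# Example: stale_files /home/archive
```

Note that many systems mount filesystems with noatime or relatime, which limits how trustworthy access times are for this kind of audit.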
—David A. Bandel
Football Manager is a game where you are the manager of a soccer team. Graphics are crude, but the game is a lot of fun. It's a game of strategy where you buy and sell players and choose who will play the game this week. Once you've done your job, sit back for 30 seconds to watch a few shots at the goal and see who won. Then, see your team's rating rise or fall compared with other teams in the league. If I don't remove this game I'll never get any work done—it's more addictive than Adventure. Requires: libSDL, libm, libX11, libXext, libdl, libpthread, glibc.
—David A. Bandel
If you're a pilot, you know maintaining a logbook is not a big chore. But, when someone wants to know how many hours of which type you have, it becomes a little more difficult. This logbook is like the professional logbook for pilots with all the entries you'll need, plus two user-definable fields. With one click you can see all totals to date. And, by running a small script on the data file (you'll have to create that yourself), you can create a data file for 60 or 90 days back to see how your totals are for currency. Requires: libgtk, libgdk, libgmodule, libglib, libdl, libXi, libXext, libX11, libm, glibc, pilot's license and airplane (last two optional).
—David A. Bandel
This hardware lister shows quite a bit of detail, including IRQ, module used and more for cards and other hardware. If you need a great quantity of detail on a system for an inventory, you might want to look at this program. About the only thing missing is the MAC address on the network cards, but that's easy enough to get. Requires: libstdc++, libm, libgcc_s, glibc.
—David A. Bandel
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
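The find-plus-grep combination described above can be sketched in a couple of lines (the directory and the pattern "ERROR" are placeholders):

```shell
# List every .log file under a directory that contains the given
# pattern.  -print0/-0 keep filenames with spaces intact, and
# xargs -r skips running grep entirely when find matches nothing.
grep_logs() {
    find "$1" -type f -name '*.log' -print0 | xargs -0 -r grep -l "$2"
}

# Example: grep_logs /home ERROR
```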
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
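For reference, cron's model is a five-field time specification per line in a crontab; the scripts named here are hypothetical examples:

```
# m  h  dom mon dow  command
# Rotate logs at 2:30 every morning:
30   2  *   *   *    /usr/local/bin/rotate-logs.sh
# Run a report at 6:00 on the first of each month:
0    6  1   *   *    /usr/local/bin/monthly-report.sh
```

The webinar's question is essentially what happens when your needs outgrow this model: job dependencies, cross-machine scheduling, failure handling and auditing.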
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide