Letters to the Editor
I think your treatment of modules, both in your article about 2.0 and in Mr. Crow's article about them, is totally wrong. You focus only on memory savings. Mr. Crow even goes so far as to explain that for drivers used permanently, it is better to compile them into the kernel because of the 2K lost per module when they are page-aligned; besides, you lose twelve pages to kerneld.
Let me answer this first. If you have 20 modules loaded (an unrealistically high number), you lose 40K; add the twelve 4K pages for kerneld, another 48K, and at 88K you are still under 100K. Well, what's the matter? This is Linux, not stinking DOS. We have no 640K barrier, and 100K will not cause a noticeable difference in speed (unless you have only 4MB of RAM, which is rare today).
Consider a beginner using 1.2: he must choose, from dozens of boot diskettes, the one that supports his hardware. Despite this, the kernel carries so many unnecessary drivers that it is 1MB too big (and that WILL make an important difference in speed). And despite being so large, this kernel lacks things he wants, like sound support. So our beginner (still barely able to copy a file) is confronted with the task of compiling a new kernel. It is not so difficult, as we know, but it has an undesirable effect: Linux gets the reputation of being a “hackers only” OS, one you can't put in the hands of a person without some computing experience.
Now consider a (future) 2.0 distribution: only one boot image (well, perhaps half a dozen, if you want to optimize for Pentiums). All the drivers are modules. At installation time, the user answers some questions about his hardware, and the installation procedure builds the configuration files for kerneld and /etc/rc.d/conf.modules for “permanent” modules. The user reboots, and he is up and running.
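To give an idea, a minimal conf.modules built by such a procedure might contain lines like these (the driver names and option values are hypothetical examples for an ISA NE2000 card and a SoundBlaster, not the output of any real installer):

    alias eth0 ne                      # first Ethernet interface uses the NE2000 driver
    options ne io=0x300 irq=10         # card settings the user gave at install time
    alias char-major-14 sb             # sound devices (major 14) load the SoundBlaster driver
    options sb io=0x220 irq=5 dma=1    # hypothetical sound card settings

With entries like these, kerneld loads the right module the first time the device is touched and unloads it again when idle, so the user never has to edit the file by hand.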
Your kernel is perhaps slightly suboptimal, but recompiling is no longer a requirement. That means handling drivers in Linux becomes a lot easier than editing CONFIG.SYS or AUTOEXEC.BAT. Add to this the new package managers, some configuration tools and a good file manager, and, with a little hand holding, I now have hope of rescuing my 15-year-old niece from the clutches of the Evil Empire.
—Jean Francois Martinez firstname.lastname@example.org