Letter to the Editor
Here's my favorite credit card. When I use it, I frequently hear the cashier say, “Wow. Cool card!” I used to get excited thinking I'd made a Linux connection. Now I wait for the other shoe to drop, as it's usually followed by, “What's the penguin for?” But, sometimes it gives me a chance to evangelize just the same. Either way, it's nice to have a bit of fun while they're taking my money.
Brian Elliott Finley
That card is from linuxfund.org and helps fund free and open-source software grants and fellowships. —Ed.
Each year, Linux Journal embarks on the assembly of the Ultimate Linux Box, with the apparent goal of crafting the most powerful system possible within budget—a machine to shake the earth for miles around when switched on. This is now enough of a tradition that I wouldn't suggest tampering with it, but I wonder if some variants could be added with less coverage.
What I'm curious about is Linux systems set up with different optimization goals. For example, what hardware exists with the lowest energy budget that is also capable of office work? The old Rebel machines came in at something like 15 watts without a monitor. Can we do better? It would be instructive, though possibly less useful, to optimize new hardware for a similar task with minimum cost as the goal. Perhaps another category would be the machine that creates the least office clutter in deployment, which might well be an excuse to perform some heavy-duty case mods.
Linux is so flexible and adaptable, with so much hardware supported, it seems shameful that the only “ultimate” system is a fur-covered, fire-breathing, earth-shaking, meat-eating beast of a machine.
The last trick Prentice Bisbal provides in his article [“My Favorite bash Tips and Tricks”, April 2005], to list files in a directory, should win him a UUOF award in the spirit of the UUOC awards. To list all the entries in a directory when ls doesn't work, all you have to do is echo *. And yes, I've had to use it.
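The echo * trick can be seen in a short, self-contained sketch; the directory and file names below are made up for illustration:

```shell
# When ls is unavailable, the shell itself can list a directory:
# an unquoted * is expanded by bash into the directory's entries.
mkdir -p demo_dir
touch demo_dir/alpha demo_dir/beta
( cd demo_dir && echo * )    # prints: alpha beta
```

Because the expansion is performed by the shell, no external program is involved at all.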
Prentice Bisbal asked how to show the contents of a file using only bash [“My Favorite bash Tips and Tricks”, April 2005]. Here's one way: while read; do echo "$REPLY"; done < file.txt. (The quotes around $REPLY prevent the shell from expanding any glob characters that might be in the file text.)
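As a sketch, assuming a throwaway file.txt created just for the demonstration, the loop looks like this:

```shell
# Create a sample file (the name is illustrative only).
printf 'first line\nglob chars: *\n' > file.txt

# Print the file using only bash built-ins; quoting $REPLY keeps
# the literal * from being expanded into filenames.
while read; do echo "$REPLY"; done < file.txt
```

When read is given no variable names, the line lands in REPLY unmodified; adding -r to read would also keep any backslashes in the file literal.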
The IRQ article in the April 2005 issue has a number of technical problems:
“Any attempt to allocate an interrupt already in use, however, eventually crashes the system.” Not true, as the article itself points out later.
The prototype for interrupt handlers is wrong; it was changed in April 2003, for 2.5.69.
“The second argument is a device identifier, using major and minor numbers....” is wrong. dev_id is simply the same pointer passed in to request_irq().
The explanation of SA_INTERRUPT, beyond its grammatical problems, is not really correct; SA_INTERRUPT should not be used for anything anymore. SA_PROBE has never been meant for use outside of the IRQ subsystem itself, and nobody has ever passed it to request_irq().
The sample module would not compile, and in any case, the build system has changed to the point that you cannot build a module with a simple gcc command.
Considering the rapid pace of kernel development, we should not have run an article last tested on an early 2.6 kernel. It was our mistake to run it without sending it back to the author for an update. —Ed.
B. Thangaraju responds: I was very happy to see that a person of Mr. Jonathan Corbet's eminence has offered valuable suggestions on my article. The first sentence can be changed to “IRQ allocation will fail if it attempts to allocate an interrupt already in use.”
Prior to 2.5.69, interrupt handlers returned void; the prototype mentioned in the article was correct for the 2.4 kernel. In 2.6, interrupt handlers return an irqreturn_t value.
This article was written in February 2003 and published in April 2005. I was working with the 2.4 kernel during the preparation of the article, and I tested the code with 2.6.0-0.test2.1.29 kernel. So, some of the newer developments were not in use at the time of that writing, but the scenario, as you have rightly pointed out, has changed now.
First off, I'd like to say that Linux Journal is the absolute best Linux magazine out there in my opinion. The how-tos are intuitive, and my career has improved because of my subscriptions to this magazine. Now, I would like to see an article on jivesoftware.org's Jive Messenger Server. To me, this is where Jabber should be as an open-source alternative to the commercial IM servers out there. It's extremely configurable for a plethora of back-end databases, and runs best on...well, you know...Linux.
I enjoyed Charles Curley's article on GpsDrive in Linux Journal [April 2005]. Near the end, he suggested that anyone who knows of a mapping data source let you know. You might consider maps.google.com, which uses an open XML standard and API for free mapping integration.
I'd really like to see Debian and Debian-based distros become easier for non-gurus to live with.
I tried two Debian-based distros, Mepis and Ubuntu. Each of them used about 1.5GB of hard drive space. Mepis used 150MB of RAM, but to be fair, it included lots of extra desktop gizmos. Ubuntu used 90MB of RAM. I also especially appreciated Ubuntu because it comes with GNOME by default. Fedora 3 uses 2.5GB of hard drive space and 90MB of RAM for its home computer configuration.
Debian users will tell you that apt-get is more efficient than RPM because RPM's dependencies are other packages, while apt-get's dependencies are individual files. They'll also tout that apt-get does a better job of taking care of dependencies for you. But, guess what? With apt-get, you have to know exactly which packages you need to make a software system work.
Let's take MySQL for example. To make it work, you need the mysql-common, mysql-server and mysql-client packages. Technically, mysql-common will install without mysql-server and mysql-client. But it doesn't do you much good. With apt-get, you already have to know this. You also have to know the package name of any add-ons you might want, like graphical administration tools or Apache plugins. And yes, I was using the graphical interface to apt-get, not the command line.
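As a sketch of the workflow the letter describes (the package names are the ones the letter gives and reflect Debian of that era; they may differ on current systems):

```shell
# Search the package index for candidates -- the step where you
# already need to know roughly what you're looking for.
apt-cache search mysql

# Install the three packages the letter names explicitly.
apt-get install mysql-common mysql-server mysql-client
```

On a modern system, installing mysql-server alone typically pulls in the client and common files as dependencies, but discovering add-on packages (administration tools, Apache plugins) still means searching by name.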
With RPM, you would run into the same problem; however, Fedora's application management tool includes categories for common programs like MySQL. So I just click that I want MySQL, and Fedora selects all the necessary packages for me. I can then click details and select or de-select optional components.
The problem isn't so bad with MySQL, but now let's talk about more complex package structures, like GNOME (or KDE). There are dozens of GNOME packages available via apt-get. Which ones do I need? I don't know. Is there one that will install all of the other necessary ones as dependencies? I don't know. Do I want any of the packages that aren't explicit dependencies? I don't know. With apt-get, I'd have to spend hours reading the descriptions of all the packages. With Fedora, I just click GNOME, and I get the important stuff and a list of the optional stuff to choose from.
My grandma could probably install KDE for Fedora. But Debian needs work. There need to be “master” packages that install all of the required stuff for a given complex system and then prompt you to make choices about the add-on stuff.
R. Toby Richards
I found several flaws with Clay Dowling's article “Using C for CGI Programming” [April 2005]. He seems not to realize that software exists to cache compiled PHP bytecode, which can speed up execution quite a bit. An example is Turck MMCache: turck-mmcache.sourceforge.net/index_old.html.
An interesting statement: “The fairly close times of the two C versions tell us that most of the execution time is spent loading the....” Well, duh! It seems downright absurd to go through the hassle of coding CGIs in C and then using the old fork-exec model. Why not write the applications as Apache modules? This would have sped up execution time significantly. Besides, a lot of the cross-platform issues have already been resolved in the Apache Portable Runtime.
My daughter, Angel Sakura, and I were reviewing a back article on Linux VPNs. She really ate it up.
I like your articles okay so far, but your RSS feed sucks. That is the longest damn title I ever saw, and I don't even want to hear about Linux by the time you're done blowing your own horn.
I thoroughly enjoyed Doc Searls' Linux for Suits column (“The No Party System”) in the April 2005 issue of LJ. However, I feel that he left out one excellent example of his point. Toward the end of the article, he discusses the new Linux version of SageTV as well as the many benefits provided by ReplayTV as a result of it being based on a Linux system. I have never used SageTV nor have I owned a ReplayTV or TiVo (although I have quite a few friends who do), but I've been a dedicated user of MythTV (www.mythtv.org) for almost two years now.
From everything I've seen or read, MythTV seems to be head and shoulders better than the other options out there, including Windows Media Center Edition, SageTV, ReplayTV and TiVo, and it's only on version 0.17! Now I know that most people would normally be scared off by a version number that low, but trust me, Myth is already incredibly polished and user-friendly at this stage of the game. MythTV can do pretty much anything your TiVo or ReplayTV can, plus more. And, with the possible exception of some new hardware, depending on what you've got sitting in your basement/closet, it's completely free! There is most definitely a bit of up-front setup required to get it going in the first place, but once the system is up and running, it's a piece of cake to use.
Myth can handle everything from time-shifting television to storing and playing back your music library (in almost any format), watching DVDs (or DVDs you've ripped to the hard drive, effectively providing movies on demand), checking weather information, managing your digital picture galleries and playing your favorite arcade/NES/SNES/Atari games on your TV. And the best part is, if there's a feature you want that Myth doesn't already have, you can always write it yourself. The developers are always happy to include new patches and features from the user community.
If you're interested in seeing the power of Linux and the Open Source community, I'd highly suggest that you at least take a look at MythTV.
Doc did. See page XX. —Ed.
A few weeks ago, after dropping my laptop on the floor, I went shopping on the HP Web site. On the nx5000 page, HP still touted that it came with a choice of XP or SUSE 9.2, but when I went to the configuration pages (I tried all of them), there was no such choice. I e-mailed HP shopping support and thus far have received only an automated acknowledgement. A week later, I was asked to complete a survey of HP e-mail support, and I did so, noting how completely useless it was. I checked “Yes, you may contact me about my response to the survey”, but they never followed up on that either. I've since given up and bought a refurbished ThinkPad, but I have to conclude that HP has quietly discontinued their Linux laptop.
The nx5000 is no longer manufactured. We checked with Elizabeth Phillips at HP, and she says that Linux on HP notebooks and desktops lives on. Through a “Factory Express” program, you can get Linux on any desktop or notebook. ORDERING INFO TK. —Ed.
No photo qualified this month, but continue to send photos to email@example.com. Photo of the month gets you a one-year subscription or a one-year extension. —Ed.