Readers sound off.


Tech Tip Video Tip

I recently started enjoying Shawn Powers' tech tip videos [on LinuxJournal.com], and I have one suggestion for a possible improvement in usability/accessibility. I don't think a full transcript is necessary, but a bulleted list of files, directories, commands and URLs mentioned in them might be cool. Thanks for any consideration.

Dallas Legan

I have gotten many similar suggestions for tech tip videos, and something along the lines of “show notes” for a one-minute video might make sense. The tech tips have been floundering of late due to my house fire (hopefully, by the time this prints, that will be different), but once I get back on track, I'll try to include appropriate text. If nothing else, it will make searching for the videos easier!—Shawn Powers

Not Writing Filesystems Often Is a Feature!

In the March 2010 Letters, Peter Bratton complained that explore2fs could not write ext2 files and recommended www.fs-driver.org, a non-open-source driver. Perhaps not writing foreign-format filesystems is a feature, not a limitation?

All software can be expected to have bugs, and filesystem bugs are especially devastating, as they easily can destroy a system. If I need interoperability between Windows and Linux, I often use an intermediate FAT filesystem, where I can put files from both systems and won't be devastated if I uncover a bug.

It often is a good idea to mount foreign-format filesystems with -r (read-only). I do that on dual-boot machines when I run Linux.


Just-in-Time Content

Thanks for a great magazine. The April 2010 issue on Software Development had an article on Selenium. I had just started looking for a Web automation and testing tool, and this article and our corresponding use of Selenium has saved me a lot of time. Thanks for content that applies directly to what we do. I love just-in-time editorial content. I can't wait to see what I will need for next month. Keep up the good work.

John Beauford

Re: Legally Using Linux

I wanted to comment on Luke's letter asking about Linux licensing on page 12 of the May 2010 issue of LJ. In his example, he mentions Red Hat, and says that it is hard to determine the licenses for everything and ponders if all the work is left up to the user.

I can tell you from the perspective of a Fedora, Red Hat and CentOS user that determining the license something is under is easy. For installed software, just do this:

rpm -qi {packagename}

That queries the information about the package, and one of the fields is the software license.

If a package isn't installed but you have a copy of the .rpm where you can get to it, just add a p to the flag:

rpm -qip {package-filename}.rpm
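To pull out just the license field (handy for scripting), rpm's --queryformat option can help. This is a minimal sketch, assuming an rpm-based system and using the bash package purely as an example:

```shell
# Print "name: license" for an installed package; fall back
# gracefully on systems without rpm.
if command -v rpm >/dev/null 2>&1; then
    rpm -q --queryformat '%{NAME}: %{LICENSE}\n' bash
else
    echo "rpm not available on this system"
fi
```

On a Fedora or CentOS box, this prints something like "bash: GPLv3+" without the rest of the package information.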

Red Hat has lawyers, and it even has a person investigating licenses for the Fedora Project. Red Hat takes licenses and licensing very seriously and shies away from things that are known to be licensed under questionable terms. So, for example, Red Hat doesn't ship with any Adobe products pre-installed, nor with MP3 playback or decoding support, just to name a few. There have been several occasions when Fedora has dropped a package because there was some uncertainty about the license. If you think Red Hat puts its customers at risk with the software it ships, you are mistaken. Red Hat has done its homework (see www.redhat.com/legal/open_source_assurance_agreement.html).

For an example article on Fedora's license guy doing his homework, see the following Linux Weekly News article: lwn.net/Articles/312262. That certainly isn't the only article on the subject. There also is a project that has a goal of aiding in license verification, although it has been a while since I've read about it. I won't say anything other than check out the Web site: fossology.org.

I think licensing on Linux is much easier than the EULAware no one reads on most proprietary OSes. I'm also not aware of any business or end user using a mainstream Linux distribution that has been sued for license violations involving products installed from the distro's stock repository. It just doesn't happen. That might change if Microsoft decides to go after Linux users for the violations of its patents, as it has claimed in the media.

Scott Dowdle

We discussed this letter in the Linux Journal Insider podcast as well [see www.linuxjournal.com/podcast/lj-insider for our monthly podcast on each new issue]. Thank you for the info. It's greatly appreciated.—Ed.

Dave Taylor's Trap, Part II

In the May 2010 issue, a letter titled “Dave Taylor's Trap” recommended against setting a trap on 0 (zero). A trap on 0 is quite useful: it is a trap on EXIT. The bash(1) man page states, “If a sigspec is EXIT (0), the command arg is executed on exit from the shell.” Trap on EXIT (0) is available in other Bourne shell-compatible shells, such as sh, ksh, dash and zsh. To remove a tmp file when a script exits or is killed, I recommend trapping on “0 1 2 3 15”, as in the following example. The shell does not execute a trap on EXIT (0) while inside another trap, which avoids what otherwise would be a recursive loop when the EXIT trap is left set while trapping on it:

tmpfile=`tempfile` || exit 1
trap 'rm -f $tmpfile; exit $exitval' 0 1 2 3 15
# do some work with $tmpfile

Paul Jackson

Dave Taylor's Trap, Part III

A letter in the April 2010 issue complained about Dave Taylor using signal 0 with the shell trap command, and he apologized for the “error”. This is not an error; including signal 0 is a common and extremely useful feature of the trap built-in. You and the letter-writer are correct that there is no actual signal 0, but in the case of trap, signal 0 specifies the event of normal termination of the script.

Therefore, one can use trap "cleanup_code" 0 to invoke cleanup code upon normal exit. I use it all the time to get rid of temp files and other debris. Thanks for your helpful column.
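The behavior both letters describe is easy to demonstrate. Here is a minimal sketch (the file name and message are illustrative):

```shell
# The EXIT (0) trap fires on normal termination, so the cleanup
# command runs even though it is never called explicitly.
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"; echo "cleanup ran"' 0
echo "working with $tmpfile"
```

Running this prints the “working with” line followed by “cleanup ran”, and the temp file is gone afterward.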


Dave Taylor replies: <slaps forehead> Thanks! I knew there was a reason that the trap 0 was a good idea, I just spaced on what it was. Now I can sleep well at night.

Using Text Editors for Writing Code

Dave Taylor's comment, in his May 2010 Work the Shell column on converting HTML forms, about using vi for a few code hacks, started me thinking about using text editors for writing code. Just how important is the choice of text editors when working with code? Emacs and vi appear to be the most frequent recommendations. Why are they so popular or recommended so often, and what about gedit, joe, Leafpad, pico or nano as a text editor for hacking code?

Philip S. Ruckle, Jr.


Regarding Michael J. Hammel's “Running Remote Applications” in the February 2010 issue, the author seems to mistake the purpose and workings of XDMCP. XDMCP allows you to select an X client from a list (using the XDMCP chooser) and to log in with the remote display (X server) preconfigured for the session.

On page 61, the author states, “The use of the -display option is tied to the configuration of XDMCP on the X server.” This is not true; the -display option can be used at any login shell and can be pointed at any available X server. For example, a user can use an SSH login and enter the -display option to point at a different server. In fact, the -display option can be used to start an X client on an arbitrary X server. The only configuration required of the X server is that TCP connections must be allowed.

It appears from the article that TCP connections are enabled only for gdm/kdm; enabling TCP connections to the X server (at least in Ubuntu Karmic) can be done by removing the -nolisten tcp option shown in the /etc/X11/xinit/xserverrc file.
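As a sketch (the exact path and server binary vary by distribution; this reflects the Ubuntu Karmic layout the letter mentions), the relevant line in /etc/X11/xinit/xserverrc looks something like:

```
exec /usr/bin/X -nolisten tcp
```

Deleting the -nolisten tcp arguments and restarting the X server allows clients to connect over TCP (port 6000 for display :0).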

It also is not necessary to switch runlevels to restart gdm (or kdm); the display manager is a special-purpose X server and can be “restarted” by killing the running xdm with a Ctrl-Alt-Backspace or by using the kill command at the command line.

One thing that was not clarified is that VNC is essentially a new X server with a network-based remote display. VNC originally was designed in exactly this way: the code to actually present a display was removed from X and the networking facilities added in. Running VNC will not usually share the current display; this is done with other tools.

David Douthitt


Regarding Kyle Rankin and Bill Childers' “/opt vs. /usr/local” in the March 2010 issue, there are a few facts that are missing or misstated in the exchange between these two brilliant minds.

First, the real contrast between /usr/local and /opt is the layout: /opt is used by putting all of the software's files in one directory, and /usr/local creates a new hierarchy with /usr/local/bin, /usr/local/sbin, /usr/local/etc and so on. This is hinted at but never explicitly stated.

Second, /usr/local is, in fact, older than /opt; /opt came along with Solaris, whereas /usr/local predates Solaris.

Third, none of the packages in Linux distributions put their files in /usr/local; rather, all put their files in /usr. Solaris packages put their files in /opt. If you download a source tarball and compile it yourself, you'll find your files going into /usr/local. If you compile a custom Apache, you'll find the files in /usr/local (not /usr).

Finally, consider that HP-UX in recent years switched from /opt to /usr/local. The search path for software installed under /opt grows quite long in most installations, whereas adding /usr/local software means adding only two directories to the path: /usr/local/bin and /usr/local/sbin.

FreeBSD and other BSDs use /usr/local exclusively for added software, just as Solaris uses /opt. Linux doesn't use /usr/local unless you compile your own software.

Tell Kyle and Bill to keep up the good work.

David Douthitt

Kyle Rankin replies: Thanks for all of the extra background into /opt/ and /usr/local. I can tell you are an experienced and learned administrator, and not just because you agree with me.

Bill Childers replies: Thanks for the historical insight! It may be worthwhile to note that the Blastwave Solaris folks do create their own bin, etc and lib directories under /opt/csw for some of the reasons you specify in your letter. I do appreciate the multiple OS point of view you referenced, as my esteemed colleague tends to see the world through penguin-colored glasses.

Using Telnet to Send E-mail

Kyle Rankin's telnet e-mail works nicely (see Kyle's Upfront piece in the May 2010 issue), but the MAIL FROM: command does not conform to the SMTP RFC (RFC 2821), so most SMTP servers reject it with a syntax error:

MAIL FROM: <bill.gates@microsoft.com>

will do the trick, thanks to the additional angle brackets around the address!
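For reference, a complete minimal session might look like this (server replies are omitted, and the recipient address is a hypothetical placeholder):

```
HELO example.com
MAIL FROM:<bill.gates@microsoft.com>
RCPT TO:<recipient@example.org>
DATA
Subject: Test message

Hello from telnet.
.
QUIT
```

Strictly speaking, RFC 2821 shows no space after the colon (MAIL FROM:&lt;address&gt; on one token), although many servers tolerate one.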


Kyle Rankin replies: I must work with too many postfix servers, as they are more forgiving of the syntax. Thanks for the clarification, and you get extra points for referring to the RFC.

Open Source for TV Broadcasting

This message is targeted toward Doc Searls. It seems for many years he has attended the annual NAB show in April and then written about the silos. LJ might be interested to learn more about how several broadcasters throughout the world, including one of Europe's largest, are using MLT for a playout server and contributing to it. This toolkit/library also serves as the engine for the up-and-coming video editors Kdenlive and OpenShot. It's something for Doc to learn about if he is attending the NAB show this year (www.mltframework.org).

Dan Dennedy

Photo of the Month

Have a photo you'd like to share with LJ readers? Send your submission to publisher@linuxjournal.com. If we run yours in the magazine, we'll send you a free T-shirt.

A view of the geekchick-mobile's license plate (I really can spell “chick” but was limited to seven characters).

Me, the cannabis activist/geek chick at a recent cannabis-related event here in Montana. (Thank goodness I brought my LJ magazines with me.) Photos submitted by Heather Masterson, Missoula, Montana.

