Thanks for the article about restoring with Mondo and Mindi in the October
2003 issue. It was good, but in my opinion it could have included a couple
more options. The -g option gives a nice graphical interface to the
restoration process. The options -l GRUB -f /dev/xxx, for those who use
the GRUB loader, integrate it nicely. The FAILSAFE option adds compliance
with Debian GNU/Linux; I believe this last one is important for Debian users.
I was very interested in your article about NEC's fault-tolerant server [LJ, October 2003]. I'm an IT consultant here in Buenos Aires, Argentina with some exposure to fault-tolerant and mission-critical applications. I contacted Mr John Fitzsimmons (of Aspire Communications) four or five months ago in order to help me get the source code for NEC's FT Linux. As I understand it, the software must be released under the GPL, but as of today I have not been able to get the software. So, it was great to learn from your article that “...NEC told us they are firmly committed to releasing all changes to the public under GPL.” I think I will have to wait just a little longer. Do you have an approximate date for that release?
One other point, as the article stated, “At the time of this writing,
NEC was reviewing and documenting its kernel changes for a planned public
release, perhaps through OSDL's Carrier Grade Linux Project...”, could
you tell me what I should do to get the release from OSDL? Will it be
available to the general public or to OSDL members only?
Pablo J. Rogina
Dan Wilder replies: I am not aware of any such release to date. I am copying Mr. Fitzsimmons on this note, in hope that he may have some comment.
Just wondering if you are going to offer an electronic edition of
Linux Journal by subscription. For readers overseas, the subscription rate is
kind of expensive. I guess that's due to the shipping cost.
It would be really helpful if you could provide the electronic
copy as a subscription option, with the same cost as the subscription
rate within the States, or cheaper.
You're not the first person who has asked this. We don't do this yet, but I've asked our subscription department to find out if we can. —Ed.
There are two errors in my article “Secure Mail with LDAP and IMAP, Part I” [LJ, November 2003]. First, early in the article I say (about Cyrus IMAP) “...use of databases rather than flat files to store messages has an obvious performance benefit.” Actually, although Cyrus IMAP does use database files for indexing messages, it stores the messages themselves as individual flat text files.
Second, later in the article, I correctly point out that to configure Cyrus SASL on SuSE to use LDAP, you must edit the parameter SASLAUTHD_AUTHMECH in /etc/sysconfig/saslauthd. The problem is, the Cyrus SASL package in SuSE doesn't have LDAP support compiled in. You either need to build your own LDAP-enabled package with SuSE's cyrus-sasl2 source-RPM (SRPM) or forego direct LDAP support in SASL altogether and use the PAM method instead, which is compiled in by default. In the latter case, you need to install the module pam_ldap, and then create a file, /etc/pam.d/imap, containing these lines:
#%PAM-1.0
auth     required /lib/security/pam_ldap.so
account  required /lib/security/pam_ldap.so
Next, copy the file /usr/share/doc/packages/pam_ldap/ldap.conf to /etc/openldap/ldap.conf, and edit it to match your environment (the most relevant settings are host, base, binddn, bindpw and TLS_REQCERT). Finally, edit /etc/sysconfig/saslauthd to include this line:
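The steps above can be sketched as a short root shell session. Note that the final SASLAUTHD_AUTHMECH=pam line is an assumption inferred from the PAM method described in the correction; the exact line was cut off in the printed text:

```shell
# Create the PAM service file for imap (run as root).
cat > /etc/pam.d/imap <<'EOF'
#%PAM-1.0
auth     required /lib/security/pam_ldap.so
account  required /lib/security/pam_ldap.so
EOF

# Copy the sample ldap.conf, then edit host, base, binddn,
# bindpw and TLS_REQCERT to match your environment.
cp /usr/share/doc/packages/pam_ldap/ldap.conf /etc/openldap/ldap.conf

# Point saslauthd at PAM (assumed value; see note above).
echo 'SASLAUTHD_AUTHMECH=pam' >> /etc/sysconfig/saslauthd
```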
I apologize for any confusion or inconvenience that either error may have caused.
Tinyminds' Tony Brijeski (Fragadelic) recently had the pleasure
of sitting down with Marcel Gagné over some coffee and asking
him some questions about his recently released book Moving
to Linux: Kiss the Blue Screen of Death Goodbye and his views
on where Linux is today. He even spilled the beans on the new
book he is currently working on; you'll have to read the
interview to find out about it at www.tinyminds.org.
This letter to the editor is a response to Dennis Ludwig's letter “Ada Isn't Awful”, in the November 2003 issue. I am a counter-example to Dennis Ludwig's statement “I have yet to meet an Ada basher who really knows the language.” I have programmed in Ada, mostly full-time since 1986, done serious development work on three Ada compilers and served as a resident Ada language lawyer for over ten years. It's awful, and I hate it. That said, it's not half as bad as C++, to which Ludwig primarily seems to compare Ada. I do agree with much of his cited Web article. Both languages are horribly over-complicated to a similar degree. Most working programmers in either language don't even come close to understanding their language. At least Ada is strongly type-safe, while C++ inherited all of C's worst type-unsafe and “undefined” behaviours, so lack of understanding in Ada is more likely to show up as compile errors instead of bugs.
But the best languages, such as the Pascals, Modula-2 and the Oberons, are
an order of magnitude simpler, and some of them have useful feature lists
comparable to Ada. My favorite is Modula-3, with a language definition
ten times smaller than Ada. It has a more flexible type-safe separate
compilation, far more powerful information hiding capabilities and a
choice of garbage-collected or explicitly deallocated heap objects,
none of which are in Ada. Ada does non-integer fixed-point types,
which is very rare. Otherwise, the feature-list differences are minor.
Interested readers can read my more complete summary
of the differences between Modula-3 and Ada at www.cs.wichita.edu/~rodney/languages/Modula-Ada-comparison.txt.
Rodney M. Bates
In the November 2003 issue, acknowledgements were inadvertently omitted from the TALOSS (Three-Dimensional Advanced Localization Observation Submarine Software) article. Mr Ken Lima of NUWC was the originator of the TALOSS concept and the project's manager, without whose leadership this project would not be possible. The authors also would like to thank the Office of Naval Research (ONR) Code 311 (Mr Paul Quinn, Mr Gary Toth and Dr Larry Rosenblum) for the funds to complete the project.
ONR (Dr Rosenblum) funded Dr Gregory Nielsen and Mr Gary Graf of Arizona State University to develop software that computes the intersection of irregular 3-D regions within TALOSS. That software is included in TALOSS as described in the subject article. A patent application for this software already has been prepared by its inventors (Graf, Nielsen, Lima and Drury).
A major portion of the TALOSS software was written at Virginia Tech under the direction of the NUWC by graduate student Fernando DasNeves. The funding for his work was provided under a NAVCITTI Virginia Tech grant administered by ONR (Dr Rosenblum).
The Naval Research Laboratory contributed that laboratory's expertise in 3-D rendering and visualization to the NUWC-led project. NRL (initially Mr Rob King) conducted research on 3-D interactive devices in the immersive variant of TALOSS. Mr Douglas Maxwell, then an NRL employee, subsequently developed a 3-D grid-based approach to ocean floor rendering within TALOSS using the digital nautical chart database. Mr Maxwell also developed the collision detection algorithms used in TALOSS. Mr Maxwell's primary role, since joining NUWC a year ago, has been as a technical consultant to the project.
Mr Richard Shell of NUWC has served as the technical lead for the TALOSS Project for the past three years. Mr Todd Drury is the NUWC software development lead for TALOSS and has been responsible for the design and development of the TALOSS software for the past three years. The bottom rendering approach currently implemented in TALOSS and referenced in the Linux Journal article is Mr Drury's work, with Mr Maxwell's as a candidate to follow. A patent for this software was submitted several months ago.
The authors would like to express regret for the inadvertent omission of
these acknowledgements. Thanks to the hard work and dedication of all
involved, this project was a resounding success.
Douglas B. Maxwell
MSME, Research Scientist/Mechanical Engineer, Naval Undersea Warfare Center
I have had a personal nagging curiosity to build my own home cluster
out of a few boxes that are readily available. Once I have it, what can
I possibly do? Regarding that question, your November 2003 issue was fantastic
but lacked the one thing I would like to read more about. Could you
maybe consider an article describing the different applications that
are available for clusters?
As a self-taught Linux user, I really enjoy and appreciate the information
that gets published in LJ. I want to learn more
and am looking
for courses in Linux, but only seem to find the occasional
Linux conference or workshops that offer Linux certifications. Being
Linux-certified is my goal, but I know I still require classroom study.
Can you point me in the right direction? Thanks! Keep up the good work!
Try the directory at lintraining.com. —Ed.
I enjoyed Cal Erickson's article, “Writing Secure Programs”,
in the November 2003 issue, and definitely plan on looking at
FlawFinder and other tools.
One nit: “Do not use fgets when reading data, as this allows
overflows.” I think this was a typo; it should have read “gets...allows
overflows”, because fgets has a length parameter.
You're right. As soon as you finish that “Intro to C” class that requires you to use gets, you just don't use it anymore. —Ed.
In the November 2003 cover article, “Sequencing the SARS Virus”, Krzywinski and Butterfield refer to phred, phrap and consed as the “open-source workhorses” of the Human Genome Project. Although these programs are indeed bioinformatics workhorses, and although the no-fee academic licensing terms under which they are available are generous by proprietary software standards, the terms under which they are distributed do not conform to the Open Source definition at least to the extent that they discriminate against fields of endeavor. The terms apply to academic or nonprofit use but not to business use (Open Source definition version 1.9, section 6).
BLAST, another workhorse mentioned in the article, has diverged into two
main branches, one in the public domain (which can be obtained from the
National Center for Biotechnology Information [NCBI]) and one distributed
in a fashion similar to phred, phrap and consed.
D. Joe Anderson
October 2003, page 88: in Listing 1 of the article “Building a Linux IPv6 DNS Server” by David Gordon and Ibrahim Haddad, there should have been a closing brace after the first two lines.
November 2003, page 50: in the Command-Line Bioinformatics sidebar of “Sequencing the SARS Virus” by Martin Krzywinski and Yaron Butterfield, the t in the third line of code should be tr.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
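The .log example above can be written as a one-liner (the /home path and the search string "ERROR" are illustrative):

```shell
# List every .log file under /home that contains the string "ERROR".
# find supplies the file names; grep -l prints only the names of
# files that match, rather than the matching lines themselves.
find /home -name '*.log' -exec grep -l 'ERROR' {} +
```

The `-exec ... {} +` form passes many file names to a single grep invocation, which is faster than running grep once per file.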
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Google's SwiftShader Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide