I'd just like to pass along my praise for the article “Algorithms in Africa” by Wayne Marshall in the June 2001 issue. This is a definite upgrade over the typical Linux success story. Insightful, committed, poignant and informed, Mr. Marshall's perspective should be considered paradigmatic for covering the emerging global presence of Linux. The principles and values here have definite application to domains such as India (where the FSF is opening a branch office), China (where the government has adopted free software, if not political freedom, as its own) and many other areas of the world, such as Eastern Europe and South America. Please keep us up to date on global development. And thanks for a superb magazine.
—William G. McGrath
I just wanted to write to let you know that I've consistently found Linux Journal to have the highest-quality content of any magazine covering Linux and technology. I am constantly getting refresher courses, learning about new code and projects, and generally getting fantastic info from your publication.
I especially wanted to compliment you on your regular sections: At the Forge, Cooking with Linux and Paranoid Penguin. Much of the information is applicable to other *nix to boot, making your publication one I keep around for a long time (much to the consternation of my wife). Anyway, thanks folks, and keep up the good work.
In the July 2001 LJ article “Debugging Memory on Linux”, I noticed that the open-source memory checker I've been using was not listed in the article. The checker contains a replacement malloc library plus patches for gcc. The gcc patches wrap C++-like constructors around local variables and insert tests before memory references. This allows checked programs to detect memory overwrites of local variables and some global variables in addition to malloced buffers, and the checking catches overwrites as soon as they happen. You may freely mix object modules compiled with and without checking. The checker also includes replacements for mem* and str* routines and can detect invalid calls against checked memory objects, even from modules compiled without bounds checking.
There are links to the checker from the gcc extensions page at gcc.gnu.org/extensions.html.
In your article “Debugging Memory on Linux” in the July 2001 issue of LJ, you list Purify from Rational as a proprietary tool. As far as I can tell from their web site, they do not support Linux. Also, a while back I did talk to a Rational salesperson who said they didn't have any plans to support Linux. Do you know something else?
Sorfa replies: It looks like you are right. At the time of writing the article (early this year), there was a hint that Purify would be supported on Linux. I assumed (wrongly) that by the time the article made it to press, it would be available. It is a pity and I apologize for the incorrect info. It looks like the only proprietary alternative is Insure++.
I must respectfully disagree with Allan Hall in his letter of the July 2001 issue. Certification per se is certainly no substitute for experience, but it does show that a candidate at least took the initiative to attend some classes, read some books and pass some tests. It also usually requires putting a few hundred dollars up front.
I don't see how one could give a certified candidate anything but an edge over an uncertified one, experience levels in the two being equal.
Just want to write to let you know that Robin Rowe's article “MPEG-1 Movie Players” (May 2001) was very helpful and also convinced me to renew my subscription to Linux Journal. I wanted to play movies on my new notebook and had played with xanim before, but your recommendation of MPlayer was great. It compiles, installs and works like a charm. Thanks again.
It's articles like “CVS: an Introduction” (July 2001 issue of LJ) that keep me subscribed to Linux Journal. I've been doing basic RCS for years and knew there had to be a better way. But let's face it, the man page for CVS is a little overwhelming to the uninitiated. The day after reading the article, I was using CVS at work (the magazine is open on my desk to page 72 right now), and I'm feeling much better about long-range management issues now. Keep 'em coming! So many thanks to you and Ralph Krause for putting this together.
Enter to Win an Adafruit Pi Cobbler Breakout Kit for Raspberry Pi
It's Raspberry Pi month at Linux Journal. Each week in May, Adafruit will be giving away a Pi-related prize to a lucky, randomly drawn LJ reader. Winners will be announced weekly.
Fill out the fields below to enter to win this week's prize: a Pi Cobbler Breakout Kit for Raspberry Pi.
Congratulations to our winners so far:
- 5-8-13, Pi Starter Pack: Jack Davis
- 5-15-13, Pi Model B 512MB RAM: Patrick Dunn
- 5-21-13, Prototyping Pi Plate Kit: Philip Kirby
- Next winner announced on 5-27-13!
Free Webinar: Hadoop
How to Build an Optimal Hadoop Cluster to Store and Maintain Unlimited Amounts of Data Using Microservers
Realizing the promise of Apache® Hadoop® requires the effective deployment of compute, memory, storage and networking to achieve optimal results. With its flexibility and multitude of options, it is easy to over- or under-provision the server infrastructure, resulting in poor performance and high TCO. Join us for an in-depth, technical discussion with industry experts from leading Hadoop and server companies, who will provide insights into the key considerations for designing and deploying an optimal Hadoop cluster.
Some of the key questions to be discussed are:
- What is the “typical” Hadoop cluster and what should be installed on the different machine types?
- Why should you consider the typical workload patterns when making your hardware decisions?
- Are all microservers created equal for Hadoop deployments?
- How do I plan for expansion if I require more compute, memory, storage or networking?