Letters to the Editor
This is a story about how Linux helped in the making of Saving Private Ryan. I thought your readers might be interested in how Linux supported the National D-Day Memorial Foundation, both inexpensively and reliably.
A friend, James Ervin, and I had been involved in setting up our local Internet access through the cable TV company, Bedford Cablevision, as well as installing Linux for web servers, mail servers and firewalls. In Bedford, Virginia, where we live, there is a small organization called the National D-Day Memorial Foundation. With the cable company's help, we set up a web presence for them on the Internet. We put together a computer (an AMD 5x86-based server) for about $350 and installed Linux to serve as a firewall, web server and mail server. They had some web pages donated by a graphics design shop, Howlin' Dog Designs. The day after the cable company installed the cable and cable modem, the pages were up on the Internet on their own server. Initial requests for web pages (http://www.dday.org/) were few, about 2000 per month.
A while later, the Foundation was contacted by a company called DreamWorks, which wanted to make a movie related to D-Day. The Foundation provided support to DreamWorks, and eventually the movie was released. Web traffic then increased to about 2000 requests per day, and Linux has faithfully borne the load. That is the story of how Linux worked behind the scenes during the making of Saving Private Ryan.
—Rich Kochendar firstname.lastname@example.org
I just finished setting up an extra PC as my new router to the Internet. I used the instructions from the article “Getting in the Fast Lane” by Michael Hughes in the June 1998 issue (#50), and although I used a regular modem instead of a cable modem, I was able to connect to the Internet within hours of playing with the kernel and ipfwadm. I must say I was excited to get it working and especially to browse my PC web site from the Internet using the DHCP address from my ISP. I even sent this e-mail from one of the PCs on my internal network. Keep up the good work, guys.
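For readers curious what that kind of ipfwadm setup looks like, here is a minimal sketch of the IP masquerading rules a home router of that era typically used. The interface name (ppp0, for a regular modem) and the internal network address (192.168.1.0/24) are assumptions for illustration, not details from the letter; consult the original article and the ipfwadm man page for your own configuration.

```shell
# Enable IP forwarding in the kernel (requires masquerading support
# compiled into the kernel, as described in the article).
echo 1 > /proc/sys/net/ipv4/ip_forward

# Default policy: deny all forwarded packets.
ipfwadm -F -p deny

# Masquerade traffic from the internal network (assumed 192.168.1.0/24)
# going out the modem link (assumed ppp0) to anywhere.
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0 -W ppp0
```

Machines on the internal network then simply use the router's internal address as their default gateway, and their outbound connections appear to come from the ISP-assigned (here, DHCP) address.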
—Danny M. email@example.com
I just wanted to write and let everyone know that Red Hat 5.1 is excellent! From start to finish, the installation was seamless. I recommend novice users buy the boxed version made by Red Hat; it comes with e-mail support, a nice book, a boot disk and a set of three CDs. Not bad for $54.99; the book included is worth that price if you are a novice. Now that I have migrated to Linux, I find myself chanting “Cool, It Works with Linux!”
—Michael T. McGurty firstname.lastname@example.org
I was in total shock when I read the Editor's remarks “How Many Distributions?” in the September 1998 issue of Linux Journal. It seems to me to be the most anti-Linux message I have ever read. What gives you the right to tell the Linux community what is good for it? Isn't that why we don't like Bill Gates? He feels like he should lead the computer industry in the direction he sees fit.
What would have happened if someone had told Red Hat there were too many distributions? What if someone had told Linus Torvalds there were already too many x86 UNIX kernels? After all, BSD, Minix, SCO and Solaris (x86) already existed.
If someone wants to start up a new distribution, my hat is off to them. It's much harder to start up a distribution today and have it succeed than it was just two or three years ago. This is partly due to how great the current distributions are. If a new distribution has binary compatibility problems, no one will want to use it. This should encourage them to make sure their distribution complies with the Filesystem and Binary Compatibility Standards that have been proposed.
Linux is about individuality. I prefer Red Hat, FVWM 1.X and vi. Why should I use Caldera, KDE and Emacs? Too many people are caught up in Red Hat vs. Caldera, KDE vs. Gnome, vi vs. Emacs. Who cares? They all work with Linux! That's what is so great about it all. I can have an operating system tailored to me. People who enjoy Linux should express themselves in whatever manner they like, whether it's creating a new distribution or creating a new resources page.
Maybe I'll express myself by creating a new Linux magazine. I understand there's quite a monopoly in that area.
—Pete Elton email@example.com
The purpose of that column was to express my opinion, not to dictate to the Linux community.
Actually, we do have competitors—in Germany, Spain, Korea and Japan. I've also heard rumors of Linux magazines in Italy and India. —Editor