Linux comes from the Internet and thrives on the Internet. As the desktop Linux community struggles with proprietary formats and user lock-in, you might think those of us who are just doing network services have it easy. Wrong. Three key problems face Linux as an Internet platform today: security, proprietary software and the high cost of popularity. Naturally, Linux Journal will help you attack all three.
Networks need high-quality, secure software. Open-source licenses and development models are an important step toward that, but last year Linux Journal warned the shovelware mongers, I mean Linux distributions, that their default installs were loading users up with too many potentially exploitable services and that they should start locking things down by default. But did they listen? No. Now we have the “Red Hat Ramen Worm”, and proprietary-software PR people blaming Linux for one distribution's irresponsible decision. From what I see, though, it could have been any of the distributions, so if you're not Red Hat, don't think I'm not talking to you too.
Mick Bauer is writing about Bastille this month, and it certainly helps, but the very existence of a security package as an add-on is profoundly backward—like taking delivery of a car with no bumpers or seatbelts and having to get a local mechanic to install them. Exploits happen, but worm epidemics wouldn't be a public embarrassment for Linux if the distributions would just make a secure posture the default.
If you must run FTP (and most systems don't need to) please, please don't run some huge, deluxe, featureful FTP dæmon written back in ancient times, when there was no secure alternative to FTP for password-protected file transfers. Use a minimal “anonymous-only” dæmon such as oftpd (page 92) and be happy.
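The whole appeal of a minimal dæmon like oftpd is how little there is to configure. A typical invocation is just a user to drop privileges to and a directory to serve; the account name and path below are placeholder assumptions, so check the oftpd man page for your version:

```shell
# Serve /var/ftp anonymously; oftpd binds the FTP port, then drops
# root privileges and runs as the unprivileged "ftp" user.
# No writes, no shell accounts, no passwords to steal.
oftpd ftp /var/ftp
```

That one line replaces the pages of configuration, and the pages of exploit history, that come with the deluxe dæmons.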
It might be hard for some of you to imagine the prospect of not having e-mail at work—but that's where Stew Benedict found himself. It's an old story for us but an inspiring one—set up a single, inexpensive Linux box to offer e-mail to lots of users. Internet e-mail dramatically changes work environments, as lots more people get information and help from the outside. Especially in bleak, brain-stifling places it changes the way people work for the better—and Stew did it on the smallest of budgets. Read how on page 100.
No networking issue would be complete without some discussion of the question “How can I get more performance out of my web site?” Ibrahim Haddad and Makan Pourzandi find an answer from the bottom up, using the classic “load balanced cluster” approach and the free Linux Virtual Server Project software [see page 84]. That's a workable solution for your business web site today—don't let yourself get bamboozled into proprietary load balancing or clustering.
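For a taste of what the article covers, the kernel side of an LVS cluster is driven by the ipvsadm tool. A minimal round-robin sketch might look like this; the virtual IP and real-server addresses are placeholders, and the details are in the article and the LVS documentation:

```shell
# Create a virtual TCP service on the cluster's public address,
# scheduling connections round-robin ("rr") across real servers.
ipvsadm -A -t 192.168.1.100:80 -s rr

# Add two real web servers behind the virtual address,
# forwarding via NAT/masquerading (-m).
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.2:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.3:80 -m
```

Need more capacity? Add another cheap box and another `ipvsadm -a` line, instead of another check to a proprietary load-balancer vendor.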
When Richard Stallman wrote “Join us now and share the software, you'll be free, hackers, you'll be free,” he forgot “sane”. License managers, incompatible binary-only kernel modules and lack of tweakability are sending more than one webmaster over the edge. So get the free stuff and have time to make the site better, don't just work around some idiot marketing person's conception of how your site should work.
The Web is great, but the tragedy of popular independent web sites is that they start to require more bandwidth and server power than their creator can afford. At that point, the webmaster either “sells out” to a business that starts carrying ads, tracking users and doing other nasty stuff, or the site dies out. Naturally, the vendors of big bandwidth and big iron love this.
Freenet to the rescue. This much-hyped system really works—Peter Todd sent us his Freenet article over Freenet. Think of it as a distributed Berkeley DB, but one where you can get your data back from any Freenet “node”, not just the one where you stored it. It's a little more complicated than that, but honestly, not much.
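The “distributed Berkeley DB” analogy can be sketched in a few lines of toy Python. This is an illustration of the idea, not Freenet's actual protocol; real Freenet routes requests by key closeness and encrypts everything, but the caching-on-the-way-back behavior is the same in spirit:

```python
# Toy sketch of "get your data back from any node": each node asks
# its neighbors until some node holds the key, and caches the
# answer on the return path, so popular data spreads.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}       # local key/value data
        self.neighbors = []   # other Node objects

    def put(self, key, value):
        self.store[key] = value

    def get(self, key, seen=None):
        seen = seen or set()
        if self.name in seen:
            return None       # already visited; avoid loops
        seen.add(self.name)
        if key in self.store:
            return self.store[key]
        for n in self.neighbors:
            value = n.get(key, seen)
            if value is not None:
                self.store[key] = value  # cache on the return path
                return value
        return None

# Three nodes in a line: a - b - c
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.put("article", "sent over Freenet")
print(c.get("article"))  # prints "sent over Freenet"
```

Note that after the lookup, node c (and b) now hold a copy too, which is the cost-sharing trick: the readers' machines, not the author's, end up serving the popular data.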
Freenet's decentralized architecture was originally intended to keep the Man from suppressing underground newspapers and hash brownie recipes. But more importantly, it's becoming an architecture for sustaining web-like sites and Usenet-like groups while sharing the costs of servers and bandwidth among all the readers, not just slamming the creators. Get a DSL line, join Freenet (page 96), and soon your favorite sites won't have to make the decision of whether to shut down or sell out.
The fundamental advantage of packet-switched networks is that they disproportionately reward economically efficient and moral behavior. But like the HVAC system in Brazil, networks “don't fix themselves, sir.” Put some effort into security, free software and free peer-to-peer systems, and the network will pay you back many times over.
Peace and Linux.
—Don Marti, Technical Editor