Linux Journal Contents #204, April 2011
Drupal 7: the Webchick behind the Wheel
by Katherine Druckman
An interview with Angela Byron, co-maintainer of Drupal 7.
Drush: Drupal for People Who Hate Mice
by James Walker
It makes life easier for those of us who spend some of our working hours hacking away at the command prompt.
by Avi Deitcher
Surprisingly, the language of the browser is powerful, easy to use and well-suited to high-performance server-side programming—when done right.
Zotonic: the Erlang Content Management System
by Michael Connors
It's easy to use and open source.
Find Yourself with the Google Maps API
by Mike Diehl
DIY Google Maps.
Rich Internet Apps That Just Work—Writing for the User
by Avi Deitcher
With the right tools, you can build rich apps that work with, not against, the user.
Quick User Interfaces with Qt
by Johan Thelin
Qt Quick is transforming user interfaces.
by Dan Sawyer
Organize your e-book collection before it gets (even more) out of control.
Reuven M. Lerner's At the Forge
Dave Taylor's Work the Shell
Mad Libs Generator, Part II
Mick Bauer's Paranoid Penguin
Interview with a Ninja, Part II
Kyle Rankin's Hack and /
Your Own Personal Server: DNS
Doc Searls' EOF
Hacking with Humor
D-Link's Boxee Box
by Shawn Powers
In Every Issue