Joey Bernard's “Statistics with R” was a very welcome and
useful piece [LJ, March 2011]. As an instructor, I noticed a
very interesting on-line GNU-licensed statistics “textbook” based on R,
IPSUR. Although available in “frozen” PDF
format, it is also available “live”
as a Lyx+Sweave file. I was never really able to get Lyx and Sweave to work (I
use plain-vanilla Lyx all the time). There are instructions on-line, but I
could not get them to work for me. Maybe it's too specialized for a column
(is it?), but maybe you have suggestions.
I have a request for Dave Taylor: do a series on system admin scripts. I
have been doing basic bash stuff for years, but have several scripts that
are quite a bit more complex—specifically, wrapper functions for things
like database queries that can be included in any script, capturing the
output of stderr, and getting the exit codes from commands and acting on them.
I personally find these a challenge and would benefit from some expert
experience. Keep up the good work.
Dave Taylor replies: Thanks for your note, George. It's always great to get reader mail (as long as it's not complaining that I don't handle spaces in filenames properly).
I'm not exactly sure what you're talking about here though. Can you give me a more specific example of what you're trying to accomplish?
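For readers wondering about the techniques George mentions, here is one minimal sketch (the function name and messages are our own, not from any particular script of George's) of a sourceable wrapper that captures a command's stderr and acts on its exit code:

```shell
#!/bin/sh
# Hypothetical wrapper: run a command, capture its stderr to a temp file,
# and report a failure along with whatever the command wrote to stderr.
# Meant to be sourced (or pasted) into other scripts.
run_logged() {
    errfile=$(mktemp)
    "$@" 2>"$errfile"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "command '$1' failed (exit $status):" >&2
        cat "$errfile" >&2
    fi
    rm -f "$errfile"
    return "$status"
}

# Usage: the first call succeeds quietly; the second fails loudly.
run_logged true
run_logged ls /no/such/directory
```

The temp file keeps stderr separate from stdout, so the wrapper can pass the command's normal output through untouched while still inspecting its error stream.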
I just wanted to comment on the desktop manager article by Shawn Powers
[LJ, February 2011]. The memory usage Shawn cites from the screenshots is
not the actual amount used by the system and applications. The amount in
the article is the physical memory used. In Linux, unused resources are
considered wasted, so the kernel will cache as much memory as it can for
faster access. To get the amount of memory being used by the system, we
have to look at the used column for -/+ buffers/cache. And, the free column
on this same row is the amount available for applications.
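As a quick illustration (ours, not from the letter), the same figure can be read straight from /proc/meminfo: adding the kernel's buffers and page cache back to MemFree gives roughly the "free" column of the -/+ buffers/cache row:

```shell
#!/bin/sh
# Sketch: estimate memory actually available to applications by adding
# kernel buffers and page cache back to MemFree (all values are in kB).
awk '/^MemFree:/ {free=$2}
     /^Buffers:/ {buf=$2}
     /^Cached:/  {cache=$2}
     END {printf "available to applications: %d kB\n", free+buf+cache}' \
    /proc/meminfo
```

(Newer kernels report a MemAvailable line that makes a similar estimate for you.)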
Thanks for the tip. My main point in comparison is how much physical RAM was used. Because that is such a critical point for low-end systems, it's what I wanted to concentrate on. I took the snapshot immediately after the system booted, and even if memory was freed afterward, it still loaded up that much RAM at first, which would be a problem for low-end systems. You are correct that the kernel is amazing at managing memory, which is why I took my snapshot on a fresh boot.—Ed.
I would like to second Kwan Lowe's comments in the March 2011 Letters regarding Joey Bernard's new column. I love it. Being a computer scientist by trade, and having worked in engineering data processing/presentation at Boeing labs and wind tunnel for more than 20 years, I love working with and learning about data analysis tools and processes.
If LJ would give Joey a couple more pages to work with, maybe some
articles on CFD and finite elements might be fun. Also, generating fractals
and doing some basic 3-D rendering (POV-Ray) are always fun to play with.
Joey Bernard replies: I know that a lot of CFD people use the Ansys products, but I'd like to keep these pieces focused on open-source software. I have a piece on getting started with OpenFOAM on my list, so keep on the lookout for that. As for longer pieces, that depends on how much space is available in any given issue. I'll let Shawn and the rest of the editorial team figure out what the best balance is for all the readers.
In the February 2011 Letters section, David N. Lombard suggests to check RAID status periodically by making a cron job with a command similar to this:
# echo check > /sys/block/md0/md/sync_action
I think that this is good advice, but I'd suggest that users should check
whether their distribution already ships with a similar solution.
For example, Ubuntu Karmic does have a cron job in /etc/cron.d/mdadm that
calls a script located at /usr/share/mdadm/checkarray every week that does
exactly what David suggested. It also has other convenient features,
such as checking whether the MD device is idle before issuing a check.
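A minimal sketch of that idle test (the sysfs directory is parameterized here for illustration; a real array lives under /sys/block/mdX/md, and distribution scripts such as checkarray handle more cases):

```shell
#!/bin/sh
# Sketch: only request a RAID consistency check when the array is idle,
# so we don't interrupt a rebuild or resync already in progress.
# MD_DIR is the array's md sysfs directory; md0 is assumed by default.
MD_DIR=${MD_DIR:-/sys/block/md0/md}

if [ -r "$MD_DIR/sync_action" ] &&
   [ "$(cat "$MD_DIR/sync_action")" = "idle" ]; then
    echo check > "$MD_DIR/sync_action"
fi
```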
This is to thank Daniel Bartholomew for the article “Finding Your Phone, the Linux Way” in the November 2010 issue. It was very useful.
Regarding triggering the “lost-phone actions” on the phone, I think an important method was missed: one can send an SMS to the phone (when one feels it's lost) to trigger these actions.
The advantages for this compared to the suggested methods are that you won't need a Web site, and the phone won't need to poll it to trigger these actions. The phone can respond back by replying to the trigger SMS (with GPS coordinates and so on) giving you flexibility as compared to hard-coding the recipient. One also may specify an e-mail ID to respond to in the SMS, so that the phone can send GPS coordinates and/or photos in that e-mail ID.
Look at SMSCON (talk.maemo.org/showthread.php?t=60729), although
I have not tried this out myself.
Just a quick note to pass along how much I'm enjoying Kyle Rankin's article in the March 2011 issue of Linux Journal regarding setting up a home server. The first paragraph was ironic, in that I've been preaching that same thing to people for some time now—the “cloud” sounds nice, and Canonical and others are putting a lot of effort in that direction, but it may not be as universally accepted as they might think or hope.
I bought Kyle's Ubuntu Server book a while back and set up a server and network in our home, and it works great. It's just a Samba file server for Ubuntu and Mac machines, but it stores all of our family pictures, videos and so on. Thanks to Kyle for providing such clear guidance in that book on how to set it up!
I'm just an airline pilot hacker (not in the computer industry), educated long ago as an aero engineer, so all of this is self-taught. When I first gave Linux a try, I heard some bad reviews of Linux Journal and ended up spending lots of money on two of the British periodicals, even though they tend toward the tabloid at times. The feedback I got then was that Linux Journal was “just for heavy business server people”, and that an individual wouldn't find much use in it. Your direction is clearly to improve that image, and I do enjoy what Linux Journal has included lately.
So thanks. You've been a great help already. I'll sign off by asking
Kyle to keep up the series he's starting. It's as useful for the little people
as it is for more Linux-competent types, and I encourage the editors to
keep broadening the scope of the magazine as well. I do enjoy getting
it every month.
Keep up the great work!