Joey Bernard's “Statistics with R” was a very welcome and
useful piece [LJ, March 2011]. As an instructor, I noticed a
very interesting on-line GNU-licensed statistics “textbook” based on R,
IPSUR. Although available in “frozen” PDF
format, it is also available “live”
as a LyX+Sweave file. I was never really able to get LyX and Sweave to work (I
use plain-vanilla LyX all the time). There are instructions on-line, but I
could not get them to work for me. Maybe it's too specialized for a column
(is it?), but maybe you have suggestions.
I have a request for Dave Taylor: do a series on system admin scripts. I
have been doing basic bash stuff for years, but have several scripts that
are quite a bit more complex—specifically, wrapper functions for things
like database queries that can be included in any script, or grabbing the
output of stderr, getting the exit codes from commands and acting on them.
I personally find these a challenge and would benefit from some expert
experience. Keep up the good work.
Dave Taylor replies: Thanks for your note, George. It's always great to get reader mail (as long as it's not complaining that I don't handle spaces in filenames properly).
I'm not exactly sure what you're talking about here though. Can you give me a more specific example of what you're trying to accomplish?
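For readers with a similar question, one common pattern that may match the request is a wrapper function that captures a command's stderr and exit code. This is only a sketch (the function name and messages are illustrative, not from the column):

```shell
#!/bin/sh
# Sketch of a reusable wrapper: run any command, capture its stderr
# separately, and act on its exit code.
run_checked() {
    errfile=$(mktemp)          # temporary file to hold stderr
    "$@" 2>"$errfile"          # run the command, diverting stderr
    status=$?                  # save the command's exit code
    if [ "$status" -ne 0 ]; then
        echo "'$1' failed with status $status: $(cat "$errfile")" >&2
    fi
    rm -f "$errfile"
    return "$status"
}

run_checked true                              # succeeds quietly
run_checked ls /no/such/dir || echo "recovered from the failure"
```

A function like this can be kept in a file and pulled into any script with the shell's `.` (source) command.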
I just wanted to comment on the desktop manager article by Shawn Powers
[LJ, February 2011]. The memory usage Shawn cites from the screenshots is
not the actual amount used by the system and applications. The amount in
the article is the physical memory used. In Linux, unused resources are
considered wasted, so the kernel will cache as much memory as it can for
faster access. To get the amount of memory being used by the system, we
have to look at the used column for -/+ buffers/cache. And, the free column
on this same row is the amount available for applications.
Thanks for the tip. My main point in comparison is how much physical RAM was used. Because that is such a critical point for low-end systems, it's what I wanted to concentrate on. I took the snapshot immediately after the system booted, and even if memory was freed afterward, it still loaded up that much RAM at first, which would be a problem for low-end systems. You are correct that the kernel is amazing at managing memory, which is why I took my snapshot on a fresh boot.—Ed.
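The number the letter writer describes — total memory minus free memory, buffers and page cache — can also be computed directly from /proc/meminfo. A sketch (field names as they appear in the Linux kernel's meminfo output):

```shell
# Memory used by the system and applications, excluding the kernel's
# buffers and page cache (the "-/+ buffers/cache" used column), in kB:
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buffers=$2}
     /^Cached:/   {cached=$2}
     END {print total - free - buffers - cached " kB used by applications"}' /proc/meminfo
```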
I would like to second Kwan Lowe's comments in the March 2011 Letters regarding Joey Bernard's new column. I love it. Being a computer scientist by trade, and having worked in engineering data processing/presentation at Boeing labs and wind tunnel for more than 20 years, I love working with and learning about data analysis tools and processes.
If LJ would give Joey a couple more pages to work with, maybe some
articles on CFD and finite elements might be fun. Also, generating fractals
and some basic 3-D rendering (POV-Ray) are always fun to play with.
Joey Bernard replies: I know that a lot of CFD people use the Ansys products, but I'd like to keep these pieces focused on open-source software. I have a piece on getting started with OpenFOAM on my list, so keep on the lookout for that. As for longer pieces, that depends on how much space is available in any given issue. I'll let Shawn and the rest of the editorial team figure out what the best balance is for all the readers.
In the February 2011 Letters section, David N. Lombard suggests checking RAID status periodically by creating a cron job with a command similar to this:
# echo check > /sys/block/md0/md/sync_action
I think that this is good advice, but I'd suggest that users should check
whether their distribution already ships with a similar solution.
For example, Ubuntu Karmic does have a cron job in /etc/cron.d/mdadm that
calls a script located at /usr/share/mdadm/checkarray every week that does
exactly what David suggested. It also has other convenient features,
such as checking whether the MD device is idle before issuing a check.
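For reference, a cron entry along the lines of the Debian/Ubuntu job the letter describes might look like this (the schedule and flags here are illustrative; check your own /etc/cron.d/mdadm for the exact version your distribution ships):

```shell
# /etc/cron.d/mdadm (sketch): check all arrays weekly, only when idle
57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && /usr/share/mdadm/checkarray --cron --all --idle --quiet
```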
This is to thank Daniel Bartholomew for the article “Finding Your Phone, the Linux Way” in the November 2010 issue. It was very useful.
Regarding triggering the “lost-phone actions” on the phone, I think an important method was missed. One can send an SMS to the phone (when one suspects it's lost) to trigger these actions.
The advantages of this over the suggested methods are that you won't need a Web site and the phone won't need to poll one to trigger the actions. The phone can respond by replying to the trigger SMS (with GPS coordinates and so on), giving you flexibility compared to hard-coding the recipient. One also may specify an e-mail address in the SMS, so that the phone can send GPS coordinates and/or photos to that address.
Look at SMSCON (talk.maemo.org/showthread.php?t=60729), although
I have not tried this out myself.
Just a quick note to pass along how much I'm enjoying Kyle Rankin's article in the March 2011 issue of Linux Journal regarding setting up a home server. The first paragraph was all too ironic, in that I've been preaching that same thing to people for some time now—the “cloud” sounds nice, and Canonical and others are putting a lot of effort in that direction, but it may not be as universally accepted as they might think or hope.
I bought Kyle's Ubuntu Server book a while back and set up a server and network in our home, and it works great. It's just a Samba file server for Ubuntu and Mac machines, but it stores all of our family pictures, videos and so on. Thanks to Kyle for providing such clear guidance in that book on how to set it up!
I'm just an airline-pilot hacker (not in the computer industry), educated long ago as an aero engineer, so all of this is self-taught. When I first gave Linux a try, I heard some bad reviews of Linux Journal and ended up spending lots of money on two of the British periodicals, even though they tend toward the tabloid at times. The feedback I got then was that Linux Journal was “just for heavy business-server people”, and that an individual wouldn't find much use in getting it. Your direction is clearly to improve that image, and I do enjoy what Linux Journal has been including lately.
So thanks. You've been a great help already. I'll sign off by asking
Kyle to keep up the series he's starting. It's as useful for the little people
as for more Linux-competent types, and I encourage the editors to
keep broadening the scope of the magazine as well. I do enjoy getting
it every month.
Keep up the great work!
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
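The .log example above can be strung together in a single line (the search string "ERROR" is just a placeholder):

```shell
# Find every .log file under /home and print the lines (with
# filenames) that contain a particular entry:
find /home -name '*.log' -exec grep -H 'ERROR' {} +
```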
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide