Regarding Dave Taylor's Work the Shell column in the March 2009 issue of LJ: as you have been using UNIX nearly as long as I have, you probably already know this. The early UNIX spell program used a pipeline very similar to the one you develop in your column. Its purpose was to get a list of unique words from the document, sorted and single case. The rest of it used comm(1) to compare the document word list to a small system dictionary, /usr/lib/dict/words. I say small, as it had only about 25,000 entries.
One significant difference between the spell pipeline and yours was the handling of the tr(1) commands. Like your pipe, one tr did upper → lower translation. But, the second tr used options you did not mention in the article: -c and -s (complement and squeeze). Using today's syntax, it would look like this:
tr -cs '[:lower:]' '\n'
By complementing the lowercase class, this style ensures that no punctuation, white space, digits, control chars and so on are missed. All are translated into newlines, and where multiple sequential newlines result (that is, blank lines), they are squeezed out by the -s option.
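The whole spell-style front end can be sketched in a few lines. The sample document and miniature dictionary below are hypothetical stand-ins for a real file and /usr/lib/dict/words, just so the pipeline is safe to run anywhere:

```shell
# A miniature version of the classic spell(1) pipeline.
# Hypothetical sample document and tiny dictionary stand in for real files.
printf 'The cat saw the dgo.\nThe cat ran.\n' > /tmp/doc.txt
printf 'cat\ndog\nran\nsaw\nthe\n' > /tmp/dict.txt

tr '[:upper:]' '[:lower:]' < /tmp/doc.txt |
  tr -cs '[:lower:]' '\n' |   # -c: complement of lowercase; -s: squeeze repeats
  sort -u > /tmp/words.txt    # one unique lowercase word per line, no blanks

# Words in the document but not in the dictionary are likely misspellings.
comm -23 /tmp/words.txt /tmp/dict.txt   # → dgo
```

Because the second tr turns every run of non-letters into a single newline, no blank lines ever reach sort, and comm(1) gets the clean, sorted input it requires.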
I notice from your uniq -c output that blank lines are the second-most frequent item in your list.
Dave Taylor replies: Thanks for your note, Jon. You're right, using a bit more advanced call to tr would eliminate the blank lines, punctuation and so on. Thanks for the tip!
Regarding Mick Bauer's “Secure Squid Proxy, Part I” in the
April 2009 issue:
great article, Mick! However, I just wanted to draw some attention to the
information in the “Just How Intelligent Is a Web Proxy”
sidebar. It isn't necessarily true that “contents of HTTPS sessions are, in
practical terms, completely opaque to the Web proxy”. Some proxy software
now has the ability to initiate a man-in-the-middle attack, issuing fake
SSL certificates on the fly to enable the proxy to impersonate the remote
server. This enables the proxy to inspect the traffic going
between the client and server. Most browsers will detect this on-the-fly
cert (generating a warning to the user), as it usually doesn't come from a
valid Certificate Authority, but some companies are using tools, such as
Group Policy, to push down custom CA settings within their organizations to
configure the browsers to accept the on-the-fly certs as genuine (without
throwing a warning).
Mick Bauer replies: Sure enough, you caught me oversimplifying. Thanks for the clarification, Ray!
As I continue to search the forums for the issues I am having with a
Linux desktop install, it seems that the Linux desktop (for me) still ranks
as a hobby; Linux lacks a desktop that I can use in business. Linux has the
applications—that's not the issue, desktop stability is. I think
Linux on the desktop is up and coming, but there are still unresolved issues—look at the forums and the number of issues that go unresolved.
I am not a Windows zealot by any means and run a lot of Linux in the server
environment (where it rocks!), but I have yet to have a Linux desktop
install that just works out of the box. When you install Windows, you know
what you're getting, warts and all, but it does work. It seems the Linux
desktop lacks a level of stability and requires a level of experience that I
don't have time for. Windows does not seem to have these issues, which is why
I continue to say that Windows wins the desktop war. Some will ask “what distro
are you running, or what hardware platform are you installing on?”
Or, they will say
there is something I am doing wrong, and there probably is, but all I am
looking for is usability so I can make a living. I will continue to search
the forums and continue hoping that a stable Linux desktop OS emerges.
I sound like a broken record when I keep saying this, but again, one of the weaknesses Linux has as a desktop operating system is the diversity we have. Linux can (and does) mean so many different things. Are you using a popular desktop distribution? By that, I mean one tailored for desktop use as opposed to server and/or corporate use? Distributions like Ubuntu, Linux Mint, OpenSUSE and a few others have a better track record for desktop stability and usefulness. Since you mention that applications aren't the problem, it makes me scratch my head, because stability is usually where Linux ROCKS. Feel free to drop me an e-mail with more specifics, and I'll see what I can do to help (firstname.lastname@example.org).—Ed.
Regarding James Gray's response to Jim Leuba in the April 2009 Letters:
you may want to omit the political “Climate Change” nonsense. While I'm
sure you eat it up with the spoon Al Gore sold you in exchange for carbon
credits, the rest of us out here in the ether don't want to hear it. Stick
to geek-speak and keep your audience.
James Gray replies: While the decisions regarding how to respond to climate change—or not to respond to it—are political, the fact that climate change is occurring is not. The Theory of Global Climate Change is one supported by huge amounts of empirical data and enjoys near unanimous consensus among climatologists. You can read more about it in documents published by the Intergovernmental Panel on Climate Change (IPCC), which summarizes the findings of climatologists around the world (www.ipcc.ch).
Your reference to Al Gore suggests that I am a person who does not analyze evidence before making a decision. This I do not appreciate. Because you don't know me, you have no idea how I make my decisions. However, the scientific literature I have read on climate change, and not bombastic rhetoric from blowhard opinionators, is the basis for my writings on the topic.
Regarding your advice to “stick to the geek-speak”, I would argue that I am doing so. In most of the “green” pieces I write, I discuss solutions to the challenge of reducing energy consumption in the data center. Discussion of climate change is simply part of the rationale that I offer for taking on such challenges.
I usually enjoy Shawn Powers' articles, but I feel that his editorial was a bit misleading [“Free to a Good Home: Junk”, in the UpFront section of the May 2009 issue]. The idea of recycling old computers into the hands of those who need them is great: “Don't worry about running out of hardware, the local school district likely has parts piled in closets and would love for you to 'recycle'.” I work for my local school district and had the same thought. I quickly received a lot of flak from the people at the top and discovered it is easier for them to trash computers than to give them away. As a result, I started a 501(c)(3) at reglue.org (Recycled Electronics and Gnu/Linux Used for Education). A lot of things did and didn't happen. I quickly had a lot of CRTs; I didn't have nearly as many working mainboards with RAM to couple them with. I also quickly discovered that sometimes it's hard to give stuff away.
On a lighter note, I know someone who has been a lot more successful with refurbishing and giving away computers than I—Helios from the Helios Project (www.heliosinitiative.org/news.php). He's also the author of the blog about the teacher and the Knoppix CD.
He and others are working to create a nation-wide (originally, just in Austin) Linux Against Poverty drive and installfest on August 1, 2009 (geekaustin.org/2009/02/01/linux-against-poverty). Maybe you'll consider coordinating your own Linux Against Poverty installfest.
As a side note: no one is really interested in having a computer without
Internet access. Community-based mesh networks are a great idea. I think
those distributing computers might want to help others access
the Internet—the greatest cleft in the Digital Divide.
Unfortunately, it is easier to throw stuff away. That doesn't mean the school wouldn't love to give stuff away, just that it's difficult. Unless we break some ground and push for some new policies, those computers will continue to be thrown away instead of put to better use. If I misled you into thinking it would be easy, I do apologize. Also, as a big coincidence, I'm actually writing this response on Earth Day. It seems all the more important that we do make the effort, however difficult, to get the piles of usable computers into the hands of those who can use them. I'm speaking to myself as much as anyone, because in my school district, it's much easier to dispose of hardware than to give it away. That just has to change. Thanks for your comments. Hopefully, with people like us willing to do the grunt work, some real change can take place.—Ed.
I just wanted to send a quick note of thanks for the May 2009 issue. The
hardware articles were thoroughly enjoyable and just the right technical
level. I enjoyed the articles on the amateur rocket and underwater vehicle
in particular, and am eagerly awaiting the land-based RC Linux mobile to
complete the Earth/Air/Sea trilogy.
Me too! I'd go one further and anticipate the interstellar Linux probe, but that might be a while yet. Thanks for the kind comments. It's nice to hear we're bringing you material that is enjoyable and useful.—Ed.
Kyle Rankin had a great article in the March 2009 issue: “When Disaster Strikes: Hard Drive Crashes”. Good stuff there, and “Linux Hacks” has saved my backside more than once.
It has been my observation that most of my drive failures, particularly in laptops, involve heat. By cooling the drive, it is sometimes possible to pull an image—often an error-free image—before the unit fails entirely. If a drive won't run long enough to pull an image, sometimes it is possible to extract important files quickly.
I wrap them in anti-static plastic and freeze them for a couple of hours.
Once out of the freezer, I leave them wrapped to avoid condensation,
sandwich them with gel packs, connect them directly to a host machine or
via a USB-to-SATA/PATA adapter, and pull an image as quickly as possible.
Rinse and repeat as necessary.
I'm about 70% with this technique. Your mileage may vary.
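The imaging step described above might look like the following. The device name /dev/sdb is hypothetical, so this sketch simulates the failing drive with a small file to keep it safe to run; for real recovery work, GNU ddrescue, which retries and logs bad regions, is usually a better fit than plain dd:

```shell
# Sketch: pull an image from a failing drive as fast as possible.
# A small file stands in for the (hypothetical) /dev/sdb here.
SRC=/tmp/failing-drive.img
DST=/tmp/rescued.img
dd if=/dev/zero of=$SRC bs=1M count=4 2>/dev/null   # stand-in for the drive

# conv=noerror,sync keeps going past read errors, padding bad blocks with
# zeros, so one bad spot doesn't abort the whole image.
dd if=$SRC of=$DST bs=64K conv=noerror,sync 2>/dev/null

cmp -s $SRC $DST && echo "image matches"   # → image matches
```

Once an image exists, all further recovery attempts (fsck, file carving and so on) can run against the copy, leaving the ailing drive powered off.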
Great article and great magazine. Keep it up.
Kyle Rankin replies: Ah, the famous freezer trick! I admit I have used that one myself a few times, although I've always wondered how much of it was science and how much was voodoo. Either way, when one's data is at stake, I think most people are willing to try anything that works (just look out for condensation on the drives if you live in a humid environment).
Regarding Zach Banks' “Fun with the iRobot Create” in the May
2009 issue: the schematic in Figure 2 on page 59 appears to be
less than entirely correct. It certainly doesn't match the diagram in
Figure 3 on page 60. The power supply and ground nets appear to be
somewhat scrambled in the schematic.
Zach Banks replies: Thanks for pointing out the inaccuracy. Please refer to the breadboard diagram for correct wiring connections.
Regarding Mick Bauer's “Samba Security, Part III” in the
January 2009 issue:
I believe “group-execute bit” should read “owner-execute
bit” in the
“The default value 0744, shown in Figure 2,
translates to 'owner read+write+execute, group
read, other read'. However, because this share
is going to contain text files, there's no reason
for the group-execute bit to be set; 0644
(owner read+write, group read, other read) is
a better choice.”
Mick Bauer replies: Excellent catch, Steven—thanks!
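The permission arithmetic in the quoted passage is easy to verify at a shell prompt (GNU coreutils stat is assumed; the filename is arbitrary):

```shell
# Each octal digit is owner, group, other; within a digit, 4=read,
# 2=write, 1=execute. So 0744 = rwx/r--/r-- and 0644 = rw-/r--/r--.
touch /tmp/demo.txt
chmod 0744 /tmp/demo.txt
stat -c '%A' /tmp/demo.txt   # → -rwxr--r--
chmod 0644 /tmp/demo.txt
stat -c '%A' /tmp/demo.txt   # → -rw-r--r--
```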
However, if you are going to cover AJAX, please get the facts straight. In whatever number of articles you have published about AJAX, I don't think you've ever correctly explained where it came from. The May 2009 Point/Counterpoint column was especially glaring. Bill Childers says, “AJAX is a newish Web technology (Google Maps came out with it in 2005...)”. Mr Childers, and Linux Journal, should do a bit of research. Although the magazine makes no secret of its distaste for all things Microsoft, you really need to give credit where it's due. As painful as you may find this, AJAX is a Microsoft creation.
The XMLHTTP object (later named XMLHttpRequest) was first available in IE5 in the late 1990s. It was created by the developers of an Outlook Web client. It was later copied by the Mozilla team and other browser developers. The increased availability of high-speed Net access, plus the adoption of XMLHttpRequest by multiple browser vendors, opened the door for developers at companies such as Google to start using it in more mainstream Web applications. And, of course, it really hit the big time when Jesse James Garrett coined the AJAX acronym.
Rampant Microsoft bashing makes Linux Journal look a little silly, but
inaccurate technical details make you look plain bad. Whatever
justification there may be to dislike Microsoft the company, please do not
short-change the many good Microsoft developers who have done interesting
and valuable work.
Bill Childers replies: Thank you very much for the clarification on the origins of AJAX. You are absolutely correct about the history of the XMLHTTP object and how it became the XMLHttpRequest object. It was finally mostly “standardized” in 2006 when the W3C released the first draft spec for the object.
Although all that is historical fact, the practical matter is that Point/Counterpoint is meant to be a view into the differing opinions of two techies. It's not meant to be a heavy technical piece. The comment that Google Maps utilized AJAX in 2005 was meant to be taken as that's when AJAX “hit the big time”. It certainly wasn't the first use of it, but it was one of the first places that used it in a new and unique way. Millions of people marveled at the fact that the map could be dragged about with the mouse, and with that, Google Maps set a new standard for Web apps, in my opinion.
Please note that there was only one mention of Microsoft in the column, and it was with respect to feature creep. It wasn't “rampant Microsoft bashing”, nor was it targeted at its engineers. I'm sure the coders who work for Microsoft are hard-working, respectable folks who put their pants on one leg at a time, just like us Linux folks do. It sounds like you may have a cross to bear toward some past anti-Microsoft writings. May I suggest you withhold your rancor for a Point/Counterpoint where Kyle and I do bash Microsoft? I'm sure that will be on our list at some point, as Kyle and I have no sacred cows.
At any rate, I thank you for bringing some history on AJAX to our readers.
Whoever wrote the May 2009 “They Said It” column in UpFront saw fit to include quotes by Marx and Lenin. Why go half-baked? Allow me to submit a few more choice quotes for the next issue, in chronological order:
“Western intellectuals that profess admiration for Communism are suspect....They are objective traitors to their class and to their interests, and must be treated as such....After their final victory in Western Europe and America, revolutionary forces will eliminate all bourgeois traitors.”—Lenin
“Death solves all problems—no man, no problem.”—Joseph Stalin
“The only good bourgeois is a dead bourgeois.”—Pol Pot
I could go on, but I think you are starting to see my point.
Marx and Lenin are responsible for some of the most horrible dictatorships in history. And those are the people you chose to quote. Kudos. What elegance, what taste! Truly, you outdid yourself.
I am extremely disappointed in you and your journal. I have been reading LJ since 2000, and in all these years, this is the first time you display such an utter contempt for decency and history.
I expect you to apologize in the next issue, and I'd very much like not to
be subjected to repeat offenses.
Mitch Frazier replies: Francis, I'm responsible for those quotes. Sorry to have offended you, but I have to disagree with your apparent arguments that a bad guy can never have said anything useful and that all bad guys should just be erased from history.
I was watching the video on LinuxJournal.com about the various programs for screencasting, but I didn't see a reference to Wink. It is fairly decent.
I also was wondering if you could do a video tutorial on how to get the
sound from both the microphone and Rhythmbox to be recorded with some of
the other screencast programs?
I enjoy your mag; keep it up.
Ha! I thought Wink was Windows only. Either it added Linux support since I last looked at it, or (more likely) I just never realized it. Thanks for the tip! As far as diverting audio, I can look into the process, but I generally use an external hardware mixer, so I'd be guessing and poking too.—Ed.
Have a photo you'd like to share with LJ readers? Send your submission to email@example.com. If we run yours in the magazine, we'll send you a free T-shirt.