Work the Shell - Looking More Closely at Letter and Word Usage
Time and again I have entreated you, dear readers, with my plea for “A letter! My column, nay, my kingdom for a reader letter!” And lo, the miracle occurred, the heavens parted, the angels sang and a letter arrived:
In addition to the letter and word frequency, how about looking at how frequently a letter appears as the first letter of a word? Just to make things more interesting, what is the frequency of two-letter combinations? For instance, if the first letter of a two-letter combination is a t, what is the most frequent second letter? Thanks for the article in Linux Journal. It was a good read and nice scripts.—Mike Short
Quando omni flunkus moritati.
First off, before I even read the letter, I was intrigued by the closing quote. Latin? Isn't that, like, a dead language? Turns out the quote's a good one though, especially for IT admins in big companies. It roughly translates to “when all else fails, play dead”, and it comes from the Red Green Show, a Canadian comedy. (Thanks Google.)
Now, on to the heart of the letter. Mike's referring to an earlier column where we looked at how to use shell scripts to ascertain letter and word usage, using three books as source material: Dracula, A History of the United States and Pride and Prejudice, all downloaded from Project Gutenberg.
In that series of columns, we ascertained that the ten most common letters in the English language are e, t, a, o, n, i, s, r, h and d. Are they the same if we constrain it to just the first letter of words? Let's find out.
Once we have a corpus of writing, the first step is to break it down into words, so that the input stream to the counting script is one word per line. It's done like so:
$ cat dracula.txt | tr ' ' '\n' | grep -v '[^[:alpha:]]' | grep -v "^$"
That'll turn Dracula into the world's narrowest book, with one word per line.
Now we simply can add to it to axe all but the first letter by appending cut -c1. The result looks like one of those streams of letters in The Matrix, but that's another story.
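Putting those steps together (a sketch, assuming the book's text lives in dracula.txt in the current directory), the pipeline so far looks like this:

```shell
# Break the text into one word per line, discard any "word" that
# contains punctuation or digits, discard blank lines, then keep
# only the first character of each word.
cat dracula.txt | tr ' ' '\n' |
  grep -v '[^[:alpha:]]' | grep -v '^$' |
  cut -c1
</imports>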
So, all that's left is to translate uppercase into lowercase, sort, and then use our friend uniq -c to tally up the results:
tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn | head
And, the resultant top ten are:
20648 t
15787 a
11110 i
10655 w
9906 h
9030 s
7618 o
5720 m
5411 b
4597 f
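For reference, here's the whole thing assembled end to end (again a sketch assuming dracula.txt in the current directory; the counts above came from the actual book):

```shell
# Full pipeline: one word per line, first letter only, lowercased,
# then tallied with uniq -c and sorted by frequency.
cat dracula.txt | tr ' ' '\n' |
  grep -v '[^[:alpha:]]' | grep -v '^$' |
  cut -c1 |
  tr '[:upper:]' '[:lower:]' |
  sort | uniq -c | sort -rn | head
```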
Quite different! Now, the question is, does it change based on the type of content? Let's do the same command, but this time, let's feed in all three of our books, not just Dracula (though with the rabid <cough cough> popularity of Twilight, maybe Linux Journal would do well to stick with a vampire theme for a few issues?):
34359 t
27053 a
18212 w
18119 h
17854 i
15746 s
13614 o
10076 b
9792 m
7712 f
It's not exactly the same. Isn't that interesting? I'm not sure what to make of it, but as you can see, a good grasp of shell script commands makes it easy to dig up this sort of fairly goofy information.
But, we're not quite done, because Mike also wondered about two-letter combinations. It's this sort of query that really shows just how helpful becoming savvy on the command line can be. To calculate that requires only one character to be changed in the command invoked above. Do you know what it is?
It's the cut command. Above, we're specifying that we want only the very first character of each line of input with cut -c1. If we want the first two, we simply can tweak that command flag as appropriate.
But, -c2 won't work, because that'll give us only the second letter of each word (and the most common second letter in the English language is o, followed by h, e, a and n).
Instead, we need to use a letter range, which looks like this: -c1-2. The result of that invocation is:
22100 th
10168 an
9138 to
7508 he
7100 of
5873 i<space>
5517 in
5332 ha
5157 be
4664 wh
There ya go, Mike. The most common two-letter combination in the English language is th, which actually makes some sense, with an a distant second.
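Spelled out in full, the two-letter variant (same dracula.txt assumption as before) differs from the earlier pipeline only in that one cut flag:

```shell
cat dracula.txt | tr ' ' '\n' |
  grep -v '[^[:alpha:]]' | grep -v '^$' |
  cut -c1-2 |
  tr '[:upper:]' '[:lower:]' |
  sort | uniq -c | sort -rn | head
```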
I hope it's trivially obvious how you could use this to calculate the most common three-letter combinations (it should be no surprise at all that the is the most common three-letter combo, followed by and).
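To make it explicit, the three-letter version (a sketch under the same file assumption) changes only the cut range once more:

```shell
cat dracula.txt | tr ' ' '\n' |
  grep -v '[^[:alpha:]]' | grep -v '^$' |
  cut -c1-3 |
  tr '[:upper:]' '[:lower:]' |
  sort | uniq -c | sort -rn | head
```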
I'll wrap up here, but again, I invite you to send me your letters and queries so we can explore various ways to use shell scripts.
Dave Taylor has been involved with UNIX since he first logged in to the on-line network in 1980. That means that, yes, he's coming up to the 30-year mark now. You can find him just about everywhere on-line, but start here: www.DaveTaylorOnline.com.