Sorting is a very basic computer operation. It is commonly used on text, to put lists in alphabetical order or to sort lists of numbers. Linux has a powerful filter for sorting called, logically enough, sort.
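For example, assuming a POSIX shell, you can pipe a few lines into sort and watch it order them:

```shell
# Pipe three unordered lines into sort; add -n to sort numerically instead.
printf 'banana\napple\ncherry\n' | sort
# apple
# banana
# cherry
```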
These two very simple filters have a surprising variety of uses. As their names suggest, head shows the head of a file, while tail shows the end. By default, head shows the first ten lines and tail the last ten, and tail in particular has a number of other useful options. (See the man pages.)
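Here is a quick sketch of both, using seq to generate a five-line stream:

```shell
# head keeps the beginning of the stream, tail keeps the end.
seq 1 5 | head -n 3    # prints 1, 2, 3
seq 1 5 | tail -n 2    # prints 4, 5
```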
Sometimes we need to do something a bit more complex than the relatively simple command lines of the above examples. For this, we need something I'll call a “programmable filter”, that is, a filter with a scripting language that allows us to specify complex operations.
sed, the stream editor, is a filter typically used to operate on lines of text as an alternative to using an interactive editor. (See “Take Command: Good Ol' sed” by Hans de Vreught, April 1999.) There are times when firing up vi or Emacs and making the change, whether manually or using vi/ex commands, is not appropriate. For example, what if you have to make the same changes to fifty files? What if you need to change a string, but are not sure exactly in which files it occurs?
As is common in the UNIX world, where tools are often duplicated in different ways, sed can do most things grep does. Here is a simple grep in sed:
sed -n '/Linus Torvalds/p' filename
All this does is read standard input and print only those lines containing the string “Linus Torvalds”.
The default with sed is to pass standard input to standard output unchanged. To make it do anything useful, you must give it instructions. In our first example, we searched for the string by enclosing it in forward slashes (//) and told sed to print any line containing that string with the p command. The -n option ensured that no other lines would be printed. Remember, the default behaviour is to print everything.
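You can try this without creating a file by piping text straight into sed:

```shell
# Only the line matching the pattern is printed; -n suppresses the rest.
printf 'one\nLinus Torvalds\ntwo\n' | sed -n '/Linus Torvalds/p'
# Linus Torvalds
```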
If this were all sed could do, we would be better off sticking with grep. However, sed's forte is as a stream editor, changing text files according to the rules you supply. Let's take a simple example.
sed 's/Torvuls/Torvalds/g' filename
This uses sed's “substitute” command (s) and applies it globally (the g flag). It looks for every occurrence of “Torvuls” and changes it to “Torvalds”. Without the g flag at the end, it would change only the first occurrence of “Torvuls” on each line.
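Again, a pipe makes this easy to test interactively:

```shell
# Every occurrence on the line is replaced, thanks to the trailing g.
printf 'Torvuls met Torvuls\n' | sed 's/Torvuls/Torvalds/g'
# Torvalds met Torvalds
```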
sed '/^From /,/^$/d' filename

This searches the standard input for the word “From” at the beginning of a line, followed by a space, and deletes all the lines from the line containing that pattern up to and including the first blank line, which is represented by ^$, i.e., a beginning of line (^) followed immediately by an end of line ($). In plain English, it strips out the header from a Usenet posting you have saved in a file.
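To see the range deletion at work, here is a toy posting built with printf (the header lines and body are made up for illustration):

```shell
# Everything from the "From " line through the first blank line is removed.
printf 'From alice\nSubject: hi\n\nbody line\n' | sed '/^From /,/^$/d'
# body line
```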
Double-spacing a text file takes just one command:
sed G filename > file.doublespaced
According to our manual page, all this does is “append the contents of the hold space to the current text buffer”. That is, for each line, we output the contents of a buffer sed uses to store text. Since we haven't put anything in there, it is empty. However, in sed, appending this buffer adds a new line, regardless of whether there is anything in the buffer. So, the effect is to add an extra new line to each line, thus double-spacing the output.
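You can confirm that the line count doubles:

```shell
# Two input lines become four output lines, each followed by a blank line.
printf 'a\nb\n' | sed G | wc -l
# 4
```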
Another very useful filter is the AWK programming language. (See “The AWK Tools” by Lou Iacona, May 1999.) Despite the weird name, it is an everyday tool.
To start with, let's look at yet another way to do a grep:
awk '/Linus Torvalds/'
Like grep and sed, AWK can search for text patterns. As with sed, each pattern can be associated with an action. If no action is supplied as in the above example, the default is to print each line where the pattern is matched. Alternatively, if no pattern is supplied, then the default action is to apply the action to every line. An AWK script for centering lines in a file is shown in Listing 1.
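The grep-like example above can be run the same way, and adding an action block shows the pattern/action pairing:

```shell
# No action given: the default is to print each matching line.
printf 'foo\nLinus Torvalds\nbar\n' | awk '/Linus Torvalds/'
# Linus Torvalds

# Pattern plus action: annotate each matching line.
printf 'foo\nbar\n' | awk '/bar/ { print "matched:", $0 }'
# matched: bar
```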
AWK's strength is in its ability to treat data as tabular, that is, arranged in rows and columns. Each input line is automatically split into fields. The default field separator is “white space”, i.e., blanks and tabs, but can be changed to any character you want. Many UNIX utilities produce this sort of tabular output. In our next section, we'll see how this tabular format can be sent as input to AWK using a shell construction we haven't seen yet.
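A quick illustration of field splitting, using invented sample data:

```shell
# $1 and $3 refer to the first and third whitespace-separated fields.
printf 'alice 30 admin\nbob 25 user\n' | awk '{ print $1, $3 }'
# alice admin
# bob user
```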