Sorting is a fundamental computer operation. It is most commonly applied to text, to put lists in alphabetical or numerical order. Linux has a powerful filter for sorting called, logically enough, sort.
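A quick taste, using printf to supply some illustrative input:

```shell
# The default sort order is lexical (character by character)
printf 'banana\napple\ncherry\n' | sort
# -> apple, banana, cherry

# The -n option sorts numerically instead, so 10 comes after 2
printf '10\n2\n1\n' | sort -n
# -> 1, 2, 10
```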
These two very simple filters have a surprising variety of uses. As their names suggest, head shows the beginning of a file, while tail shows the end. By default, each shows ten lines: the first ten for head, the last ten for tail. tail in particular has a number of other useful options. (See the man pages.)
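For example (the input lines and the log filename below are purely illustrative):

```shell
# Show the first three lines instead of the default ten
printf 'a\nb\nc\nd\ne\n' | head -n 3
# -> a, b, c

# Show the last two lines
printf 'a\nb\nc\nd\ne\n' | tail -n 2
# -> d, e

# One of tail's most useful options: follow a file as it grows
# tail -f /var/log/messages
```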
Sometimes we need to do something a bit more complex than the relatively simple command lines of the above examples. For this, we need something I'll call a “programmable filter”, that is, a filter with a scripting language that allows us to specify complex operations.
sed, the stream editor, is a filter typically used to operate on lines of text as an alternative to using an interactive editor. (See “Take Command: Good Ol' sed” by Hans de Vreught, April 1999.) There are times when firing up vi or Emacs and making the change, whether manually or using vi/ex commands, is not appropriate. For example, what if you have to make the same changes to fifty files? What if you need to change a string, but are not sure exactly in which files it occurs?
As is common in the UNIX world, where tools are often duplicated in different ways, sed can do most things grep does. Here is a simple grep in sed:
sed -n '/Linus Torvalds/p' filename
All this does is read standard input and print only those lines containing the string “Linus Torvalds”.
The default with sed is to pass standard input to standard output unchanged. To make it do anything useful, you must give it instructions. In our first example, we searched for the string by enclosing it in forward slashes (//) and told sed to print any line containing that string with the p command. The -n option ensured that no other lines would be printed. Remember, the default behaviour is to print everything.
If this were all sed could do, we would be better off sticking with grep. However, sed's forte is as a stream editor, changing text files according to the rules you supply. Let's take a simple example.
sed 's/Torvuls/Torvalds/g' filename
This uses sed's “substitute” command (s) and applies it globally (the g flag). It looks for every occurrence of “Torvuls” and changes it to “Torvalds”. Without the g flag at the end, it would change only the first occurrence of “Torvuls” on each line.
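The same substitution handles the fifty-file scenario mentioned earlier when combined with a shell loop. This sketch assumes GNU sed, whose -i option edits files in place; on other systems, redirect to a temporary file and move it back. The .txt filenames are illustrative:

```shell
# Replace "Torvuls" with "Torvalds" in every .txt file, in place
# (-i is a GNU sed extension)
for f in *.txt; do
    sed -i 's/Torvuls/Torvalds/g' "$f"
done
```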
sed '/^From /,/^$/d' filename

This searches the input for the word “From” at the beginning of a line, followed by a space, and deletes everything from the line containing that pattern up to and including the first blank line, which is represented by ^$, i.e., a beginning of line (^) followed immediately by an end of line ($). In plain English, it strips the header from a Usenet posting you have saved in a file.
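A quick demonstration with a fabricated two-line header:

```shell
# Two header lines, a blank separator, then the body.
# The range /^From /,/^$/ covers the header and the blank line,
# and the d command deletes it, leaving only the body.
printf 'From alice\nSubject: hi\n\nbody line\n' | sed '/^From /,/^$/d'
# -> body line
```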
Double-spacing a text file takes just one command:
sed G filename > file.doublespaced
According to our manual page, all this does is “append the contents of the hold space to the current text buffer”. That is, for each line, we output the contents of a buffer sed uses to store text. Since we haven't put anything in there, it is empty. However, in sed, appending this buffer adds a new line, regardless of whether there is anything in the buffer. So, the effect is to add an extra new line to each line, thus double-spacing the output.
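You can see the effect with a two-line input:

```shell
# G appends the (empty) hold space to each line,
# which adds one blank line after every input line
printf 'one\ntwo\n' | sed G
# -> one, blank line, two, blank line
```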
Another very useful filter is the AWK programming language. (See “The AWK Tools” by Lou Iacona, May 1999.) Despite the weird name, it is an everyday tool.
To start with, let's look at yet another way to do a grep:
awk '/Linus Torvalds/'
Like grep and sed, AWK can search for text patterns. As with sed, each pattern can be associated with an action. If no action is supplied, as in the example above, the default is to print each line the pattern matches. Alternatively, if no pattern is supplied, the action is applied to every line. An AWK script for centering lines in a file is shown in Listing 1.
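A pattern paired with an explicit action looks like this (input supplied inline for illustration):

```shell
# Print only the first field of lines matching the pattern
printf 'Linus Torvalds\nRichard Stallman\n' | awk '/Linus/ { print $1 }'
# -> Linus
```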
AWK's strength is in its ability to treat data as tabular, that is, arranged in rows and columns. Each input line is automatically split into fields. The default field separator is “white space”, i.e., blanks and tabs, but can be changed to any character you want. Many UNIX utilities produce this sort of tabular output. In our next section, we'll see how this tabular format can be sent as input to AWK using a shell construction we haven't seen yet.
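For instance, the field separator can be changed with the -F option. Here a colon-separated, /etc/passwd-style record is split into fields (the sample line is made up):

```shell
# Fields are numbered from 1; with -F: the separator is a colon.
# Print the login name (field 1) and the shell (field 7).
printf 'root:x:0:0:root:/root:/bin/bash\n' | awk -F: '{ print $1, $7 }'
# -> root /bin/bash
```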