An Introduction to awk

by Jose Nazario

The awk programming language often gets overlooked in favor of Perl, which is a more capable language. Out in the real world, however, awk is found even more ubiquitously than Perl. It also has a gentler learning curve than Perl does, and awk fits almost everywhere in system monitoring scripts, where efficiency is key. This brief tutorial is designed to help you get started with awk programming.

The Basics

The awk language is a small, C-style language designed for the processing of regularly formatted text. This usually includes database dumps and system log files. It's built around regular expressions and pattern handling, much like Perl is. In fact, Perl is considered to be a grandchild of the awk language.

awk's funny name comes from the names of its original authors, Alfred V. Aho, Brian W. Kernighan and Peter J. Weinberger. Most of you probably recognize the Kernighan name; he is one of the fathers of the C programming language and a major force in the UNIX world.

Using awk in a One Liner

I began using awk to print specific fields in output. This worked surprisingly well for one-liners, but performance went through the floor when I stretched the same approach into large scripts that took minutes to complete. Here, however, is an example of my early awk code:

ls -l /tmp/foobar | awk '{print $1"\t"$9}'

This code takes some input, such as this:

-rw-rw-rw-   1 root     root            1 Jul 14  1997 tmpmsg

and generates output like this:

-rw-rw-rw-      tmpmsg

As shown, the code prints only the first and ninth fields from the original input--the permissions and the filename. So you can see why awk is so popular for one-line data extraction purposes. Now, let's move on to a full-fledged awk program.
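Before we do, here is one more one-liner in the same spirit. A classic pattern is summing a column: accumulate in the main loop, print in an END block. The listing fed in here is hypothetical example data, not from the article:

```shell
# Sum the size column (field 5) of an ls -l style listing.
printf '%s\n' \
  '-rw-r--r-- 1 root root 120 Jul 14 1997 a' \
  '-rw-r--r-- 1 root root 380 Jul 14 1997 b' \
  | awk '{ total += $5 } END { print total }'
# -> 500
```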

An awk Program Structure

One of my favorite things about awk is its amazing readability, especially as compared to Perl or Python. Every awk program has three parts: a BEGIN block, which is executed once before any input is read; a main loop, which is executed for every line of input; and an END block, which is executed after all of the input is read. It's quite intuitive, something I often say about awk.
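The three parts can be sketched in a few lines. This is a generic skeleton with made-up input, just to show when each block runs:

```shell
printf 'one\ntwo\nthree\n' | awk '
BEGIN { print "start" }          # runs once, before any input is read
      { count++ }                # runs once for every line of input
END   { print count " lines" }   # runs once, after all input is read
'
# -> start
# -> 3 lines
```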

Here is a simple awk program that highlights some of the language's features. See if you can pick out what is happening before we dissect the code:

#!/usr/bin/awk -f
# check the sulog for failures..
# copyright 2001 (c) jose nazario
# works for Solaris, IRIX and HPUX 10.20
BEGIN {
  failed=0
  print "--- checking sulog"
}
{
  if ($4 == "-") {
    failed++
    print "failed su:\t"$6"\tat\t"$2"\t"$3
  }
}
END {
  print "---------------------------------------"
  printf("\ttotal number of records:\t%d\n", NR)
  printf("\ttotal number of failed su's:\t%d\n", failed)
}

Have you figured it out yet? Would it help to know the format of a typical line in the input file--sulog, from, say, IRIX? Here's a typical pair of lines:

        SU 01/30 13:15 - ttyq1 jose-root
        SU 01/30 13:15 + ttyq1 jose-root

Now read the script again and see if you can figure it out. The BEGIN block sets everything up, printing out a header and initializing our one variable--in this case, failed--to zero. The main loop then reads each line of input--the sulog file, a log of su attempts--and compares field four against the minus sign. If they match, it means the attempt failed, so we increment the counter by one and note which attempt failed and when. At the end, final tallies are presented that show the total number of input lines as the number of records--NR, an internal awk variable--and the number of failed su attempts, as we noted. Output looks like this:

failed su:      jose-root       at      01/30   13:15
        total number of records:        272
        total number of failed su's:    73

You also should be able to see how printf works here, which is almost exactly the way printf works in C. In short, awk is a rather intuitive language.
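A quick sketch of that printf behavior, using made-up values rather than real sulog counts--the format string takes field width, alignment and precision just as in C:

```shell
# %-10s left-justifies in 10 columns; %5d right-justifies in 5;
# %8.2f prints a float in 8 columns with 2 decimal places.
awk 'BEGIN { printf("%-10s %5d %8.2f\n", "records:", 272, 26.84) }'
# -> records:     272    26.84
```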

By default, the field separator is whitespace, but you can tweak that. I set it to be a colon in password files, for example. The following small script looks for users with an ID of 0 (root equivalent) and no passwords:

#!/usr/bin/awk -f
BEGIN { FS=":" }
{
  if ($3 == 0) print $1
  if ($2 == "") print $1
}
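Here is the same logic as a self-contained one-liner, run against two hypothetical /etc/passwd entries: a normal root entry, and a UID-0 account with an empty password field. Note that an account matching both checks is printed twice, once per test:

```shell
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/sh' \
  'toor::0:0:backdoor:/root:/bin/sh' \
  | awk -F: '{ if ($3 == 0) print $1
               if ($2 == "") print $1 }'
# -> root
# -> toor
# -> toor
```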

Other awk internals you should know and use are "RS", the record separator, which defaults to a newline (\n); "OFS", the output field separator, which defaults to a single space; and "ORS", the output record separator, which defaults to a newline. All of these can be set within the script, of course.
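Two small demonstrations of the output separators, on throwaway input. OFS is inserted between the arguments of print; ORS replaces the newline after each record:

```shell
# OFS joins the fields printed with commas between print arguments.
echo 'a b c' | awk 'BEGIN { OFS="-" } { print $1, $2, $3 }'
# -> a-b-c

# ORS ends each record with " | " instead of a newline
# (so the output ends with a trailing separator, too).
printf 'one\ntwo\n' | awk 'BEGIN { ORS=" | " } { print }'
```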

Regular Expressions

The awk language matches the normal regular expressions you have come to know and love, and in some cases it can match things grep cannot. For instance, I use the following awk search pattern to look for the presence of a likely exploit on Intel Linux systems:

#!/usr/bin/awk -f
{ if ($0 ~ /\x90/) print "exploit at line " NR }

You can't use grep to look for the raw hex value 0x90, but 0x90 is popular in Intel exploits. It's the NOP instruction, which is used as padding in shellcode portions.

You can use awk, though, to look for hex values by using \xdd, where dd is the hex number to look for (a common extension, supported by gawk). You also can look for bytes by their octal values with \ddd, where ddd is the octal number. Regular expressions based on text work too.
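The point can be shown with the octal form, which is the portable one: \220 in octal is the same byte as 0x90 in hex. The two-line input here is made up, with the NOP byte planted on the second line, and LC_ALL=C keeps awk matching raw bytes:

```shell
# \220 (octal) == 0x90 (hex), the Intel NOP byte.
printf 'clean line\nAA\220BB\n' \
  | LC_ALL=C awk '/\220/ { print "hit at line " NR }'
# -> hit at line 2
```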

Random awk Bits

Random numbers in awk are readily generated, but there is an interesting caveat. The rand() function does exactly what you would expect it to--it returns a random number, in this case, between 0 and 1. You can scale it, of course, to get larger values. Here's some example code to show you how, as well as an interesting bit of behavior:

#!/usr/bin/awk -f
BEGIN { print rand(); exit }

Run that a couple of times, and you soon see a problem: the random numbers are hardly random--they repeat every time you run the code!

What's the problem? Well, we didn't seed the random number generator. Normally, we're used to our random number generator pulling entropy from a good source, such as, in Linux, /dev/random. However, awk doesn't do this. To really get random numbers, we should seed our random number generator. The improved code below does this:

#!/usr/bin/awk -f
BEGIN { srand(); print rand(); exit }

The seeding of the random number generator in the BEGIN block is what does the trick. The function srand() can take an argument, and in the absence of one, the current date and time is used to seed the generator. Note that the same seed always produces the same "random" sequence.
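That last point is easy to verify from the shell: two separate awk processes given the same explicit seed produce the same first value from rand():

```shell
# Same seed, same sequence -- so an explicit seed is only useful
# when you *want* reproducible "random" numbers.
a=$(awk 'BEGIN { srand(7); print rand() }')
b=$(awk 'BEGIN { srand(7); print rand() }')
[ "$a" = "$b" ] && echo "same seed, same first value"
# -> same seed, same first value
```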


This isn't the most detailed introduction to awk you can find, but I hope it has made clearer how to use awk in a programming setting. Myself, I'm quite happy programming in awk, and I've still got a lot more to learn. And we haven't even touched on arrays, user-defined functions or other complex language features. Suffice it to say, awk is hardly Perl's little brother.


Kernighan's home page contains a list of good awk books as well as the source for the "one true awk", aka nawk. The page also contains a host of other interesting links and information from Kernighan.

The standard awk implementation, nawk (for "new awk", as opposed to old awk, sometimes found as "oawk" for compatibility), is based on the POSIX awk definitions. It contains a few functions that were introduced by two other awk implementations, gawk and mawk. I usually keep this one around as nawk and use it to test the portability of my awk scripts. nawk usually is found on commercial UNIX machines, where I often don't have gawk installed.

The GNU project's awk, gawk, also is based on the POSIX awk standard, but it adds a significant number of useful features as well. These include command-line features such as "lint" checking and reversion to strict POSIX mode. My favorite features in gawk are line continuation, using \, and the extended regular expressions. The gawk documentation has a complete discussion of GNU extensions to the awk language. This is also the standard awk version found on Linux and BSD systems.

sed & awk is perhaps the most popular book available on these two small languages, and it is highly regarded. It contains, among other things, a discussion of popular awk implementations--gawk, nawk, mawk--a great selection of functions and the usual O'Reilly readability. The awk Home Page lists several other books on the awk programming language, but this one remains my favorite.

Copyright (c) 2001, Jose Nazario. Originally published in Linux Gazette issue 67. Copyright (c) 2001, Specialized Systems Consultants, Inc.
