Work the Shell - Special Variables I: the Basics

Dave begins a new series of columns on shell variable notation.

There I was, trying to come up with a topic for this column, when I did what I usually do when stumped: I sent a question out to my Twitter followers. This time, I got a great answer, from John Minnihan: “How about how special vars inside a script, for example, #!/bin/bash script="${0##*/}" current=`dirname "$0"` cd $current; make?”

That's a good topic, so let's dig into it, starting with the basics this month, shall we?

The Easy Special Variables

The basic notation of variables in the shell is $varname, but I bet you've already used a few special notations without really thinking about it. For example, want to know how many positional parameters (aka starting arguments) you received when the script was invoked? Using $# gives you the value:

echo "you gave me $# parameters"

Want to get a specific positional parameter from the starting command line? That's done with other special variables: $1, $2, $3 and so on. These are slightly odd cases, because the shift command moves them all down by one, which makes it easy to parse and trim command flags.

Try this snippet to see what I'm talking about:

echo "arg1 = $1" ; shift ; echo "now arg1 = $1"
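The mention of parsing and trimming command flags deserves a slightly bigger sketch. Here's a minimal, hypothetical example (the -v and -o flags are invented for illustration, and set -- simulates a command line) that peels leading flags off the argument list with shift:

```shell
set -- -v -o report.txt file1 file2   # simulate a command line
verbose=0
outfile=""
while [ $# -gt 0 ] ; do
  case "$1" in
    -v) verbose=1 ; shift ;;          # flag with no argument
    -o) outfile="$2" ; shift 2 ;;     # flag that consumes a value
    *)  break ;;                      # first non-flag: stop parsing
  esac
done
remaining=$#
echo "verbose=$verbose outfile=$outfile remaining=$remaining"
```

After the loop, $1 and friends refer only to the real arguments, which is exactly why shift is so handy.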

The variable $0 is a special one in this sequence. It's the name of the script or program invoked. This can be quite helpful, because it means you can add multiple commands to your Linux shell with a single shared script base.

Let's say that you want to add “happy” and “sad” as two new command-line options, but you want to do it within a single script. Easy! Write the script, save it as “happy”, create a symbolic link so that “sad” points to “happy”, and put this in the script itself:

if [ "$0" = "happy" ] ; then
  echo "I am so darn happy too, hurray!"
else
  echo "Sorry you're sad. Why not take a walk?"
fi

See how that works? It turns out that there's a nuance to this usage, however, because you often get the full path in the $0 variable, so most people use $(basename $0) instead of just utilizing the $0 directly.
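To make that concrete, here's a sketch of the same happy/sad test rewritten with basename (the messages are from the script above; what basename yields still depends on how the script is invoked):

```shell
# $0 usually includes the invocation path, so strip it first.
cmdname="$(basename "$0")"   # e.g. "./happy" becomes "happy"
if [ "$cmdname" = "happy" ] ; then
  echo "I am so darn happy too, hurray!"
else
  echo "Sorry you're sad. Why not take a walk?"
fi
```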

Checking Your Status

Another special variable that you might have encountered is the status variable, $?. In a script, this contains the return value of the most recently executed (external) command.

This is where you need to read man pages so you know what to expect on success and failure, but as an example, consider the test command. According to the man page, “if [the expression] evaluates to true, it returns a zero (true) exit status; otherwise it returns 1 (false). If there is no expression, test also returns 1 (false).”
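In other words, test sets its exit status rather than printing anything, and $? lets you read that status afterward. A quick sketch:

```shell
# test sets an exit status instead of printing a result
test 1 -eq 1
echo "1 -eq 1 returns $?"        # 0 means true/success
test 1 -eq 3
status=$?
echo "1 -eq 3 returns $status"   # 1 means false/failure
```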

This means you could do this:

test 1 -eq 3
if [ $? -eq 0 ] ; then

Quick, now, would we be within this conditional statement or not? That's where it's tricky, because zero = true (success) and nonzero = false (failure), which is opposite to how we naturally think of conditional tests (well, how I think of them, at least). Here, 1 is not equal to 3, so test returns 1, the comparison of $? against zero fails, and we never enter the conditional.

Now, using test like this is a sort of daft example, but what if you wanted to create a subdirectory and then test to see if it was successful? That's a perfect use for $?, actually:

mkdir "$newdir"
if [ $? -ne 0 ] ; then
   echo "We failed to make the directory $newdir"
fi

It turns out that you also can streamline this sort of thing by having the “if” directly evaluate the return code:

if mkdir "$newdir" ; then

That's a better coding style, although it can be confusing if you are used to having conditional expressions be value tests, not actually commands that do something.
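The negated form works too: prefixing the command with ! inside the if lets you keep the error handling first, the way the $? version reads. A small sketch, using a hypothetical directory name built from the process ID:

```shell
newdir="/tmp/demo.$$"          # hypothetical directory name
if ! mkdir "$newdir" ; then
  echo "We failed to make the directory $newdir"
  exit 1
fi
echo "created $newdir"
rmdir "$newdir"                # clean up after the demonstration
```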

A Few More Useful Special Variables

A special variable that I use with great frequency for helping create temporary file names is $$, which expands to the current process ID in the system. For example:

$ echo $$
3243
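For instance, here's a tiny sketch of the temp-file idiom ("myscript" is a hypothetical name; mktemp is the more robust modern choice, but $$ illustrates the idea):

```shell
# $$ expands to this shell's process ID, handy for unique temp names
tmpfile="/tmp/myscript.$$"   # "myscript" is a hypothetical name
echo "working data" > "$tmpfile"
cat "$tmpfile"
rm -f "$tmpfile"             # always clean up your temp files
```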

If you're doing a lot with subshells or spawning subcommands, another useful variable is $!, which is the process ID of the most recently spawned background command. I've never used this in any of my shell scripts, but you might find a situation where it's helpful.
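Here's a minimal sketch of how $! pairs with the wait command to track a background job:

```shell
sleep 1 &                    # spawn a background command
bgpid=$!                     # $! is its process ID
echo "background job is $bgpid"
wait "$bgpid"                # block until it finishes
echo "job $bgpid finished with status $?"
```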

The last example I'll talk about here is most useful when you want to hand starting parameters to subshells. The two options are $* and $@, and it's so convoluted to explain the difference that it's easier just to demonstrate.

Let's start with a tiny script that simply reports how many parameters it's given:

#!/bin/sh
echo "I was given $# parameters"
exit 0

I'll call that subshellcount.sh and utilize it like this:

#!/bin/sh
echo "you gave me $# variables and the first is $1"
echo "unprotected parameters:"
./subshellcount.sh $1 $2 $3 $4
echo "or, more succinctly:"
./subshellcount.sh $*
echo "but when we put \$* in quotes:"
./subshellcount.sh "$*"
echo "by comparison, same thing with \$@:"
./subshellcount.sh "$@"

Watch what happens when I invoke it with three parameters, one of which has a space embedded:

$ sh test.sh I love "Linux Journal"
you gave me 3 variables and the first is I
unprotected parameters:
I was given 4 parameters
or, more succinctly:
I was given 4 parameters
but when we put $* in quotes:
I was given 1 parameters
by comparison, same thing with $@:
I was given 3 parameters

Can you see the difference here? When we don't take steps to protect the space in the third positional parameter (by referencing $3 unquoted or by using $* or $@ without quotes), it splits into two parameters to the subshell, and we get a count of four.

Quoting by itself doesn't do the trick either, because of the difference between $@ and $*. With the latter, everything expands without “breaking out of” the quotes, so $* ends up being a single positional parameter to the subshell. Fortunately, $@ works exactly as we'd like, and the subshell gets three parameters, not one, not four.

It seems a bit trivial, but when you start working with filenames that have spaces in them, for example, you quickly will learn just how tricky it is to get all of this correct!
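To see why "$@" is the one you almost always want, here's a sketch that loops over the parameters the way a well-behaved script should (set -- simulates the same three-argument invocation as above):

```shell
# "$@" preserves each original parameter, spaces and all, so the
# loop sees exactly the arguments the caller passed.
set -- I love "Linux Journal"   # simulate the three parameters
count=0
for arg in "$@" ; do
  count=$((count + 1))
  echo "parameter $count: $arg"
done
```

With $* or an unquoted $@ in that loop, "Linux Journal" would split into two iterations.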

I'm going to stop here, and starting next month, we'll delve into the more obscure and complex shell variable notation. It's interesting stuff.

Dave Taylor is a 26-year veteran of UNIX, creator of The Elm Mail System, and most recently author of both the best-selling Wicked Cool Shell Scripts and Teach Yourself Unix in 24 Hours, among his 16 technical books. His main Web site is at www.intuitive.com, and he also offers up tech support at AskDaveTaylor.com. You also can follow Dave on Twitter through twitter.com/DaveTaylor.

