Shell Functions and Path Variables, Part 1

Shell utilities can simplify the maintenance of your path variables.

Few UNIX users give much consideration to their path variables. They are typically used in a “set and forget” fashion, and consequently, they often end up like straggly weeds, overgrown and unlovely. Take a look at this mess:

$ echo $PATH
/opt/kde/bin:/localbin:/usr/local/bin:/bin:/usr/bin:
/usr/X11R6/bin:/home/stephen/scripts:/home/stephen/bin:
/opt/CC/test/bin:/usr/sbin:/usr/bin/X11:
/ora01/app/oracle/product/7.3.2/bin:/scripts:/opt/CC/bin:/bin:/usr/bin

Given this undifferentiated stream of characters, how long will it take you to:
  • List all the bin directories in PATH? (grep won't help—try it)

  • Swap the order of /bin and /usr/bin?

  • Remove that pesky /opt/CC/test/bin directory?

  • Get rid of the duplicate directories?

Despite their apparent simplicity, path variables can be tricky beasts to manipulate. It's all too easy to end up with duplicate entries in a path, and even the act of checking the contents is not straightforward. Adding a new directory to PATH is easy enough, but even then you may end up with a duplicate, because your eyes don't parse a colon-separated list efficiently.

A path variable is any shell or environment variable comprised of textual elements separated by colons. You are almost certainly familiar with the so-called search path, PATH, which your shell uses to find executable files, but there are other standard paths, such as MANPATH, which the man program uses to locate man pages, and LD_LIBRARY_PATH, which the dynamic loader can use to find shared libraries.

Path variables consist of textual elements separated by colons, and the (admittedly non-standard) term I use for these is “path element” or simply “pathel”. (You'll also see the term “path prefix” used, but not by me.) I'll also abbreviate “path variable” to “pathvar”.

All the utilities I describe here assume the bash shell (though there are Korn shell versions available as well), and they have been tested using bash 1.14.7 and bash 2.03.4.

I assume you know how to set and access variables in a shell and have used (or seen) shell control constructs (if, for and while). I also assume you are not necessarily clear about shell variables versus environment variables, or shell scripts versus shell functions, and specifically, that you have no idea what eval does.

Utilities

Here's a brief description of some path-variable utilities:

  • addpath: adds a pathel to a pathvar only if the pathel cannot be found on the pathvar (e.g., addpath -p NEWP /abc/).

  • delpath: removes pathels from a pathvar (e.g., delpath -p NEWP /abc/).

  • edpath: allows editing, and thus arbitrary modifications, of a pathvar.

  • listpath: echoes the pathels of a pathvar on separate lines; the output can then be filtered using grep, for example.

  • uniqpath: removes duplicate pathels from a pathvar.
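To give a flavour of how simple the core of such a utility can be, here is a minimal sketch of a listpath-like function. It is only a sketch (the name listpath_sketch is mine, and the real listpath adds a -h option and argument sanity checks), but it shows the essential trick: split the pathvar on colons.

```shell
# listpath_sketch: echo each pathel of the named pathvar on its own line.
# A minimal sketch only; the real listpath adds -h and sanity checking.
listpath_sketch() {
    typeset pathvar=${1:-PATH}       # default to PATH if no name is given
    eval "echo \"\$$pathvar\"" | tr ':' '\n'
}
```

With this in place, listpath_sketch | grep bin answers the first question in the list at the start of the article.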

A good shell utility should provide some guidance to the user, and accordingly, each pathvar utility has a -h option, which writes usage information to standard output. Furthermore, a good utility should not be fragile; it should check its arguments for sanity, as far as possible. This is doubly important when an important variable such as PATH is being altered. The path utilities share common option-handling code to simplify this sanity checking.

Taming Options and Arguments

Traditionally, shell scripts have handled their options in a somewhat ad hoc manner. The option-handling code in a script will often comprise a hand-crafted loop around getopts (which I'll describe later); this loop sets variables and issues error statements, as appropriate to the requested options. While this approach is common, it requires duplication of code in every script that is written. This is tedious and error-prone.

Listing 1

Option-handling code generally performs a small set of functions (i.e., setting variables and issuing messages), so we can usefully write a shell function to standardize this behaviour. Take a look at Listing 1, a shell script called testoptions.

To run this script, we could make the file executable by the owner (chmod u+x testoptions) and type its name. If you do that, you should see something like this:

$ testoptions
./testoptions: options: command not found

This occurs because line 3 of the script refers to options, a shell function, which we haven't told the shell about yet. When we do so, we can run testoptions again, this time with some arguments:

$ testoptions -a -b fred -d
opt_a=1
opt_b=fred
opt_c=
options_missing_arg=
options_unknown_option=d
options_num_args_left=0

Now, the shell function options has looked at its first argument (“ab:c”), a coded specification of the names and types of the expected options. It uses this to interpret its remaining arguments, which in this case are all those originally passed to testoptions (i.e., -a, -b, fred and -d, because $@ is expanded into a quoted list of all arguments to the script).

The argument specification (ab:c) is in the form expected by the getopts command and means “we take three options, -a, -b and -c, and -b requires an argument”. The fact that -b requires an argument is indicated by the colon.

Each time the options function sees one of the allowed options in its argument list, it creates a new shell variable indicating the argument was present. So, for example, when the second argument (-a) is examined, options creates a variable called opt_a and sets its value to 1. Similarly, if an illegal option is passed, options creates a variable called options_unknown_option and sets its value to the name of the illegal option. As you can see from the output shown above, if an option requires an argument, the supplied argument is used as the value of the new variable. (Perl scripters will recognize this behaviour from the Getopts modules, which were, in fact, the inspiration for options.)

Listing 2

The fundamental problem is that options can't know in advance which variable names it will have to create, so they can't simply be hard-coded (at least not efficiently). Listing 2 is the code for options. The first couple of lines inform the shell that what follows is a shell function.

A shell function is a collection of commands in a file that can be run by typing its name (i.e., typing options in a shell runs the commands in that file) and that runs in the context of the calling shell. This last part is important: when a shell runs a function, its commands take effect in that shell, just like commands typed at the prompt of an interactive shell. Compare this to commands executed in a shell script, where a new shell is created to run the commands. For example, if you execute the command cd in a shell function, the current directory of your shell is altered; in a shell script, the cd takes effect only in the new shell created to run the script, and when the script has finished, you'll be in the same directory as before you ran it. Shell functions also have numbered arguments ($1, $2, etc.), just like scripts.
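The function-versus-script distinction is easy to demonstrate. A subshell behaves like a script here, so the following (the function name goto_tmp is illustrative) shows both cases:

```shell
# Functions run in the calling shell; subshells (like scripts) do not.
goto_tmp() { cd /tmp; }

( cd / )          # a subshell, like a script: the cd is lost on exit
pwd               # still the directory you started in

goto_tmp
pwd               # now /tmp: the function's cd affected this shell
```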

The next part of options performs some initializations. The first six executable lines declare variables. Since code in a function executes in the calling shell, a variable created in a function will still exist in the shell after the function returns. If we don't want this behaviour, we can make a variable local to the function by preceding it with the reserved word typeset. (In bash, you can use local instead, but typeset works in ksh, too.) Thus, the variable opts will not exist at the end of options, but options_shift_val, for example, will.
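The typeset behaviour can be seen in a few lines (the names makevars, inner and outer are illustrative):

```shell
makevars() {
    typeset inner="local"   # typeset: gone when the function returns
    outer="global"          # no typeset: persists in the calling shell
}
makevars
echo "${inner-unset}"       # prints: unset
echo "$outer"               # prints: global
```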

After checking the number of arguments, we set opts to the value of the argument spec, with an additional leading colon. So, with our testoptions values, opts would contain :ab:c. The leading colon prevents getopts from issuing spurious error messages. The first argument is then shifted away by the shift command. This means the argument that was $2 becomes $1, $3 becomes $2, and so on. This is a common trick in shell scripting, used when an argument is no longer needed.
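The effect of shift is easy to see with set --, which assigns the positional parameters directly:

```shell
set -- ab:c -a -b fred    # simulate the function's positional parameters
echo "$1"                 # prints: ab:c
shift                     # discard the option spec
echo "$1"                 # now prints: -a
```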

The meat of the function begins with the line OPTERR=0. This code section does the work of examining the options and creating the variables. We delegate the option examination to getopts and create variables using eval.

The shell command getopts examines the positional parameters ($1, $2, etc.). The first time you call it, it examines $1; the next time, $2, and so on. When called in a while loop as in Listing 2, it will look at all the positional parameters and return false when finished, thus terminating the loop. Remember, options expects its first argument to be the getopts specification and the remaining arguments to be the options to parse. However, we shifted the specification away, so $1, $2 and so on are indeed the right arguments when getopts examines them. The $opts argument to getopts tells it the legal set of options, as described above.

If getopts sees a legal option, it stores it without the leading - in the argname variable, and if that option takes an argument, it stores that argument in a variable called OPTARG. If an incorrect option is seen, getopts stores an error code in argname and the name of the incorrect option in OPTARG. There are two sorts of incorrect options:

  • An option whose name is not listed in the getopts specification. In this case, getopts stores ? in argname.

  • An option requiring an argument, but where the argument is missing; getopts stores : in argname if this occurs.

bash getopts has a bug: it stores ? in both these cases. Listing 2 contains a workaround. ksh does not have this problem. (Later versions of bash fixed this and store : for a missing argument, as ksh does.)
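A bare getopts loop shows the three outcomes directly. The function name parse is mine; the spec is the article's "ab:c" with the leading colon added. (This assumes a getopts without the old bash bug, so the : case works as described.)

```shell
# A bare getopts loop over the spec ":ab:c", showing the three outcomes:
# legal option, unknown option, and option with a missing argument.
parse() {
    typeset argname
    OPTIND=1 OPTERR=0
    while getopts :ab:c argname "$@"; do
        case $argname in
            \?) echo "unknown option: $OPTARG" ;;
            :)  echo "missing argument for: $OPTARG" ;;
            *)  echo "option $argname, argument: ${OPTARG:-none}" ;;
        esac
    done
}

parse -a -b fred -x   # legal flag, legal option with arg, unknown option
parse -b              # -b with its required argument missing
```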

If neither of these problems occurs, we have a valid option and can go on to create a variable. This is done in the final if statement in the loop. The then branch handles the case when the option has an argument and the else branch handles the case when there is no argument; both use eval. Let's look at one of these:

eval opt_$argname=$OPTARG  # set option name

Suppose we're processing the -b option with an argument of fred: argname will contain b and OPTARG will contain fred. We want the shell to run this code:

opt_b=fred

Our first attempt is likely to be:

opt_$argname=$OPTARG

reasoning that the shell will replace $argname by b, $OPTARG by fred, and we're done. Good try, but it doesn't work. If you're sitting in front of a bash shell prompt now, try this:

$ argname=b
$ OPTARG=fred
$ opt_$argname=$OPTARG

You should see this message: bash: opt_b=fred: command not found.

Which command is not found? The shell did indeed expand the variables. The problem is that, although the shell has generated the string opt_b=fred, it considers its work on the line finished and tries to execute a program called “opt_b=fred”. Although the line after processing looks like a shell command, the shell won't notice this, because it processes each line only once. To fix this problem, we need to instruct the shell to expand the variables in the line and then start over, processing the result as if it were a freshly typed command. That is precisely what the eval at the start of the line accomplishes. On this second pass, the shell recognizes opt_b=fred as a variable assignment and creates the variable.
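You can see eval's two-pass behaviour in isolation (the variable names are those from the example above):

```shell
argname=b
OPTARG=fred
eval opt_$argname=$OPTARG   # pass 1 yields opt_b=fred; pass 2 assigns it
echo "$opt_b"               # prints: fred
```

One caveat: with $OPTARG unquoted, a value containing spaces breaks the assignment; eval opt_$argname=\"\$OPTARG\" is the safe form.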

Remember, these variables are being created in a function and will continue to exist after the function has terminated. Thus, we can call the options function from a script (or indeed another function) and use the variables it has created in any way we like.
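The magazine's listings are not reproduced here, so the following is a sketch of an options-like function assembled from the description above. It is not the author's actual Listing 2 (which, among other differences, contains the bash getopts workaround mentioned earlier), but it produces the testoptions output shown at the start of this section.

```shell
# Sketch of an options-like function, reconstructed from the text;
# not the author's actual Listing 2.
options() {
    typeset opts argname
    options_missing_arg=
    options_unknown_option=
    options_num_args_left=0
    options_shift_val=0

    if [ $# -lt 1 ]; then
        echo "usage: options optstring [args ...]" >&2
        return 1
    fi

    opts=:$1    # leading colon suppresses getopts' own error messages
    shift       # the remaining arguments are the ones to parse

    OPTERR=0
    OPTIND=1
    while getopts "$opts" argname "$@"; do
        case $argname in
            \?) options_unknown_option=$OPTARG ;;
            :)  options_missing_arg=$OPTARG ;;
            *)  if [ -n "${OPTARG-}" ]; then
                    eval opt_$argname=\"\$OPTARG\"   # option with argument
                else
                    eval opt_$argname=1              # simple flag
                fi ;;
        esac
    done
    options_shift_val=$((OPTIND - 1))
    options_num_args_left=$(($# - options_shift_val))
}
```

Called as options ab:c -a -b fred -d, this leaves opt_a=1, opt_b=fred and options_unknown_option=d in the calling shell, matching the testoptions run shown earlier.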

In the interest of space, I have not described all the steps the shell performs when it expands a command line; for the gory details, consult Learning the Bash Shell from O'Reilly & Associates.

To ensure the shell knows about a function, there is one option in bash and two in ksh. In bash, you must “source” the file containing the function in one of your start-up scripts such as .bash_profile (or equivalently, include the code directly in the start-up script). In ksh, too, you can source a file in a start-up script, or alternatively, put your function files in a directory (perhaps called $HOME/functions) and add this directory to the FPATH environment variable. When you type the name of a command unknown to ksh, it looks in the directories in FPATH to see if there is a function file with that name. If so, it reads the file, remembers the function definition and executes it.
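Concretely, the two approaches look like this (the $HOME/functions directory and the file names are illustrative):

```shell
# bash: in ~/.bash_profile, source each function file directly.
. $HOME/functions/options
. $HOME/functions/listpath

# ksh: in ~/.profile, point FPATH at the function directory instead;
# the shell then loads a function the first time its name is typed.
FPATH=$HOME/functions
export FPATH
```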

Stephen Collyer (stephen@twocats.demon.co.uk) is a freelance software developer working in the UK. His interests include scripting languages, distributed and thread-based systems, and finding out why upgrading to Red Hat 6.0 broke pppd on his Linux box. In his spare time, he campaigns against the British Government's IR35 proposals. Occasionally, he finds the time to talk to his wife and two remarkably attractive and highly intelligent children.

______________________

Comments


User input containing a variable name can also be solved this way

Mohamed Mustafa Z

I had a problem when a user enters input: I wanted to store the value of a named variable into another variable on the fly, and if the input was not a variable name, to keep it as-is. For example:

echo "Enter oracle path:"
read VAR

Here, if the user enters $ORACLE_HOME, I wanted VAR to hold the value of $ORACLE_HOME, and if he enters /oracle/binxyz, I wanted it kept as-is. That was solved by:

eval VAR=\"$VAR\"

Thanks for the posting :)

testoptions 'problem'

Joe Davison

Nice example.

I was nasty, though: I tried testoptions -a -b "a b" -c

It took a while to figure out what the error message meant -- the eval sees a form like:

opt_b=a b

and tries to execute a command 'b' with $opt_b == 'a'...

The fix was a couple of escaped quotes in the eval and in the final echo:

eval opt_$argname=\"$OPTARG\"  # set option name

and

echo opt_b=\"$opt_b\"

Thanks.
