Shell Functions and Path Variables, Part 1
Few UNIX users give much consideration to their path variables. They are typically used in a “set and forget” fashion, and consequently, they often end up like straggly weeds, overgrown and unlovely. Take a look at this mess:
$ echo $PATH
/opt/kde/bin:/localbin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/home/stephen/scripts:/home/stephen/bin:/opt/CC/test/bin:/usr/sbin:/usr/bin/X11:/ora01/app/oracle/product/7.3.2/bin:/scripts:/opt/CC/bin:/bin:/usr/bin

Given this undifferentiated stream of characters, how long will it take you to:
List all the bin directories in PATH? (grep won't help—try it)
Swap the order of /bin and /usr/bin?
Remove that pesky /opt/CC/test/bin directory?
Get rid of the duplicate directories?
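One way to make these questions tractable, sketched here with standard tools, is simply to split the pathvar on colons:

```shell
# Split the pathvar on colons, one element per line -- now the
# questions above yield to ordinary text tools.
echo "$PATH" | tr ':' '\n'

# grep now works: list every bin directory on PATH.
echo "$PATH" | tr ':' '\n' | grep 'bin$'

# Duplicates become easy to spot (though sort discards the original order).
echo "$PATH" | tr ':' '\n' | sort | uniq -d
```

This is only a quick sketch; the utilities described in this article package the idea up properly.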
A path variable is any shell or environment variable comprised of textual elements separated by colons. You are almost certainly familiar with the so-called search path, PATH, which your shell uses to find executable files, but there are other standard paths, such as MANPATH, which the man program uses to locate man pages, and LD_LIBRARY_PATH, which the dynamic loader can use to find shared libraries.
Path variables consist of textual elements separated by colons, and the (admittedly non-standard) term I use for these is “path element” or simply “pathel”. (You'll also see the term “path prefix” used, but not by me.) I'll also abbreviate “path variable” to “pathvar”.
All the utilities I describe here assume the bash shell (though there are Korn shell versions available as well), and they have been tested using bash 1.14.7 and bash 2.03.4.
I assume you know how to set and access variables in a shell and have used (or seen) shell control constructs (if, for and while). I also assume you are not necessarily clear about shell variables versus environment variables, or shell scripts versus shell functions, and specifically, that you have no idea what eval does.
Here's a brief description of some path-variable utilities:
addpath: adds a pathel to a pathvar only if the pathel cannot be found on the pathvar (e.g., addpath -p NEWP /abc/).
delpath: removes pathels from a pathvar (e.g., delpath -p NEWP /abc/).
edpath: allows editing, and thus arbitrary modifications, of a pathvar.
listpath: echoes the pathels of a pathvar on separate lines; the output can then be filtered using grep, for example.
uniqpath: removes duplicate pathels from a pathvar.
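The code for these utilities isn't shown here, but to give a flavour of them, a minimal listpath might look like this (a sketch under my own assumptions, not the author's actual implementation; it takes the pathvar name as its first argument and defaults to PATH):

```shell
# listpath: echo the pathels of the named pathvar, one per line.
# Minimal sketch -- the real utility also handles -h and bad arguments.
listpath ()
{
    typeset pathvar=${1:-PATH}          # name of the pathvar, default PATH
    eval "echo \"\$$pathvar\"" | tr ':' '\n'
}
```

With this in place, "listpath MANPATH | grep local" does what the grep-based question above asks for.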
A good shell utility should provide some guidance to the user, and accordingly, each pathvar utility has a -h option, which writes usage information to standard output. Furthermore, a good utility should not be fragile; it should check its arguments for sanity, as far as possible. This is doubly important when a variable as critical as PATH is being altered. The path utilities share common option-handling code to simplify this sanity checking.
Traditionally, shell scripts have handled their options in a somewhat ad hoc manner. The option-handling code in a script will often comprise a hand-crafted loop around getopts (which I'll describe later); this loop sets variables and issues error statements, as appropriate to the requested options. While this approach is common, it requires duplication of code in every script that is written. This is tedious and error-prone.
Option-handling code generally performs a small set of functions (i.e., setting variables and issuing messages), so we can usefully write a shell function to standardize this behaviour. Take a look at Listing 1, a shell script called testoptions.
To run this script, we could make the file executable by the owner (chmod u+x testoptions) and type its name. If you do that, you should see something like this:
$ testoptions
./testoptions: options: command not found
This occurs because line 3 of the script refers to options, a shell function, which we haven't told the shell about yet. When we do so, we can run testoptions again, this time with some arguments:
$ testoptions -a -b fred -d
opt_a=1
opt_b=fred
opt_c=
options_missing_arg=
options_unknown_option=d
options_num_args_left=0

Now, the shell function options has looked at its first argument (“ab:c”), a coded specification of the names and types of the expected options. It uses this to interpret its remaining arguments, which in this case are all those originally passed to testoptions (i.e., -a, -b, fred and -d, because $@ is converted into a quoted list of all arguments to the script).
The argument specification (ab:c) is in the form expected by the getopts command and means “we take three options, -a, -b and -c, and -b requires an argument”. The fact that -b requires an argument is indicated by the colon.
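As a quick illustration of how that specification drives getopts, here is a bare parsing loop, a stripped-down fragment rather than the full options function:

```shell
# A bare getopts loop driven by the spec "ab:c".
parse ()
{
    typeset OPTIND=1 argname        # reset scanning for repeated calls
    while getopts "ab:c" argname; do
        case $argname in
            a) echo "got -a" ;;
            b) echo "got -b with argument $OPTARG" ;;
            c) echo "got -c" ;;
            ?) echo "bad option" ;;
        esac
    done
}
parse -a -b fred
```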
Each time the options function sees one of the allowed options in its argument list, it creates a new shell variable indicating the argument was present. So, for example, when the second argument (-a) is examined, options creates a variable called opt_a and sets its value to 1. Similarly, if an illegal option is passed, options creates a variable called options_unknown_option and sets its value to the name of the illegal option. As you can see from the output shown above, if an option requires an argument, the supplied argument is used as the value of the new variable. (Perl scripters will recognize this behaviour from the Getopts modules, which were, in fact, the inspiration for options.)
The fundamental problem is that options can't know in advance which variable names it will have to create, so they can't simply be hard coded (at least not efficiently). Listing 2 is the code for options.

The first couple of lines inform the shell that what follows is a shell function. A shell function is a named collection of commands that, once defined, can be run by typing its name (i.e., typing options in a shell runs the commands in the function body) and that runs in the context of the calling shell. This last part is important: when a shell runs a function, its commands take effect in that shell, just like commands typed at the prompt of an interactive shell. Compare this to commands executed in a shell script, where a new shell is created to run the commands. For example, if you execute cd in a shell function, the current directory of your shell is altered; in a shell script, the cd takes effect only in the new shell created to run the script, and when the script has finished, you'll be in the same directory as before you ran it. Shell functions also have numbered arguments ($1, $2, etc.), just like scripts.
The next part of options performs some initializations. The first six executable lines declare variables. Since code in a function executes as if it was run in the calling shell, if we create a variable in a function, it will exist in the shell at the end of the function. If we don't want this behavior, we can make a variable local to a function by preceding it with the reserved word typeset. (In bash, you can use local instead, but typeset works in ksh, too.) Thus, the variable opts will not exist at the end of options, but options_shift_val will, for example.
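The difference between local and shell-wide variables is easy to demonstrate (the names here are illustrative, not from Listing 2):

```shell
# typeset makes a variable local to the function; without it,
# the variable is created in the calling shell and survives.
scope_demo ()
{
    typeset inner="function only"   # gone when scope_demo returns
    outer="still here"              # created in the calling shell
}
scope_demo
echo "${outer}"          # still here
echo "${inner:-unset}"   # unset
```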
After checking the number of arguments, we set opts to the value of the argument spec, with an additional leading colon. So, with our testoptions values, opts would contain :ab:c. The leading colon prevents getopts from issuing spurious error messages. The first argument is then shifted away by the shift command. This means the argument that was $2 becomes $1, $3 becomes $2, and so on. This is a common trick in shell scripting, used when an argument is no longer needed.
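The effect of shift is easy to watch in a throwaway function (necessarily a function, since shift renumbers its caller's own positional parameters):

```shell
# shift in action: $1 is discarded and the rest slide left.
demo ()
{
    echo "before: \$1=$1 \$2=$2"   # $1 holds the spec, $2 the first option
    shift                          # discard $1; $2 becomes $1, and so on
    echo "after:  \$1=$1 \$2=$2"
}
demo "ab:c" -a -b
```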
The meat of the function begins with the line OPTERR=0. This code section does the work of examining the options and creating the variables. We delegate the option examination to getopts and create variables using eval.
The shell command getopts examines the positional parameters ($1, $2, etc.). When you call it the first time, it examines $1; the next time $2, and so on. When called in a while loop as in Listing 2, it will look at all positional parameters and return false when finished, thus terminating the loop. Remember, options expects its first argument to be the getopts specification, and the remaining arguments to be positional parameters. However, we shifted the getopts specification away, so $1, $2 and so on are indeed the positional parameters when getopts examines them. The $opts argument to getopts tells it the legal set of arguments, as described above.
If getopts sees a legal option, it stores it without the leading - in the argname variable, and if that option takes an argument, it stores that argument in a variable called OPTARG. If an incorrect option is seen, getopts stores an error code in argname and the name of the incorrect option in OPTARG. There are two sorts of incorrect options:
An option in which the name is not listed in the getopts specification. In this case, getopts stores ? in argname.
An option requiring an argument, but where the argument is missing; getopts stores : in argname if this occurs.
bash getopts has a bug: it stores ? in both these cases. Listing 2 contains a workaround. ksh does not have this problem.
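In ksh, and in later bash releases where this bug is fixed, silent mode (the leading colon) lets you distinguish the two cases; a minimal sketch:

```shell
# Silent mode: a leading colon in the spec suppresses getopts'
# own error messages and encodes the failure kind in argname.
check ()
{
    typeset OPTIND=1 argname
    while getopts ":ab:c" argname; do
        case $argname in
            \?) echo "unknown option: -$OPTARG" ;;
            :)  echo "missing argument for -$OPTARG" ;;
            *)  echo "got -$argname" ;;
        esac
    done
}
check -x    # unknown option: -x
check -b    # missing argument for -b
```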
If neither of these problems occurs, we have a valid option and can go on to create a variable. This is done in the final if statement in the loop. The then branch handles the case when the option has an argument and the else branch handles the case when there is no argument; both use eval. Let's look at one of these:
eval opt_$argname=$OPTARG # set option name
Suppose we're processing the -b option with an argument of fred: argname will contain b and OPTARG will contain fred. We want the shell to run this code:
opt_b=fred

Our first attempt is likely to be:

opt_$argname=$OPTARG

reasoning that the shell will replace $argname by b and $OPTARG by fred, and we're done. Good try, but it doesn't work. If you're sitting in front of a bash shell prompt now, try this:

$ argname=b
$ OPTARG=fred
$ opt_$argname=$OPTARG

You should see this message: bash: opt_b=fred: command not found.
Which command is not found? The shell did indeed expand the variables. The problem is that, although the shell has generated the string opt_b=fred, it considers its work on the line finished and tries to execute a program called “opt_b=fred”. Although the line after processing looks like a variable assignment, the shell won't notice, because it processes each line only once. To fix this, we need to instruct the shell to expand the variables and then start over, processing the result as if it were a freshly typed line. That is precisely what the eval at the start of the line accomplishes. When the shell processes the expanded line a second time, it recognizes opt_b=fred as a variable assignment and creates the variable.
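You can watch the two-pass behaviour directly at a bash prompt (the variable names here mirror those in options):

```shell
argname=b
OPTARG=fred

# One pass: the shell expands the variables, then tries to run the
# resulting string as a command name -- and fails:
# opt_$argname=$OPTARG        # bash: opt_b=fred: command not found

# Two passes: eval expands first, then processes the result as a
# fresh command line, so the assignment is recognized.
eval opt_$argname=$OPTARG
echo "$opt_b"                  # fred
```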
Remember, these variables are being created in a function and will continue to exist after the function has terminated. Thus, we can call the options function from a script (or indeed another function) and use the variables it has created in any way we like.
In the interest of space, I have not described all the steps the shell performs when it expands a command line; for the gory details, consult Learning the Bash Shell from O'Reilly & Associates.
To ensure the shell knows about a function, there is one option in bash and two in ksh. In bash, you must “source” the file containing the function in one of your start-up scripts such as .bash_profile (or equivalently, include the code directly in the start-up script). In ksh, too, you can source a file in a start-up script, or alternatively, put your function files in a directory (perhaps called $HOME/functions) and add this directory to the FPATH environment variable. When you type the name of a command unknown to ksh, it looks in the directories in FPATH to see if there is a function file with that name. If so, it reads the file, remembers the function definition and executes it.
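In .bash_profile, for example, the sourcing approach might look like this ($HOME/functions is an assumed location; use whatever directory suits you):

```shell
# bash: source every function file explicitly at login.
for f in "$HOME"/functions/*; do
    [ -f "$f" ] && . "$f"      # skip if the directory is empty
done

# ksh alternative: autoload on first use instead.
# FPATH=$HOME/functions
# export FPATH
```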