The Über-Skeleton Challenge

I received an interesting message from Angela Kahealani with a challenge: "Here's what I'd like to see in Work the Shell: a full-blown shell script template. It should comply with all standards applicable to CLI programs. It should handle logging, piped input, arguments, traps, tempfiles, configuration files and so on." That's an interesting idea, and it fits neatly into something I've been talking about in the last few columns too: the difference between writing something quick and streamlined and writing bulletproof scripts. So let's jump in!

Parsing Command-Line Arguments

The first step of any meaningful shell script is to parse its starting arguments. Bash has a built-in command for this, getopts, but it's rather tricky to work with. For example:

while getopts "ab:c" opt; do
  case $opt in
    a)  echo "-a was specified"  ;;
    b)  echo "arg given to b is $OPTARG" ;;
    c)  echo "-c was specified"  ;;
    \?) echo "Invalid option: -$OPTARG" >&2 ;;
  esac
done

This specifies that you're going to have three possible parameters: -a, -b and -c, and that -b takes an argument. Using getopts, they can occur in any order and can be combined where it makes sense. For example, -cab arg works fine, with arg being set as the argument to -b. -abc arg wouldn't work, however, because whatever appears immediately after the b is consumed as its argument (in this case, the letter c).

What's nice about working with getopts is that it does all the hard work for you—there's no need to worry about shifting twice after an optional parameter is read and so on. If you give it bad parameters, the "?" value will be triggered, with an error output.

Many programs continue to parse input after all the flags have been consumed, and you'll need code to handle that situation too. The key variable in this situation is OPTIND, which holds the index of the next argument to be processed, so shifting by one less than that discards everything getopts already handled. The solution looks like this:

shift $((OPTIND-1)) 

Now $1 is the first non-flag argument, $@ is the full set of arguments minus all the leading flags and so on.
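Putting those two pieces together, here's a minimal sketch of how the skeleton's argument parsing might look. The function name parse_args and the variable names are mine, not the column's:

```shell
#!/bin/sh
# Sketch: the getopts loop plus the OPTIND shift, wrapped in a function
# so it can be reused. Flag letters match the example above.
parse_args() {
  OPTIND=1                      # reset in case getopts ran earlier
  aflag=0 bval="" cflag=0
  while getopts "ab:c" opt; do
    case $opt in
      a)  aflag=1 ;;
      b)  bval="$OPTARG" ;;
      c)  cflag=1 ;;
      \?) echo "Usage: $0 [-a] [-b value] [-c] args..." >&2; return 1 ;;
    esac
  done
  shift $((OPTIND - 1))         # discard the flags getopts consumed
  rest="$*"                     # what's left: the non-flag arguments
}

parse_args -cab hello world
echo "a=$aflag b=$bval c=$cflag rest=$rest"
```

Run that and you'll see that -c, -a and -b were all picked out of the combined flag cluster, hello became -b's argument, and world survived the shift as the first non-flag argument.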

Logging Messages

Adding logging to a script actually is quite easy, as long as you won't have a lot of instances running simultaneously. You could use syslog, but let's start with the most basic approach:

if [ $logging ] ; then
  echo $(date): Status Message >> $logfile
fi

Or, better, here's a more succinct "date" format and the process ID:

echo $(date '+%F %T') $$: Status Message >> $logfile 

In the logfile itself, you'd see something like:

2012-08-07 15:07:56 7026: Status Message 

When there's a lot going on, that information will prove invaluable for debugging and analysis.
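Bundled up, the pattern might look like this; the helper name logmsg and the logfile path are illustrative, not part of the column:

```shell
# Sketch: a tiny helper that logs with the timestamp and process ID
# only when $logging is set. Names here are hypothetical.
logfile=/tmp/uberscript.$$.log
logging=1

logmsg() {
  if [ "$logging" ] ; then
    echo "$(date '+%F %T') $$: $*" >> "$logfile"
  fi
}

logmsg "Status Message"
```

Every status line in the script then becomes a one-word call, and turning logging off is a matter of leaving $logging unset.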

But what if you did want to use syslog and get the script messages in the standard system logfile? That can be done with the handy "logger" program, which has surprisingly few options, none of which you need.

Instead of the echo statement above, you would simply use:

logger "Status Message"

Check /var/log/system.log, and you can see what has been automatically added:

Aug  7 15:12:26 term01 taylor[7100]: status message 

In fact, if you want to be really streamlined, you could have something like this at the top of your über-script:

if [ $logging ] ; then
  logger="logger"
else
  logger="echo >/dev/null"
fi

Now every place where you'd potentially log information to the system log can invoke $logger, which will either run the standard /usr/bin/logger or the echo-to-/dev/null version, the latter causing the information to be discarded without being displayed or saved.
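One caveat worth noting (my aside, not the column's): a redirection stored in a variable isn't re-parsed when the variable is expanded, so the echo version would pass ">/dev/null" to echo as a literal argument rather than redirecting. The shell no-op ":" discards its arguments and makes the same trick work:

```shell
# Sketch: same conditional-logger idea, using ":" as the discard command.
logging=""          # off for this demo; set it to anything to enable logging

if [ "$logging" ] ; then
  logger="logger"   # real messages go to syslog via /usr/bin/logger
else
  logger=":"        # ":" ignores its arguments, so messages vanish quietly
fi

$logger "Status Message"
```

With logging off, that last line expands to `: "Status Message"`, which produces no output and touches no file.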

Trapping Signals

For most shell scripts, a quick ^C kills them and that's that. For other scripts, however, more complicated things are going on, and it's nice to be able to, for example, remove temp files rather than leave detritus all over the filesystem.

The key player in this instance is a program called trap, which takes two parameters, the function (or name of the function) to invoke and the signal or set of signals to associate with that function.

Here's an interesting example:

trap '{ echo "You pressed Ctrl-C" ; exit 1; }' INT 
echo "Counting, press Ctrl-C to exit"
for count in 1 2 3 4 5 6 7 8 9 10; do
    echo $count; sleep 5
done

If you run this, you'll find that the script will count from 1–10 with a 5-second delay between each digit. At any point, press Ctrl-C and the trap is triggered; the echo statement is invoked, and the script exits with a nonzero return code (exit 1).

Sometimes you want trap management active in certain parts of the script but not others, in which case you can disable it at any time by specifying a null command sequence:

trap '' INT 
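Here's a quick sketch of toggling a handler around a critical section; the messages are illustrative, and `trap - INT` restores the default behavior afterward:

```shell
# Sketch: ignore SIGINT during a sensitive operation, then restore it.
trap '' INT                      # Ctrl-C is now ignored
kill -s INT $$                   # would normally kill the script; now a no-op
echo "critical section survived"
trap - INT                       # back to the default SIGINT behavior
```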

Easy enough. The temp-file cleanup mentioned earlier would appear similar to:

trap '{ /bin/rm -f $tempfile $temp2; exit 1; }' SIGINT 

If you're wondering about the last parameter, it's the signal name.

There are a lot of signals defined in the Linux world, and they're all documented in the signal man page.

The most interesting signals are SIGINT, for program interruptions; SIGQUIT for a program quit request; SIGKILL, the famous "-9" signal that cannot be trapped or ignored and forces an immediate shutdown; SIGALRM, which can be used as a timer to constrain execution time; and SIGTERM, a software-generated termination request.

Let's take a closer look at SIGALRM, as it's darn useful for situations when you're concerned that a portion of your script could run forever.

To set the timer, use trap, as usual:

trap '{ echo ran out of time ; exit 1; }' SIGALRM 

Then elsewhere in the script, prior to actually invoking the section that you fear might take too long, add something like this:

( sleep $delay ; kill -s SIGALRM $$ ) &

That'll spawn a subshell that waits the specified number of seconds then sends the SIGALRM signal to the parent process (that's what the $$ specifies, recall).
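Here's one way the pieces might fit together, ahead of next month's fuller version; the delay value, the variable names and the stand-in sleep for the "slow section" are all illustrative:

```shell
#!/bin/sh
# Sketch: a background watchdog sends SIGALRM if the slow section
# overruns $delay seconds; if we finish first, we cancel the watchdog.
delay=3

trap 'echo "ran out of time" >&2; exit 1' ALRM   # ALRM == SIGALRM

( sleep "$delay" ; kill -s ALRM $$ ) &           # the watchdog subshell
watchdog=$!                                      # remember its PID

sleep 1           # stand-in for the possibly slow section of the script

kill "$watchdog" 2>/dev/null    # finished in time: cancel the watchdog
echo "finished before the alarm"
```

If the real work takes longer than $delay seconds, the watchdog wins, the ALRM trap fires, and the script exits nonzero instead of running forever.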

Next month, I'll continue this interesting project by showing an example of the SIGALRM code and adding some additional smarts to the script, including the ability to test and change its behavior based on whether it's receiving input from the terminal (command line) or from a redirected file/pipe.

Any other fancy tricks you'd like it to do? E-mail me via



Dave Taylor has been hacking shell scripts for over thirty years. Really. He's the author of the popular "Wicked Cool Shell Scripts" and can be found on Twitter as @DaveTaylor and more generally at





CyberNixon:

Here's my version of your logger bits...

if [ $logging ] ; then
  logger="echo >/dev/null"
fi

printDebug is either useful or not, depending on the DEBUG flag (using getopts, I call it "-d").

[ "$DEBUG" -eq 0 ] && printDebug() { echo > /dev/null; }
[ "$DEBUG" -eq 1 ] && printDebug() { echo -e "DEBUG: $1" >&2; }
cleanup() { rm -rf $tempDir; }
        printDebug "Defined function 'cleanup'" 
error() { echo -e "*** FAILED ***\n$1" 1>&2; cleanup; exit 1; }
        printDebug "Defined function 'error'" 

Using your suggestions, my trap statement could be

trap 'error "Process ran too long"' INT

Missing a semicolon?

Xed:

Looks like you got the first one, but there are two other instances where you're missing that annoying final semi-colon. Why does Bash need this?

trap '{ /bin/rm -f $tempfile $temp2; exit 1 }' SIGINT
trap '{ echo ran out of time ; exit 1 }' SIGALRM

I think these should have another semi-colon before the '}'. Without it, you'll get the weird "unexpected end of file" syntax error.

Good article though.

{} not needed

xrat:

IMHO, { and } are not required with "trap".

{} not needed

Xed:

I agree.
