Bash Redirections Using Exec


If you've used the command line much at all, you know about I/O redirection for redirecting input to and/or output from a program. What you see less often, and may not be familiar with, is redirecting I/O inside a bash script. And I'm not talking about the redirections you use when your script executes another command; I'm talking about redirecting your script's own I/O once it has already started executing.

As an example, let's say you want to add a --log option to your script: if the user specifies it, all the output should go to a log file rather than to stdout. Of course, the user could simply redirect the output on the command line, but let's assume there's a reason that option doesn't appeal to you. To provide this feature in your script you can do:

#!/bin/bash

echo hello

# Parse command line options.
# Execute the following if --log is seen.
if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal, no logging.
    false
fi

echo goodbye
echo error >&2

The if statement uses test to see if file descriptor number one is connected to a terminal (1 being stdout). If it is, the exec command re-opens it for writing on the file named log. An exec command without a command name but with redirections executes in the context of the current shell; it's the means by which you can open and close files and duplicate file descriptors. If file descriptor number one is not on a terminal, we don't change anything.
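As a quick illustration of opening and closing descriptors with exec, the following sketch (the file name and descriptor number are just for illustration) opens file descriptor 3 on a file, writes through it, and closes it again:

```shell
#!/bin/bash

# Open file descriptor 3 for writing on a scratch file.
exec 3>/tmp/fd3-demo.txt

# Anything sent to fd 3 now lands in the file.
echo "written via fd 3" >&3

# Close file descriptor 3 again.
exec 3>&-
```

The same n>&- syntax used to close fd 3 here is what closes stdout (1>&-) later in this article.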

If you run this script you'll see that the first echo and the last echo are output to the terminal: the first one happens before the redirection and the second one is explicitly directed to stderr (2 being stderr). So, how do you get stderr into the log file as well? Just one simple change to the exec statement is required:

#!/bin/bash

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log 2>&1
else
    # Stdout is not a terminal, no logging.
    false
fi

echo goodbye
echo error >&2

Here the exec statement re-opens stdout on the log file and then re-opens stderr on the same thing that stdout is opened on (this is how you duplicate file descriptors, aka dup them). Note that order is important here: if you reverse it and re-open stderr first (i.e. exec 2>&1 >log), stderr will still end up on the terminal, because at the moment stderr is dup'ed, stdout is still connected to the terminal.
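You can see the difference with two throwaway subshells (the log file names are just for illustration); an exec inside ( ) affects only that subshell:

```shell
#!/bin/bash

# Right order: stdout moves to the log first, then stderr is dup'ed onto it,
# so both "out" and "err" end up in the file.
( exec >/tmp/right.log 2>&1; echo out; echo err >&2 )

# Wrong order: stderr is dup'ed onto the current stdout (the terminal) first,
# and only then does stdout move to the log, so "err" still reaches the terminal.
( exec 2>&1 >/tmp/wrong.log; echo out; echo err >&2 )
```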

Perhaps mainly as an exercise, let's try to do the same thing even when the output is not going to the terminal. We can't do what we did above, since re-opening stdout on the log file while it's connected to a file redirection or a pipeline would break the redirection/pipeline that the user specified when the command was invoked.

Given the following command as an example:

bash test.sh | grep good

What we want to do is manipulate things so that it appears that the following command was executed instead:

bash test.sh | tee log | grep good

Your first thought might be that you could change the exec statement to something like this:

exec | tee log &       # Won't work

and tell exec to re-open stdout on a background pipeline into tee, but that won't work (although bash doesn't complain about it). This just pipes exec's output to tee, and since exec doesn't produce any output in this instance, tee simply creates an empty file and exits.

You might also think you could try some dup-ing of file descriptors and start tee in the background with it taking input from and writing output to different file descriptors. You can do that, but the problem is that there's no way to create a new process whose standard input is connected to a pipe so that we can insert it into the pipeline (although see the postscript at the end of this article). If we could do this, the standard output of the tee command would be easy, since by default it goes to the same place the main script's output goes, so we could just close the main script's output and connect it to our pipe (if we just had a way to create it).

So are we at a dead end? Ahhhh no, that would be a different operating system you're thinking of. The solution is actually described in the last sentence of the previous paragraph. We just need a way to create a pipe, right? Well let's use named pipes.

#!/bin/bash

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    npipe=/tmp/$$.tmp
    trap "rm -f $npipe" EXIT   # Clean up the pipe when the script exits.
    mknod $npipe p             # Create the named pipe.
    tee <$npipe log &          # Start tee reading from the pipe.
    exec 1>&-                  # Close our current stdout.
    exec 1>$npipe              # Re-open stdout on the named pipe.
fi

echo goodbye

Here, if the script's stdout is not connected to the terminal, we create a named pipe (a pipe that exists in the filesystem) using mknod and set up a trap to delete it on exit. Then we start tee in the background, reading from the named pipe and writing to the log file. Remember that tee also writes anything it reads on its stdin to its stdout, and that tee's stdout is the same as the script's stdout (our main script, the one that invokes tee), so the output from tee is going to go wherever our stdout is currently going (i.e. to the user's redirection or pipeline that was specified on the command line). So at this point we have tee's output going where it needs to go: into the redirection/pipeline specified by the user.

Now all we need is to get tee reading the right data, and since tee is reading from a named pipe, all we need to do is redirect our stdout to the named pipe. We close our current stdout (with exec 1>&-) and re-open it on the named pipe (with exec 1>$npipe). Note that since tee is also writing to the redirection/pipeline that was specified on the command line, our closing the connection doesn't break anything.

Now if you run the script and pipe its output to grep as above, you'll see the output in the terminal and it will also be saved in the log file.
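To try it out (assuming the script above is saved as test.sh), run:

```shell
# grep prints "goodbye" on the terminal...
bash test.sh | grep good

# ...and the output produced after the redirection point is in the log file
# as well (the "hello" was written before stdout was redirected, so it went
# straight to grep and is not in the log):
cat log    # prints "goodbye"
```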

Many such journeys are possible, let the man page be your guide!

p.s. There's another way to do this using Bash 4's coproc statement, but that'll wait for another time.
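For the impatient, one possible sketch of that approach (this is my own assumption of how it might look, requiring Bash 4 or later for coproc; the details may differ from the author's eventual version):

```shell
#!/bin/bash

echo hello

if ! test -t 1; then
    # Save the current stdout so the coprocess can inherit it.
    exec 3>&1
    # Start tee as a coprocess, redirecting its stdout to the saved
    # descriptor so tee's copy still reaches the user's pipeline.
    coproc tee log >&3
    # Point our stdout at tee's stdin and close the spare descriptor.
    exec >&"${COPROC[1]}" 3>&-
fi

echo goodbye
```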

______________________

Mitch Frazier is an Associate Editor for Linux Journal.

Comments


More Than One Way To Boil A Cat

Lawrence D’Oliveiro

Another approach is to group the commands with braces to apply a common redirection, e.g.

{
... sequence of commands ...
} | tee log

Logging

Mitch Frazier

That will do the logging, although it doesn't allow you to selectively turn it on.

