Bash Sub Shells

When writing bash scripts you sometimes need to run commands in the background. This is easily accomplished by appending an ampersand ("&") to the command line to be run in the background. But what do you do if you need to run multiple commands in the background? You could put them all into a separate script file and then execute that script followed by an ampersand, or you can keep the commands in your main script and run them as a sub-shell.

Creating sub-shells in bash is simple: just put the commands to be run in the sub-shell inside parentheses. This causes bash to start the commands as a separate process. The group of commands essentially acts like a separate script file: its input/output can be collectively redirected, and it can be executed in the background by following the closing parenthesis with an ampersand.
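
For instance, here's a minimal sketch of the syntax (the commands and the log file name are just placeholders):

# Three commands grouped into a sub-shell: their combined output
# is redirected to one file and the whole group runs in the
# background.
(
    date
    hostname
    uptime
) > info.log 2>&1 &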

As a somewhat contrived example, let's say that we want to start a "server" and then, once it's running, monitor it in the background to make sure it's still running. We'll assume that the server itself becomes a daemon and creates a PID file which we can use to monitor it. When the PID file disappears we assume the server has exited, and we send an email to somebody.

Now you could start the server from the main script, create a second script that does the monitoring, and execute that second script in the background from the main script. But you can do the whole thing from a single script:

#!/bin/bash

server_cmd=server
pid_file=$(basename $server_cmd .sh).pid
log_file=$(basename $server_cmd .sh).log

(
    echo "Starting server"
    echo "Doing some init work"
    $server_cmd   # server becomes a daemon

    while true
    do
        if [[ -f $pid_file ]]; then
            sleep 15
        else
            break
        fi
    done
    mail -s "Server exited" joe@blow.com <<<"Server stopped running"

) >> $log_file 2>&1 &

echo "Server started"

With this simple example you could of course just execute the whole script in the background and dispense with the sub-shell, but that may not work if the script is part of a larger script. It's also nice not to require the user to remember to start the script in the background, not to mention having to remember to redirect its output to a log file. And of course there are numerous other things a real-world script should do: check to see if the server is already running before starting it, delete the PID file if it's stale, check to see if the server has died without removing its PID file, etc. However, that's the real world; this is the example world.
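
As a rough sketch of two of those real-world checks (reusing the $pid_file variable from the script above), the script might do something like this before starting the server:

# If the PID file exists and the process it names is still alive,
# don't start a second copy; otherwise the file is stale and can
# simply be removed.
if [[ -f $pid_file ]] && kill -0 "$(cat "$pid_file")" 2>/dev/null; then
    echo "Server already running" >&2
    exit 1
fi
rm -f "$pid_file"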

______________________

Mitch Frazier is an Associate Editor for Linux Journal.

Comments

login to multiple server

shoeb's picture

I want to log in to 2 different servers at the beginning of the script. Then I will execute commands on those servers as many times as I need (choosing which server executes each command). Is it possible?
I.e., I want to avoid logging in every time I execute a command.

please help me

Subshell example

AskApache's picture

Nice article, subshells are very powerful. I like to use them with this format also:

(
  find . -perm 777 -exec chmod 755 {} \;
  find . -perm 775 -exec chmod 755 {} \;
  find . -perm 666 -exec chmod 644 {} \;
  find . -perm 664 -exec chmod 644 {} \;
)

domains to use in an example

Anonymous's picture

Even though the W3C has violated their own guidelines,
the domain example.com should be used for examples. Thus:

mail -s "Server exited" joe@blow.com

should be:

mail -s "Server exited" joe@example.com ...

Consider me reminded

Mitch Frazier's picture

Thanks

Mitch Frazier is an Associate Editor for Linux Journal.

wait...

Anonymous's picture

To be neat, I think a "wait" command is missing before the echo "Server started" line...

Not unless you're trying to accomplish something else

Mitch Frazier's picture

If you add a wait command before the echo then the script doesn't exit until the sub-shell exits. And the sub-shell doesn't exit till the server exits. The idea here is to start the sub-shell in the background to monitor the server.

Mitch Frazier is an Associate Editor for Linux Journal.

If I run a script that sets

MadMax's picture

If I run a script that sets an environment variable, the variable remains set only for the sub-shell that's spawned to run the script, right? What if I want the script to set a variable in the parent shell's environment? (I.e., the environment variable remains set when the script exits.)

No workie

Mitch Frazier's picture

No way to do that: a sub-shell can't affect the environment of its parent. A sub-shell is a child process and it doesn't have write access to the memory space of its parent.

You would have to use some form of IPC to communicate between the processes.
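
For example, a minimal sketch of one such form of IPC, using a temporary file as the channel (the variable names are just placeholders):

# The sub-shell (child) writes shell assignments to a temp file;
# the parent then reads them back in with the "." (source) command.
tmpfile=$(mktemp)
( echo "VAR1=foo" > "$tmpfile" )   # child writes to the file
. "$tmpfile"                       # parent imports the assignment
echo "$VAR1"                       # prints: foo
rm -f "$tmpfile"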

Mitch Frazier is an Associate Editor for Linux Journal.

Note from the kludgemeister

Hushpuppy's picture

Well, there's always a kludge. Say you create a script named kludge.sh. What you can do at your shell is:
eval `./kludge.sh`

Inside kludge.sh, you must ensure that ALL stdout is fully accounted for: the ONLY stdout should be the commands that you want to eval. For example, kludge.sh could contain:

#!/bin/bash

echo "VAR1=foo; export VAR1"
ls /tmp > /tmp/ls.out # Don't send the output of ls to stdout
echo "VAR2=bar; export VAR2"

After running the eval, above, you'll see that VAR1 is set to "foo" and VAR2 is set to "bar" in your current environment.

...and that works for parenthesized subshells as well

Hushpuppy's picture

Sorry, didn't mean to imply that you had to run the subshell using another script. This works as well:


#!/bin/sh

tmp=`echo "VAR1=foo; export VAR1"
ls /tmp > /tmp/out
echo "VAR2=bar; export VAR2"`

eval "$tmp"

echo $VAR1 $VAR2

Related question...

Richard Heck's picture

Here's a related question. I'm thinking maybe subshells could help with this, but maybe there's an easier way. Say I want to convert a whole directory of images. I'm running on a quad core processor, so I could at least run two of these at once. How can I start two of these and then start a new one whenever one of them ends? Maybe some version of the PID idea works? But, well, it seems like there ought to be a better way.

PID files aren't really for monitoring processes

Mitch Frazier's picture

For monitoring a process, a PID file is really not the right thing to use; I just used it since it was expedient for the example. A PID file is useful for finding out the process ID of a daemon that was started long ago or by somebody else, but a PID doesn't tell you anything about the status of the process. Three ways come to mind for monitoring a process: you can use ps, use kill, or check /proc.
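
For example, assuming the PID to check is in $p, any of these will tell you whether the process is still alive:

ps -p "$p" > /dev/null 2>&1  && echo "still running (ps)"
kill -0 "$p" 2> /dev/null    && echo "still running (kill -0)"
[[ -d /proc/$p ]]            && echo "still running (/proc)"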

Referring to the situation you pose, one might think that using wait was the answer. Alas, wait doesn't work the way you'd like: you can wait for all of your background processes, for a group of them, or for one specific process, but you can't wait for whichever one finishes first. So there's no way to start 10 background processes and then use wait to find out when any one of the 10 exits.

This is a rough example of one way you might accomplish what you're trying to do:

#!/bin/bash

function start_job()
{
    echo "Counting down from $1, pausing $2 seconds between counts"
    (
        i=$1
        while [[ $i -gt 0 ]]
        do
            sleep $2
            let i--
        done
    ) &
    spid=$!
}

start_job 10 3
pid[0]=$spid
start_job 20 2
pid[1]=$spid
start_job 30 1
pid[2]=$spid

while true
do
    for i in ${!pid[*]}
    do
        p=${pid[i]}
        if [[ ! -d /proc/$p ]]; then
            echo "PID #$i ($p) exited, starting replacement"
            start_job 5 2
            pid[$i]=$spid
        fi
    done
    sleep 5
done

In your specific example, there are still questions to be answered: for example, how do the sub-shells know which files they need to process and which are being processed by other sub-shells?
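
One possible answer, sketched here under the assumption that the work can be divided up front (convert_one is a hypothetical per-file command): give each sub-shell its own slice of the file list before starting it.

files=(*.jpg)      # assumed: the images to convert
nworkers=2
for ((w = 0; w < nworkers; w++))
do
    (
        # Worker w handles every nworkers-th file, starting at index w,
        # so no two workers ever touch the same file.
        for ((i = w; i < ${#files[@]}; i += nworkers))
        do
            convert_one "${files[i]}"   # hypothetical conversion command
        done
    ) &
done
wait   # here we do want to block until all the workers finish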

Mitch Frazier is an Associate Editor for Linux Journal.

..or just do a wc(1) on jobs

Anonymouse's picture

..or just do a wc(1) on jobs output?


while sleep 5; do
    [[ $(jobs | wc -l) -lt $MAX_JOBS ]] && (
        ...
    ) &
done

one answer

smoser's picture

You could definitely do this in shell using '&' on commands and 'wait', but here's a program that purports to do what you're looking for.

http://www.badexample.net/projects/fork/

You can fork a function and

Anonymous's picture

You can fork a function and look for its PID, which you get via $!. Your example shouldn't be used as-is...
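
In other words, something along these lines (a small sketch of the suggestion):

my_func()
{
    sleep 10   # stand-in for the real work
}

my_func &      # "fork" the function into the background
func_pid=$!    # its PID, available immediately in $!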

Amazing

ZiggyFish's picture

It still amazes me what bash can do. I like using named pipes with this type of stuff, though.
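
For reference, a minimal named-pipe sketch (produce_events is a hypothetical command that writes status lines):

mkfifo /tmp/events.fifo
( produce_events > /tmp/events.fifo ) &   # producer runs in a sub-shell
while read -r line
do
    echo "got: $line"                     # consumer reacts to each line
done < /tmp/events.fifo
rm -f /tmp/events.fifo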

Good Book

Phil Hughes's picture

While this book isn't about Bash, your comment makes me think of The Unix Programming Environment. The book is over 20 years old but is really about thinking through how to solve a problem using basic UNIX tools.

My copy has long since vanished but, if you can find one, it is a good read. The book presents problems to solve and then shows how the Bourne shell, awk, sed and other standard UNIX utilities can be combined, possibly with the addition of a bit of C code, to solve them.

Phil Hughes
