Using Named Pipes (FIFOs) with Bash
It's hard to write a bash script of much import without using a pipe or two. Named pipes, on the other hand, are much rarer.
Like un-named/anonymous pipes, named pipes provide a form of IPC (Inter-Process Communication). With anonymous pipes, there's one reader and one writer, but that's not required with named pipes—any number of readers and writers may use the pipe.
Named pipes are visible in the filesystem and can be read and written just as other files are:
$ ls -la /tmp/testpipe
prw-r--r-- 1 mitch users 0 2009-03-25 12:06 /tmp/testpipe|
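For instance, you can move data through the pipe with ordinary commands like echo and cat (a quick illustration, not from the scripts below; note that each side blocks until the other end of the pipe is opened, hence the &):

$ echo "hello" > /tmp/testpipe &
$ cat /tmp/testpipe
hello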
Why might you want to use a named pipe in a shell script? One situation might be when you've got a backup script that runs via cron, and after it's finished, you want to shut down your system. If you do the shutdown from the backup script, cron never sees the backup script finish, so it never sends out the e-mail containing the output from the backup job. You could do the shutdown via another cron job after the backup is "supposed" to finish, but then you run the risk of shutting down too early every now and then, or you have to make the delay much larger than it needs to be most of the time.
Using a named pipe, you can start the backup and the shutdown cron jobs at the same time and have the shutdown just wait till the backup writes to the named pipe. When the shutdown job reads something from the pipe, it then pauses for a few minutes so the cron e-mail can go out, and then it shuts down the system.
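A minimal sketch of that arrangement might look like the following (the pipe name, the five-minute pause, and the do_the_backup command are all assumptions, not from the article). Both jobs are started by cron at the same time:

# backup job: do the work, then signal completion
#!/bin/bash
pipe=/tmp/backup_done
[[ -p $pipe ]] || mkfifo $pipe
do_the_backup          # hypothetical stand-in for the real backup
echo "done" >$pipe     # blocks until the shutdown job reads it

# shutdown job: wait for the signal, let the cron e-mail go out, shut down
#!/bin/bash
pipe=/tmp/backup_done
[[ -p $pipe ]] || mkfifo $pipe
read line <$pipe       # blocks until the backup job writes
sleep 300              # give cron time to send the backup's e-mail
shutdown -h now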
Of course, the previous example probably could be done fairly reliably by simply creating a regular file to signal when the backup has completed. A more complex example might be if you have a backup that wakes up every hour or so and reads a named pipe to see if it should run. You then could write something to the pipe each time you've made a lot of changes to the files you want to back up. You might even write the names of the files that you want backed up to the pipe so the backup doesn't have to check everything.
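A sketch of what the hourly job's read side might look like follows; the pipe name, the two-second timeout and the backup command are assumptions. Opening the pipe read/write keeps the open from blocking when no writer is present, and read -t makes the job give up once nothing new arrives (writers, for their part, block until the hourly job opens the pipe):

#!/bin/bash
pipe=/tmp/backup_pipe
[[ -p $pipe ]] || mkfifo $pipe
exec 3<>$pipe          # open once, read/write, so open() never blocks

files=()
while IFS= read -t 2 -u 3 file; do   # stop once nothing new arrives
    files+=("$file")
done

if (( ${#files[@]} > 0 )); then
    printf 'backing up: %s\n' "${files[@]}"   # stand-in for the real backup
fi
exec 3<&-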
Named pipes are created via mkfifo or mknod:
$ mkfifo /tmp/testpipe
$ mknod /tmp/testpipe p
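By default, the pipe's permissions are subject to your current umask; if the pipe should be private, mkfifo's -m option sets the mode at creation time:

$ mkfifo -m 0600 /tmp/testpipe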
The following shell script reads from a pipe. It first creates the pipe if it doesn't exist, then it reads in a loop till it sees "quit":
#!/bin/bash

pipe=/tmp/testpipe

trap "rm -f $pipe" EXIT

if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi

while true
do
    if read line <$pipe; then
        if [[ "$line" == 'quit' ]]; then
            break
        fi
        echo $line
    fi
done

echo "Reader exiting"
The following shell script writes to the pipe created by the read script. First, it checks to make sure the pipe exists, then it writes to the pipe. If an argument is given to the script, it writes that to the pipe; otherwise, it writes "Hello from" followed by its own process ID.
#!/bin/bash

pipe=/tmp/testpipe

if [[ ! -p $pipe ]]; then
    echo "Reader not running"
    exit 1
fi

if [[ "$1" ]]; then
    echo "$1" >$pipe
else
    echo "Hello from $$" >$pipe
fi
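One thing to be aware of: the write blocks until a reader opens the other end of the pipe, so if the reader dies between the -p test and the echo, the writer hangs. A hedged sketch of one way around that, using the coreutils timeout command (the five-second limit is an arbitrary assumption):

msg=${1:-"Hello from $$"}
# run the write in a child shell so timeout can kill it if it blocks
if ! timeout 5 sh -c 'echo "$1" >"$2"' sh "$msg" "$pipe"; then
    echo "write timed out; is the reader still running?" >&2
fi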
Running the scripts produces:
$ sh rpipe.sh &
23842
$ sh wpipe.sh
Hello from 23846
$ sh wpipe.sh
Hello from 23847
$ sh wpipe.sh
Hello from 23848
$ sh wpipe.sh quit
Reader exiting
Note: initially, I had the read command directly in the while loop of the read script, but read would usually return a non-zero status after two or three reads, causing the loop to terminate prematurely:
while read line <$pipe
do
    if [[ "$line" == 'quit' ]]; then
        break
    fi
    echo $line
done
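The likely culprit is that the redirection re-opens the pipe on every iteration of the loop, so the reader races against the writer closing its end; when read hits end-of-file, it returns non-zero and the loop exits. A common workaround (a sketch, not from the original article) is to open the pipe once on a dedicated file descriptor; opening it read/write also guarantees the pipe always has at least one writer, so read never sees EOF:

#!/bin/bash

pipe=/tmp/testpipe
[[ -p $pipe ]] || mkfifo $pipe

exec 3<>$pipe            # open once, read/write, on fd 3
while read -u 3 line; do
    if [[ "$line" == 'quit' ]]; then
        break
    fi
    echo "$line"
done
exec 3<&-                # close the descriptor

echo "Reader exiting"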
Mitch Frazier is an Associate Editor for Linux Journal.