Introduction to Named Pipes

A very useful Linux feature is named pipes, which enable different processes to communicate with each other.
Command Substitution

Bash uses named pipes in a really neat way. Recall that when you enclose a command in parentheses, the command is actually run in a “subshell”; that is, the shell clones itself and the clone interprets the command(s) within the parentheses. Since the outer shell is running only a single “command”, the output of a complete set of commands can be redirected as a unit. For example, the command:

(ls -l; ls -l) >ls.out

writes two copies of the current directory listing to the file ls.out.

Command substitution occurs when you put a < or > in front of the left parenthesis. For instance, typing the command:

cat <(ls -l)

results in the command ls -l executing in a subshell as usual, but redirects the output to a temporary named pipe, which bash creates, names and later deletes. Therefore, cat has a valid file name to read from, and we see the output of ls -l, taking one more step than usual to do so. Similarly, giving >(commands) results in Bash naming a temporary pipe, which the commands inside the parentheses read for input.
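
If you are curious about the name bash substitutes, you can print it rather than open it. The exact form varies by system, so treat the following as a sketch: on a typical Linux box the name is a /dev/fd entry backed by a pipe, while systems without /dev/fd get a real named pipe under /tmp.

echo <(ls -l)
# prints something like /dev/fd/63 -- the temporary file name bash
# substituted for the parenthesised command
ls -l <(ls -l)
# on Linux this typically shows a symlink pointing at an anonymous pipe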

If you want to see whether two directories contain the same file names, run the single command:

cmp <(ls /dir1) <(ls /dir2)

The compare program cmp sees the names of two files, which it then reads and compares.
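
If you want to see the actual differences rather than just whether the listings differ, diff can be fed the same way; this is just a variation on the example above:

diff <(ls /dir1) <(ls /dir2)
# each listing is written to its own temporary named pipe, and diff
# reports the names present in one directory but not the other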

Command substitution also makes the tee command (used to view and save the output of a command) much more useful in that you can cause a single stream of input to be read by multiple readers without resorting to temporary files—bash does all the work for you. The command:

ls | tee >(grep foo | wc >foo.count) \
         >(grep bar | wc >bar.count) \
         | grep baz | wc >baz.count

counts the lines in the output of ls containing foo, bar and baz, and writes this information to three separate files. Command substitutions can even be nested:

cat <(cat <(cat <(ls -l)))
works as a very roundabout way to list the current directory.

As you can see, while unnamed pipes allow simple commands to be strung together, named pipes, with a little help from bash, allow whole trees of pipes to be created. The possibilities are limited only by your imagination.
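
To appreciate how much work bash is doing, here is roughly the by-hand equivalent of the cmp example above, using explicit named pipes; it is only a sketch, with the /tmp paths chosen arbitrarily:

mkfifo /tmp/pipe1 /tmp/pipe2    # create two named pipes by hand
ls /dir1 >/tmp/pipe1 &          # each listing writes to its own pipe in the background
ls /dir2 >/tmp/pipe2 &
cmp /tmp/pipe1 /tmp/pipe2       # cmp reads both pipes, just as with <(...)
rm /tmp/pipe1 /tmp/pipe2        # clean up; bash handles this automatically for <(...)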

Andy Vaught is currently a PhD candidate in computational physics at Arizona State University and has been running Linux since 1.1. He enjoys flying with the Civil Air Patrol as well as skiing. He can be reached at andy@maxwell.la.asu.edu.


Comments

synchronization problem with named pipes

saime's picture

Dear Andy,

I came across your article regarding named pipes. I am having a problem writing (from a C program) and reading (from a Java process) at the same time. The Java process seems to start reading only after I quit the C process (which is writing).

Could you give me any hint or any help?

Thanks in advance.

Regards,
saime

puzzled

Anonymous's picture

Great old article; it kept me busy for more than 2 hours (better than a movie)!

Piping Struggler

Small Laptops's picture

Thanks for the explanation. Had been looking for something which would explain the piping structure. Am learning Unix on my netbook and this has helped me understand.

do u mean process substitution?

caster1mg's picture

command substitution is something like

$(baz)
'foobar'

You're Right

Mitch Frazier's picture

What he refers to as <(...) is, as you say, Process Substitution, not Command Substitution.

Mitch Frazier is an Associate Editor for Linux Journal.

ouch... 'foobar' -> `foobar`

caster1mg's picture

ouch...

'foobar' -> `foobar`

I need help

phuongjolly's picture

Hi Andy Vaught!
I am Phuongjolly.
I have a problem:
I created two files: the first is used to read data, the second is used to display data.
readfile.sh
#!/bin/sh
a=1
b=1

exit 0

displayfile.sh
#!bin/sh
c=`expr $a + $b`
echo "Sum is $c"
exit 0

then:
bash$: /bin/bash ./readfile.sh | /bin/bash ./displayfile.sh
and I get an error, but I don't know what the error is.
Can you help me, please? Thanks

Also works with cygwin

Chris Bruner's picture

If you have cygwin installed, you can try out this article. Works great.

A slight typo in the first example

mkfifo pipe

The simplest way to show how named pipes work is with an example. Suppose we've created pipe as shown above. In one virtual console, type:

ls -l > pipe1

and in another type:

cat < pipe

the ls -l > pipe1 should be ls -l > pipe

named pipes

Anonymous's picture

Linux code for a client/server program using named pipes to share some data between clients through a server

named pipes

kanchan's picture

How do I use named pipes for conversation between 2 processes?
Please write the program for me.

What I do not understand is

guhnoo's picture

What I do not understand is the following:
When I do

mkfifo pipe pipe2; echo foo >pipe & cat pipe >pipe2 & cat < pipe2 >pipe

it sends the foo back and forth and I get high CPU usage; however, when I do

mkfifo pipe pipe2; echo foo >pipe & cat pipe >pipe2 & cat pipe2 >pipe

my CPU usage doesn't rise, so the foo isn't sent back and forth. My question is:
why doesn't the latter work? Does cat pipe2 >pipe differ from cat < pipe2 >pipe?

I was wondering the same

Anonymous's picture

I was wondering the same thing. Perhaps it has something to do with the order the pipes are set up (due to bash syntax and precedence)? I'm looking for an answer to this as well.

I see

Same guy as directly above's picture

I tried
cat pipe2 | cat > pipe1

and it worked. Then I realized that in

cat pipe2 >pipe1

The string "pipe2" is passed to cat as an argument, and therefore cat must run to open the filestream to pipe2. But cat cannot run in this case until pipe1 has something running on the other end accepting input. The thing on the other end is:

echo -n x | cat - pipe1 > pipe2

Which is blocked until something connects to pipe2. Well, the _other_ cat isn't connected to pipe2, because it hasn't been able to run, which it must do to open the filestream.

I _think_ I am on the right track here. When you go:

cat pipe1

Cat is connected to pipe2 without having to do any work, because pipe2 is coming in via stdin.

Can someone confirm that this is indeed what is happening?

Ugh

Anonymous's picture

I meant:

cat < pipe2 > pipe1

at the end there. Dumb formatting.
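
The blocking behaviour that this explanation hinges on is easy to see directly; the following is only a sketch, with p as an arbitrary FIFO name:

mkfifo p
ls -l > p &     # the redirection blocks: nothing has opened p for reading yet
jobs            # the background job is still waiting in its open of p
cat p           # opening p for reading releases the writer and the listing appears
rm p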

Hey, thanks for the article.

Anonymous's picture

Hey, thanks for the article. It was really helpful.

Brilliant info

SeHe's picture

I second all the commenters: highly informative stuff. It led me to the following gem, after having spent 2 days looking into perl modules, C++ libraries and even rsynclib to do streaming diffs on fifos:


diff -ur <(xxd pop.sig) <(xxd pop2.sig) | kompare -o -

Cheers

But ... no cigar

Sehe's picture

Today I found out that it is not actually streaming (because of the non-linear nature of diff(1))

This woke me up:

sehe@sehe-desktop:~$ diff -Ewbur <(xxd /dev/dvdrw1) <(ssh koolu xxd /dev/dvdrw)
diff: memory exhausted

Difference between mknod and mkfifo function

Anonymous's picture

What is the difference between the mknod and mkfifo commands?
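
For what it's worth, both can create a FIFO: mkfifo does only that, while mknod can also create device special files (which normally requires root). A small sketch, with arbitrary file names:

mkfifo pipe_a         # creates a FIFO
mknod pipe_b p        # the trailing "p" asks mknod for a FIFO as well
ls -l pipe_a pipe_b   # both appear with a leading "p" in the mode field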

A good article

Imran's picture

Hi,
A nice article on pipes.

I have a few questions here.

1) How do I flush the stdio buffers associated with the pipe?
2) What happens when there is no space in the pipe to accommodate the flushed data of the stdio buffer?

Imran

as seen in bash 3

vince^II's picture

Have a look here:
http://en.wikipedia.org/wiki/Bash#I.2FO_redirection

# close FD6
exec 6>&-
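
The last two lines are only the closing step of that idiom; a fuller sketch applied to a named pipe (assuming a FIFO called pipe already exists and a reader such as cat pipe is running elsewhere) might look like:

exec 6> pipe    # open the FIFO for writing on FD 6 (blocks until a reader opens it)
echo one >&6    # writes go through FD 6 while it stays open
echo two >&6
exec 6>&-       # close FD 6; the reader now sees end-of-file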

Problem using named pipes in a shell script

Jim Wings's picture

I am having a problem with using named pipes in a shell script (ksh or Bourne shell). As a test do something like this:

----------- cut --------
#!/bin/ksh
# This is shell script: "read_pipe"

mkfifo /tmp/event_pipe

while true
do
read EVENT </tmp/event_pipe
echo $EVENT >>/tmp/event.log
done
----------- cut -------------

Run the script via: "nohup read_pipe &".

Do something like: "cat /etc/passwd >/tmp/event_pipe". You will get some of the password file in the "/tmp/event.log", but not all of it. I tried various ways of doing the "read", but still get the same results. If I send data slowly (sleep 1 second between each line of data) it works. So what am I doing wrong? I tried a loop with "tail -f /tmp/event_pipe | while read EVENT", still the same result.
The purpose of "read_pipe" script when it does "real work", will be to process each "line" of data as it comes in. I don't know how fast the lines of data will come in, and I am afraid I will miss lines, based on the testing I did with "cat /etc/passwd". It almost appears I am overloading the "named pipe" and it is not blocking correctly. Anyone done something like this? Any ideas? Thanks.

RE: Problem using named pipes in a shell script

E. Choroba's picture

Try splitting your loop into two nested loops:


while true
do
    cat /tmp/event_pipe | while read line
    do
        echo $line >> /tmp/event.log
    done
done

Or use exec 3< /tmp/event_pipe and read -u3.
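
A sketch of that second suggestion, holding the pipe open on a spare file descriptor so it is not reopened (and drained) for every single read; treat it as a starting point rather than a drop-in replacement:

#!/bin/ksh
mkfifo /tmp/event_pipe 2>/dev/null   # create the pipe if it is not there yet

exec 3< /tmp/event_pipe              # open the pipe once, on fd 3
while read -u3 EVENT                 # read line by line from fd 3
do
    echo $EVENT >> /tmp/event.log
done                                 # the loop ends when the last writer closes
exec 3<&-                            # close fd 3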

How do we keep the stream from closing?

Jean D. Fongang's picture

Thanx for your very informative article. We can only find this kind of stuff on LJ.

I have been trying for some time to keep the mysql client connected after executing a SQL script, and the words "named pipes" just popped into my mind. However, although I managed to put it to use in some scenarios, I fail to keep the stream flowing.

on the first shell I do this:
linux:/var/lib/mysql/replMySQL # mysql -u root -p1234 < pipe1
linux:/var/lib/mysql/replMySQL #

As soon as I do
linux:/var/lib/mysql/replMySQL # echo 'FLUSH TABLES WITH READ LOCK;' > pipe1

on the second shell, the mysql client sees the EOF, executes and exits.

My actual problem is to keep the mysql client connected after the 'FLUSH TABLES WITH READ LOCK;', so that I can do a certified backup from within a script and then release the connection and the lock. This script will be part of a solution in which I might not be able to put python or perl.

Keeping the stream open

Anonymous's picture

Apparently, the stream closes when all writers have closed, so to keep the stream from closing, open another writer that doesn't actually write anything. There must be better examples, but the following will work:

sleep 999999999 > pipe1 &

This was quite helpful. I

Anonymous's picture

This was quite helpful. I was wondering however if there was a way to keep a process attached to a pipe permanently. When I used the above examples, the output process worked just fine for one command and then needed to be reattached. I'll keep researching and write back if I find anything on my own.

another way is to just

Anonymous's picture

another way is to just use
cat > named_pipe
this cat waits on stdin, but keeps a write handle to the pipe open.

The answer is yes. I

Anonymous's picture

The answer is yes. I solved my problem with:

tail -f <name_of_pipe> | <process_to_handle_output> &

Re: How do we keep the stream from closing?

Jean D. Fongang's picture

How can the mysql client be used in this case? It didn't work.

Awesome, thank you!

Kaolin Fire's picture

I wanted to give C-Kermit a dynamic list of files to upload (using svn diff), and (as above) "named pipe" somehow popped into my head as maybe something I could use. I'd never sat down to figure them out, and _magic_!

Re: Named Pipes YOUR CODE IS BAD

Anonymous's picture

Your code with ls and tee is wrong. It gives:

Missing name for redirect.

Fix it.

try using bash!!!

Anonymous's picture

try using bash!!!

Re: Named Pipes

Anonymous's picture

The code with ls and tee works for me. I suggest that you stop shouting and check that you copied it correctly when you tried it. And that you're really running bash.

Re: Linux Apprentice: Introduction to Named Pipes

Anonymous's picture

Thanks a lot it helped me a lot to understand named pipes

Very helpful. Thanks.

cevher's picture

Very helpful.
Thanks.

Doesn't work for me ;(

Anonymous's picture

Doesn't work in Windows 2000 cmd shell for me. This linux stuff is for the birds.

Dear bird: "Cmd.exe" is not

Anonymous2's picture

Dear bird: "Cmd.exe" is not Bash, as you probably know.

For using Bash under Windows please see Msys (www.mingw.org) or Cygwin (www.cygwin.com).

Thanks indeed

zzen's picture

I needed to replace a filename parameter with command output, without taking up the stdin, but had no idea how. By some unknown magic, the phrase "named pipe" sprung to my mind. I googled for it, your article came up first, it was very clear, brief and informative - and ultimately helped me solve my problem in a few minutes. Thanks!
