Bash Process Substitution
In addition to the fairly common forms of input/output redirection, the shell recognizes something called process substitution. Although not documented as a form of input/output redirection, its syntax and its effects are similar.
The syntax for process substitution is:
    <(list) or >(list)

where each list is a command or a pipeline of commands. The effect of process substitution is to make each list act like a file. This is done by giving the list a name in the file system and then substituting that name in the command line. The list is given a name either by connecting it to a named pipe or by using a file in /dev/fd (if supported by the O/S). Either way, the command simply sees a file name and is unaware that it's reading from or writing to a command pipeline.
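You can see the substituted name directly by passing a process substitution to echo (a small demonstration of the mechanism; the exact path varies by system):

```shell
#!/bin/bash
# bash replaces <(true) with a file name before echo ever runs.
# On Linux this typically prints something like /dev/fd/63;
# on systems without /dev/fd it may be a named pipe instead.
echo <(true)

# The substituted name can be opened like any ordinary file:
cat <(echo "hello from a pipeline")
```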
To substitute a command pipeline for an input file the syntax is:

    command ... <(list) ...

To substitute a command pipeline for an output file the syntax is:

    command ... >(list) ...
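The output form is handy for splitting one stream into several pipelines at once. As a sketch (the file names matches.txt and count.txt are arbitrary examples), tee can write to two process substitutions simultaneously:

```shell
#!/bin/bash
# Each >(list) becomes a writable file name connected to that list,
# so tee copies its input into two pipelines at once.
printf '%s\n' apple banana cherry |
    tee >(grep an > matches.txt) >(wc -l > count.txt) > /dev/null

# The substituted pipelines run asynchronously; give them a moment to finish.
sleep 1
cat matches.txt    # banana (the only line containing "an")
cat count.txt      # 3
```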
At first, process substitution may seem rather pointless. For example, you might imagine something simple like:

    uniq <(sort a)

to sort a file and then find the unique lines in it, but this is more commonly (and more conveniently) written as:

    sort a | uniq

The power of process substitution comes when you have multiple command pipelines that you want to connect to a single command.
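diff is another natural fit: it wants two file names, and process substitution lets both come from pipelines. A minimal sketch with inline data:

```shell
#!/bin/bash
# Compare the output of two pipelines without temporary files.
# diff just sees two file names; each is wired to a printf pipeline.
diff <(printf 'a\nb\nc\n') <(printf 'a\nc\n')
# Reports that the line "b" appears only in the first stream.
```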
For example, given the two files:
    # cat a
    e
    d
    c
    b
    a
    # cat b
    g
    f
    e
    d
    c
    b

To view the lines unique to each of these two unsorted files you might do something like this:
    # sort a | uniq >tmp1
    # sort b | uniq >tmp2
    # comm -3 tmp1 tmp2
    a
            f
            g
    # rm tmp1 tmp2

With process substitution we can do all this with one line:
    # comm -3 <(sort a | uniq) <(sort b | uniq)
    a
            f
            g
Depending on your shell settings you may get an error message similar to:

    syntax error near unexpected token `('

when you try to use process substitution, particularly if you try to use it within a shell script. Process substitution is not a POSIX-compliant feature, so it may have to be enabled by turning off POSIX mode:
    set +o posix

Be careful not to try something like:

    if [[ $use_process_substitution -eq 1 ]]; then
        set +o posix
        comm -3 <(sort a | uniq) <(sort b | uniq)
    fi

The command set +o posix enables not only the execution of process substitution but also the recognition of its syntax. In the example above, the shell parses the entire if statement before the set command is executed, and therefore still sees the process substitution syntax as illegal.
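One workaround is to turn off POSIX mode before the shell reaches (and parses) the compound command, for example at the top of the script. A sketch, assuming the same files a and b and the hypothetical $use_process_substitution flag from above:

```shell
#!/bin/bash
# set +o posix executes (and the syntax becomes legal) before the
# if statement below is ever parsed.
set +o posix

use_process_substitution=1   # hypothetical flag from the article's example

if [[ $use_process_substitution -eq 1 ]]; then
    comm -3 <(sort a | uniq) <(sort b | uniq)
fi
```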
Finally, note that not all shells support process substitution; these examples will work with bash.
Mitch Frazier is an Associate Editor for Linux Journal.