The Cold, Thin Edge
The Shell Paradigm is described (by me at least) as taking some of a true operating system's most beautiful characteristics and bending, twisting, folding, spindling, and mutilating them into obscenely obtuse and imperfect tools. That these characteristics can be bent, twisted, etc., and still work is, of course, what gives them their beauty.
Open up your Unix toolbox (/usr/bin for you gnubies), and you will see a complete set of tools, ready for use. Much as the discovery of a basic technology distinguishes one epoch of human history from another, redirection and job control under Unix create a golden age of computing in contrast to the iron-age toils of MS-DOS. Because of the simple ability to differentiate separate, simultaneous processes and direct their input and output at your discretion, there are few limits to the ways in which you can use these tools to assemble simple Unix processes. This ability, and the will to use it, constitute the shell paradigm.
But where power resides lies danger. How much | & and popen() can a single process take before it disintegrates into a heap of intractable spaghetti code? How many different programming contexts can we use before our simple program hurtles out of control towards the nether-regions of “Kernel Panic: Out of memory”? [A lot—ED]
This article describes how to mix and match I/O streams to and from executables in different environments. If you are hacking a Perl script and want to throw a little grep in for good measure, go right ahead; it's possible. Finally, we will discuss the limits to, and wisdom of, these techniques.
The capability to have processes communicate easily among themselves is inherent in the design of Unix systems, so the appellation “shell paradigm” is somewhat of a misnomer. Nonetheless, the shell is the context in which most people are familiar with I/O redirections, so we will start there. As we will later see, all these facilities can be easily recreated in places other than at the shell prompt.
There are several ways to use process redirection within the shell. You can take the output of a process and direct it to a file, for example:
cd ~; ls > /tmp/ls.file
Alternatively, you can append output to existing files:
cd ~/bin; ls >> /tmp/ls.file
You can also take the output of a process and redirect it as the input of another process:
cd ~; ls | grep lj.article
Within most shells, including the Bourne-compatible bash and zsh, you can integrate the output of one command within another command. For example, if you wanted to generate a file with yesterday's date appended to its name, you could do the following:
touch /usr/acct/atlanta/data.`date --date '1 day ago' +"%Y%m%d"`
which just generated a file named data.19960503 for me. What you get depends on how quickly you read your Linux Journal. It also depends on which free OS you are running; FreeBSD's version of date does not offer the “1 day ago” facility, so you will have to get and compile GNU date if you are silly enough not to run Linux (or if your employer uses FreeBSD).
External-command inclusion is handy in C when you need functionality already implemented as a Unix tool that you don't want to recode. For example, if you need to sort a stream of data or compress an output file, using sort or gzip rather than coding it natively is an efficient way to accomplish the task. There are two ways to use external programs under C: system() and popen().
If you have a large amount of string data that you want to sort using the sort program, you can use popen() to call sort, feed it the data, and collect the result. If you just want to compress a file, you can use the simpler system() function. Both functions are familiar to most C programmers; if either is new to you, look in the Linux man pages, where they are documented. If you want more explanation, read Advanced Programming in the UNIX Environment, by W. Richard Stevens.
However, if you need to interact with the program you call, you can do so with the C library that comes with a tool called “Expect”, which is described later in the Tcl section.