The Cold, Thin Edge

Open up your Unix toolbox and you will see a complete set of tools ready to be used. The ability to differentiate separate, simultaneous processes and to direct their input and output at your discretion, together with the will to use this ability, constitutes the shell paradigm.
Mother of Perl

Whereas there are a number of different ways to manipulate process I/O within the shell, there is really only one within Perl: as a filehandle. This is actually a testimony to the beauty of Perl's design; kudos to Larry Wall for making it so simple.

You can attach other processes to your Perl programs in several different ways, all with the open () command. For example, if you wanted to open a process, bottle, to which the output of your Perl script should be sent, you would use

open (BOTTLE, "| ~/bin/bottle")

to direct the output. Similarly, if you wanted to read the output of bottle, you would do much the same thing, moving the pipe symbol (|) to the end:

open (BOTTLE, "~/bin/bottle |")

In the first case, you could only write to the filehandle BOTTLE, whereas in the second case, you could only read from it.
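A minimal, runnable sketch of both directions, using wc and echo as stand-ins for bottle (which is, after all, hypothetical):

```perl
#!/usr/bin/perl
# Writing: the leading pipe symbol means "this process reads our output".
open (WC, "| wc -c") || die "cannot start wc: $!";
print WC "hello\n";
close (WC);              # close sends EOF; wc then prints "6"

# Reading: the trailing pipe symbol means "we read this process's output".
open (ECHO, "echo testing |") || die "cannot start echo: $!";
$line = <ECHO>;          # one line of echo's standard output
close (ECHO);
chop ($line);
print "got: $line\n";    # prints "got: testing"
```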

Commands opened in this manner can also get fancy. Everything within the quotation marks is executed from within a subshell, so commands like either of the following will work:

open (BOTTLE, "cd ~; /bin/bottle |")
open (FIND, "cd /home/tlewis; find . -name $string -print |")

At this point many people ask, “What if I want to do both reading and writing?” You can't do this with the open () command, so Perl is broken, right? No, not really. The fact that you can't easily open a two-way pipe is a design decision. As explained in the Unix FAQ:

The problem with trying to pipe both input and output to an arbitrary slave process is that deadlock can occur, if both processes are waiting for not-yet-generated input at the same time.

Again, it is possible to do this with Expect, as we'll see later.
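For completeness, the standard IPC::Open2 module will open both pipes for you, though the deadlock warning above still applies. The sketch below sidesteps it by closing the write side before reading, since sort emits nothing until it sees end-of-file:

```perl
#!/usr/bin/perl
use IPC::Open2;

# open2 attaches two filehandles to the child process:
# READ is the child's standard output, WRITE is its standard input.
$pid = open2 (\*READ, \*WRITE, "sort");
print WRITE "pear\napple\norange\n";
close (WRITE);           # sort will not write a thing until it sees EOF
while (<READ>) {
    print;               # apple, orange, pear, in that order
}
close (READ);
waitpid ($pid, 0);       # reap the child process
```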

A short example:

#!/usr/bin/perl
open (ACCT, "(cd /usr/acct/; ".
  "for i in `ls | grep -v admin`; do ".
  "cat \$i/date.19960503; done) | sort |");
while (<ACCT>) {
     chop;
     ($A,$B,$C) = split;
     print "$C $A $B\n";
}

This would take the data in a limited subset of the /usr/acct/ directory, sort it based on the first entry in each line of each file, reformat the data and print it to standard output. By mixing Perl and shell tools, this job becomes a lot easier.

Tcl/Tk

Tcl is a simple scripting language designed as a command language that could easily be embedded in various C programs for smooth configuration and user interaction. Tk is an extension of Tcl with which graphical user interfaces can be constructed. One usually refers to them together as Tcl/Tk.

Tk has gained much popularity recently as an extremely easy way to construct graphical interfaces under the X Window System. If you have used make xconfig when compiling any of the recent (since 1.3.60) development kernels, you have used Tk. The program Tkined, a network management tool for Linux, uses Tk; it is based on Scotty, a Tcl extension offering various network functions such as access to SNMP data.

In accordance with its original design goals, Tcl allows you to interact with external processes in a fairly intuitive manner. External commands may be executed with the exec command. For example:

exec ls | grep -v admin

runs the same pipeline as in the previous Perl example. Unlike the system() call in C, however, exec does not print to standard output; it captures the pipeline's output and returns it as its result, so you would normally assign it to a variable or print it yourself.
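Within a script you would usually capture that result yourself. A small sketch (note that grep exits nonzero when nothing matches, which exec reports as a Tcl error):

```tcl
# exec returns the pipeline's output as a string.
set listing [exec ls | grep -v admin]
puts $listing

# To let the pipeline write directly to our standard output instead,
# redirect its output explicitly:
exec ls | grep -v admin >@stdout
```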

If you wish to interact with the output of a process or direct information to its input, you need to associate it with a filehandle, much as in Perl. This is done via the open command, as in:

set g0 [open "|sort" r+]

This opens a pipeline to the command sort. Elsewhere in the program you would send data to the handle g0 using puts, and then read the results back using gets. The r+ access mode means that you can both write data to the process (the data to be sorted) and read data from it (the sorted data). If you just wanted sort's output to go to standard output, you would use:

set g0 [open "|sort" w]

giving you write access to the process.
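Here is a hedged sketch of the r+ case on a modern Tcl (8.6 or later, for the two-argument close); as with any bidirectional pipe, sort will produce nothing until its input side is closed:

```tcl
# Open sort for both reading and writing.
set g0 [open "|sort" r+]
puts $g0 "pear"
puts $g0 "apple"
puts $g0 "orange"
flush $g0            ;# push the buffered lines into the pipe
close $g0 write      ;# half-close: send EOF so sort can do its work
while {[gets $g0 line] >= 0} {
    puts $line       ;# apple, orange, pear
}
close $g0
```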

Wait, you say, this means that I can both read and write from a process? Yes, it does. Doesn't the Unix FAQ say this is a bad thing? Yes, it does. If you use this functionality to construct webs of interlocking, self-feeding processes, then you are really asking for trouble. Keep it simple if you are going to do this at all.
