PracTcl Programming Tips

This month, find out how to test the speed of your Tcl programs and zero in on the slow parts with the time command.

The other day I wrote a Tcl program, and it was really slow. Even after I got all of the bugs out, it was still slow, so it was time to optimize its performance. My first rule of program optimization is to do nothing; just wait for a faster machine. Unfortunately, in this case, I couldn't wait that long.

My second rule of optimization is to avoid spending any effort optimizing code that doesn't matter, either because it never gets run (don't laugh, it happens more than you think), or it doesn't take a significant fraction of the total run time of the program.

As an aid to program optimization, Tcl provides a time command that measures and reports the time it takes to execute a Tcl script. It's called as:

time script ?count?

where script is the Tcl script to time, and count is the number of times to run the script; the script is run once if no count is given. time reports its result in the form:

3000 microseconds per iteration

The time is the average elapsed time for each time the script is timed.
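For example, an interactive tclsh session might look like this (the timings shown are only illustrative; actual values depend on your machine and Tcl version):

```tcl
% time {set x [expr 100 * 100]}
5 microseconds per iteration
% time {set x [expr 100 * 100]} 1000
3 microseconds per iteration
```

Averaging over many iterations, as in the second call, smooths out clock granularity and gives a steadier number.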

Occasionally, it is useful to use the time command as-is to time procedures as they are run on various inputs. Once you have pinpointed the “time hog” procedure in your application, time can be used this way to measure the performance effects of recoding techniques or new algorithms (or faster machines).

However, for any serious program optimization, a more systematic approach is needed. We can write some procedures that use the time command to enable us to compute an execution time profile of an entire application.

One way to profile an application would be to re-write every procedure, surrounding its body with a time command, whose result can be tabulated in some way. Since Tcl lets us ask about the arguments and body of every procedure, the procedures can be rewritten automatically by the profiler.

The disadvantage of this approach is that it only works for procedures and not for the built-in commands. Considering that one of the primary ways of speeding up a Tcl application is recoding some of the procedures in C and turning them into built-in commands, it would be nice to be able to profile commands as well as procedures.

Instead of re-writing procedures, we'll rename them and write a new procedure with the original name that calls (and times) the renamed version. This way, profiling can be done on both commands and procedures.

Gathering the Data

The first step is to write the replacement procedure. It will be the same for all procedures and commands except for its name, and the name of the renamed original it will call. The easiest way to accomplish this is by writing a procedure body template and using the format command to substitute in the appropriate names.

proc profile_proc {proc} {
    global Template
    rename $proc $proc:OLD
    set body [format $Template $proc]
    proc $proc args $body
}

The procedure profile_proc takes the name of a Tcl procedure, renames it by adding the suffix :OLD, then uses the template (in Template) to write a new procedure that times the calls to the original. Although different commands and procedures will require differing numbers of arguments, by using args, we can arrange for them to be passed to the original (:OLD) procedure unchanged.

The global variable Template contains the format template for the new procedure body, which substitutes the procedure name in two places: once to call the original and once to log the results. The %1$s in the format specifier indicates that the first parameter to format should be used both times.

set Template {
    global Profile_fd
    set time [lindex [time {set result \
            [uplevel [list %1$s:OLD] $args]}] 0]
    set level [expr [info level] - 1]
    if {$level > 0} {
        set caller [lindex [info level $level] 0]
        regsub -all {(.*):OLD} $caller {\1} caller
    } else {
        set caller TopLevel
    }
    puts $Profile_fd [list %1$s $time $caller]
    return $result
}

The timing information is written to a file so that it may be analyzed off-line. We will make sure the variable Profile_fd contains a file handle to which we can write timing information. First, we run the “original” command with time. By using uplevel, we can ensure the original command will run in the same stack level it expects, so if it uses any upvar commands they will work properly. Since nothing in Tcl prevents a procedure name from having special characters in it, the %s:OLD needs to be passed through the list command. The original arguments to the procedure, which were gathered up into the single list args, are expanded into their original form by uplevel. The lindex 0 extracts just the time portion of the time command output and saves it in the variable time.
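To see why passing the name through list matters, consider a hypothetical procedure whose name contains a space (the name {my add} below is invented for illustration); without list, the re-parsed command line would split the name into two words:

```tcl
# A hypothetical procedure with a space in its name:
proc {my add} {a b} {return [expr $a + $b]}
rename {my add} {my add:OLD}

proc {my add} args {
    # [list {my add:OLD}] keeps the name a single word; uplevel
    # then expands $args back into separate arguments.
    uplevel [list {my add:OLD}] $args
}
puts [{my add} 3 4]   ;# prints 7
```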

To properly account for the time spent in each procedure, we need not only the timed procedure's name but its caller's name as well, as will become apparent when we tally up the results.

The info level command is used to determine the caller's name (or TopLevel, if called from the top level). The regsub gets rid of the :OLD to make the book-keeping easier.

Finally, the procedure name, time spent, and caller name are all written to the logging file, and the result returned by the renamed procedure, result, is returned to the caller.
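The positional specifier is easy to try on its own; here is a minimal illustration of %1$s reusing a single argument twice, just as the template reuses the procedure name:

```tcl
set t {calling %1$s, logging %1$s}
puts [format $t foo]
# prints: calling foo, logging foo
```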

The procedure profile_start is used to turn on profiling.

proc profile_start {{pattern *}} {
    global Profile_fd
    set Profile_fd [open /tmp/prof.out.[pid] w]
    foreach i [info procs $pattern] {
        profile_proc $i
    }
}

First, it opens the file that will receive the timing information, and calls profile_proc for the procedures we wish to profile. If commands are to be profiled as well as procedures, the info procs could be changed to info commands. At this point, the application can be run as usual, except that for each profiled command or procedure, a line of timing information is written into the file.

To turn off the profiling, profile_stop is called.

proc profile_stop {} {
    global Profile_fd
    close $Profile_fd
    foreach proc [info procs *:OLD] {
        regsub {(.*):OLD} $proc {\1} restore
        rename $restore {}
        rename $proc $restore
    }
    profile_summarize /tmp/prof.out.[pid]
}

The procedure profile_stop closes the log file, removes the special profiling procedures, restores the names of the original procedures, and calls profile_summarize, which prints a summary of the profile data. Fancier versions of profile_start and profile_stop could check to make sure a procedure isn't profiled twice, or that time isn't wasted profiling the profiling routines.
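Putting the pieces together, a complete profiling session might look like this sketch, using a toy fib procedure as a stand-in for real application code (and assuming profile_summarize, which is not shown here, has been defined):

```tcl
# A stand-in workload to profile:
proc fib {n} {
    if {$n < 2} {return $n}
    return [expr [fib [expr $n - 1]] + [fib [expr $n - 2]]]
}

profile_start fib    ;# wrap only procedures matching the pattern "fib"
fib 12
profile_stop         ;# restore fib, then summarize the log
```

Each line of the resulting log file holds the procedure name, the time in microseconds, and the caller; the recursive calls to fib will be logged with fib itself as the caller, and the initial call with TopLevel.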
