Improving Perl Application Performance

Four basic performance-tuning steps for improving an existing application's performance.

A fellow developer and I have been working on a data collection application primarily written in Perl. The application retrieves measurement files from a directory, parses the files, performs some statistical calculations and writes the results to a database. We needed to improve the application's performance so that it would handle a considerable load while being used in production.

This article introduces four performance-tuning steps: identification, benchmarking, refactoring and verification. These steps are applied to an existing application to improve its performance. A function is identified as a possible performance problem, and a baseline benchmark of that function is established. Several optimizations are applied iteratively to the function, and the performance improvements are compared against the baseline.

Identifying Performance Problems

The first task in improving the performance of an application is to determine which parts of the application are not performing as well as they should. In this case, I used two techniques to identify potential performance problems: code review and profiling.

A performance code review is the process of reading through the code looking for suspicious operations. The advantage of a code review is that the reviewer can observe the flow of data through the application. Understanding that flow helps identify control loops that can be eliminated, and it points out sections of code that should be scrutinized further with application profiling. I do not advise combining a performance code review with other types of code review, such as a review for standards compliance.

Application profiling is the process of monitoring the execution of an application to determine where the most time is spent and how frequently operations are performed. In this case, I used a Perl package called Benchmark::Timer. This package provides functions that I used to mark the beginning and end of interesting sections of code. Each marked section of code is identified by a label. When the program is run and a marked section is entered, the time taken within that section is recorded.
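In a minimal sketch, the usage pattern looks like this (the 'parse' label and the work being timed are illustrative, not taken from the original application):

use Benchmark::Timer;

my $timer = Benchmark::Timer->new();

$timer->start('parse');            # mark the beginning of a section
parse_measurement_file($file);     # hypothetical work being profiled
$timer->stop('parse');             # mark the end of the section

print $timer->report;              # elapsed-time statistics per label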

Adding profiling sections to an application is an intrusive technique; it changes the behavior of the code. In other words, it is possible for the profiling code itself to overshadow or obscure a performance problem. In the early stages of performance tuning, this may not be a problem, because the magnitude of the performance problem will be significantly larger than the performance impact of the profiling code. As performance issues are eliminated, however, each subsequent, smaller issue becomes harder to distinguish from the profiling overhead. Like many things, performance improvement is an iterative process.

In our case, profiling some sections of the code indicated that a considerable amount of time was being spent calculating statistics on the data collected from the machine. I reviewed the code related to these statistics calculations and noticed that a function to calculate standard deviation, std_dev, was used frequently. The std_dev calculation caught my eye for two reasons. First, because calculating the standard deviation requires calculating the mean and the mean of the sum of squares for the entire measurement set, the naïve calculation for std_dev uses two loops when it could be done with one. Second, I noticed that the entire data array was being passed into the std_dev function on the stack rather than as a reference. Together, these two items suggested a performance issue worth examining.
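Listing 1 itself is not reproduced in this excerpt, but a conventional two-pass implementation of the kind described, with the whole array passed on the stack, would look something like this sketch (an assumption, not the article's exact code):

# Naive standard deviation: two passes over the data, and the
# array is copied onto the stack at each call.
sub mean {
    my @data = @_;                  # first copy of the array
    my $sum = 0;
    $sum += $_ foreach @data;       # loop 1: compute the mean
    return $sum / scalar @data;
}

sub std_dev {
    my @data = @_;                  # second copy of the array
    my $mean = mean(@data);
    my $sq_dev_sum = 0;
    $sq_dev_sum += ($mean - $_) ** 2 foreach @data;   # loop 2
    return sqrt($sq_dev_sum / scalar @data);
}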

Benchmarking

After identifying a function that could be improved, I proceeded to the next step: benchmarking the function. Benchmarking is the process of establishing a baseline measurement for comparison; creating a benchmark is the only way to know whether a modification has actually improved performance. All the benchmarks presented here are time-based. Fortunately, a Perl package called Benchmark was developed specifically for generating time-based benchmarks.
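The typical pattern, sketched here with hypothetical names (std_dev is the original function, std_dev_optimized stands in for whichever rewrite is being tried, and @data is the test load described in the next paragraph), is to run the implementations through Benchmark's timethese and compare their rates with cmpthese:

use Benchmark qw(timethese cmpthese);

# Run each implementation 100 times against the same test load.
my $results = timethese(100, {
    original  => sub { std_dev(@data) },
    optimized => sub { std_dev_optimized(\@data) },
});
cmpthese($results);                # prints a table of relative rates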

I copied the std_dev function (Listing 1) out of the application and into a test script. By moving the function to a test script, I could benchmark it without affecting the data collection application. In order to get a representative benchmark, I needed to duplicate the load that existed in the data collection application. After examining the data processed by the data collection application, I determined that a shuffled set of all the numbers between 0 and 999,999 would be adequate.
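Generating that load is straightforward with List::Util's shuffle (a sketch of the setup; the article's exact test script is not shown here):

use List::Util qw(shuffle);

# One shuffled copy of every integer from 0 through 999,999.
my @data = shuffle(0 .. 999_999);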


Comments


standard deviations..

Nagilum

use Statistics::Descriptive;
my $stat = Statistics::Descriptive::Sparse->new();
$stat->add_data(331806,331766,328056);
print $stat->standard_deviation() . "\n";

-> 2153.60937343181

my @scratch = (331806, 331766, 328056);

sub std_dev_ref_sum {
    my $ar       = shift;
    my $elements = scalar @$ar;
    my $sum      = 0;
    my $sumsq    = 0;
    foreach (@$ar) {
        $sum   += $_;
        $sumsq += ($_ ** 2);
    }
    return sqrt( $sumsq / $elements
                 - ($sum / $elements) ** 2 );
}
print std_dev_ref_sum(\@scratch) . "\n";

-> 1758.41469005422

Someone makes a mistake here..

Difference between standard deviation, knowing full population

anonymous

The difference between the two calculations:

The calculation in the Statistics::Descriptive package assumes that the available data is a sample from the population and does not contain the full population. See: http://en.wikipedia.org/wiki/Standard_deviation#Estimating_population_SD
In the Statistics::Descriptive documentation, this is referenced by the note: "Returns the standard deviation of the data. Division by n-1 is used."

The calculation used in the article assumes that the data represents the full population.
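The two results above differ by exactly that factor: scaling the article-style (population) result by sqrt(n/(n-1)) reproduces the Statistics::Descriptive (sample) result.

# With n = 3 data points, the sample (n-1) estimate is the
# population (n) estimate scaled by sqrt(n / (n - 1)).
my $n = 3;
my $population_sd = 1758.41469005422;
print $population_sd * sqrt($n / ($n - 1)), "\n";   # 2153.60937343181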

Err... No.

Gordan Bobic

"In most cases, I have seen Perl performance that rivals C;"

I would love to see you demonstrate even just one example where this is the case. The gain of _only_ 11.75x of your "C" over Perl in the case you describe is because you used XS for the implementation and not pure C with XS to just glue the two together. For big arrays you'll find it's faster to transcribe the Perl array into a C array of floats, and to do the work in pure C. Perl is usually about two orders of magnitude (100x) slower than C or decently coded C++.

What you say about object-oriented interfaces slowing things down is also completely untrue. The only thing you save by using a procedural rather than an OO implementation is a pointer dereference when you call the std_dev method on the object, which is negligible compared to the calculations inside the function.

Re: Improving Perl Application Performance

Anonymous

Hopefully, in the future, there will be less of a need for this sort of thing... With any luck, Perl6 and Parrot will prove to be faster, and far easier to integrate with C. In fact, the equivalent Parrot routines are already only about 3x slower than the equivalent C program, and both are far faster than Perl5 is today. (code follows)
-- pb

    time N0                  # time
    mul N0, 1048576.0
    mod N0, N0, 2000000000.0
    set I0, N0               # seed
    new P0, .Random          # rng
    set P0, I0               # seed the rng
    set I0, 1000000          # array size
    set I1, I0
    set I2, 100              # loops
    new P1, .SArray
    set P1, I1
SETRND:
    set N0, P0               # random numbers
    mul N0, N0, I0
    dec I1
    set P1[I1], N0
    if I1, SETRND
    time N4
SDLOOP:
    set I1, P1               # array size
    set N3, I1
    div N3, 1, N3            # 1 / array size
    set N1, 0
    set N2, 0
STDDEV:
    dec I1
    set N0, P1[I1]
    add N1, N1, N0           # sum
    mul N0, N0, N0
    add N2, N2, N0           # sumsq
    if I1, STDDEV
    mul N1, N1, N3           # sum / array size
    mul N1, N1, N1           # (squared)
    mul N2, N2, N3           # sumsq / array size
    sub N2, N2, N1           # sumsq/n - (sum/n)^2
    pow N2, N2, 0.5          # sqrt
    dec I2
    if I2, SDLOOP
    time N5
    sub N4, N5, N4
    print N4                 # time elapsed in bench loop
    print "\n"
    end

That is parrot?

Anonymous

That is parrot? That looks like shit. I love Perl, but it's as good as dead with this Perl6 garbage.
