Improving Perl Application Performance

The four basic performance-tuning steps for improving an existing application.

To yield a reliable benchmark, the std_dev function must be run several times; the more runs, the more consistent the benchmark. The number of repetitions can be set explicitly with the Perl Benchmark package (for example, run this benchmark 10,000 times). Alternatively, the package accepts a time duration, in which case the benchmark is repeated as many times as possible within the allotted time. All benchmarks shown in this article use a duration of 10 CPU seconds. Calculating the standard deviation of 1,000,000 data elements for at least 10 seconds produced the result:

12 wallclock secs (10.57 usr + 0.02 sys = 10.59 CPU) @ 0.28/s (n = 3)

This information indicates that the benchmark measurement took 12 wallclock seconds to run. The benchmark tool was able to execute the function 0.28 times per second or, taking the inverse, about 3.5 seconds per iteration. It managed to execute the function only three times (n = 3) in the allotted 10 CPU seconds. Throughout this article, results are measured in seconds per iteration (s/iter); the lower the number, the better the performance. For example, an instantaneous function call would take 0 s/iter, and a really slow one might take 60 s/iter. Now that I have a baseline measurement of std_dev's performance, I can measure the effects of refactoring the function.
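The Benchmark module supports both styles described above. As a minimal sketch (Listing 1's std_dev is assumed, and the @data array name is illustrative):

use Benchmark qw(timethese);

# Fixed count: run the benchmark exactly 10,000 times.
timethese(10_000, { 'std_dev' => sub { std_dev(@data) } });

# Negative count: repeat for at least 10 CPU seconds,
# the setting used for every benchmark in this article.
timethese(-10, { 'std_dev' => sub { std_dev(@data) } });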

Although three samples are enough to identify issues with the std_dev calculation, a more in-depth performance analysis should have more samples.

Refactoring and Verification

After establishing the benchmark shown in Listing 1, I refined the std_dev algorithm in two iterations. The first refinement, called std_dev_ref, was to change the parameter passing from “pass by value” to “pass by reference” in both the std_dev function and the mean function that is called by std_dev. The resulting functions are shown in Listing 2. Theoretically, this will increase the performance of both functions by avoiding copying the entire contents of the data array onto the stack before the call to std_dev and the subsequent call to mean.
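Listing 2 is not reproduced here, but a minimal sketch of the pass-by-reference approach might look like the following (the mean_ref helper name and the variance-by-identity formulation are illustrative, not necessarily the article's exact code):

sub mean_ref {
    my ($ar) = @_;    # $ar is an array reference, not a copy
    my $sum = 0;
    $sum += $_ for @$ar;
    return $sum / @$ar;
}

sub std_dev_ref {
    my ($ar) = @_;
    my $mean        = mean_ref($ar);
    my $mean_square = mean_ref([ map { $_ ** 2 } @$ar ]);
    return sqrt($mean_square - $mean ** 2);
}

# Callers now pass \@data instead of @data, so the
# 1,000,000-element array is never copied onto the stack.
my $sd = std_dev_ref(\@data);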

The second refinement, called std_dev_ref_sum, was to remove the mean function altogether: the sum and the sum of squares are accumulated in a single loop through the entire data set. This refinement, shown in Listing 3, removes at least two iterations over the data. Table 1 contains a summary of the benchmark times.
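Combining everything into a single pass works because the population standard deviation can be computed from two running sums:

\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)^2}

so one loop accumulating the sum and the sum of squares is enough.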

Table 1. Baseline and Two Refinements

Function           s/iter
std_dev            3.53
std_dev_ref        2.93
std_dev_ref_sum    1.37

As hoped, Table 1 shows an incremental improvement with each refinement. Between the std_dev and std_dev_ref functions there is a 20% improvement (3.53/2.93 ≈ 1.20), and between the std_dev and std_dev_ref_sum functions there is a 158% improvement (3.53/1.37 ≈ 2.58). This seems to confirm my expectation that pass by reference is faster than pass by value in Perl. Also, as expected, removing two loops through the data improved the performance of the std_dev_ref_sum function. After both of these refinements, the function can calculate the standard deviation of 1,000,000 items in 1.37 seconds. Although this is considerably better than the original, I still think there is room for improvement.


Comments

standard deviations..

Posted by Nagilum

use Statistics::Descriptive;
my $stat = Statistics::Descriptive::Sparse->new();
$stat->add_data(331806,331766,328056);
print $stat->standard_deviation() . "\n";

-> 2153.60937343181

my @scratch = (331806, 331766, 328056);

sub std_dev_ref_sum {
    my $ar       = shift;
    my $elements = scalar @$ar;
    my $sum      = 0;
    my $sumsq    = 0;
    foreach (@$ar) {
        $sum   += $_;
        $sumsq += ($_ ** 2);
    }
    return sqrt( $sumsq / $elements -
                 (($sum / $elements) ** 2) );
}

print std_dev_ref_sum(\@scratch) . "\n";

-> 1758.41469005422

Someone is making a mistake here...

Difference between standard deviations: sample vs. full population

Posted by anonymous

The difference between the two calculations:

The calculation in the Statistics::Descriptive package assumes that the available data is a sample from the population and does not contain the full population. See: http://en.wikipedia.org/wiki/Standard_deviation#Estimating_population_SD
In the Statistics::Descriptive documentation, this is referenced by the note: "Returns the standard deviation of the data. Division by n-1 is used."

The calculation used in the article assumes that the data represents the full population.
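In formulas, for n data points with mean \bar{x}:

s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}   (sample estimate, Statistics::Descriptive)

\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}   (full population, the article's std_dev_ref_sum)

For the three data points above, these evaluate to 2153.61 and 1758.41 respectively, matching the two results.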

Err... No.

Posted by Gordan Bobic

"In most cases, I have seen Perl performance that rivals C;"

I would love to see you demonstrate even just one example where this is the case. The gain of _only_ 11.75x for your "C" over Perl in the case you describe is because you used XS for the implementation, rather than pure C with XS serving only to glue the two together. For big arrays, you'll find it is faster to transcribe the Perl array into a C array of floats and do the work in pure C. Perl is usually about two orders of magnitude (100x) slower than C or decently coded C++.
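For instance, here is a rough sketch of that idea; the std_dev_c name and the use of Inline::C rather than hand-written XS are mine for illustration:

use strict;
use warnings;
use Inline C => <<'END_C';
#include <math.h>

/* Walk the Perl array once, pulling each element into a C double,
   then do all of the arithmetic in C without re-entering Perl. */
double std_dev_c(AV *data) {
    int    n     = av_len(data) + 1;   /* av_len() returns the top index */
    double sum   = 0.0;
    double sumsq = 0.0;
    int    i;

    for (i = 0; i < n; i++) {
        SV **elem = av_fetch(data, i, 0);
        double x  = elem ? SvNV(*elem) : 0.0;
        sum   += x;
        sumsq += x * x;
    }
    return sqrt(sumsq / n - (sum / n) * (sum / n));
}
END_C

my @data = map { rand 1_000_000 } 1 .. 1_000_000;
print std_dev_c(\@data), "\n";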

What you say about object-oriented interfaces slowing things down is also completely untrue. The only thing you save by using a procedural rather than an OO implementation is a pointer dereference when you call the std_dev method on the object, which is negligible compared to the calculations inside the function.
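That claim is easy to check. A sketch (the My::Stats wrapper is invented purely to isolate the method dispatch):

use strict;
use warnings;
use Benchmark qw(cmpthese);

# A thin OO wrapper so the only difference between the two
# timings is the method dispatch itself.
package My::Stats;
sub new     { return bless {}, shift }
sub std_dev { my $self = shift; return main::std_dev(@_) }

package main;

sub std_dev {
    my ($ar) = @_;
    my ($sum, $sumsq) = (0, 0);
    for (@$ar) { $sum += $_; $sumsq += $_ ** 2 }
    return sqrt($sumsq / @$ar - ($sum / @$ar) ** 2);
}

my @data = map { rand } 1 .. 10_000;
my $obj  = My::Stats->new;

cmpthese(-5, {
    oo         => sub { $obj->std_dev(\@data) },
    procedural => sub { std_dev(\@data) },
});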

Re: Improving Perl Application Performance

Posted by Anonymous

Hopefully, in the future, there will be less of a need for this sort of thing... With any luck, Perl6 and Parrot will prove to be faster, and far easier to integrate with C. In fact, the equivalent Parrot routines are already only about 3x slower than the equivalent C program, and both are far faster than Perl5 is today. (code follows)
-- pb

    time N0                  # time
    mul N0, 1048576.0
    mod N0, N0, 2000000000.0
    set I0, N0               # seed
    new P0, .Random          # rng
    set P0, I0               # seed the rng
    set I0, 1000000          # array size
    set I1, I0
    set I2, 100              # loops
    new P1, .SArray
    set P1, I1
SETRND:
    set N0, P0               # random numbers
    mul N0, N0, I0
    dec I1
    set P1[I1], N0
    if I1, SETRND
    time N4
SDLOOP:
    set I1, P1               # array size
    set N3, I1
    div N3, 1, N3            # 1 / array size
    set N1, 0
    set N2, 0
STDDEV:
    dec I1
    set N0, P1[I1]
    add N1, N1, N0           # sum
    mul N0, N0, N0
    add N2, N2, N0           # sumsq
    if I1, STDDEV
    mul N1, N1, N3           # sum / array size
    mul N1, N1, N1           # (squared)
    mul N2, N2, N3           # sumsq / array size
    sub N2, N2, N1           # variance = sumsq/n - (sum/n)^2
    pow N2, N2, 0.5          # sqrt
    dec I2
    if I2, SDLOOP
    time N5
    sub N4, N5, N4
    print N4                 # time elapsed in bench loop
    print "\n"
    end

That is parrot? That looks

Posted by Anonymous

That is Parrot? That looks like shit. I love Perl, but it's as good as dead with this Perl 6 garbage.
