# Improving Perl Application Performance

A number of open-source Perl statistics packages are available on CPAN, and I hoped to find a standard deviation calculation faster than my best attempt so far. I downloaded a package called Statistics::Descriptive and created a function called std_dev_pm that uses it. The code for this function is shown in Listing 4.

**Listing 4. The std_dev_pm Function**

    sub std_dev_pm {
        my $stat = Statistics::Descriptive::Sparse->new();
        $stat->add_data(@_);
        return $stat->standard_deviation();
    }

Using this function, however, produced a result of 6.80 s/iter, 48% worse than the baseline std_dev function. This is not altogether unexpected, considering that the Statistics::Descriptive package uses an object interface: each calculation includes the overhead of constructing and destroying a Statistics::Descriptive::Sparse object. This is not to say that Statistics::Descriptive is a bad package. It contains a considerable number of statistical calculations written in Perl and is easy to use when the calculations don't have to be fast. For our specific case, however, speed is more important.
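Timings like the s/iter figures quoted here can be produced with Perl's core Benchmark module. The following is a minimal sketch of such a harness; the data set, sizes, and the two pure-Perl implementations shown are stand-ins for the article's actual test setup, not a reproduction of it:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Placeholder data set; the article's real input was collected telemetry.
my @data = map { rand(1000) } 1 .. 10_000;

# Baseline two-pass standard deviation (compute mean, then deviations).
sub std_dev {
    my @a    = @_;
    my $n    = @a;
    my $mean = 0;
    $mean += $_ for @a;
    $mean /= $n;
    my $sumsq = 0;
    $sumsq += ( $_ - $mean )**2 for @a;
    return sqrt( $sumsq / $n );
}

# Single-pass sum / sum-of-squares version, passed by reference.
sub std_dev_ref_sum {
    my $ar = shift;
    my ( $sum, $sumsq ) = ( 0, 0 );
    for (@$ar) { $sum += $_; $sumsq += $_**2 }
    my $n = @$ar;
    return sqrt( $sumsq / $n - ( $sum / $n )**2 );
}

# cmpthese(-1, ...) runs each sub for about one CPU second and prints
# a rate table comparing them.
cmpthese( -1, {
    std_dev         => sub { std_dev(@data) },
    std_dev_ref_sum => sub { std_dev_ref_sum( \@data ) },
} );
```

The negative count tells Benchmark to run each candidate for a minimum amount of CPU time rather than a fixed number of iterations, which gives more stable rates for fast functions.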

All languages have good and bad qualities. Perl, for example, is a good general-purpose language but is not the best for number-crunching calculations. With this in mind, I decided to rewrite the standard deviation function in C to see if it improved performance.

In the case of the data collection application, it would be counterproductive to rewrite the entire project in C; Perl's strengths make it the best language for most of the application. An alternative to rewriting the application is to rewrite only the functions that specifically need the performance improvement, by wrapping a standard deviation function written in C in a Perl module. Wrapping the C function lets us keep the majority of the program in Perl while mixing in C and C++ where appropriate.

Writing a Perl wrapper over an existing C or C++ interface requires XS. XS is a tool distributed with Perl and documented in the perlxs document; you also need some of the information in the perlguts document. Using XS, I created a Perl package called OAFastStats containing a standard deviation function implemented in C. This function, shown in Listing 5, can then be called directly from Perl. For comparison purposes, this standard deviation function will be called std_dev_OAFast.

**Listing 5. The XS Implementation**

    double
    std_dev(sv)
        INPUT:
            SV * sv
        CODE:
            double sum = 0;
            double sumsq = 0;
            double mean = 0;
            /* Dereference the scalar to retrieve the array value */
            AV* data = (AV*)SvRV(sv);
            /* av_len() returns the highest index of the array, not its length */
            I32 arrayLen = av_len(data);
            if (arrayLen > 0) {
                for (I32 i = 0; i <= arrayLen; i++) {
                    /* Fetch the scalar located at i from the array. */
                    SV** pvalue = av_fetch(data, i, 0);
                    /* Dereference the scalar into a numeric value. */
                    double value = SvNV(*pvalue);
                    /* Collect the sum and the sum of squares. */
                    sum += value;
                    sumsq += value * value;
                }
                mean = sum / (arrayLen + 1);
                RETVAL = sqrt((sumsq / (arrayLen + 1)) - (mean * mean));
            } else {
                RETVAL = 0;
            }
        OUTPUT:
            RETVAL
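When building a module like this, it is handy to have a pure-Perl reference against which to sanity-check the wrapped C function. The following sketch (the function name is hypothetical) transcribes the XS body line for line, including its quirk of returning 0 for arrays with fewer than two elements:

```perl
use strict;
use warnings;

# Perl transcription of the XS std_dev body in Listing 5: takes an
# array reference and applies the same sum / sum-of-squares formula.
sub std_dev_xs_equiv {
    my ($sv) = @_;
    my @data = @$sv;
    my $last = $#data;          # mirrors av_len(): highest index
    return 0 unless $last > 0;  # like the C code: 0 or 1 elements -> 0

    my ( $sum, $sumsq ) = ( 0, 0 );
    for my $value (@data) {
        $sum   += $value;
        $sumsq += $value * $value;
    }
    my $n    = $last + 1;
    my $mean = $sum / $n;
    return sqrt( $sumsq / $n - $mean * $mean );
}

# Population SD of this small set is exactly 2.
printf "%.6f\n", std_dev_xs_equiv( [ 2, 4, 4, 4, 5, 5, 7, 9 ] );  # prints 2.000000
```

Comparing this function's output against the compiled OAFastStats version over a range of inputs is a quick way to catch marshalling mistakes in the XS layer.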

The comparison between the baseline standard deviation function and the C function wrapped with XS is presented in Table 2, showing a significant speedup. The C function (std_dev_OAFast) is 1,175% faster than the baseline function (std_dev), and it is 395% faster than the best pure-Perl implementation (std_dev_ref_sum).

## Comments

## standard deviations..

    use Statistics::Descriptive;

    my $stat = Statistics::Descriptive::Sparse->new();
    $stat->add_data(331806, 331766, 328056);
    print $stat->standard_deviation() . "\n";
    # -> 2153.60937343181

    @scratch = (331806, 331766, 328056);

    sub std_dev_ref_sum {
        my $ar = shift;
        my $elements = scalar @$ar;
        my $sum = 0;
        my $sumsq = 0;
        foreach (@$ar) {
            $sum += $_;
            $sumsq += ($_ ** 2);
        }
        return sqrt( $sumsq/$elements -
                     (($sum/$elements) ** 2) );
    }

    print std_dev_ref_sum(\@scratch) . "\n";
    # -> 1758.41469005422

Someone made a mistake here..

## Difference between standard deviation, knowing full population

The difference between the two calculations:

The calculation in the Statistics::Descriptive package assumes that the available data is a sample from the population, not the full population. See: http://en.wikipedia.org/wiki/Standard_deviation#Estimating_population_SD

In the Statistics::Descriptive documentation, this is referenced by the note: "Returns the standard deviation of the data. Division by n-1 is used."

The calculation used in the article assumes that the data represents the full population.

## Err... No.

"In most cases, I have seen Perl performance that rivals C"? I would love to see you demonstrate even just one example where this is the case. The gain of _only_ 11.75x of your "C" over Perl in the case you describe is because you used XS for the implementation itself, not pure C with XS just gluing the two together. For big arrays you'll find it's faster to transcribe the Perl array into a C array of floats and do the work in pure C. Perl is usually about two orders of magnitude (100x) slower than C or decently coded C++.
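On the Perl side, one common way to hand C a contiguous array, as this comment suggests, is to pack the list into a single buffer of native doubles that an XS function can index directly. This is a sketch of just the Perl half of that technique (the XS consumer is not shown, and 8-byte doubles are assumed, which holds on common platforms):

```perl
use strict;
use warnings;

my @data = map { rand(1000) } 1 .. 10_000;

# pack "d*" lays the values out as native doubles in one contiguous
# string, which an XS function could treat as a C double[] without
# per-element av_fetch/SvNV calls.
my $buffer = pack( 'd*', @data );

# Sanity checks: bytes per element, and a lossless round trip.
my $bytes_per_double = length($buffer) / @data;
my @round_trip       = unpack( 'd*', $buffer );

printf "%d values, %d bytes each\n", scalar(@round_trip), $bytes_per_double;
```

The one-time cost of the pack is quickly repaid on large arrays, because the inner C loop then runs over plain memory instead of making Perl API calls.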

What you say about object-oriented interfaces slowing things down is also completely untrue. The only thing you save by using a procedural rather than an OO implementation is a pointer dereference when you call the std_dev method on the object, which is negligible compared to the calculations inside the function.

## Re: Improving Perl Application Performance

Hopefully, in the future, there will be less of a need for this sort of thing... With any luck, Perl6 and Parrot will prove to be faster, and far easier to integrate with C. In fact, the equivalent Parrot routines are already only about 3x slower than the equivalent C program, and both are far faster than Perl5 is today. (code follows)

-- pb

    time N0                 # time
    mul N0, 1048576.0
    mod N0, N0, 2000000000.0
    set I0, N0              # seed
    new P0, .Random         # rng
    set P0, I0              # seed the rng
    set I0, 1000000         # array size
    set I1, I0
    set I2, 100             # loops
    new P1, .SArray
    set P1, I1
    SETRND:
    set N0, P0              # random numbers
    mul N0, N0, I0
    dec I1
    set P1[I1], N0
    if I1, SETRND
    time N4
    SDLOOP:
    set I1, P1              # array size
    set N3, I1
    div N3, 1, N3           # 1 / array size
    set N1, 0
    set N2, 0
    STDDEV:
    dec I1
    set N0, P1[I1]
    add N1, N1, N0          # sum
    mul N0, N0, N0
    add N2, N2, N0          # sumsq
    if I1, STDDEV
    mul N1, N1, N3          # sum / array size
    mul N1, N1, N1          # (squared)
    mul N2, N2, N3          # sumsq / array size
    sub N2, N2, N1          # -
    pow N2, N2, 0.5         # sqrt
    dec I2
    if I2, SDLOOP
    time N5
    sub N4, N5, N4
    print N4                # time elapsed in bench loop
    print "\n"
    end

## That is parrot? That looks

That is Parrot? That looks like shit. I love Perl, but it's as good as dead with this Perl 6 garbage.