At the Forge - Profiling Rails Applications

Wondering if your Rails application is running at peak efficiency? Before optimizing, profile your application to see which parts are slow.
Request Profiler

The production log profiler shows which actions require attention, but it doesn't tell you why a particular action might be giving you trouble. For that, you need to dive deeper into the application, profiling not a set of actions, but one particular action.

This is possible thanks to a built-in script that comes with Rails, script/performance/request. This script follows a set of instructions written in a (presumably short) Ruby program, using a set of commands and methods similar to those available in integration tests.

In other words, you use integration-test syntax to describe a short sequence of one or more actions and run this program via the request profiler. Then, the request profiler produces two output files that describe what was going on behind the scenes as those requests were serviced. This information can help you improve the performance of this particular action.

In order for this script to work, first install the ruby-prof gem:

gem install --remote ruby-prof

Once that is installed, you need to create a simple integration test script. This script doesn't need to be wrapped in the same object that the integration tests themselves use. Instead, simply create a file named test.rb, and put it somewhere on the filesystem. I created a directory named test/performance and put it there, with the one-line contents as follows:

get('/')
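
If you want the profiler to exercise a more realistic sequence of requests, the same file can contain several integration-test calls in a row. The sketch below is purely illustrative; the paths, parameters and login credentials are hypothetical placeholders that you would replace with ones from your own application:

# A hypothetical multi-request sequence, written with the same
# integration-test methods (get, post, follow_redirect!).
get('/')
get('/articles')
post('/session', :login => 'someuser', :password => 'secret')
follow_redirect!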

Notice that I'm using URLs here, rather than names of controllers and actions. Finally, with this in place, invoke the profiler:

script/performance/request -n 10 test/performance/test.rb

Now you should see the program telling you that it's warming up and then reporting as it goes through each of the iterations you specified. In the above example, the -n 10 option indicates the number of times the script should be invoked; by default, it's 100.

Note that the output files are put in the test directory (to which you might not have write access by default). And, indeed, the output files are quite useful, but they can be confusing the first time you look at them.

The first output file, profile-output.txt, is (as the suffix implies) a text file that shows how much time was spent in each method, both as a time measure and as a percentage of the total run time. Consider the following:

%self     total     self     wait    child    calls  name
13.74     58.35    38.13     0.00    20.22   608720  Buffer#read

This means there were 608,720 calls to Buffer#read during the test, which took a total of 38.13 seconds, or 13.74% of the execution time. Because this is a built-in method, you can't optimize it. However, you can try to reduce the number of times it is called, so that it consumes less of the total run time.

The question is, how do we know which methods are calling Buffer#read? Perhaps reading from buffers is an inevitable part of a Web application, and we simply need to accept that.

If you look at the second file, profile-graph.html, you see a nicely linked description of which methods called which other methods and how long each took. Each box represents the analysis of one method, and the method being analyzed is printed in bold.

All of the methods above the boldfaced method name are parent methods (that is, methods that called the one in question); methods below it are child methods (that is, methods called by the method being analyzed). By looking at which methods called Buffer#read, you can see which of them (if any) need to be optimized or invoked fewer times. By going back and forth between methods, their parents and the source code, you can cut down on a great deal of waste, making your sites more efficient than before.
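
For example, suppose the call graph shows that most of the invocations of Buffer#read come from the database adapter, triggered by a separate query for every associated record, or from an expensive calculation that is repeated for the same object. The following sketch shows two common remedies; the Article model, its comments association and its body column are hypothetical placeholders:

# Two hypothetical ways to reduce how often a hot spot is called.

# 1. Eager loading: fetch associated records in a single query,
#    rather than issuing one query (and one round trip) per record.
articles = Article.find(:all, :include => :comments)

# 2. Memoization: compute an expensive value once per object and
#    reuse it on later calls within the same request.
class Article < ActiveRecord::Base
  def word_count
    @word_count ||= body.split.size
  end
end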

Conclusion

This month, we looked at two basic profiling tools that programmers can use to identify performance problems in Rails-based Web sites. There are, of course, other tools we can use, but the fact that these are so nicely integrated into Rails makes us all the more likely to use them. With constant monitoring and tweaking, we can make our sites run faster without having to resort to buying additional servers.

Reuven M. Lerner, a longtime Web/database developer and consultant, is a PhD candidate in learning sciences at Northwestern University, studying on-line learning communities. He recently returned (with his wife and three children) to their home in Modi'in, Israel, after four years in the Chicago area.


Comments


Another tool to analyze your production logs

Willem van Bergen writes:

You may also want to have a look at request-log-analyzer, a tool I wrote. This command line application parses Rails production log files to produce a performance report. It also works for Merb log files, Rails development log files and can be easily extended to other log file formats. See the project's wiki on Github for more info.
