At the Forge - Checking Your Ruby Code with metric_fu

By combining automated testing with automated code analysis, you can make your Ruby code easier to test and easier to maintain.
Code Coverage

Perhaps the best-known member of the metric_fu family is rcov, the Ruby code-coverage checker, written by Mauricio Fernandez. rcov invokes all your automated tests and then produces a report indicating which lines of your source code files were untouched by those tests. This allows you to see precisely which lines of each file have been tested, letting you concentrate on those paths that are highlighted in red (that is, untested), rather than writing additional tests for code that already has been tested.
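If you want to try rcov on its own, outside of metric_fu, it takes your test files as command-line arguments, runs them and (by default) writes its HTML report to a coverage directory. Assuming a conventional test directory, the invocation might look like this:

rcov test/*_test.rb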

rcov, as invoked by metric_fu, produces two basic types of HTML output. The first provides an overview of all the files in your project. This output, with red and green bar graphs, shows the percentage of each file that has been covered by your tests. If any of your files has a graph whose bar is partly red, you know on which files to concentrate your initial effort.
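metric_fu itself is normally driven from Rake. Here is a minimal Rakefile sketch, assuming the metric_fu gem is installed; requiring it defines the gem's Rake tasks, so that rake metrics:all runs rcov, Flog and the other analyzers and writes their HTML reports (check your installed version for the exact task names and output paths):

# Rakefile (sketch) -- assumes the metric_fu gem is installed;
# requiring it adds the metrics:* Rake tasks
require 'rubygems'
require 'metric_fu'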

But, once you have decided to improve the test coverage of a particular file, which lines do you address? That's where rcov's individual file output comes in handy. It shows the file's source code, with each line in either green (covered by your tests) or red (not covered). If you have any red lines, the idea is to add tests that force those lines to be executed the next time around. And, of course, if a red line turns out to be code that is no longer needed, rcov has helped you refactor, making your code leaner and meaner. Reading rcov's output is thus pretty simple: any red is an invitation either to write more tests or to recognize that the code is no longer in use and can be removed.
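Suppose, for example, that rcov paints the else branch of some method in red. A short test that exercises that branch, sketched here with Test::Unit and purely hypothetical class and method names, is enough to turn the line green on the next run:

require 'test/unit'

# Hypothetical class whose else branch rcov reported as untested
class Discount
  def rate(member)
    if member
      0.10
    else
      0.0     # previously red: no test ever reached this line
    end
  end
end

class DiscountTest < Test::Unit::TestCase
  def test_non_members_get_no_discount
    assert_equal 0.0, Discount.new.rate(false)
  end
end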

One of the main reasons for testing your code is that it gives you some peace of mind when you make further changes. So, although you can refactor and otherwise change your code without 100% test coverage, it's always possible something will slip through the cracks. For that reason, rcov should be your first priority when using metric_fu. Once your code coverage is high enough to ensure that new problems and changes will be detected, you can try to make your code better, without changing what it does.

Flog

Another tool that comes with metric_fu is Flog, written by Ryan Davis. Flog produces what it calls a “pain report”, identifying code that it believes to be “tortured”—in such pain that you really should rescue it. Even if you disagree with some of its results, looking at Flog's output often can provide an interesting perspective on your code's complexity. It measures variable assignments, code branches (that is, if-then and case-when statements) and calls to other code, assigning a score to each of those. The total Flog score is the sum of the individual items that Flog finds.
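To make that concrete, here is a tiny, purely hypothetical method, annotated with the categories that Flog counts. The exact weights vary from construct to construct, but every assignment, branch and call in the method contributes something to its score:

def normalize(params)
  name = params[:name]    # an assignment, plus a call to []
  if name.nil?            # a branch, plus a call to nil?
    name = 'anonymous'    # another assignment
  end
  name.strip.downcase     # two more calls
end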

As the Flog home page says, “the higher the score, the harder it is to test”. Even if you're not worried about testing, you certainly should consider other programmers who might work on your project. Complex code is hard to maintain, and maintaining software is (in my view) a bigger problem than writing it. So, by looking at Flog's output, you can get a sense of how hard your code will be for someone else to understand.

metric_fu provides an HTML version of Flog's output, but I demonstrate Flog here from the command line, where it can be run as:

flog *.rb

This produces a simple set of outputs, such as the following, which I got for a small project I recently worked on and didn't test or analyze much:

181.0: flog total
 60.3: flog/method average

 72.5: UploadController#advertiser_file_action
 70.1: UploadController#whitepage_listing_file_action

This would seem to indicate that my upload controller has two different methods, both of which have a relatively high level of complexity. I can get further information about these two methods by invoking Flog with the --details command-line argument. That gives me the following output, which I have truncated somewhat:

~/Consulting/Modiinfo/modiinfo/app/controllers$ flog --details upload_controller.rb
 181.0: flog total
  60.3: flog/method average

  72.5: UploadController#advertiser_file_action
  40.6: assignment
  17.3: branch
   4.8: split
   4.0: blank?
   3.2: strip
   3.2: params
   3.1: +
   3.0: map
   2.8: []
   2.1: downcase

In other words, a large proportion of Flog's high score results from the large number of variable assignments in UploadController#advertiser_file_action. And sure enough, that method contains a bunch of variable assignments. For example, I wanted to display the number of uploaded records to the end user and, thus, had the following code, assigning values to instance variables:

if advertiser.save
  @number_of_successes = @number_of_successes + 1
else
  @number_of_failures = @number_of_failures + 1
  @error_messages[index] = advertiser.errors
  next
end

I find this code easy to read and maintain, but Flog thinks otherwise, preferring a more functional style of programming, with methods chained together. This is one case in which I'll take Flog's assertions and scores into consideration, but I'll apply my own judgment regarding the complexity of my code and whether it needs to be changed or updated.
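For what it's worth, a more functional restructuring of that counting logic might look something like the following sketch, which replaces the per-record if/else and counter increments with a single partition of the saved and failed records. (The advertisers collection is hypothetical here, and @error_messages becomes an array of error objects rather than a hash keyed by row index.)

saved, failed = advertisers.partition { |advertiser| advertiser.save }

@number_of_successes = saved.size
@number_of_failures  = failed.size
@error_messages      = failed.map { |advertiser| advertiser.errors }

Whether that really is easier to read than the explicit if/else is exactly the sort of judgment call I prefer to make myself, rather than delegating it entirely to Flog.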
