These last couple of weeks have seen the release of some great tools to
help Rubyists develop programs following Test-First principles, and I'll
discuss three of them later in this article. But first, some thought-provoking
e-mail and blog posts have appeared recently in the Ruby community, and I'd like
to take a closer look at some of them here.
A favorite recurring event on the ruby-talk mailing list is the
Ruby Quiz, coordinated by
James Gray. He posts a new quiz on Friday of each week, often using
information contributed by other community members. Then, on the
following Thursday, he posts a discussion and summary of quiz solutions.
In mid-February, Gray posted Quiz 67, a series of
meta-programming koans that encouraged the Test-First development of a
builder method for objects and classes. Working through the
various koans and reading the messages generated was a lot of fun for
the larger-than-normal crowd of participants.
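For flavor, here is a rough sketch of the kind of builder method the koans drive you toward. The `attribute` name, the default-value handling and the `?` query method are my assumptions about the exercise, not a transcript of any submitted solution:

```ruby
# A sketch (my assumptions, not an actual quiz solution) of a class-level
# `attribute` builder that meta-programs a getter, setter and query method.
class Module
  def attribute(name, default = nil)
    define_method(name) do
      if instance_variable_defined?("@#{name}")
        instance_variable_get("@#{name}")
      else
        default
      end
    end
    define_method("#{name}=") do |value|
      instance_variable_set("@#{name}", value)
    end
    define_method("#{name}?") { !!send(name) }
  end
end

class Widget
  attribute :color, 'red'
end

w = Widget.new
w.color            # default applies until a value is assigned
w.color = 'blue'
w.color?           # query form reflects the current value
```

Because the helper is defined on Module, it works for classes and modules alike, which is exactly the sort of territory the koans explore.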
Zenspider has announced multitest, a forthcoming
addition to his ZenTest tools. If you've ever
been caught by a subtle change in Ruby from one version to another, this
looks like a godsend. multitest can run your test suite against
multiple versions of Ruby on each invocation, helping you catch problems
you ordinarily would not see.
One upcoming event that should be on your radar is Canada on Rails, the first
conference devoted to Ruby on Rails. It's being held in Vancouver, BC,
April 13 and 14. Featured speakers include David Heinemeier Hansson,
David Black and Geoffrey Grosenbach. It should be a great opportunity
to get more involved in the Ruby on Rails community.
Finally, a number of Ruby and Ruby on Rails books are fairly close
to publication. I've been watching some of them through early-access
programs, and I'm especially excited about two of them,
Integration with Ruby and
Rails. Both of them seem destined to be important
parts of any Rubyist's library.
What's in Your Toolbox?
This column's in-depth coverage focuses on tools for Test-First development. If
you're unfamiliar with Test-First development and would like to learn
more, here are some good resources:
I like writing code Test-First, because I feel more confident about
what I've written. Using Test-First, I can complete a working
implementation quickly and can refactor to a better design easily.
It doesn't hurt that Ruby provides nice tools for working according to
Test-First principles or that some good Test-First tools are available.
My Test-First toolbox includes Test::Unit, rake, rcov, unit_diff and
autotest. The first two should be pretty familiar to most Ruby hackers. If you
haven't gotten to know them yet, Test::Unit is well documented in the
Pickaxe book and at www.ruby-doc.org.
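If you haven't seen it before, a Test::Unit test case is just a class. Here is a minimal sketch; the `Hostname` class and its `valid?` method are invented for illustration:

```ruby
require 'test/unit'

# Invented implementation under test. In a real project this would live
# in lib/hostname.rb and be require'd from the test file.
class Hostname
  def self.valid?(name)
    !name.empty? && name.length <= 255 && name !~ /[^a-zA-Z0-9.-]/
  end
end

# Each test_* method is run automatically when the file is executed.
class TestHostname < Test::Unit::TestCase
  def test_rejects_empty_names
    assert !Hostname.valid?('')
  end

  def test_accepts_simple_names
    assert Hostname.valid?('example.org')
  end
end
```

Running the file with `ruby test/test_hostname.rb` executes every test and prints a summary of passes and failures.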
You can read more about rake in an IBM developerWorks article and in an
article by Martin Fowler.
If you're not already using Test::Unit and rake, take some
time to learn about using these excellent tools. Test::Unit is
distributed with Ruby, and rake is available as a rubygem from RubyForge.
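To show how little it takes to wire the two together, here is a minimal Rakefile sketch; the lib/ and test/ layout is an assumed convention:

```ruby
# Rakefile -- minimal sketch: `rake test` runs everything under test/.
require 'rake/testtask'

Rake::TestTask.new do |t|
  t.libs << 'lib'                        # put lib/ on the load path
  t.test_files = FileList['test/test_*.rb']
  t.verbose = true
end

task :default => :test                   # a bare `rake` runs the tests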
As for the other tools in my toolbox, rcov is a code coverage analysis
tool. When run against a unit test suite, it generates a call coverage
analysis of the implementation code. rcov is available from eigenclass.org. You
can generate HTML output--take a look at an example here--or ASCII
output, a truncated form of which is shown below. rcov works fairly
fast; running a program with rcov is only two or three times as slow
as running the program normally. In addition, rcov produces nice,
readable output:

class MockDB              |  2
                          |  0
  def exec(query, &block) | 11
    case query.split(' ') |  5
    when 'zero'           |  5
      num = 0             |  1
    when 'one'            |  4
      num = 1             |  1
    else                  |  0
      num = 2             |  3
    end                   |  0
                          |  0
    yield [num]           |  5
                          |  0
  end                     |  0
                          |  0
end                       |  0
You run rcov against a test suite like this:
$ rcov test/test_hostname
Additional command-line options of interest include:
- -t: generate plain-text output
- -T: generate fancy-text output
- -p: generate profiling output
- -x: exclude files; this one takes a comma-separated list of patterns
- --no-html: don't create HTML files
If you choose to generate text output, it may be worthwhile to redirect
the output to a file. Or, you might want to use
tee to dump it to a file--rcov output can
get pretty long.
Although coverage tools normally are used to show where you need to
write more tests, I recently had an experience in which rcov led me to a
refactoring. I'd been writing hostname checking code Test-First for a
couple of hours and had implemented a number of checks. I decided to
take a break from writing code and see how good my coverage was. I
expected it to be at 100%, but sometimes it's nice to see that proven.
So, I was shocked to see a red band at the end of a green bar. Something
wasn't being tested!
I walked through the code and my assertions. I could see where I was
testing the failure case, but rcov didn't believe me. It turned out that
a check I had implemented after the uncovered one duplicated the test,
so my failures were caught before I ever got there. I needed either to
split my checking method or eliminate the dead code. Thanks to
rcov, my code ended up being a method smaller.
The current release of rcov does have a small bug that you'll want to
watch out for. It doesn't look for line continuations after "and" or
"or" statements. This is a one-line fix to the rcov code, and Mauricio
will have it in the next release.
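For reference, the construct that trips it up looks like this; the method itself is invented for illustration. rcov can report the continuation line as uncovered even when it runs:

```ruby
# Hypothetical check: the second line is a continuation after a trailing
# `and`, which the current rcov release fails to recognize.
def short_and_clean?(name)
  name.length <= 255 and
    name !~ /[^a-zA-Z0-9.-]/
end
```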
To return to the other tools in my toolbox, autotest and unit_diff are distributed with
ZenTest, which is available as a rubygem. They both are meant to help ease continuous
testing into your routine.
With your code in ./lib and your tests in ./test, autotest slurps up
the test files, runs them and displays the
results. It builds a map between the test and the implementation files,
classes and methods. Each time you save a file in the map or create a
file that gets put into the list, it reruns the tests. Any time a
test fails, autotest enters a tighter loop, running only the failing
tests. This allows you to catch the failing code almost immediately and
focus on correcting it.
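The mapping idea can be sketched in a few lines. This is a toy illustration of the concept, not ZenTest's actual code, and the lib/-to-test/ naming convention is assumed:

```ruby
# Toy sketch of autotest's core idea: map a changed file to its test
# file, then rerun just that test. Not the real ZenTest implementation.
def test_file_for(path)
  return path if path =~ %r{(^|/)test/}   # test files map to themselves
  "test/test_#{File.basename(path)}"      # lib/foo.rb -> test/test_foo.rb
end

def changed?(path, last_run)
  File.exist?(path) && File.mtime(path) > last_run
end

# A real loop would sleep briefly, collect the files for which changed?
# is true, shell out to `ruby <test_file>` for each, and narrow to only
# the failing tests after a red run.
```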
It took a couple of hours of working with autotest before I
got into the rhythm of it. The first things I did with it were
small tasks--I updated some tests for Ruby 1.8.4--so autotest didn't have
time to run through its whole sleep, scan-for-changes, run-the-test-suite
cycle. autotest has a way of dealing with this, however;
pressing Ctrl-c once reruns the tests immediately. Pressing the key
combination twice terminates autotest.
I also found that autotest doesn't like it when the tests fail to run.
When I'm writing code Test-First, at the beginning of a development
cycle, I often require an implementation file that doesn't exist.
During regular development, I'm liable to include a typo or syntax
error. Any of these result in autotest printing the Ruby-generated
error message and then a line that says:
# Test::Unit died, you did a really bad thing, retrying in 10
It's not the best thing for your ego, but it is a descriptive way to throw a
red flag and keep you on track.
One last failing: autotest also doesn't like Emacs-generated autosave
files and crashes anytime it finds one. A simple patch has been posted to
the ZenTest RubyForge project. A new release is on the horizon, so
this problem should go away soon.
unit_diff is another small program that quickly becomes invaluable. It
pushes the "expected" and "received" portions of the output from a
failed assertion through diff, thereby reducing large chunks of text to
something much more manageable. Eric Hodel, unit_diff's author, says
the program was written to help with ParseTree development, where
any failed assertion might produce multiple screens of dense, hard-to-read
output. unit_diff turns this nightmare into two or three lines that
show the exact error. In addition, I've found it to be invaluable in my
own day-to-day work.
unit_diff is easy to run, too. Simply pipe your normal tests through it:
$ ruby test/test_hostname | unit_diff
Any errors it finds are captured and displayed appropriately.
Hopefully, you've enjoyed this little walk through some Ruby Test-First
development tools. I'll be back soon to talk about more Ruby topics.
If you'd like to see me cover specific topics, please feel free to
leave a comment here.