The past couple of weeks have been huge in the Ruby world. A number of
major releases of popular Ruby packages were made, and several
interesting posts were made to blogs and the Ruby mailing list. Let's
take a quick look at the two weeks that were.
One package to keep an eye on is Mongrel, a much faster replacement for
WEBrick. Zed Shaw, who's also working on SCGI as an FCGI replacement,
seems to be picking up steam with Mongrel. He's added sendfile support
to an already zippy little Web server, along with stability improvements
and other bug fixes. It also looks as though IOWA might be supported by
Mongrel in the near future, adding to the already supported Nitro and
Rails. For more information about Mongrel, head on over to its home page.
Another big release to consider is JRuby, a Ruby implementation in Java.
This version features a working irb implementation and several other
enhancements. The JRuby team says that it's getting close to having
Rails run on the platform as well. It will be interesting to see how
this project affects YARV, metaruby and Ruby itself. JRuby's
home page is the
place to go for more information about the project.
Last time, I wrote about ZenTest. Since then, zenspider and
Eric Hodel have released
version 3.1.0 of ZenTest. It fixes the bugs I mentioned in my last
article and also adds multiruby and many other new features, including
automatic syncing with your SCM repository under several
different SCM systems. If you're already using ZenTest, go grab this
update. If you're not using it, why not? More information can be found
on the project's home page.
The biggest news from the past two weeks, however, has to be the release of
Rails 1.1. The Rails development team has pushed out over 500 changes and enhancements. Some
highlights include RJS (JavaScript) templates, ActiveRecord++, respond_to and integration tests. A number
of people have mentioned concerns about backward compatibility, but
Rails 1.1 looks to be a solid release, offering a lot of reasons to follow the upgrade
path. More details about Rails are available on its
Web site, and
specific information about this release can be found in the announcement.
In keeping with the Rails theme, two blog posts worth looking at came
from Eric Hodel, a programmer and systems administrator at the
Robot Co-op, creators of
43 Things and other Rails/Web 2.0 goodness. Eric posted
a review of the
design behind 43 Things and answered a ton of questions in the comments. He
also wrote about the software
behind the Web site. If you'd like a peek behind the curtains at a
successful, fairly high-traffic Rails site, go take a look.
Mauricio Fernandez also made some great posts recently. One that
really caught my eye discussed using some code analysis metrics to
estimate the value of Ruby and the libraries that people have developed for
it. Mauricio sets the value of Ruby at $20 million, with another
$100 million for additional libraries. You can read his article on his blog.
One thing that really stood out to me was the number of high-quality,
well-designed packages that showed a lower value, such as RubyInline. It seems that simple,
expressive code doesn't stand up well in traditional analysis. Maybe
there's room to look at how to better evaluate code going forward.
Last time around, I mentioned Canada on Rails. This time, I'd like to
touch on another recently announced gathering,
the St. Louis
CodeCamp, to be held May 6-7. The Web site and registration system
were developed in Rails by David Holsclaw of stlouis.rb. If you're going
to be anywhere near St. Louis in early May, you might want to get involved.
Hal Fulton announced his recently published article on metaprogramming with Ruby.
The article got some great reviews on the mailing list. Go check it out. If you're interested
in metaprogramming and other "Higher Order" programming constructs, you
might want to take a look at
James Gray's Shades of
Gray blog. The bulk of the content there is
a running commentary on Gray's reading of
Higher-Order Perl, written by Mark Jason Dominus.
On comp.lang.ruby (gatewayed to/from the ruby-talk mailing list and at
least one forum), a post pointed to a discussion
about the forthcoming plethora of Ruby and Ruby on Rails books. Between
formally announced and informally announced books, it looks
as though we'll soon be carrying around a heavy load of books. Fortunately, many of
these books are, or will be, available as PDFs.
Adventures in Ruby Programming
Because the community-related information went a bit long this week, I'm
going to shoot for a slight change of pace. Instead of talking about a
tool, I thought I'd tell you about the adventure that Sean Carley and I
have been having while working on our "checkr" program--think Ruby Lint.
At this point we're working Test-First through a spike to learn more
about ParseTree, which promises to
be the backbone of our code analysis. We live a couple of thousand
miles apart and can't really pair-program, so we decided to try "ping
pong programming". Sean writes a unit test, and I write the code
to make it pass. Once I've got a passing test, I refactor and write a
failing test of my own. Then, the code goes back to Sean to repeat the cycle. We
spend a lot of time using IM to communicate as we're writing code and
tests, exploring ideas, asking questions and giving advice.
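To make the rhythm concrete, here's one hypothetical round in miniature. None of this is checkr's real code; the regexp stand-in exists only to show the test-then-implement handoff (the real tool works from a parse tree, not pattern matching).

```ruby
# Round 1: the kind of test Sean might send over (framework-free here
# for brevity; we actually use Test::Unit). Everything below is
# illustrative, not checkr's actual API.
def test_detects_assignment_in_if(checker)
  raise "expected a warning" unless checker.call("if a = 2 then b end")
  raise "expected no warning" if checker.call("if a == 2 then b end")
end

# Round 2: the smallest code that makes it pass. A real checker walks
# a parse tree; this regexp stand-in only shows the rhythm of the game.
naive_checker = lambda do |code|
  !!(code =~ /\b(?:if|unless)\s+\w+\s*=(?!=)/)
end

test_detects_assignment_in_if(naive_checker)  # passes silently
```

The next round would be a failing test that the regexp can't satisfy, which is exactly the pressure that pushes the implementation toward real parsing.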
One thing that's really been interesting is how much this helps us focus
on the basics--taking small bites, practicing YAGNI (Ya Ain't Gonna Need
It) and letting the tests guide our development. For example, when we
first sat down to write code, we knew that we wanted to use
ParseTree to do the heavy lifting in code analysis. It ended up taking a
couple of tests before we got to the point that we were using it. Then,
in only one more test, we'd dug a level deeper and found we needed
SexpProcessor, a component within ParseTree, to start working with the code.
By the way, working with ParseTree and SexpProcessor has given me a new
appreciation for unit_diff. Let me show you why. Here's a silly little
method:

def foo
  if a == 2
    b = 2
  end
end
ParseTree turns that code into a "sexp" (an S-expression):

[[:class, :Example, :Object,
  [:defn, :example,
    [:scope,
      [:block, [:args],
        [:defn, :foo,
          [:scope,
            [:block, [:args],
              [:if,
                [:call, [:vcall, :a], :==, [:array, [:lit, 2]]],
                [:lasgn, :b, [:lit, 2]],
                nil]]]]]]]]]
Once you start working with bigger expressions, trying to find the
difference between the expected and actual values from a unit test can
make your head explode.
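To see the kind of help a diff tool provides, here's a small sketch (our own illustration, not unit_diff itself) that walks two nested sexps and reports where they first diverge:

```ruby
# A sketch (ours, not unit_diff) that walks two sexp-style nested
# arrays and reports the index path to the first difference.
def first_difference(expected, actual, path = [])
  return nil if expected == actual
  if expected.is_a?(Array) && actual.is_a?(Array)
    [expected.length, actual.length].max.times do |i|
      diff = first_difference(expected[i], actual[i], path + [i])
      return diff if diff
    end
  end
  # Non-array leaves (or mismatched shapes) are the point of divergence.
  { path: path, expected: expected, actual: actual }
end

expected = [:if, [:call, [:vcall, :a], :==, [:array, [:lit, 2]]],
            [:lasgn, :b, [:lit, 2]], nil]
actual   = [:if, [:call, [:vcall, :a], :==, [:array, [:lit, 3]]],
            [:lasgn, :b, [:lit, 2]], nil]

p first_difference(expected, actual)
# the path [1, 3, 1, 1] pinpoints the buried 2-versus-3 mismatch
```

Eyeballing two screen-filling sexps for a single changed literal is exactly the head-exploding exercise that a focused diff spares you.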
Putting the pieces together has helped Sean and me understand our problem
domain a lot better, which means that checkr will be a better tool.
We're writing the code to throw away, but the quality of the tools has
made our disposable code pretty nice, too. Here's an example; it's the
method we use to look for assignments instead of comparisons in the test
clause of "if" or "unless" statements:
class ParseTest < SexpProcessor
  def assignment_in_conditional?(exp)
    @saw_lasgn = false
    test_result = process(exp.shift)
    raise CheckRAssignmentInConditional if @saw_lasgn
    test_result
  end

  def process_lasgn(exp)
    @saw_lasgn = true
    s(exp.shift, exp.shift, process(exp.shift))
  end

  def process_if(exp)
    s(exp.shift, assignment_in_conditional?(exp),
      process(exp.shift), process(exp.shift))
  end
end
The assignment_in_conditional? and process_lasgn methods are used by a
number of other methods.
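If you haven't used SexpProcessor, the core idea is dispatch on the first element of each sexp: process() routes a node to a process_<type> method when one is defined. Here's a stripped-down sketch of that idea; it's our illustration, not the real SexpProcessor implementation.

```ruby
# Minimal sketch of SexpProcessor-style dispatch (illustrative only).
class MiniSexpProcessor
  def process(exp)
    return exp unless exp.is_a?(Array)
    handler = "process_#{exp.first}"
    if respond_to?(handler)
      send(handler, exp.dup)   # a node type we care about
    else
      exp.map { |e| process(e) }  # otherwise, just recurse
    end
  end
end

# Flags local assignments the same way our @saw_lasgn trick does.
class AssignmentFinder < MiniSexpProcessor
  attr_reader :saw_lasgn

  def initialize
    @saw_lasgn = false
  end

  def process_lasgn(exp)
    @saw_lasgn = true
    exp.map { |e| process(e) }
  end
end

finder = AssignmentFinder.new
finder.process([:if, [:lasgn, :a, [:lit, 2]], [:lit, 1], nil])
finder.saw_lasgn  # => true
```

The real SexpProcessor adds plenty of bookkeeping we've glossed over, but the dispatch-by-node-type shape is the part that makes methods such as process_if and process_lasgn feel natural to write.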
Sometimes we find ourselves moving back and forth over the same code.
At one point, I'd finished writing the code to handle
the first and simplest test for a while loop. I immediately smelled
code duplication, so I performed an "extract method" refactoring
on one of two parallel methods in order to share the code. I then
wrote a test and checked everything in. Sean picked up the code, played
with my test for five or ten minutes and realized that he needed to undo my
refactoring. He did, made the test pass and then saw that he could do
the same refactoring--although a bit less radically than I had.
We also have made some interesting discoveries. For example, once we'd
finished doing some work with if statements, I added a test for a
parallel unless statement. It turns out that unless statements are
converted into if statements under the covers, and we ended up getting
that functionality for free.
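In sexp terms, the equivalence looks like this (the helper names are ours, purely for illustration):

```ruby
# Sketch of the equivalence we stumbled onto: ParseTree represents
# "unless cond then body end" as an :if sexp whose then-branch is nil,
# so any handler written for :if covers :unless for free.
def unless_sexp(cond, body)
  # unless cond; body; end  ==  if cond; (nothing); else; body; end
  [:if, cond, nil, body]
end

def if_sexp(cond, body)
  [:if, cond, body, nil]
end

# Both forms produce an :if node; only the branch positions differ.
p unless_sexp([:vcall, :a], [:lit, 1]).first  # => :if
p if_sexp([:vcall, :a], [:lit, 1]).first      # => :if
```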
In addition, it wasn't only our code that we learned about. We discovered a bug in
the most recent gem of ParseTree, too. This meant we had to spend a day or two
on an educational detour through the internals of ParseTree and Ruby
itself. But that's a story for another time and place.