At the Forge - Assessing Ruby on Rails
Several years ago, at the height of the dot-com boom, the phone was ringing off the hook with consulting work. My employees and I scrambled to fulfill all of the projects people were throwing at us. In the midst of this boom, it became obvious that nearly every project had similar characteristics, and that we were spending time (and clients' money) re-inventing the wheel with each new project. We began to look for ways in which we could reuse code, or at least techniques, across different projects. This, we assumed, would not only make us a more competitive business, but would also make our day-to-day work more interesting. It is, after all, more interesting to work on the new and different elements of each project than to create yet another user-group permission system.
We soon abandoned our plans for a common code system, in part because other developers had not only solved many of these problems, but had released their solutions under an open-source license. And so, over the years, we did a variety of projects using Web development frameworks, many of which I have described in earlier editions of At the Forge.
But as anyone who has worked with such frameworks has learned, there is no free lunch. Nearly every framework tries to shoehorn you into doing things in a particular way, making its own set of trade-offs that might (or might not) fit the way you want to develop solutions. I have used a number of these frameworks over the years, and although I enjoyed various parts of them, I didn't feel like any of them allowed me to express myself the way I wanted.
I am thus one of many developers who have become increasingly excited about a relative newcomer to the arena, known as Ruby on Rails. As we have seen during the last few months, Rails is a framework that provides a number of different functions, including an object-relational mapper, a model-view-controller (MVC) approach to design, an integrated templating system and built-in support for testing.
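The heart of that object-relational mapper, ActiveRecord, is "convention over configuration": the class name alone determines the database table it maps to. Here is a minimal sketch of that convention in plain Ruby (this is an illustration of the idea, not ActiveRecord itself, and the Product and BlogPost classes are hypothetical):

```ruby
# Sketch of the naming convention behind an ActiveRecord-style ORM:
# a model class derives its table name from its own class name,
# so no mapping configuration is required.
class ModelBase
  # "BlogPost" -> "blog_posts": split CamelCase, downcase, pluralize.
  def self.table_name
    name
      .gsub(/([a-z])([A-Z])/, '\1_\2')  # insert "_" between CamelCase words
      .downcase + "s"                   # naive pluralization for the sketch
  end
end

class Product < ModelBase; end
class BlogPost < ModelBase; end

puts Product.table_name   # => "products"
puts BlogPost.table_name  # => "blog_posts"
```

The real ActiveRecord does considerably more (irregular plurals, column introspection, associations), but the principle is the same: sensible defaults derived from names, rather than configuration files.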
Rails has become extremely popular in the year or so since it was first released, and though it is still rough around the edges, the momentum is undeniable. Moreover, Rails has now become so popular that other frameworks are springing up, claiming to be Rails-like, or to offer features that are “just like Rails” or “better than Rails”.
Why are so many people excited about Rails? More importantly, should you consider using it for your next Web/database project? Finally, what trade-offs does it force developers to make, and how might these trade-offs affect your decisions?
I have been developing Web applications since the days when the phrase Web application described CGI programs that sent e-mail, rather than a billion-dollar industry. Every framework I have used has brought something to the table, and has made it easier for me to develop applications in one or more ways. At the same time, each frustrated me with the trade-offs I was expected to make in order to work with the system.
For example, Mason was one of the first Web development frameworks that I worked with, and it spoiled me with its flexibility and ease of use. Mason is written in Perl, and it is designed to work most easily with mod_perl and Apache. Installation and configuration have become trivially simple over the years, assuming you already have a working copy of Apache and mod_perl on your server. Also, Mason integrates beautifully with the many Perl modules available on CPAN, and with the mature and robust development tools the Perl community has created over the years. When I have to create an on-line system with Perl, Mason is definitely the first tool I turn to.
But what has always frustrated me with Mason is the small number of components that come with the system. Sure, I could create a system for handling user accounts, and even for permissions and groups. But did I really want to write such code from scratch for every project I worked on? Moreover, although Mason's templates are highly expressive for developers, they include a great deal of Perl code and unusual constructs that can scare or surprise nontechnical developers.
I was thus drawn to OpenACS, an Open Source community system that has a significantly smaller following than Mason. However, the OpenACS templating system separated each viewed page into two components, one written in Tcl and the other in a modified form of HTML, with a specified “contract” between the two. In addition, OpenACS came with a standard data model designed to be used by all of the different applications in the system. You didn't need to worry about creating a registration module, because one came standard with the system. You also didn't need to create forums, Weblogs or calendars, because those also came in the standard system.
The centralized, standard data model and set of administrative applications were certainly appealing; however, OpenACS also had its problems. Perhaps the biggest one was the weird way in which OpenACS implemented its data model, using a relational database to keep track of hierarchies and objects. This system had a great deal of intellectual appeal; relational databases are fast, stable and cheap, and object-oriented programming has made it easier to model many types of data. But the marriage of the two meant that creating even a simple OpenACS application could be quite complicated. Moreover, as the OpenACS community grew, the data models became increasingly difficult to keep small, because everyone's needs were slightly different.
I also looked into Zope, a Web development framework written largely in Python. Zope has a large, strong community, and it continues to be developed and enhanced by Zope Corporation. Zope has many attractive features, including an extremely robust development environment, compartmentalized “products” that can be added and upgraded individually, and a sophisticated system of users, roles and permissions. Zope also pioneered the idea of object publishing, in which a URL describes the method that should be called on a particular object. Thus the URL /Foo/bar means that we're invoking Foo.bar, passing inputs via the HTTP request and receiving any output via the HTTP response.
The most commonly heard complaint about Zope is that it is complicated to learn. This is somewhat true; it took some time before I found myself understanding the “Zope zen”, as it is known. In addition, many things I would expect to be straightforward require some coding acrobatics in order to work correctly—which might be a reflection on my coding style, but it also seems to be an artifact of how some Zope design decisions were made and the pervasive way in which objects are used within Zope.
Early on, Zope's designers decided to avoid the problems associated with relational databases by building their own object-oriented database. On the one hand, this gave Zope a number of big advantages over its rivals, including the ability to undo changes to the system, built-in permissions and a storage system that mapped perfectly onto data types in Zope. But given the speed and pervasive nature of relational databases, SQL was also necessary. Zope thus provides the ability to connect to and work with relational databases, using a version of its DTML templating language.
But this means that many Zope products—and certainly all of the products I have worked on—must coordinate the relational and object databases. This is generally a not-too-terrible way to handle things, but I always have ended up wondering why my life needs to be so complicated. And for all of its sophistication, I have often found myself creating the same types of create-update-delete methods and templates time after time.
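That repetition is exactly what a generic create-read-update-delete layer removes, and it is one of the things Rails automates. A minimal sketch of the idea in plain Ruby (an in-memory hash stands in for the database, and the field names are hypothetical):

```ruby
# Sketch of a generic CRUD layer: one class provides create, read,
# update and delete for any record type, instead of rewriting the
# same four methods for every table in every project.
class CrudStore
  def initialize
    @rows = {}
    @next_id = 0
  end

  # Insert a new row and return its generated id.
  def create(attrs)
    id = (@next_id += 1)
    @rows[id] = attrs.dup
    id
  end

  def read(id)
    @rows[id]
  end

  def update(id, attrs)
    @rows[id].merge!(attrs)
  end

  def delete(id)
    @rows.delete(id)
  end
end

people = CrudStore.new
id = people.create("name" => "Alice")
people.update(id, "email" => "alice@example.com")
puts people.read(id)["email"]  # => "alice@example.com"
```

Rails generates this scaffolding (backed by a real database table, plus the matching templates) from the model definition itself, which is precisely the drudgery I kept reimplementing by hand.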