At the Forge - Ruby on Rails
Ruby, an interpreted programming language that looks and feels like a cross between Smalltalk and Perl, has been around for about ten years. Ruby has been gaining in popularity over the last few years, partly because of the release of English-language books and documentation. In addition, programmers have become more interested in finding an alternative to Perl and Python for their general-purpose programming needs.
Ruby's popularity might have continued to grow slowly were it not for Ruby on Rails, a Web development framework that has become the focus of enormous attention. Everyone in the Web development world seems to be talking about Rails; magazine articles, blog postings, conference tracks and even some new books all are dedicated to Rails. Rails is supposed to be elegant, easy to use and easy to modify. Even developers with no previous Ruby experience are switching to Rails.
Does Rails live up to the hype surrounding it? To a large degree, I believe the answer is “yes”—it has a relatively shallow learning curve, it connects easily and quickly to relational databases and it makes the creation of many small- and medium-sized sites faster and easier than I would have expected. But, of course, no framework is perfect, particularly one that was released publicly only one year ago. It remains to be seen if Rails can hold up against more established technologies on several different fronts.
This month, we begin to look at several aspects of Ruby on Rails, so you can decide for yourself if my assessment is accurate. We begin by installing and configuring a basic Rails application. Over the next few installments of At the Forge, we will extend our application in several different ways, considering the ways in which Rails allows us to create and modify our applications.
The first step in creating a Rails application is to install Ruby and then Rails itself. Most modern Linux distributions come with Ruby, although only the latest released version as of this writing (1.8.2) works with the most recent version of Rails (0.12.1). New versions of Rails have been coming out frequently, which means that one or both of these versions might have changed by the time you read this.
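If you are unsure which Ruby your distribution provides, you can ask the interpreter itself. The following short script is a sketch that compares the built-in RUBY_VERSION constant against the 1.8.2 minimum mentioned above:

```ruby
# Compare the running interpreter against the minimum Ruby release
# that works with Rails 0.12.1 (1.8.2, as noted in the text).
required = [1, 8, 2]
current  = RUBY_VERSION.split('.').map { |part| part.to_i }

puts "Ruby #{RUBY_VERSION}"
if (current <=> required) < 0
  puts "Warning: Rails 0.12.1 requires Ruby 1.8.2 or later"
end
```

Comparing the version components as arrays of integers avoids the pitfalls of comparing version strings lexically (where "1.10" would sort before "1.8").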
Assuming you have installed Ruby, you next need to install RubyGems, which provides access to the Ruby Gems library, something of a cross between SourceForge and Perl's CPAN (see the on-line Resources). Download and unpack the most recent .tar.gz file:
tar -zxvf rubygems-0.8.10.tar.gz
Enter the directory as the root user and type:
ruby setup.rb all
This installs the entire RubyGems package and, among other things, places the gem program in /usr/bin. You then can install Rails, which is distributed via Gems, with the following command:
gem install --remote rails
As with systems such as CPAN and Debian's apt, the gem program is smart enough to identify and download any dependencies it encounters. By default, you must explicitly answer “y” when asked whether to install each dependency. Because Rails depends on a number of other packages, be sure to answer “y” when prompted.
When you are returned to the shell prompt, you can assume that Rails has been installed. However, this is not quite enough. If you are interested in working with a relational database, you also need to install a database interface library. Because I work with PostgreSQL, I installed the pure Ruby client, called postgres-pr:
gem install --remote postgres-pr
Somewhat confusingly, there also is a set of PostgreSQL client libraries (called postgresql) that can be used with Ruby. However, it seems as though most Rails developers are working with the postgres-pr library, at least for now.
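Once the driver is installed, a Rails application reads its connection settings from config/database.yml. A PostgreSQL entry might look something like the following sketch, in which the database name, user name and password are placeholders for your own values:

```yaml
# config/database.yml (sketch; substitute your own names and credentials)
development:
  adapter: postgresql
  database: blog_development
  username: blog
  password: secret
  host: localhost
```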
Once Rails is installed, we can create a simple “Hello, world” program. To do this, we use the rails command, which is installed in /usr/bin/ by default. Because our example application is a Weblog, we call the application blog. Note that the application's name doesn't have to match the URL under which it will eventually appear. Type:

rails blog
Running this produces a fair amount of output, listing the files that have been created on our filesystem. When we give only a single name, blog, the application is created inside of a directory with that name. We can keep all of our applications inside of a single container directory, such as ~/Rails, with:
mkdir ~/Rails
rails ~/Rails/blog
The directory you are most likely to work with is app, which contains the application itself. The app directory contains subdirectories named models, views and controllers. This design reflects the fact that Rails uses the MVC (model/view/controller) style widely used in many modern desktop and Web applications.
In an MVC architecture, we divide our work into three parts—the controller, which acts like a switchboard, invoking the appropriate model and view; the model, which contains the data and some of the logic; and the view, which displays information to the user. If you have ever built a database-backed site with PHP and Smarty templates or with Zope and its Page Templates or even with Java and JavaServer Pages (JSPs), you already are familiar with at least some of these ideas. Rails simply makes them more explicit with its prenamed directory structure.
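To make this division of labor concrete, here is a tiny, framework-free Ruby sketch of the three roles. The class and method names are illustrative only, not what Rails generates:

```ruby
# Model: holds the data and some of the logic.
class Post
  attr_reader :title, :body
  def initialize(title, body)
    @title = title
    @body  = body
  end
end

# View: turns a model into output for the user.
class PostView
  def render(post)
    "<h1>#{post.title}</h1><p>#{post.body}</p>"
  end
end

# Controller: the switchboard -- fetches the model, hands it to the view.
class PostsController
  def show
    post = Post.new("Hello, world", "Our first Rails-style page")
    PostView.new.render(post)
  end
end

puts PostsController.new.show
```

In a real Rails application, each role lives in its own file under app/models, app/views and app/controllers, and the framework wires them together for you.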
Although it can't do much, we now can start our empty Rails application with:
cd ~/Rails/blog
ruby script/server
This starts the WEBrick HTTP server on port 3000. To access this fairly empty Rails site, we point our browsers to an appropriate IP address or hostname. In my particular case, I started Rails on my test server, whose IP address is 192.168.2.3. I thus point my Web browser to http://192.168.2.3:3000/. And sure enough, there I see a “Welcome on board” message, indicating I have set up Rails correctly.