At the Forge - Working with ActiveRecord
For the past few months, we have been looking at Ruby on Rails, the hot new open-source toolkit for creating Web/database applications. One of the core elements of this toolkit, as we saw last issue, is the ActiveRecord class, which automatically translates between Ruby objects and data in a relational database. Object-relational mappers, as such software is often known, bridge the gap between the object-oriented and relational worlds, which treat data in fundamentally different ways.
This month, we look at some of the ways we can modify ActiveRecord to validate our data in various ways. We also see how we can work with classes that depend on one another, doing something a bit more sophisticated than the basic scaffolding provides with only a few simple lines of code.
When I first started to work with relational databases, I would create tables that looked like this:
CREATE TABLE People (
    first_name    TEXT NOT NULL,
    last_name     TEXT NOT NULL,
    phone_number  TEXT NOT NULL,
    email_address TEXT NOT NULL
);
And of course, the above definition of People will work just fine, providing the basis for a computerized address book. However, the above definition has several problems. To begin with, what happens if there is more than one person with the same name? That is, if we have two people named George Washington in our database, we're going to have a serious problem. How will we know which is the George we want?
The solution to this problem is to assign a unique number to each record in the database. Each relational database product has a different way of accomplishing this. In PostgreSQL, we add a new column and assign it a SERIAL type, indicating that it should be a nonrepeating integer:
CREATE TABLE People (
    id            SERIAL NOT NULL,
    first_name    TEXT NOT NULL,
    last_name     TEXT NOT NULL,
    phone_number  TEXT NOT NULL,
    email_address TEXT NOT NULL
);
We then tell PostgreSQL that it should consider id to be not just another column, but the primary key, an identifier that is guaranteed to be unique and that can serve as identification for one row in the table:
CREATE TABLE People (
    id            SERIAL NOT NULL,
    first_name    TEXT NOT NULL,
    last_name     TEXT NOT NULL,
    phone_number  TEXT NOT NULL,
    email_address TEXT NOT NULL,
    PRIMARY KEY(id)
);
We can still find people in our address book by their first or last names, but now we also can do so using their unique ID. Even if there are 100,000 people named George Washington in our database, we can unambiguously find the one that interests us using the id column. Think of the times you have been asked to identify yourself using a driver's license number, a national ID number or a Social Security number, and you quickly will realize that each of these can be used as a primary key in a database.
One additional result of this constraint is that the database creates an index for the id column. Even if you have a very large table of addresses, the fact that id is indexed means that the database can use it to find records quickly. In addition, although SERIAL columns can be set manually in an INSERT statement, just like INTEGER columns, they're normally not set explicitly at all. Rather, PostgreSQL assigns the next consecutive integer to be the column value—perfect for a primary key, whose value must be unique.
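The auto-assignment behavior described above is easy to see in action. Here is a minimal sketch using Python's built-in sqlite3 module, where an INTEGER PRIMARY KEY column plays the role of PostgreSQL's SERIAL type (the table mirrors our People example; the phone numbers and e-mail addresses are made up for illustration):

```python
import sqlite3

# In SQLite, an INTEGER PRIMARY KEY column auto-assigns consecutive
# row IDs, much as PostgreSQL's SERIAL type does.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE People (
        id            INTEGER PRIMARY KEY,
        first_name    TEXT NOT NULL,
        last_name     TEXT NOT NULL,
        phone_number  TEXT NOT NULL,
        email_address TEXT NOT NULL
    )
""")

# Insert two people with the same name, without specifying an id;
# the database fills the id column in for us.
conn.execute("INSERT INTO People (first_name, last_name, phone_number, email_address) "
             "VALUES ('George', 'Washington', '202-555-1212', 'george1@example.org')")
conn.execute("INSERT INTO People (first_name, last_name, phone_number, email_address) "
             "VALUES ('George', 'Washington', '202-555-1213', 'george2@example.org')")

# Even with identical names, each row has a unique primary key.
rows = conn.execute("SELECT id, first_name, last_name FROM People").fetchall()
print(rows)  # [(1, 'George', 'Washington'), (2, 'George', 'Washington')]
```

The two Georges are now distinguishable by id alone, which is exactly the property a primary key buys us.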
Primary keys are useful in this way, but we have not yet begun to understand their power. That's because primary keys really come into their own when they make it possible for us to link tables together. For example, consider a computerized appointment calendar that we might want to build as an add-on module to our existing address book. We could create a table like the following:
CREATE TABLE Appointments (
    id        SERIAL NOT NULL,
    person_id INTEGER NOT NULL,
    start_at  TIMESTAMP NOT NULL,
    end_at    TIMESTAMP NOT NULL,
    comment   TEXT,
    PRIMARY KEY(id)
);
The above table has an id column, uniquely identifying every appointment. It also has two columns identifying the time at which the appointment starts and ends, as well as room for an optional comment or description.
But there is also a person_id column, which allows us to indicate with whom we will be meeting. This database design has a number of problems, but perhaps the most striking one is that there is no constraint (other than NOT NULL) on the value that we can assign to person_id. Even if our People table is empty, we can assign person_id to be 10, 100 or 996—these numbers might be acceptable technically, but they don't help us ensure that person_id refers to an actual person.
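To see the danger concretely, here is a small sketch using Python's built-in sqlite3 module (the column list is trimmed to the essentials): with no foreign-key declaration on person_id, the database happily accepts an appointment with a person who does not exist.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE People (id INTEGER PRIMARY KEY, first_name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE Appointments (
        id        INTEGER PRIMARY KEY,
        person_id INTEGER NOT NULL,   -- no REFERENCES clause!
        comment   TEXT
    )
""")

# People is completely empty, yet this insert succeeds: nothing
# ties person_id to an actual row in People.
conn.execute("INSERT INTO Appointments (person_id, comment) "
             "VALUES (996, 'Dinner with no one')")
print(conn.execute("SELECT person_id FROM Appointments").fetchall())  # [(996,)]
```

The database has done exactly what we asked, which is the problem: nothing in the schema expresses our intent that person_id must name a real person.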
The solution is to define person_id as a foreign key, indicating that values of person_id are legitimate only if they reflect an existing value in the People table. In PostgreSQL, we accomplish this as follows:
CREATE TABLE Appointments (
    id        SERIAL NOT NULL,
    person_id INTEGER NOT NULL REFERENCES People,
    start_at  TIMESTAMP NOT NULL,
    end_at    TIMESTAMP NOT NULL,
    comment   TEXT,
    PRIMARY KEY(id)
);
With these conditions in place, we can be sure that we will be able to make an appointment only with someone in our address book. What happens if we try to get around it? Let's see. First, we add George to our address book:
INSERT INTO People (first_name, last_name, phone_number, email_address)
    VALUES ('George', 'Washington', '202-555-1212',
            'firstname.lastname@example.org');
When we SELECT the elements of our database table, we can see the value that was automatically assigned to our id column:
 id | first_name | last_name  | phone_number |         email_address
----+------------+------------+--------------+--------------------------------
  1 | George     | Washington | 202-555-1212 | firstname.lastname@example.org
Now let's insert an appointment with George:
INSERT INTO Appointments (person_id, start_at, end_at, comment)
    VALUES (1, '2005-Oct-2 18:00', '2005-Oct-2 20:00', 'Dinner');
So far, so good. But, what happens if we try to insert an appointment with a nonexistent person?
INSERT INTO Appointments (person_id, start_at, end_at, comment)
    VALUES (200, '2005-Nov-2 18:00', '2005-Nov-2 20:00',
            'Dinner with no one');
PostgreSQL rejects our INSERT statement, saying that inserting the row would violate the constraint introduced with the REFERENCES clause:
ERROR:  insert or update on table "appointments" violates foreign key constraint "appointments_person_id_fkey"
DETAIL:  Key (person_id)=(200) is not present in table "people".
What happens if we try to remove George from our People table while we have an appointment with him?
DELETE FROM People WHERE id = 1;
Once again, PostgreSQL rejects our request, indicating this time that we cannot remove an item that is being pointed to:
ERROR:  update or delete on table "people" violates foreign key constraint "appointments_person_id_fkey" on table "appointments"
DETAIL:  Key (id)=(1) is still referenced from table "appointments".
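The same two failures can be reproduced outside PostgreSQL. Here is a sketch using Python's built-in sqlite3 module with simplified columns (not the article's full schema); note that SQLite enforces REFERENCES constraints only after foreign-key checking is switched on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE People (id INTEGER PRIMARY KEY, first_name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE Appointments (
        id        INTEGER PRIMARY KEY,
        person_id INTEGER NOT NULL REFERENCES People,
        comment   TEXT
    )
""")
conn.execute("INSERT INTO People (first_name) VALUES ('George')")

# An appointment with an existing person succeeds.
conn.execute("INSERT INTO Appointments (person_id, comment) VALUES (1, 'Dinner')")

# An appointment with a nonexistent person is rejected.
try:
    conn.execute("INSERT INTO Appointments (person_id, comment) "
                 "VALUES (200, 'Dinner with no one')")
except sqlite3.IntegrityError as e:
    print("insert rejected:", e)

# Deleting a person who still has an appointment is also rejected.
try:
    conn.execute("DELETE FROM People WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("delete rejected:", e)
```

Both violations raise sqlite3.IntegrityError, mirroring the two PostgreSQL errors shown above: the database refuses to let an appointment point at nobody, and refuses to delete a person who is still being pointed at.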