At the Forge - Working with ActiveRecord
For the past few months, we have been looking at Ruby on Rails, the hot new open-source toolkit for creating Web/database applications. One of the core elements of this toolkit, as we saw last issue, is the ActiveRecord class, which automatically translates between Ruby objects and data in a relational database. Object-relational mappers, as such software is often known, bridge the gap between the object-oriented and relational worlds, which treat data in fundamentally different ways.
This month, we look at some of the ways we can modify ActiveRecord to validate our data in various ways. We also see how we can work with classes that depend on one another, doing something a bit more sophisticated than the basic scaffolding provides with only a few simple lines of code.
When I first started to work with relational databases, I would create tables that looked like this:
CREATE TABLE People (
    first_name     TEXT NOT NULL,
    last_name      TEXT NOT NULL,
    phone_number   TEXT NOT NULL,
    email_address  TEXT NOT NULL
);
And of course, the above definition of People will work just fine, providing the basis for a computerized address book. However, the above definition has several problems. To begin with, what happens if there is more than one person with the same name? That is, if we have two people named George Washington in our database, we're going to have a serious problem. How will we know which is the George we want?
The solution to this problem is to assign a unique number to each record in the database. Each relational database product has a different way of accomplishing this. In PostgreSQL, we add a new column and assign it a SERIAL type, indicating that it should be a nonrepeating integer:
CREATE TABLE People (
    id             SERIAL NOT NULL,
    first_name     TEXT NOT NULL,
    last_name      TEXT NOT NULL,
    phone_number   TEXT NOT NULL,
    email_address  TEXT NOT NULL
);
We then tell PostgreSQL that it should consider id to be not just another column, but the primary key, an identifier that is guaranteed to be unique and that can serve as identification for one row in the table:
CREATE TABLE People (
    id             SERIAL NOT NULL,
    first_name     TEXT NOT NULL,
    last_name      TEXT NOT NULL,
    phone_number   TEXT NOT NULL,
    email_address  TEXT NOT NULL,
    PRIMARY KEY(id)
);
Although we can now find people in our address book with their first or last names, we also can do so using their unique ID. Even if there are 100,000 people named George Washington in our database, we can unambiguously find the one that interests us using the id column. Think of the times you have been asked to identify yourself using a driver's license number, a national ID number or a Social Security number, and you quickly will realize that each of these can be used as a primary key in a database.
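To make this concrete, here is how we might retrieve a single person by primary key, assuming a row whose id happens to be 1:

```
-- Look up exactly one person by primary key; because id is unique,
-- this query can never return more than one row.
SELECT first_name, last_name, phone_number
    FROM People
    WHERE id = 1;
```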
One additional result of this constraint is that the database creates an index for the id column. Even if you have a very large table of addresses, the fact that id is indexed means that the database can use it to find records quickly. In addition, although SERIAL columns can be set manually in an INSERT statement, just like INTEGER columns, they're normally not set explicitly at all. Rather, PostgreSQL assigns the next consecutive integer to be the column value—perfect for a primary key, whose value must be unique.
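For example, we can insert a hypothetical row without mentioning id at all, and then ask PostgreSQL which value it just assigned. The sequence name people_id_seq used here is PostgreSQL's default for a SERIAL column named id in a table named people:

```
-- id is omitted, so PostgreSQL takes the next value from the sequence
INSERT INTO People (first_name, last_name, phone_number, email_address)
    VALUES ('Abigail', 'Adams', '617-555-1212', 'abigail@example.com');

-- currval returns the value most recently assigned in this session
SELECT currval('people_id_seq');
```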
Primary keys are useful in this way, but we have not yet begun to understand their power. That's because primary keys really come into their own when they make it possible for us to link tables together. For example, consider a computerized appointment calendar that we might want to build as an add-on module to our existing address book. We could create a table like the following:
CREATE TABLE Appointments (
    id         SERIAL NOT NULL,
    person_id  INTEGER NOT NULL,
    start_at   TIMESTAMP NOT NULL,
    end_at     TIMESTAMP NOT NULL,
    comment    TEXT,
    PRIMARY KEY(id)
);
The above table has an id column, uniquely identifying every appointment. It also has two columns identifying the time at which the appointment starts and ends, as well as room for an optional comment or description.
But there is also a person_id column, which allows us to indicate with whom we will be meeting. This database design has a number of problems, but perhaps the most striking one is that there is no constraint (other than NOT NULL) on the value that we can assign to person_id. Even if our People table is empty, we can assign person_id to be 10, 100 or 996—these numbers might be acceptable technically, but they don't help us ensure that person_id refers to an actual person.
The solution is to define person_id as a foreign key, indicating that values of person_id are legitimate only if they reflect an existing value in the People table. In PostgreSQL, we accomplish this as follows:
CREATE TABLE Appointments (
    id         SERIAL NOT NULL,
    person_id  INTEGER NOT NULL REFERENCES People,
    start_at   TIMESTAMP NOT NULL,
    end_at     TIMESTAMP NOT NULL,
    comment    TEXT,
    PRIMARY KEY(id)
);
With these conditions in place, we can be sure that we will be able to make an appointment only with someone in our address book. What happens if we try to get around it? Let's see. First, we add George to our address book:
INSERT INTO People (first_name, last_name, phone_number, email_address)
    VALUES ('George', 'Washington', '202-555-1212',
            'firstname.lastname@example.org');
When we SELECT the elements of our database table, we can see the value that was automatically assigned to our id column:
 id | first_name | last_name  | phone_number |         email_address
----+------------+------------+--------------+--------------------------------
  1 | George     | Washington | 202-555-1212 | firstname.lastname@example.org
(1 row)
Now let's insert an appointment with George:
INSERT INTO Appointments (person_id, start_at, end_at, comment)
    VALUES (1, '2005-Oct-2 18:00', '2005-Oct-2 20:00', 'Dinner');
So far, so good. But, what happens if we try to insert an appointment with a nonexistent person?
INSERT INTO Appointments (person_id, start_at, end_at, comment)
    VALUES (200, '2005-Nov-2 18:00', '2005-Nov-2 20:00',
            'Dinner with no one');
PostgreSQL rejects our INSERT statement, saying that inserting the row would violate the constraint introduced with the REFERENCES command:
ERROR:  insert or update on table "appointments" violates foreign key constraint "appointments_person_id_fkey"
DETAIL:  Key (person_id)=(200) is not present in table "people".
What happens if we try to remove George from our People table while we have an appointment with him?
DELETE FROM People WHERE id = 1;
Once again, PostgreSQL rejects our request, indicating this time that we cannot remove an item that is being pointed to:
ERROR:  update or delete on table "people" violates foreign key constraint "appointments_person_id_fkey" on table "appointments"
DETAIL:  Key (id)=(1) is still referenced from table "appointments".
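This rejection is PostgreSQL's default behavior for a foreign key. If we would rather have the database clean up after us automatically, we can declare the reference with ON DELETE CASCADE, in which case removing a person silently removes that person's appointments as well. A sketch of the alternative definition:

```
CREATE TABLE Appointments (
    id         SERIAL NOT NULL,
    -- Deleting a row from People now also deletes any appointments
    -- that reference it, instead of raising an error
    person_id  INTEGER NOT NULL REFERENCES People ON DELETE CASCADE,
    start_at   TIMESTAMP NOT NULL,
    end_at     TIMESTAMP NOT NULL,
    comment    TEXT,
    PRIMARY KEY(id)
);
```

Whether cascading deletes are appropriate depends on the application; for an appointment calendar, quietly discarding history may or may not be what we want.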