Data Modeling with DODS
Relational databases form the backbone of most serious web applications because they make data storage and retrieval easy, safe and flexible. This arrangement normally works wonderfully, until developers begin writing their programs using objects, which follow a completely different paradigm. Is there any way for us to close the gap between the object and relational worlds?
Actually, many ways exist to map relational databases to objects and methods, and most programmers have found themselves writing such systems on their own. As we saw last month, Perl programmers can get some assistance from Alzabo, a module that helps programmers to create tables and then provides a method-based interface to manipulate them.
This month, we will look at DODS (Data Object Design Studio), a tool similar to Alzabo in spirit, except that it is aimed at Java programmers. DODS is a central part of Enhydra, whose upcoming version (Enhydra Enterprise) is expected to be the first open-source application server to implement J2EE (Java 2, Enterprise Edition).
Right now, Enhydra Enterprise is still in beta testing, and while the DODS support appears to have improved since the first versions, I was told by David Young, the Enhydra evangelist at Lutris, that DODS from Enhydra 3.x is more stable. So that I could try things more easily, Lutris sent me a copy of EAS (Enhydra Application Service), their commercially supported and enhanced version of Enhydra.
I'm not entirely sure what the differences are between EAS and the open-source Enhydra server; enhydra.org indicates that EAS is “based on” Enhydra, but the differences between EAS and buying a copy of Enhydra are not at all obvious. I will assume that the copy of EAS I installed is largely similar to Enhydra 3.x, but this might not be an accurate assumption.
DODS, like the Perl Alzabo system we discussed last month, has two goals: to help design databases with a high-end tool and to provide a set of objects and methods for working with that database. While Alzabo is a server-side web application, DODS is a client-side application written in Java that lets you define and edit your database definitions.
The goal of DODS is to create parallel SQL definitions and Java classes that describe the same relational database. You then feed the SQL definitions into your database and use the Java classes to access them.
Moreover, DODS is designed to work with different databases; at present, it supports PostgreSQL, MySQL, Sybase and Oracle, and it may support more in the future. Because the actual SQL queries are written by an object middleware layer, you can move your Enhydra programs from one database to another without having to rewrite them. In reality, of course, things are more complicated. For example, Enhydra's support for PostgreSQL is not very impressive: it ignores the SERIAL data type (which is really just a sequence) and is unaware of referential integrity constraints, such as foreign keys. Nevertheless, the goal is an admirable one, and I'm looking forward to seeing how Enhydra 4.x handles this. As time goes on, I expect that DODS will continue to improve its support for different databases, creating appropriate queries for each SQL dialect.
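To make the idea of a dialect-aware middleware layer concrete, here is a toy sketch of how such a layer can emit different SQL for different databases while application code stays the same. This is my own illustration, not actual DODS source; the class and method names are invented, and I've picked row limiting (LIMIT vs. Oracle's ROWNUM) as one detail that genuinely differs across the databases DODS targets.

```java
// Toy illustration (not actual DODS code) of how an object middleware
// layer can generate dialect-specific SQL, keeping application code
// portable across databases.
public class DialectDemo {
    public enum Dialect { POSTGRESQL, MYSQL, ORACLE }

    // Build a "fetch the newest N rows" query; the row-limiting syntax
    // varies by database, so the middleware chooses it, not the caller.
    public static String newestEntries(Dialect d, String table, int n) {
        String base = "SELECT * FROM " + table + " ORDER BY posted_at DESC";
        switch (d) {
            case POSTGRESQL:
            case MYSQL:
                return base + " LIMIT " + n;
            case ORACLE:
                // Oracle (pre-12c) limits rows with ROWNUM on a subquery.
                return "SELECT * FROM (" + base + ") WHERE ROWNUM <= " + n;
            default:
                throw new IllegalArgumentException("unsupported dialect");
        }
    }

    public static void main(String[] args) {
        System.out.println(newestEntries(Dialect.POSTGRESQL, "blog_entries", 5));
        System.out.println(newestEntries(Dialect.ORACLE, "blog_entries", 5));
    }
}
```

Application code calls newestEntries and never mentions LIMIT or ROWNUM, which is the whole point: switching databases means changing one enum value, not rewriting queries.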
Let's create a simple web/database application using Enhydra and DODS to demonstrate how it will work. I'll use PostgreSQL in my examples, both because it's an excellent open-source database and because DODS supports it. Our example, as has been the case for the last few months, will be a simple weblog (or blog) application that displays entries in a database table in reverse chronological order. Writing such a program is not particularly difficult, making it all the more attractive as a test of DODS and Enhydra.
Our first stop will be the Enhydra appwizard, which creates the directories and skeleton files for our basic Enhydra application. The appwizard is in $ENHYDRA/bin, where ENHYDRA is an environment variable naming the location of your Enhydra installation. (When I installed it from CD using RPMs for my Red Hat Linux box, ENHYDRA was set to /usr/local/lutris-enhydra3.5.2.)
On the first appwizard screen, I was allowed to choose between a standard web application and an Enhydra super servlet. I chose the latter. On the next screen, I chose an HTML project (rather than a wireless WML project), named the project “blog” and put it in the il.co.lerner package. I also accepted the default home for Enhydra applications, ~/enhydraApps/. I chose not to associate a copyright with my source code and then clicked Finish, which created 18 new files in ~/enhydraApps/blog.
Now that I've created the skeleton for my application, I'll modify the default Welcome page that comes with Enhydra. We will have to do this in two parts; first, we'll modify the HTML file Welcome.html, which my computer placed in ~/enhydraApps/blog/src/il/co/lerner/presentation/Welcome.html.
Note that this file consists not just of HTML, but also of tags that will be processed by XMLC (see At the Forge in the August 2001 issue of Linux Journal for an explanation of just what that means). We'll change it to display the latest information from our weblog, rather than the standard page, as you can see in Listing 1. The only difference between an XMLC document and a standard page of HTML is that we mark the parts we want to modify with <span> tags, each carrying an id attribute. For example:
<p><b><span id="date">Date</span></b></p>
<p><span id="text">Text</span></p>
If we display this file literally in our web browser, we'll see the words Date and Text. But users will not retrieve this HTML document directly. Rather, XMLC will turn our document into a Java class. We will then use the WelcomePresentation class to create an instance of the document, using automatically generated methods to modify the date and text sections.
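To show the pattern the generated class follows, here is a runnable stand-in of my own. The real XMLC-generated class parses the HTML into a DOM and exposes a setter per id-tagged element; this sketch imitates that interface with simple string substitution. The class name WelcomeSketch and the exact setter names are my assumptions for illustration, not Enhydra's actual generated code.

```java
// Toy sketch of the pattern behind XMLC-generated presentation classes:
// each <span id="..."> in the source HTML gets a setter that replaces
// that element's text. The real generated class works on a DOM tree;
// this stand-in just rebuilds the markup so the idea runs on its own.
public class WelcomeSketch {
    private String date = "Date";   // default text from the HTML template
    private String text = "Text";

    // In XMLC's naming scheme, id="date" yields a setter like this one.
    public void setTextDate(String value) { this.date = value; }
    public void setTextText(String value) { this.text = value; }

    // Render the document with the current values substituted in.
    public String toDocument() {
        return "<p><b><span id=\"date\">" + date + "</span></b></p>\n"
             + "<p><span id=\"text\">" + text + "</span></p>";
    }

    public static void main(String[] args) {
        WelcomeSketch page = new WelcomeSketch();
        page.setTextDate("August 1, 2001");
        page.setTextText("First entry in our weblog.");
        System.out.println(page.toDocument());
    }
}
```

The servlet code would instantiate such a class per request, call the setters with values pulled from the database, and send the rendered document to the browser; the HTML designer, meanwhile, can keep editing Welcome.html without touching any Java.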