Object Databases: Not Just for CAD/CAM Anymore

Applications are becoming more complex and more dependent on large quantities of persistent data. Most applications rely on relational databases to manage this data, but object databases have become an attractive alternative for a variety of applications. As Esther Dyson put it, “Using tables to store objects is like driving your car home and then disassembling it to put it in the garage. It can be assembled again in the morning, but one eventually asks whether this is the most efficient way to park a car.” [ORF96]

Object databases got their start in the CAD/CAM world because they support the programmer-defined data types and complex relationships that CAD/CAM applications demand. Object-oriented programming languages are becoming the standard for developing today's mainstream applications precisely to manage that kind of complexity, and using an object database is a natural extension of that language choice. Object databases promise better performance, faster development, and more robust programs. This article examines these claims and looks at a public domain object database, the Texas Persistent Store.

Faster Development and More Robust Programs

Relational databases use a separate programming language, the Structured Query Language (SQL), for manipulating data; occasionally a similar but non-standard query language is used to define the layout of the tables and to interact with the database. One shortcoming of relational databases is that they can store only a limited set of data types; objects of more complex types must somehow be mapped into the primitive types SQL supports. In contrast, object databases use an object-oriented programming language both to define and to manipulate the objects in the database. This eliminates the “impedance mismatch” of trying to map complex objects and relationships onto the limited data types and tables of the relational world. Removing that error-prone translation code lets the programmer concentrate on the semantics of an object's behavior instead of the syntax of storing and retrieving it, and without embedded SQL an entire class of runtime storage errors, such as malformed or mistyped queries, simply never occurs.
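To make the mismatch concrete, here is the kind of translation code a programmer writes and maintains just to store one small C++ class in relational tables. The class, the table names and the to_sql() helper are purely illustrative, not taken from any real schema or product; with an object database this entire layer, along with its from_sql() counterpart, simply does not exist, because the object is stored in its native form.

#include <cstdio>
#include <string>
#include <vector>

struct Point    { double x, y; };

struct PolyLine {
    std::string        name;
    std::vector<Point> points;   // nested, variable-length data
};

// Relational route: flatten the object into INSERT statements against a
// polylines table and a points table, one row per point.
std::vector<std::string> to_sql(const PolyLine &p) {
    std::vector<std::string> stmts;
    char buf[256];
    std::snprintf(buf, sizeof buf,
                  "INSERT INTO polylines(name) VALUES('%s');", p.name.c_str());
    stmts.push_back(buf);
    for (std::size_t i = 0; i < p.points.size(); ++i) {
        std::snprintf(buf, sizeof buf,
                      "INSERT INTO points(polyline, seq, x, y) "
                      "VALUES('%s', %zu, %g, %g);",
                      p.name.c_str(), i, p.points[i].x, p.points[i].y);
        stmts.push_back(buf);
    }
    return stmts;   // a matching from_sql() is needed to rebuild the object
}

Every change to the class means revisiting this code, the table definitions and the retrieval logic; that is where the translation errors live.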

While relational databases must reassemble relationships at runtime with SQL joins, object databases capture inter-object relationships directly in the database. This makes development easier by reducing both the lines of code written and the lines of code executed at runtime. A welcome side effect is that you do not have to compromise your design to accommodate join tables or add foreign-key identifiers to your classes.
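The difference shows up directly in class design. In the hypothetical fragment below, the first struct is shaped by relational storage and carries foreign-key fields that joins must resolve at runtime; the second holds its relationships as ordinary C++ pointers and containers.

#include <string>
#include <vector>

// Design compromised for relational storage: the relationship is encoded as
// a key field, and a join or extra query is needed to reach related rows.
struct EngineRow {
    long        engine_id;   // primary key
    long        car_id;      // foreign key back to the cars table
    std::string model;
};

// Natural object design: relationships are direct references, traversed
// with plain pointer dereferences instead of joins.
struct Engine;

struct Car {
    std::string           vin;
    Engine               *engine;         // one-to-one: follow the pointer
    std::vector<Engine *> spare_engines;  // one-to-many: no join table needed
};

struct Engine {
    std::string  model;
    Car         *installed_in;   // back-reference, also just a pointer
};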

Object databases work on the principle of starting from a named object and navigating from it to related objects. These named objects can be single objects or containers of objects. Navigating to the contained objects lets an object database load them on demand, with no query needed. All of this adds up to less code for the programmer to write and test, making for more robust programs and shorter development cycles.
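A minimal navigation sketch, with the database root lookup stubbed out, might look like the following. A real object database would hand back a pointer into its persistent heap from lookup_root() (Texas, for example, returns ordinary swizzled C++ pointers); everything after the lookup is plain pointer traversal, with no query language in sight.

#include <iostream>
#include <string>
#include <vector>

struct Part {
    std::string         name;
    std::vector<Part *> components;   // direct references to sub-parts
};

// Hypothetical root lookup: stubbed here with an in-memory graph; a real
// store would return the named persistent object instead.
Part *lookup_root(const std::string & /*root_name*/) {
    static Part wheel{"wheel", {}};
    static Part chassis{"chassis", {&wheel}};
    static Part car{"car", {&chassis}};
    return &car;
}

// Ordinary C++ from here on: no queries, just navigation.
void print_assembly(const Part *p, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << p->name << '\n';
    for (const Part *c : p->components)
        print_assembly(c, depth + 1);   // contained objects load as touched
}

int main() {
    const Part *root = lookup_root("car-assembly");  // named entry point
    print_assembly(root);
    return 0;
}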

Increased Performance

If faster development and more robust programs are not enough to convince you, consider performance. The goal of many vendors is to make access to persistent objects as fast as access to transient objects. Strictly speaking this is impossible, because the first access to a stored object requires reading a disk and possibly crossing a network. However, sophisticated client caching and memory-management techniques impose very little overhead once an object is in memory, and some implementations, such as the Texas Persistent Store and ObjectStore, impose none at all once the object has been swapped in. Most relational systems, in contrast, do not cache results on the client system, incurring unnecessary network transmissions and additional queries on the next access.
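How can access cost nothing once an object is in memory? Texas and ObjectStore both lean on the hardware's virtual-memory protection: pages of the persistent heap start out protected, the first touch triggers a fault, the fault handler reads the page in and unprotects it, and every later access is an ordinary memory reference. The fragment below is a deliberately simplified sketch of that trick on a POSIX system, using an anonymous mapping in place of real database pages; a production implementation also swizzles the persistent pointers on the page and is far more careful about what it does inside a signal handler.

#include <cstdint>
#include <signal.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long pagesize;   // set once in main()

static void fault_handler(int, siginfo_t *info, void *) {
    // Round the faulting address down to its page boundary.
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(info->si_addr);
    void *page = reinterpret_cast<void *>(
        addr & ~static_cast<std::uintptr_t>(pagesize - 1));

    // A real store would read this page from the database file here and
    // swizzle any persistent object IDs on it into live pointers.
    mprotect(page, pagesize, PROT_READ | PROT_WRITE);
    // Returning restarts the faulting instruction; from now on, everything
    // on this page is reachable at ordinary memory speed.
}

int main() {
    pagesize = sysconf(_SC_PAGESIZE);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, nullptr);

    // Stand-in for one page of the persistent heap: mapped but inaccessible.
    char *heap = static_cast<char *>(mmap(nullptr, pagesize, PROT_NONE,
                                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    heap[0] = 'x';   // first touch faults; the handler "loads" the page
    heap[1] = 'y';   // later touches run at full memory speed
    return 0;
}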

Unfortunately, there are few current benchmarks that compare relational and object databases to back up these performance claims. There are two common object database benchmarks: the Engineering Database Benchmark, also known as OO1, the Sun Benchmark or the Cattell Benchmark, developed at Sun Microsystems; and the OO7 Benchmark, developed at the University of Wisconsin. The OO1 Benchmark was intended to show that object databases outperform relational databases in engineering applications; its results showed the measured object databases running 30 or more times faster than the benchmarked relational databases [CAT92]. OO7 provides a broader mix of measurements, including multi-user access. Implementations of the OO7 benchmark are audited by the University of Wisconsin and should be available from participating database vendors [LOO95].

Some advanced object database features include clustering and configurable object-fetching policies. Clustering lets programmers indicate that a collection of objects will be used together; all the objects in a cluster are loaded into the client cache when any one of them is requested, reducing the number of disk and network transfers needed to fill the cache. Some vendors also offer configurable object-fetching policies that let you tune how much extra data the server sends along with each request. These performance gains usually come at the expense of extra lines of code and additional performance analysis.
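A clustering hint might be expressed something like the following sketch. The Cluster type and clustered_new() helper are invented for illustration and stubbed so the example runs stand-alone; they do not correspond to any particular vendor's API.

#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for a vendor's clustering handle (illustrative only); a real
// store would map a cluster onto a group of pages on disk.
struct Cluster {
    std::string name;
    explicit Cluster(std::string n) : name(std::move(n)) {}
};

// Hypothetical placement request: allocate the object near the others in the
// same cluster, so one page fetch brings the whole group into the cache.
template <typename T>
T *clustered_new(Cluster &c, const T &value) {
    std::printf("allocating in cluster '%s'\n", c.name.c_str());
    return new T(value);   // stub: a real store allocates on a persistent page
}

struct Point { double x, y; };

int main() {
    Cluster outline("drawing-42-outline");
    // Points of one drawing are always traversed together, so they share a
    // cluster; the first access to any of them would pull in the rest.
    std::vector<Point *> pts;
    for (int i = 0; i < 4; ++i)
        pts.push_back(clustered_new(outline, Point{double(i), 0.0}));
    for (Point *p : pts) delete p;
    return 0;
}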
