Connecting Open Office Base Application to SQL
In my last article, Quick and Dirty with Open Office Base, I described my initial experience building a simple database application for my wife in Open Office Base, having had no prior experience with the program. That article showed how easy it is to get started and to build usable software with Base. In this article, I'm going to discuss mitigating one of Base's major weaknesses by connecting it to an external SQL back-end.
Shortly after I finished the program and demonstrated it to my wife, I encountered one of Base's major limitations. I had put the database file in a shared “Documents” directory so that both she and I would have access to the program, and that's when it happened. I returned to my desk to work on the program and received an error message indicating that the file was locked and couldn't be accessed. It turns out that Base doesn't allow concurrent access to a given database. This limitation doesn't bode well for the day when my wife's business has millions of dollars in sales and hundreds of employees! (Well, one can wish, right?)
So, the solution to this problem is to connect the database application to an external RDBMS such as PostgreSQL or MySQL. Storing the actual data on an external database server brings other advantages as well. For one thing, I have experience managing large PostgreSQL and MySQL databases, so I can use the skills I already have to maintain my wife's program. It also leaves open the possibility of migrating the application to the Web one day, since Web-based database access is commonplace these days. Overall, it's just a better solution than a single-user database.
Base, however, doesn't have built-in support for connecting to either MySQL or PostgreSQL. A little research revealed several options for making the connection: I could use the Linux ODBC driver, Java's JDBC driver, or Open Office's SDBC driver.
My first inclination was to use the ODBC driver because of its ubiquity. However, my initial research indicated that I'd have to make a bunch of changes to /etc/odbcinst.ini every time I wanted to add a new data source, on each machine that needed to connect to the new data source. Frankly, this just wasn't appealing.
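For the curious, the system-wide configuration this entails looks something like the following. The driver path, host and database names here are purely illustrative; the actual driver location varies by distribution:

```ini
; /etc/odbcinst.ini -- registers the ODBC driver itself
[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver      = /usr/lib/odbc/psqlodbcw.so

; /etc/odbc.ini -- one stanza like this per data source, on every machine
[mywifesdb]
Driver     = PostgreSQL
Servername = db.example.com
Port       = 5432
Database   = mywifesdb
```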
The JDBC driver didn't need to be configured in /etc, but it was almost as bad. From what I read, I'd have to point the driver at a .jar file appropriate for the type of database I wanted to connect to; in this case, a PostgreSQL-specific .jar file. Then I would have to tell Base the name of the driver class to load by setting another parameter to org.postgresql.Driver, which seems redundant. This just seemed like a lot of work for what should be a simple operation, namely, connecting to a database.
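Concretely, the JDBC route means giving Base two pieces of information, along the lines of the following (the host and database names are made up, and the PostgreSQL .jar also has to be on Open Office's Java class path):

```
Datasource URL:    jdbc:postgresql://db.example.com:5432/mywifesdb
JDBC driver class: org.postgresql.Driver
```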
Sure, configuring the ODBC/JDBC drivers isn't really that big a deal, and I could get it done if I had to, but the SDBC option was mind-numbingly simple to install and configure. First, I downloaded a .zip file from http://dba.openoffice.org/drivers/postgresql/index.html. Then, after starting Base, I opened the Extension Manager under the Tools menu, pointed it at the .zip file and restarted Base. Once Base had restarted, I was able to select PostgreSQL as a data source. The driver then needed information about the database I wanted to connect to, in the form of a connection string.
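The string uses libpq-style keyword/value pairs; a made-up example for a remote server would look like this:

```
host=db.example.com port=5432 dbname=mywifesdb
```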
After pressing the “Next” button, I was able to provide a user name and password. And that was all I had to do.
Before tackling my wife's program, I decided to connect one of my own databases as a test. Sure enough, it connected, and I was able to see all of the tables, though it was a bit disconcerting to find that I could also see other database schemas besides the usual “public” schema. When I switched to table view and opened a table that I knew held more than 40,000 records, the data appeared a couple of seconds later. Not bad for a database located on a server in another state!
At the risk of starting a religious war, I've got to say that I really prefer PostgreSQL, but because it doesn't have a sane replication mechanism that I know of, I'm considering migrating my operations to MySQL. So, I thought I'd take this opportunity to check out Base's MySQL support.
When I started Base, I noticed that it offered to connect to a MySQL database. However, when I clicked on that option, I realized that it wanted to connect via ODBC/JDBC, which I didn't want. A little searching turned up a native driver for MySQL at http://extensions.services.openoffice.org/en/node/2415. This driver installed just as easily as the PostgreSQL driver. Once it was installed, a new option appeared that allows a direct connection to a MySQL database. In this case, no magical connection string is needed; you simply fill in the server, database and user fields, and off you go.
So native drivers exist for both PostgreSQL and MySQL, and both are very easy to set up and use. This brings me to the last task at hand: migrating from the native Base data management system to either of the new external DBMSes. It turns out that this task really isn't that hard, though it could prove a bit tedious for databases with many tables, queries and forms.
To perform the migration, you create and open a new Base database that is tied to the PostgreSQL or MySQL server, then open the old database in Base as well. From there, you can copy and paste the forms from the old file to the new one. Next, you open each table in the old file, select all of its records and copy them into the new file's table list. Once you paste the data in, you are presented with a couple of dialog boxes that let you choose which fields to import and adjust the data types of those fields. Finally, the data is transferred, and you are finished. Where this could get tedious is that each table has to be copied and pasted individually, and if your database has many tables... well, you get the idea.
I was curious about how well this copy-and-paste method would scale, so I tried it with the 40,000-record dataset I mentioned earlier. It worked, though not quickly, and admittedly I tried it under the most adverse conditions: importing data from a remote server, across the Internet, into a database file on an NFS-mounted directory. That's great if you get paid by the hour, not so good if you get paid to get things done. To migrate efficiently, keep things as local as possible.
I also discovered that PostgreSQL is a bit pickier about what it considers valid data. When I imported a couple of tables from my wife's application into the new PostgreSQL server, I received errors about NOT NULL constraints being violated by the data. Of course, Base never should have allowed me to create data that violated such constraints in the first place. It looks like I have some data clean-up to do before I can complete this migration.
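The clean-up itself is plain SQL. As a sketch, assuming a hypothetical customers table whose phone column is declared NOT NULL on the new server, you could locate and repair the offending rows before retrying the import:

```sql
-- Find the rows that the NOT NULL constraint would reject
SELECT * FROM customers WHERE phone IS NULL;

-- Either supply a placeholder value for the missing field...
UPDATE customers SET phone = 'unknown' WHERE phone IS NULL;

-- ...or discard the incomplete rows entirely
DELETE FROM customers WHERE phone IS NULL;
```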
So, migrating an application from the native Base database management system to either PostgreSQL or MySQL is easy and intuitive. Distributing the resulting Base file over the network, via NFS for example, then gives you concurrent access to the data. But because the migration is very much a manual process, I recommend performing it as early in a project as possible.
Mike Diehl is a freelance Computer Nerd specializing in Linux administration, programming and VoIP. Mike lives in Albuquerque, NM, with his wife and 3 sons. He can be reached at firstname.lastname@example.org.