Legacy Database Replacement with LAMP
Radio New Zealand is a public radio broadcaster, and like other broadcasters, we have a huge library of music and programmes about music. In 1987, a new computerized library cataloging system, called BRS, was commissioned to manage the storage of this library data.
Specifically, BRS was used to store data about LPs (and later CDs), tapes, live concert recordings, interviews and the collection of classical music scores. The system was accessed by way of dumb terminals and, later, terminal emulators on PCs. It also was used by radio staff to schedule and track the music broadcast on Concert FM—the company's classical music network.
BRS was a proprietary cataloging application sold by Maxwell Online, Inc. It ran on UNIX and had a long life, receiving a couple of hardware upgrades over its 16 years; on the software side, a few extra database tables were added for other types of data. BRS survived Y2K without a glitch, in spite of claims to the contrary, and in 2003 talks began in earnest to replace it.
In the past, a replacement project of this nature probably would have been outsourced. Experience has shown, however, that in some cases we'd end up with a closed-source custom application and be locked in to one company for ongoing upgrades and modifications. Sometimes when these companies cease trading and people move on, the application we depend on becomes an orphan, and the data is difficult or impossible to move to a new application.
DIY projects are not always appropriate, and we carefully weighed all the issues. Because of the critical nature of both the data and the application, plus the availability of in-house skills, we felt it was appropriate in this case to undertake the project ourselves.
Bruce Intemann from our IT department was the project leader and put together a quick proof of concept on a desktop PC running Red Hat Linux 8, an Apache Web server, MySQL and PHP (LAMP). Bruce was able to work out how to extract the data from BRS in plain-text format, and he constructed a simple search interface based on the Full-text Index of MySQL, with a small sample of the data converted by hand. Access was granted by way of a standard Web browser.
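MySQL full-text searching is driven by a FULLTEXT index queried with MATCH ... AGAINST. The article doesn't document the proof-of-concept schema, so the table and column names below are illustrative only; it's a sketch of the kind of query such an interface would issue:

```sql
-- Illustrative schema; the real table and column names are not documented here.
-- At the time, MySQL full-text indexes required the MyISAM storage engine.
CREATE TABLE works (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    title    VARCHAR(255),
    composer VARCHAR(255),
    notes    TEXT,
    FULLTEXT (title, composer, notes)
) ENGINE=MyISAM;

-- Natural-language full-text search, results ranked by relevance.
SELECT id, title, composer
FROM works
WHERE MATCH(title, composer, notes) AGAINST('purcell ode');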
Around this time, I was completing a PHP Web project for another part of the company and offered my skills to this new project. When it came time to name the system, I thought it would be nice to retain the B and R, since they are the first initials of the system's “parents”. My wife came up with the name BRAD, and one of our staff decided the acronym stood for Bruce and Richard's Audio Database. The name stuck.
After the proof of concept was accepted, I wrote a short Perl script to parse all the data—about 200,000 records—and insert it into the MySQL database. This was complicated because several of the smaller databases had been merged into the main database, Works, to aid global searching. Fortunately, one field was used to indicate the location (source) of the original data. See Listing 1 for a sample of BRS data.
Listing 1. One Record from the BRS in Original Text Format
*** BRS DOCUMENT BOUNDARY ***
..Document-Number: 000080019
..TI: Ode for the centenary of Trinity College Dublin, Great parent, hail to thee (Z327)
..MA: Hyperion
..CA: CDA 66476
..ME: cd
..RA: The King's Consort
..CF: vocal - ode
..CP: PURCELL
..CD: Robert King
..SO: Gillian Fisher, Evelyn Tubb (sopranos), James Bowman, Nigel Short (counter-tenors), Rogers Covey-Crump (high tenor), John Mark Ainsley (tenor), Michael George, Charles Pott (basses)
..ST: T12-21
..AT: Purcell - Complete odes and welcome songs vol 5
..DU: 002419
..PT:
..RD: 2-4 Jan 1991
..RE:
..AD:
..SR:
..RO:
..CY: 1694
..LI: Nahum Tate
..OR:
..LN:
..PU:
..RI:
..ED:
..LP:
..LQ: cp
..LQ:
..NO:
..IS:
..LD: 921012
..LU: leander
*** BRS DOCUMENT BOUNDARY ***
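The actual conversion was done with a Perl script; as a rough illustration of the same field-splitting logic, here is a minimal Python sketch. The function name is mine, and the handling of repeated tags (such as ..LQ:) is an assumption about how such fields would be collected:

```python
import re

BOUNDARY = "*** BRS DOCUMENT BOUNDARY ***"

def parse_brs(text):
    """Split a BRS export into records, one dict per document.

    Fields are introduced by '..TAG:' markers and run until the next
    marker; repeated tags are collected into lists.
    """
    records = []
    for chunk in text.split(BOUNDARY):
        fields = {}
        for tag, value in re.findall(
                r"\.\.([A-Za-z-]+):\s*(.*?)(?=\s*\.\.[A-Za-z-]+:|$)",
                chunk, re.S):
            value = value.strip()
            if tag in fields:  # repeated tag -> accumulate a list
                prev = fields[tag]
                fields[tag] = (prev if isinstance(prev, list) else [prev]) + [value]
            else:
                fields[tag] = value
        if fields:
            records.append(fields)
    return records
```

Each record in Listing 1 becomes one dictionary keyed by the two-letter tags, ready to be mapped onto MySQL columns.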
Once a complete snapshot of the data was transferred, I rewrote Bruce's code using object-oriented PHP. I also utilised a search class I wrote for another project, modifying it to display music data instead of news stories.
The rough-and-ready demo was deployed on a development server, and staff members were asked for comments. Based on their responses, we decided the best way to proceed was to improve the system continually based on staff feedback, alongside the operation of the existing system. Dual operation during development ensured that staff still had access to a working system and also allowed comparisons between the search results obtained from both systems. It also allowed staff to gain confidence in using the new system and the results it presented.
To separate out the data into its original sets, a more complex script was written to parse the data files, un-merging all the original data sources from BRS. These sets were inserted into the separate databases and tables shown in Figure 1. Each division of the company is considered a zone in BRAD, and each data source is known as a section. Any zone can contain aliases to sections in other zones or options to search across any list of tables, regardless of where they are in the system.
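The un-merging step amounts to grouping records by the field that recorded each record's original source. The real field name isn't given in the article, so "SOURCE" below is a hypothetical stand-in; this Python sketch shows the shape of the operation:

```python
from collections import defaultdict

def split_by_section(records, source_field="SOURCE"):
    """Group merged records back into their original data sets.

    'source_field' is a hypothetical name for the BRS field that
    indicated where each record originally came from; each distinct
    value becomes one section, destined for its own table.
    """
    sections = defaultdict(list)
    for rec in records:
        sections[rec.get(source_field, "unknown")].append(rec)
    return dict(sections)
```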
The BRS database was flat (nonrelational), and data had been entered by different people in different formats over many years. As I viewed the results of each snapshot going into the new system, I adjusted the Perl script to clean up some of the data anomalies—particularly in date fields. For example, the original text date field for the last update to a record was edited manually in the past—in BRAD this is a datetime field maintained by the system. Fields also were added to track the creation date.
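Cleaning hand-entered date fields generally means trying a list of known formats and flagging whatever doesn't match. The specific formats below are examples only (the YYMMDD form appears in the ..LD field of Listing 1); the real Perl script handled whatever inconsistencies each snapshot revealed:

```python
from datetime import datetime

def normalize_brs_date(raw):
    """Best-effort conversion of free-text BRS dates to ISO format.

    Tries a few example formats in turn; unparseable values are
    returned as None so they can be reviewed by hand.
    """
    raw = raw.strip()
    for fmt in ("%y%m%d", "%d %b %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            pass
    return None
```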