Using mSQL in a Web-Based Production Environment
Over the past few years, many companies have realized the benefits of using Linux to serve web content to the masses. The power of a freely available, feature-laden 32-bit operating system, coupled with a vast number of utilities and development tools, provides a cost-effective solution for implementing enterprise and publicly available information servers.
While many organizations have championed Linux as a web server, few have taken advantage of perhaps the most interesting aspect of the Web: dynamic content generation and delivery. Think about it. Of all the web sites you visit on a regular basis, how many of them have static content? Not many. Many of us go to Yahoo! each day to see “What's New on the Internet”. Many cruise over to catch the news on CNN throughout the day. These sites have dynamic content. If the listings on Yahoo! didn't change every day, how many of us would go back after the first visit?
To provide dynamic content to your cyberguests, you can use a variety of tools and methods. One of the more popular approaches is to integrate data repositories with the Web. Creating web-based applications that integrate with existing database pools seems to be all the rage this year. This paradigm has led to some amazing third-party products such as Bluestone Software's Sapphire/Web (http://www.bluestone.com/) and Haht Software's HahtSite (http://www.haht.com/). These products provide full development environments for designing, creating and deploying web-based applications. Unfortunately, the majority of these products are not yet available for Linux (iBCS options ignored for the moment). However, there is an alternative.
You can retrofit a Linux-based web server to provide access to enterprise data in a very cost-effective manner. Third-party packages typically include an integrated development environment (IDE) for seamless, relatively painless development; this can easily be replaced by your favorite text/HTML editor. Third-party packages also interface nicely with expensive, proprietary database platforms such as Oracle, Sybase and Informix. These database systems cost thousands of dollars and generally require a seasoned database administrator (DBA) to operate efficiently; to fully implement such an approach, expect to spend no less than $10,000 on the software alone. In our Linux model, we will instead employ David Hughes' mSQL engine, which costs a whopping $170 USD and is a breeze to use. The Linux/mSQL approach (including the cost of a Linux CD-ROM distribution, the mSQL engine and coffee) should cost around $250. Senior management has always had a love affair with saving money, so show them the numbers. It sells itself, folks.
In this article, the following assumptions are made:
You have a working, fully installed Linux server.
You have a functional HTTP server running (NCSA, CERN, Apache, etc.).
You have installed either BASH, pdksh or ksh93.
You have the standard Unix tools in place (awk, sed, Perl, etc.).
The first item you need is the mSQL (mini Structured Query Language) engine itself. The mSQL package implements a relatively fast, lightweight database engine that uses a subset of the ANSI SQL standard to perform its operations. As of this writing, the current stable release is version 1.0.16, although the long awaited v2.0 release has been promised soon. It can be obtained via ftp at ftp://bond.edu.au/pub/Minerva/msql/. The official home of mSQL is at http://Hughes.com.au/.
Next, you need the w3-msql package, also written and distributed by David Hughes. This package provides the CGI (Common Gateway Interface) interface to the databases managed by mSQL. As of this writing, the current version of w3-msql is version 1.0, although 2.0 is in the works. It is available via ftp at ftp://bond.edu.au/pub/Minerva/w3-msql/.
Finally, the example scripts presented in this article are available via ftp at ftp://www.dcicorp.com/pub/unix/msqlweb/. Unless you are a typing enthusiast and are already familiar with mSQL, I recommend you snag the examples.
Once you have obtained the distribution archive, move it to either a scratch directory or the base of your normal source tree. You can extract the package as follows:
gzip -d msql-1.0.16.tar.gz
tar xf msql-1.0.16.tar
To prepare for compilation, switch to the ./msql-1.0.16 directory and execute the following commands:
make target
cd targets/Linux*
./setup
You will be asked the following questions pertaining to the actual build of the package. Here are a few notes to guide you:
Top of install tree? While mSQL can be installed virtually anywhere on your system, you should use the default path, /usr/local/Minerva. It makes installing third-party add-ons easier.
Will this installation be running as root? This question is concerned primarily with the TCP port mSQL uses for network communication. If your distribution is running as root, the default TCP port is 1112; otherwise port 4333 is used. You can tailor these defaults in the ./common/site.h header file. Also, take a look at the mSQL FAQ, available at the mSQL web site, which describes a number of other scenarios this setting affects.
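The root/non-root port distinction can be sketched as a small shell check. The logic below simply mirrors the defaults described above (1112 for root, 4333 otherwise); the authoritative place to change them is ./common/site.h.

```shell
# Sketch of the default port selection described above:
# root installations default to TCP port 1112, non-root ones to 4333.
if [ "$(id -u)" -eq 0 ]; then
    MSQL_TCP_PORT=1112   # installation running as root
else
    MSQL_TCP_PORT=4333   # non-root installation
fi
echo "msqld would listen on TCP port $MSQL_TCP_PORT"
```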
Directory for PID file? Where do you keep your PID files? The default is /var/adm, which is fine for most folks.
At this point, the script will finish its tailoring process. Before you actually compile the package, you can perform several customizations by editing a few of the source files. The first, ./common/site.h, contains such gems as selecting the German language over English for error reporting. Give it a quick glance and make sure you are comfortable with the settings. Another possible modification lies in the ./msql/msql_priv.h file. I like to bolster my database limits a bit. At the top of this file are several values you can alter to suit your needs, including the maximum number of fields returned in a query, maximum number of network connections allowed, and the maximum length for field and table names. Feel free to modify these as you see fit. For the non-adventurous, the defaults should suffice.
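Bumping one of these compile-time limits is a one-line edit before you build. A minimal sketch follows; note that the file contents and macro names below are simplified stand-ins for illustration only, so check the top of ./msql/msql_priv.h for the real names and default values.

```shell
# Create a stand-in header to demonstrate the edit (the real file is
# ./msql/msql_priv.h; the macro names here are illustrative assumptions):
cat > msql_priv.h <<'EOF'
#define MAX_FIELDS      30
#define MAX_CONNECTIONS 24
EOF

# Double the allowed simultaneous connections before compiling:
sed -i 's/#define MAX_CONNECTIONS 24/#define MAX_CONNECTIONS 48/' msql_priv.h
grep MAX_CONNECTIONS msql_priv.h
```

Because these are compile-time constants, any change requires rebuilding and reinstalling the package to take effect.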
To compile the package, simply execute the following command from the base source directory (./targets/Linux*):
make all
Compilation on a Pentium-class machine generally takes a little over a minute. If there are no compiler errors, you can install the package by executing the following command:
make install
The system is installed in /usr/local/Minerva (or whatever you set the install directory to when you ran setup).
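A quick sanity check after installation confirms that the expected binaries landed under the install tree. This is just a convenience sketch; it assumes the default /usr/local/Minerva layout, so adjust MSQL_HOME if you chose a different directory during setup.

```shell
# Hedged post-install check: look for the core mSQL programs under the
# install tree (default /usr/local/Minerva).
MSQL_HOME=${MSQL_HOME:-/usr/local/Minerva}
status=""
for prog in msqld msql msqladmin; do
    if [ -x "$MSQL_HOME/bin/$prog" ]; then
        status="$status $prog:ok"
    else
        status="$status $prog:missing"
    fi
done
echo "install check:$status"
```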
Compiling and installing the w3-msql utility is much simpler. After you obtain the distribution archive, extract it into your source or scratch directory as follows:
gzip -d w3-msql-1.0.tar.gz
tar xf w3-msql-1.0.tar
Change into the w3-msql-1.0 directory, and remove the -lsocket -lnls assignment to the make variable LIBS. Linux does not require these libraries to be linked into the application. Run make, and you are in business. If the build was successful, simply copy the w3-msql binary image over to your web server's cgi-bin directory.
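Once the binary is in place, pages are served through the CGI rather than directly, so requests take the form http://your.server/cgi-bin/w3-msql/path/to/page.html. The snippet below is a small hedged check that the copy succeeded; the cgi-bin path is an assumption, so substitute your HTTP server's actual CGI directory.

```shell
# Verify the w3-msql binary was copied into the web server's cgi-bin.
# The default path below is an assumption; adjust CGI_BIN for your server.
CGI_BIN=${CGI_BIN:-/usr/local/etc/httpd/cgi-bin}
if [ -x "$CGI_BIN/w3-msql" ]; then
    echo "w3-msql ready in $CGI_BIN"
else
    echo "w3-msql not found in $CGI_BIN; copy the binary there"
fi
```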