FastCGI: Persistent Applications for Your Web Server
The Common Gateway Interface is nearly as old as the web server itself. NCSA added the CGI 1.0 specification to version 1.0 of the granddaddy of all web servers, httpd. CGI 1.1, the current specification, was added to the 1.2 release. Every popular web server package developed since then has incorporated CGI as a way—usually the way—for web-borne visitors to access server-based executables.
CGI works well for small or infrequently used programs whose sole function is to respond to one-time requests, such as processing simple information from HTML forms. There's no sense in clogging up your memory or process table with small applications invoked only a few times an hour.
The opposite is true, however, for complex or frequently used programs. Your web server can slow to a crawl if your site depends on a script with a long initialization process, particularly one that involves connecting to a database or reading and structuring information from large text files. Speed issues are even more critical for small sites whose servers also handle mail, FTP or DNS requests.
CGI applications must be launched anew with each invocation, a limitation that leads to two problems. First, the hardware and operating system have to deal with the overhead of creating a process for every CGI request. Second, CGI scripts can't handle persistent variables or data structures; they must be rebuilt with each invocation.
Open Market's FastCGI interface is one way to overcome CGI's limitations. FastCGI applications are invoked via URLs just like their CGI counterparts. The difference is that they're persistent; they function like servers within a server. FastCGI offers benefits in three areas:
Speed: a FastCGI script goes through only one process-creation cycle, no matter how many requests it eventually serves. Initialization of data and database connections is done only once. A further benefit of FastCGI is that it can connect to processes running on remote machines, taking the processing burden off the main server.
Persistence: even if you don't have access to an SQL server, FastCGI enables many database-like functions by storing your complex methods, objects and variables in RAM. Data can be stored across sessions, providing a workaround for the statelessness of HTTP connections.
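To make the speed and persistence points concrete, here is a minimal sketch of a FastCGI response loop written with FCGI.pm. The setup routine and variable names are illustrative, not part of any real application: the point is that the expensive initialization happens once, before the loop, and that ordinary Perl variables survive from one request to the next.

```perl
#!/usr/bin/perl -w
use strict;
use FCGI;

# Expensive initialization happens once, before any request arrives.
# (Imagine a DBI->connect or a large text-file parse here.)
my %config  = load_config();   # hypothetical setup routine
my $counter = 0;               # persists across requests

my $request = FCGI::Request();

# Each pass through the loop handles one incoming request.
while ($request->Accept() >= 0) {
    $counter++;
    print "Content-type: text/html\r\n\r\n";
    print "<html><body>";
    print "<p>This process has served $counter requests.</p>";
    print "</body></html>";
}

sub load_config {
    # Stand-in for real initialization work.
    return (started => scalar localtime);
}
```

Run under a FastCGI-aware server, the counter climbs with every hit; a plain CGI script, re-executed for each request, could manage that only with some kind of external storage.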
Process management: Apache's implementation of FastCGI gives the server daemon the ability to take care of FastCGI applications, automatically restarting them should they die off. Other servers may share this ability; my experience is limited to Apache.
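Under Apache, that management is set up in httpd.conf. The directive below is the one the mod_fastcgi documentation of this era describes for starting and supervising an application; the path, handler extension and process count are examples only, so check them against your own installation:

```apache
# Treat files ending in .fcgi as FastCGI applications.
AddHandler fastcgi-script .fcgi

# Start the application at server startup and restart it if it dies.
# (Path and process count are examples.)
AppClass /home/httpd/fcgi-bin/counter.fcgi -processes 1
```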
FastCGI is not the first—nor, I'm sure, will it be the last—approach to move beyond CGI. Most web servers have APIs that allow developers to write new functions right into the server. Doug MacEachern's mod_perl module allows the Perl runtime library to be compiled directly into Apache, giving hackers the ability to write server modules entirely in Perl.
I prefer FastCGI to its alternatives for four reasons:
It's not language-specific. mod_perl and proprietary APIs all dictate that the developer use a certain programming language. FastCGI applications can currently be written in Perl, C/C++, Java or Python, and the standard is flexible enough that other languages could be added in the future.
It's not server-specific. Actually, the implementations of FastCGI are server-specific, but the standard is not tied to one software package. FastCGI is currently supported on Apache, Roxen, Stronghold, and Zeus; a commercial variation is available for Netscape and Microsoft servers from Fast Engines, http://www.fastserv.com/.
FastCGI applications don't run in the server's name space. If a FastCGI application dies, it doesn't take the server down with it. Also, since FastCGI scripts run as separate processes, they don't increase the size of the server executable.
It's scalable. FastCGI scripts can be configured to run remotely via a TCP/IP connection, providing a method for load sharing.
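As a sketch of that arrangement, mod_fastcgi lets you declare an application that Apache does not spawn itself but instead reaches over a TCP/IP connection. The hostname and port here are placeholders, and the exact directive spelling should be verified against your mod_fastcgi version's documentation:

```apache
# The application runs on another machine; Apache connects to it
# over TCP rather than spawning it locally. (Host and port are examples.)
ExternalAppClass remote-counter -host backend.example.com:8888
```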
My server platform is a fairly standard Linux box: a 120MHz Pentium with 64MB RAM running the 2.0.27 kernel. If you already get decent performance from your hardware, you won't have any trouble with FastCGI.
As to software, I use Apache and Perl; the material below is unabashedly biased in their favor. If you want to do your coding in C/C++, Tcl, Java or Python, or if you want to use different server software, I suggest you visit http://fastcgi.idle.com/ for further information. On the other hand, most of the coding hints I'll be providing are applicable to any language.
To use Perl and Apache, you'll need to do some recompiling. Apache must be rebuilt with the FastCGI module, and you'll also have to compile a Perl module. A few months ago you would have needed to rebuild Perl as well (I'll provide instructions in case you need or want to do so), but you can now probably get by with your current Perl build. Even if you're not an accomplished C programmer, the compilation process is fairly painless. Here's what you'll need:
A C compiler of recent vintage: I've used gcc 2.7.x without any problems.
The Apache source code: I use Apache 1.3.0 (1.3.1 is the current revision). Apache comes with most Linux distributions, or you can download it from http://www.apache.org/.
The Perl 5.004 or 5.005 source code: I strongly recommend that you upgrade if you're using an older version. If nothing else, it's a good opportunity to take a peek at the new and improved Perl home page at http://www.perl.com/.
The mod_fastcgi source code: The current version (as of September 3, 1998) is 2.0.17. It's compatible with Apache 1.3.1 and is available at http://fastcgi.idle.com/fastcgi.tar.gz.
Documentation and sample scripts: These are available with The FastCGI Developer's Kit, links to which are provided at http://fastcgi.idle.com/.
Sven Verdoolaege's FCGI.pm (http://www.perl.com/CPAN/modules/by-module/FCGI/) handles the Perl-to-FastCGI interaction; this is the module on which my examples are based. Alternatively, you can use Lincoln Stein's CGI::Fast module included in the standard Perl distribution (in which case you'll need to tweak my example code a bit).
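For comparison, CGI::Fast wraps the same accept loop in CGI.pm's familiar idiom. This is a minimal sketch rather than a drop-in replacement for the FCGI.pm examples elsewhere in this article:

```perl
#!/usr/bin/perl -w
use strict;
use CGI::Fast;

my $counter = 0;   # persists across requests, as with FCGI.pm

# CGI::Fast->new returns a CGI object per request, undef at shutdown.
while (my $q = CGI::Fast->new) {
    $counter++;
    print $q->header,
          $q->start_html('Hello'),
          $q->p("Request number $counter"),
          $q->end_html;
}
```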
You may need AT&T's freely distributed Safe/Fast I/O (sfio) libraries, available from http://www.research.att.com/sw/tools/sfio/. Until last June, Perl needed to be rebuilt with sfio to be able to handle FastCGI I/O streams. The new versions of FCGI.pm work without sfio (at least I've had no trouble), but some posts to the fastcgi-developers mailing list suggest that there may still be a few kinks in the new module. My recommendation is that you try FastCGI on a stock Perl build before resorting to building it with sfio.
Once you've gathered the necessary source code, you'll be ready to spend some quality make time. Compilation is done in two segments: Perl and Apache. Either can be done first, but within each segment the steps have to be completed in a certain order.
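As a preview of the Apache segment, the usual sequence with Apache 1.3 is to unpack mod_fastcgi into the Apache source tree and activate it at configure time. The paths, directory names and version numbers below are examples; adjust them to match the tarballs you actually downloaded:

```shell
# Unpack mod_fastcgi inside the Apache source tree (paths are examples).
cd apache_1.3.1/src/modules
tar xzf /tmp/fastcgi.tar.gz
mv fastcgi-2.0.17 fastcgi

# Build Apache with the module activated.
cd ../..
./configure --activate-module=src/modules/fastcgi/libfastcgi.a
make
make install
```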