Using C for CGI Programming

You can speed up complex Web tasks while retaining the simplicity of CGI. With many useful libraries available, the jump from a scripting language to C isn't as big as you might think.
Debugging CGI Programs

One distinct disadvantage of C is that errors tend to cause a segmentation fault with no diagnostic message pointing to the source of the error. Debuggers are fine for most other types of programs, but CGI programs present a special challenge because of the way they acquire input: the Web server hands it to them through environment variables and standard input, which is awkward to reproduce from an interactive debugger session.

To help with this challenge, the cgic library includes a CGI program called capture, which saves to a file any CGI input sent to it. You set this filename in capture's source code. When your CGI program needs debugging, add a call to cgiReadEnvironment(char *) at the top of your cgiMain() function, passing the same filename you set in capture. Then, send the problematic data to capture by making it either the action of the form or the target of your request. You now can use GDB or your favorite debugger to see what sort of trouble your code has generated.
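
As a rough sketch of how the pieces fit together (it assumes the standard cgic entry points and uses /tmp/capcgi.dat as a stand-in for whatever filename you compiled into capture), a debuggable cgiMain() might start like this:

    #include <stdio.h>
    #include "cgic.h"

    int cgiMain(void)
    {
        char name[81];

    #ifdef CGI_DEBUG
        /* Replay the request saved earlier by capture instead of
           reading it from the Web server, so the program can be run
           directly under GDB.  The filename must match capture's. */
        if (cgiReadEnvironment("/tmp/capcgi.dat") != cgiEnvironmentSuccess) {
            fprintf(stderr, "cannot read saved CGI environment\n");
            return 1;
        }
    #endif

        cgiHeaderContentType("text/html");
        /* Ordinary cgic processing follows; "name" is just an example
           form field. */
        cgiFormStringNoNewlines("name", name, sizeof(name));
        fprintf(cgiOut, "<p>Hello, %s</p>\n", name);
        return 0;
    }

Build with -DCGI_DEBUG while debugging and without it for deployment; the macro name is only a convenience here, not anything cgic requires.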

You can take some steps to simplify later debugging and development. Although these apply to all programming, they pay off particularly well in CGI programming. Remember that a function should do one thing and one thing only, and test early and test often.

It's a good idea to test each function you write as soon as possible to make sure it performs as expected. It's also worth seeing how it responds to erroneous data, because it's highly likely that at some point the function will be given bad data. Catching this behavior ahead of time can save unpleasant calls during your off hours.
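
The test itself doesn't need to be elaborate. A throwaway driver along the following lines is usually enough to shake out the obvious failures; parse_year() is a made-up example of the kind of helper a CGI program might run on form input:

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Returns 1 and stores the value if the string looks like a sane
       year, 0 otherwise.  Rejects NULL, empty strings and trailing
       junk rather than trusting the caller. */
    static int parse_year(const char *s, int *year)
    {
        char *end;
        long v;

        if (s == NULL || *s == '\0')
            return 0;
        v = strtol(s, &end, 10);
        if (*end != '\0' || v < 1970 || v > 2100)
            return 0;
        *year = (int)v;
        return 1;
    }

    int main(void)
    {
        int y;

        assert(parse_year("2004", &y) && y == 2004);  /* expected data */
        assert(!parse_year("", &y));                  /* empty field   */
        assert(!parse_year("04dec", &y));             /* trailing junk */
        assert(!parse_year(NULL, &y));                /* missing field */
        puts("parse_year: all tests passed");
        return 0;
    }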

Deployment

In most situations, your development machine and your deployment machine are not going to be the same. As much as possible, try to make your development system match the production system. For instance, my software tends to be developed on Linux or OpenBSD and nearly always is deployed on FreeBSD.

When you're preparing to build or install on the deployment machine, it is particularly important to be aware of differences in library versions. You can see which dynamic libraries your code uses with ldd. It's a good idea to check this information, because you may well be surprised by the additional dependencies your libraries bring with them.
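
For example, running ldd on a hypothetical build of events.cgi linked against the MySQL client library might show something like the following (the libraries and load addresses are only illustrative); a dependency the loader cannot locate appears as "not found":

    $ ldd events.cgi
            libmysqlclient.so.10 => /usr/lib/libmysqlclient.so.10 (0x40020000)
            libz.so.1 => /usr/lib/libz.so.1 (0x40058000)
            libcrypt.so.1 => /lib/libcrypt.so.1 (0x40066000)
            libc.so.6 => /lib/libc.so.6 (0x40094000)
            /lib/ld-linux.so.2 (0x40000000)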

If the library versions are close (shared libraries that keep the same major number are meant to stay binary-compatible), there probably isn't a big problem. If you're deploying to an externally hosted Web site, however, it's not uncommon for the development and deployment machines to have incompatible versions.

The solution I use is to compile my own local copy of the library. Remove the shared version from that local build so the linker falls back to the static archive, and link against this local copy rather than the system version. It bulks up your binary, but it removes your dependency on libraries you don't control.
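
The mechanics are simple: build the library into a local directory, leave only the static archive there and point the linker at it. The library name and paths below are placeholders:

    $ ls locallib/
    libmysqlclient.a
    $ gcc -o events.cgi events.c -Llocallib -lmysqlclient

Because locallib/ contains no shared (.so) copy, the linker pulls the code out of the static archive and into the binary, which is where the extra bulk comes from.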

Once you have built your binary on the deployment system, run ldd again to make sure all of the dynamic libraries have been found. Especially when you are linking against a local copy of a library, it's easy to forget to remove its shared version; the binary then records a dependency that won't be found at runtime (or by ldd). Keep tweaking the build process; build and recheck until there are no unfound libraries.

Speed: CGI vs. PHP

Conventional wisdom holds that a program using the CGI interface is slower than a program using a language provided by a server module, such as mod_php or mod_perl. Because I started writing Web applications with PHP, I use it here as my basis for comparison with a CGI program written in C. I make no assertions about the relative speed of C vs. Perl.

For the comparison, I used the external interface to the database (events.cgi and events.php), because both versions used the same method for providing interface separation. The internal interface was not tested, as calls to the external interface should dwarf calls to the internal one.

ApacheBench (ab) was used to hit each version with 10,000 queries, as fast as the server could take them. The C version had a mean transaction time of 581ms, and the PHP version had a mean transaction time of 601ms. With times that close, I suspected that repeating the tests would show some variation, and it did, although the C version was slightly faster than the PHP version more often than not.
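
For reference, a run of that shape is as simple as the following; the URLs are placeholders for wherever the two versions actually live:

    $ ab -n 10000 http://localhost/cgi-bin/events.cgi
    $ ab -n 10000 http://localhost/events.php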

My normal development uses a more complex interface separation library, libtemplate (see Resources), of which I have both PHP and C versions. When I compared versions of the event scheduler using libtemplate, I found that C had a much more favorable response time. The mean transaction time for the C version was 625ms, not much more than it was for the simpler version. The PHP version had a mean transaction time of 1,957ms. It also was notable that the system load average while the PHP version was running generally was twice what was seen while the C version was running. No users were on the system, and no other significant applications were running when this test was done.

The fairly close times of the two C versions tell us that most of the execution time is spent loading the program; once loaded, it executes quite quickly. PHP, on the other hand, executes relatively slowly. Of course, PHP doesn't escape the problem of having to be loaded into memory, and it also must be compiled, a step the C program already has been through.
