As I write this, Apache 2.0 has been out in stable form for nearly a month, and from everything I can tell, it's ready for prime time. While there are other open-source HTTP servers, Apache is the best known and best supported: it is used on roughly 60% of the web sites in the world, comes with virtually every Linux distribution and is even part of several commercial application servers. Both Zope and Jakarta-Tomcat have their own built-in HTTP servers, but almost no one exposes these servers directly to the Web. Rather, they use Apache as a front end because of its combination of performance and flexibility. This month, we take a closer look at Apache 2.0 [see also “Apache 2.0: the Internals of the New, Improved ‘A PatCHy’”, available at www.linuxjournal.com/article/4559].
If you are familiar with Apache 1.x, then very few things in Apache 2.0 will surprise you. For starters, Apache continues to be highly modularized, allowing you to include in your server only those modules you deem necessary. But whereas Apache 1.3 had a core that included the basic HTTP implementation, Apache 2.0 delegates protocol handling, HTTP included, to modules. This has a number of advantages, including the fact that we can now add (and remove) protocols from Apache as necessary. In other words, Apache has become a general-purpose internet server, rather than just an HTTP server. How many projects will take advantage of this functionality remains to be seen.
Apache was never meant to be the fastest server on the planet. Rather, it was designed to be extensible via a system of modules. Each module provides a different piece of functionality, and administrators interested in squeezing the last ounce of power from their systems need not include irrelevant modules. For example, if we know that our server will never run any CGI programs, then we can easily remove mod_cgi, gaining some CPU cycles and memory in the process.
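If mod_cgi was compiled as a dynamic shared object, removing it can be as simple as commenting out one line in httpd.conf. This snippet assumes a DSO build; the path is illustrative:

    # Disable CGI support to reclaim its memory and CPU cycles
    #LoadModule cgi_module modules/mod_cgi.so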
Apache 2.0 continues in the long-standing Apache tradition of handling each HTTP transaction in a number of named phases. A module may examine or modify the transaction during any one of these phases by attaching its own handler to the appropriate hook. For example, mod_speling (which corrects capitalization and spelling mistakes in URLs—the name is purposely misspelled) attaches its handler to the “fixup” phase hook, executing immediately before the server generates a response.
In Apache 1.x, only one handler could fire for a given hook. In Apache 2.0, each handler not only registers itself for a given hook, but indicates when it would like to execute relative to other modules; mod_speling, for example, registers its handler to run last (APR_HOOK_LAST). If another module were to register with the fixup hook, it would execute before mod_speling. The fact that multiple handlers can fire for a given hook opens a world of possibilities that were previously too difficult to achieve.
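To make this concrete, here is a minimal sketch of how an Apache 2.0 module attaches a handler to the fixup hook and states its ordering preference. The module and handler names (my_module, my_fixup) are hypothetical; ap_hook_fixups and APR_HOOK_LAST are the actual API:

    #include "httpd.h"
    #include "http_config.h"
    #include "http_request.h"

    /* Hypothetical fixup handler: runs just before the response is generated. */
    static int my_fixup(request_rec *r)
    {
        /* Examine or modify the transaction (r) here. */
        return DECLINED;            /* let other modules handle the request */
    }

    static void my_register_hooks(apr_pool_t *p)
    {
        /* Ask to run after all other fixup handlers, as mod_speling does. */
        ap_hook_fixups(my_fixup, NULL, NULL, APR_HOOK_LAST);
    }

    module AP_MODULE_DECLARE_DATA my_module = {
        STANDARD20_MODULE_STUFF,
        NULL,                       /* per-directory config creator */
        NULL,                       /* per-directory config merger */
        NULL,                       /* per-server config creator */
        NULL,                       /* per-server config merger */
        NULL,                       /* command table */
        my_register_hooks           /* register our hooks */
    };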
On a similar note, Apache now makes it possible for one module to filter, or modify, the output of another module. This is currently possible with mod_backhand, but that module depends on a number of tricks and dark corners in the Apache API. Apache 2.0 is designed to allow modules to act as input or output filters. This means that if you want to add a standard set of headers or footers to your HTML pages, you can now do this across the board, including for dynamically generated pages created by CGI programs, server-side includes and mod_perl handlers.
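As a sketch of what an output filter looks like, the following hypothetical module appends a footer to every response. The filter name ADD_FOOTER and the function names are invented for illustration, and for brevity the code ignores content types and the response's Content-Length header:

    #include "httpd.h"
    #include "http_config.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    static apr_status_t footer_out_filter(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        /* If this brigade ends the response, insert the footer before EOS. */
        if (!APR_BRIGADE_EMPTY(bb)) {
            apr_bucket *b = APR_BRIGADE_LAST(bb);
            if (APR_BUCKET_IS_EOS(b)) {
                static const char footer[] = "<hr><p>Served by Apache 2.0</p>";
                apr_bucket *fb = apr_bucket_immortal_create(footer,
                                                            sizeof(footer) - 1,
                                                            f->c->bucket_alloc);
                APR_BUCKET_INSERT_BEFORE(b, fb);
            }
        }
        return ap_pass_brigade(f->next, bb);    /* hand off to the next filter */
    }

    static void footer_register_hooks(apr_pool_t *p)
    {
        ap_register_output_filter("ADD_FOOTER", footer_out_filter, NULL,
                                  AP_FTYPE_RESOURCE);
    }

    module AP_MODULE_DECLARE_DATA footer_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        footer_register_hooks
    };

Once installed, the filter is activated with a directive such as SetOutputFilter ADD_FOOTER in the appropriate section of httpd.conf.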
The Apache configuration system now uses GNU autoconf rather than the Apache-specific system that was used for versions 1.x. And, many of the C-language abstractions (such as hash tables, strings and memory pools) that were included in previous versions of Apache have now been split out into the Apache Portable Runtime (APR). The APR is included with Apache and is configured and compiled into the server automatically when you build it.
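APR can also be used outside the server. Here is a small, self-contained sketch using APR's pools, strings and hash tables; link it against the APR library:

    #include <stdio.h>
    #include "apr_general.h"
    #include "apr_pools.h"
    #include "apr_strings.h"
    #include "apr_hash.h"

    int main(void)
    {
        apr_pool_t *pool;
        apr_hash_t *h;

        apr_initialize();
        apr_pool_create(&pool, NULL);       /* all allocations come from here */

        h = apr_hash_make(pool);
        apr_hash_set(h, "language", APR_HASH_KEY_STRING,
                     apr_pstrdup(pool, "C"));

        printf("language = %s\n",
               (char *) apr_hash_get(h, "language", APR_HASH_KEY_STRING));

        apr_pool_destroy(pool);             /* frees everything in one step */
        apr_terminate();
        return 0;
    }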
Finally, Apache now comes with mod_ssl, which provides SSL and TLS encryption. Not only did Apache 1.x fail to come with such a module, but the two third-party options (Apache-SSL and mod_ssl) were mutually incompatible and required patching the Apache source code before installation. The fact that mod_ssl is now a standard part of every Apache installation is a huge relief for web site administrators.
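Enabling it is now a matter of ordinary configuration. A minimal example, assuming a DSO build and illustrative certificate paths:

    LoadModule ssl_module modules/mod_ssl.so
    Listen 443
    <VirtualHost _default_:443>
        SSLEngine on
        SSLCertificateFile    conf/server.crt
        SSLCertificateKeyFile conf/server.key
    </VirtualHost>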
UNIX systems have long had the ability to run multiple processes simultaneously. I typically run Emacs, a GNOME terminal and Galeon on my Linux box; while a casual glance might only reveal these three processes, there are actually dozens more (sendmail, gnome-panel, Apache, syslogd and the like) that are executing without my direct knowledge. For a complete list of what is running on my computer, I can use the command ps aux.
The good news is that the process model is simple to understand, ensures stability on the system and is portable across many operating systems. Unfortunately, however, processes are relatively heavy and slow to create. Linux users are especially spoiled on this front, because creating a new process on Linux is a surprisingly lightweight operation; but even on Linux, spawning a new process for every task can be overkill.
For this reason, an alternative model, threads, has gained ground over the years. Using threads, a single process can be executing in multiple places at the same time. Threads offer many of the benefits of processes without the overhead. But there is a cost: programming with threads can be extremely tricky, because it's always possible that a particular piece of code is executing in two different threads simultaneously. You can always write (or rewrite) code to be threadsafe, but this is often a difficult task.
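As a tiny illustration of why thread safety takes work, the following sketch has two threads updating a shared counter. Without the mutex, increments from the two threads would silently overwrite each other:

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int i;
        for (i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* serialize access to counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Prints 200000 only because of the mutex; compile with -lpthread. */
        printf("counter = %ld\n", counter);
        return 0;
    }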
Because threads are tricky to handle, and because Apache was originally designed to work only on UNIX machines, Apache 1.x worked exclusively at the process level: if you want to handle ten simultaneous HTTP requests, then you must have ten Apache processes running. Because it takes time to create a new process, Apache 1.x took an idea from NCSA HTTPd, preforking processes before they are actually needed. This means that Apache can be a bit slow to start up, but handling incoming connections does not take much time. Apache also allows administrators to indicate how many “spare servers” should always exist, adding and removing Apache processes as necessary.
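These spare servers are controlled with a handful of httpd.conf directives; the values below are illustrative:

    # Number of server processes to create at startup
    StartServers       5
    # Fork new processes when idle servers drop below this
    MinSpareServers    5
    # Kill off processes when idle servers exceed this
    MaxSpareServers   10
    # Ceiling on simultaneous server processes
    MaxClients       150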
Preforked Apache servers are solid, well understood and robust. But on many systems, using processes is inferior to using threads. In particular, Windows favors threads over processes, which means that by sticking with processes, Apache was limited in its ability to penetrate the Windows market.
Apache 2.0 solves these problems with MPMs (multiprocessing modules). Each MPM is an Apache module that handles the details of processes and threads. On Windows, OS/2 and BeOS, this means that you can finally run Apache using a threading mechanism that is native to your operating system. On UNIX and Linux systems, you can experiment with a number of different models, choosing one that is appropriate for your needs.
The prefork MPM, which runs in exactly the same way as Apache 1.x did, is the default choice when you install Apache. Two other choices for Linux users are: 1) worker, in which each process contains a fixed number of threads, and the number of processes rises and falls according to the number of incoming requests; and 2) perchild, in which the number of processes remains constant, but the number of threads within each process rises and falls with the load.
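The MPM is chosen when the server is compiled (for example, by passing --with-mpm=worker to configure). The worker MPM is then tuned with directives such as the following; the values shown here are illustrative:

    # Initial number of child processes
    StartServers        2
    # Maximum simultaneous connections
    MaxClients        150
    # Spawn processes when total idle threads fall below this
    MinSpareThreads    25
    # Reap processes when total idle threads exceed this
    MaxSpareThreads    75
    # Fixed number of threads within each child process
    ThreadsPerChild    25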
It's too early to tell, but I expect that more MPMs will emerge over time, and that there will be numerous modules that take advantage of threads to pool database connections, share application data and spawn asynchronous tasks in the background.