kHTTPd, a Kernel-Based Web Server
Web servers have become an important part of today's infrastructure for business, trading, entertainment and information. Some of the Web's sites take millions of hits every day, or even every hour. It is only natural that computer science researchers soon began wondering how to make web servers faster, more resource-efficient and fail-safe.
The search for more speed triggered a whole new area of operating-system research: execution path analysis. The author of this article, while a Ph.D. student, researched this topic at length by studying the Apache server. One of the most interesting discoveries was that for static pages, more than 80 percent of the instructions are actually executed in kernel space (supervisor mode). This has some serious implications.
First, as we learned in the previous three columns, the only way for a user program such as Apache to enter kernel space is to execute a system call. System calls are expensive because they involve complex checks and searches of many kernel tables.
Also, switching from user space to kernel space and back often flushes entries from the on-processor TLB (Translation Lookaside Buffer) as well as from the primary and secondary caches.
As a consequence, Linux kernel developers realized that a kernel-based web server was needed. Such a kernel-space web server would not incur the cost of switching back and forth between user and kernel mode.
Just such a kernel-space web server, called kHTTPd, was implemented in Linux kernel versions 2.3.x and 2.4. kHTTPd is different from other kernel web servers in that it runs from within the Linux kernel as a module (device driver).
kHTTPd handles only static (file-based) web pages, and passes all requests for non-static information to a regular user-space web server such as Apache or Zeus. Static web pages, while not complex to serve, are nevertheless very important. This is because virtually all images are static, as are a large portion of HTML pages. A “regular” web server adds little value for static pages; it is simply a “copy file to network” operation. The Linux kernel is very good at this; the NFS (network file system) dæmon, for example, also runs in the kernel.
“Accelerating” the simple case of serving static pages within the kernel leaves user-space dæmons free to do what they are very good at: generating user-specific, dynamic content. A user-space web server such as Apache, typically loaded with many features and many execution paths, can't be as fast as kHTTPd. There are, however, a few web servers as simple as kHTTPd but implemented in user space; these are not expensive consumers of processor cycles, even compared with kHTTPd.
kHTTPd is very simple; it can't handle dynamic content. So, it proxies all requests for the directories you configure (via the sysctl called “dynamic”) to a fully functional user-space web server such as Apache. It's a global win, though, since most of the transfers of a common web server are images, which are definitely static.
kHTTPd is actually not much different from a normal http dæmon in principle. The main difference is that it bypasses the syscall layer. Normally, an http server contains code like this:
socket(...);
bind(...);
listen(...);
accept(...);
and each call has to enter the kernel, look up kernel structures based on the parameters passed, return information to user space and so on.
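To make the cost concrete, here is a minimal sketch of that user-space setup sequence. The function name setup_listener and the loopback/any-port choices are this column's illustration, not anything from kHTTPd itself; the point is simply that every numbered call below is a separate round trip into the kernel.

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Set up a listening TCP socket on the loopback interface.
   Each numbered call below is a separate system call: a user/kernel
   mode switch, argument validation and kernel table lookups. */
int setup_listener(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);          /* syscall #1 */
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                                 /* any free port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||  /* #2 */
        listen(fd, 16) < 0) {                          /* syscall #3 */
        close(fd);
        return -1;
    }

    /* accept(2) would be syscall #4, repeated once per connection. */
    return fd;
}
```

A real server would then call accept() in a loop, paying the boundary-crossing cost once more for every incoming connection; kHTTPd performs the equivalent work on in-kernel structures without any of these transitions.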
Being a kernel dæmon itself, kHTTPd interfaces directly with the internal kernel structures and system calls involved and so avoids the user-kernel interaction completely. Also, because it's a kernel dæmon, it avoids switch_mm and TLB flushes. Last but not least, it avoids all enter/exit kernel overhead.
There are not many data structures in kHTTPd; they are defined in net/khttpd/structure.h.
The first is a per-connection structure. The second is a per-kHTTPd-thread structure through which many http_request instances can be queued.
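The shape of these two structures can be sketched as follows. This is an illustration only: the field names and queue layout here are assumptions for exposition, not the actual contents of net/khttpd/structure.h, which you should consult directly.

```c
#include <stddef.h>

/* Illustrative sketch only -- field names are assumptions,
   not the actual layout in net/khttpd/structure.h. */

/* Per-connection state: one instance per HTTP request being served. */
struct http_request {
    int sock;                    /* in-kernel code holds a struct socket * */
    char filename[256];          /* requested static file                  */
    size_t bytes_sent;           /* progress of the file-to-network copy   */
    struct http_request *next;   /* intrusive link for the thread's queues */
};

/* Per-thread state: each kHTTPd thread moves requests between queues
   as they progress (headers arriving, body being sent, logging). */
struct khttpd_threadinfo {
    struct http_request *waitforread;  /* request not fully read yet */
    struct http_request *datasending;  /* file body being sent       */
    struct http_request *logging;      /* finished, to be logged     */
};
```

The design choice worth noting is the intrusive next pointer: a request costs no extra allocation to move between queues, which matters when one kernel thread juggles many connections.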
kHTTPd can be compiled as a loadable module, or linked statically into the kernel. Linking statically into the kernel gives better performance, because statically linked code sits in the kernel's permanent, large-page mappings and so suffers fewer TLB misses than module code, which is loaded into vmalloc space under ordinary page-sized mappings.
Control of kHTTPd is performed via the /proc filesystem at /proc/sys/net/khttpd. Table 1 shows the sysctl parameters that can be set, along with a description of each (from the documentation).
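A typical configuration session, run as root, might look like the sketch below. The parameter names follow the kHTTPd documentation, but the exact values and the document root path are placeholders; check Table 1 and your kernel's documentation before relying on them.

```shell
modprobe khttpd                                   # load the module

echo 8080 > /proc/sys/net/khttpd/serverport       # port kHTTPd listens on
echo 80   > /proc/sys/net/khttpd/clientport       # user-space server's port
echo /var/www > /proc/sys/net/khttpd/documentroot # where static files live
echo cgi-bin  > /proc/sys/net/khttpd/dynamic      # proxy these to Apache

echo 1 > /proc/sys/net/khttpd/start               # begin serving
# ... later ...
echo 1 > /proc/sys/net/khttpd/stop                # stop serving
echo 1 > /proc/sys/net/khttpd/unload              # allow module removal
```

Because the whole interface is plain /proc files, no special configuration tool is needed; any request kHTTPd cannot serve statically is handed to the user-space server listening on clientport.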
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
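That find-plus-grep combination can be written in one line. The sandbox directory and the "ERROR" search string below are invented for the demonstration; in practice you would point find at /home and grep for whatever entry you care about.

```shell
# Build a tiny sandbox so the example is self-contained.
tmp=$(mktemp -d)
mkdir -p "$tmp/home/alice" "$tmp/home/bob"
echo "ERROR: disk full" > "$tmp/home/alice/app.log"
echo "all good"         > "$tmp/home/bob/app.log"
echo "ERROR in notes"   > "$tmp/home/alice/notes.txt"   # not a .log file

# Find every .log file under the tree; grep -l prints only the
# names of files that contain the entry.
find "$tmp/home" -name '*.log' -exec grep -l 'ERROR' {} +

rm -rf "$tmp"
```

Only alice's app.log is printed: notes.txt matches the pattern but is filtered out by find, and bob's log is searched but contains no ERROR line.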
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.