Open-Source Web Servers: Performance on a Carrier-Class Linux Platform

Ibrahim tests the performance of three open-source web servers on a typical Ericsson Research Linux cluster platform.

Scalability Results

We collected graphs for systems with 1, 2, 4, 6, 8, 10 and 12 Linux processors. For each graph, we recorded the maximum number of requests per second that each configuration can service. When we divide this number by the number of Linux processors, we get the maximum number of requests that each processor can process per second in each configuration.
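
As a rough sketch of this calculation (the request rates below are placeholders for illustration, not our measured values), dividing each configuration's peak requests per second by its CPU count gives the per-processor figure plotted in the scalability charts:

    # Illustrative only: placeholder peak rates, not the article's measurements.
    peak_rps = {1: 1000, 2: 1950, 4: 3800, 6: 5600, 8: 7300, 10: 8900, 12: 10400}

    for cpus in sorted(peak_rps):
        per_cpu = peak_rps[cpus] / cpus
        print(f"{cpus:2d} CPUs: {peak_rps[cpus]:6d} req/s total -> {per_cpu:7.1f} req/s per CPU")

A perfectly scalable system would print the same per-CPU value on every line, which is why a flat line in the following charts indicates linear scaling.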

Figures 12 and 13 show the transaction capability per processor plotted against the cluster size for both versions of Apache. In both figures the line is not flat, which means that the scalability is not linear, i.e., not optimal.

Figure 12. Apache 2.08a Scalability Chart

Figure 13. Apache 1.3.14 Scalability Chart

If we collect the scalability data of Apache 1.3.14 and 2.08a (see Figure 14) and create the corresponding graph, Figure 15, we observe that the two servers scale similarly.

Figure 14. Scalability Data Comparison

Figure 15. Apache 1.3.14 vs. 2.08a Scalability

On Linux systems, both versions of the server have similar scalability. According to our results, Apache 2.08a is around 2% more scalable than version 1.3.14. In either case, per-CPU performance shows a slow, roughly linear decline: the more CPUs we add beyond eight, the less performance we get from each one.
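
One way such a scalability figure could be expressed (a sketch for illustration only; the numbers are hypothetical, and this is not necessarily how the 2% result above was derived from the data in Figure 14) is to compare the per-CPU rate at the largest cluster size with the per-CPU rate on a single CPU:

    # Hypothetical numbers; illustrates one possible scalability metric.
    def efficiency(peak_rps):
        """peak_rps maps cluster size -> peak requests per second."""
        smallest, largest = min(peak_rps), max(peak_rps)
        return (peak_rps[largest] / largest) / (peak_rps[smallest] / smallest)

    apache_1_3_14 = {1: 1000, 12: 10200}   # placeholder values
    apache_2_08a = {1: 1000, 12: 10450}    # placeholder values
    print(f"Apache 1.3.14 per-CPU efficiency at 12 CPUs: {efficiency(apache_1_3_14):.1%}")
    print(f"Apache 2.08a per-CPU efficiency at 12 CPUs: {efficiency(apache_2_08a):.1%}")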

As for the Java-based web servers, although Tomcat showed better performance (servicing more requests per second) than Jigsaw, it exhibited a slight scalability problem: Figure 16 shows a small decrease in performance per processor as we add more processors.

Figure 16. Tomcat Scalability Chart

Nonetheless, there are many possible explanations for the scalability degradation with the addition of more processors.

Factors Affecting Results

Several factors could have affected the results of the benchmarking tests:

  1. We used NFS to store the workload tree of WebBench to make it available for all the CPUs. This could present a bottleneck at the NFS level when hundreds of clients per second are trying to access NFS-stored files.

  2. Jigsaw and Tomcat are Java-based web servers, and thus their performance depends much on the performance of the Java Virtual Machine, which is also started from an NFS partition (since the CPUs are diskless and share I/O space through NFS).

  3. To generate Web traffic, we were limited to only 16 Celeron rackmount units. The generated traffic may not have been enough to saturate the CPUs, especially in the case of Apache when we were testing more than six CPUs (see the rough estimate following this list).
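
A back-of-the-envelope check of the third factor can be sketched as follows. The per-client rate is an assumption on our part, and the single-CPU capacity is taken roughly from the driver tests described in the next section; the point is only to show how quickly 16 clients run out of headroom as CPUs are added:

    # Rough saturation estimate with assumed figures.
    clients = 16        # Celeron client machines available at the time
    client_rps = 400    # assumed requests per second one client can generate
    per_cpu_rps = 1000  # roughly the single-CPU Apache rate seen in our driver tests

    for cpus in (1, 2, 4, 6, 8, 10, 12):
        offered = clients * client_rps
        capacity = cpus * per_cpu_rps
        verdict = "can saturate" if offered >= capacity else "cannot saturate"
        print(f"{cpus:2d} CPUs: ~{capacity:5d} req/s capacity, {offered} req/s offered -> {verdict}")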

Problems Faced

During our work on this activity, we faced many problems, ranging from hardware issues, including working with prototype hardware, to software issues, such as driver and device support. In this section, we focus only on the problems we faced while completing our benchmarks.

We suffered stability problems with the ZNYX Ethernet Linux drivers. The drivers were still under development and not yet at production level. After reaching a high number of transactions per second, the driver would simply crash. A sample benchmark on one CPU running Apache 2.08a illustrates the problem: once the CPU reached the level of servicing 1,053 requests per second (a throughput of 6,044,916 bytes per second), the Ethernet driver crashed and we lost connectivity to the ZNYX ports (see Figure 17).

Figure 17. Ethernet Driver Crashing on High Load

After extensive testing and debugging with the people from ZNYX, we were able to fix the driver problem and maintain a high level of throughput without any crashes.

The second problem, which we faced when booting the cluster, is related to inetd. The inetd dæmon acts as the operator for other system dæmons: it sits in the background and listens on network ports for incoming connections. When a connection is made, inetd spawns a copy of the appropriate dæmon for that port. The problem we faced was that inetd blocked, for unknown reasons, on UDP requests, and we needed to restart the dæmon every time it blocked. We still see this problem even with the latest release of xinetd.
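
To make the role of inetd concrete, here is a minimal sketch of the dispatch pattern described above, in Python and purely for illustration (the real inetd is configured through /etc/inetd.conf; the port and service below are hypothetical):

    # Minimal sketch of the inetd dispatch pattern (illustrative only).
    import socket
    import subprocess

    PORT = 7007                    # hypothetical port
    SERVICE = ["/bin/cat"]         # hypothetical echo-like service to spawn

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", PORT))
    listener.listen(5)

    while True:
        conn, _addr = listener.accept()
        # Like inetd, hand the connected socket to the service as its
        # stdin/stdout, so the service itself never touches the network code.
        subprocess.Popen(SERVICE, stdin=conn.fileno(), stdout=conn.fileno())
        conn.close()               # the child process keeps its own copies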

Another issue we faced was that we were not able to saturate the CPUs with enough traffic; to do so, the traffic generators need more aggregate power than the system being benchmarked. At the time we conducted this activity, we had only 17 machines deployed (one controller and 16 clients) for benchmarking purposes, which could be one reason why we were not able to scale up. However, we have since increased the capacity of our benchmarking environment to 63 machines, and we will now be able to rerun some of the tests and verify our results.

______________________

Comments

Re: Open-Source Web Servers: Performance on a Carrier-Class...

Posted by Anonymous

Very useful article - however, comparing Tomcat and Apache is like comparing apples and oranges: Apache is designed to serve static content, while Tomcat is primarily a JSP/Servlet engine, and contains a standalone web server as a convenience.

Apache 2.0 threading

Posted by Anonymous

Good article. Would like to know if Apache 2.0 was set up in this test to run threaded or multi-process. The similarity in performance makes me think both the Apache 2.0 and 1.3 versions were running multiple Apache processes, with the resulting overhead from spawning new processes. Under Linux this isn't huge, but other unices have problems with this model.

I'm also interested in Apache 2.0's multithreaded performance when running as an app server - mod_perl, mod_php or mod_python, for example. Does threading allow sharing of persistent database connections, and what effect does that have on memory usage, speed, and behaviour under heavy loads?

Re: Open-Source Web Servers: Performance on a Carrier-Class...

Posted by fyl

I'm very pleased we got to run an article like this. This is our best defense against FUD from vendors of "less capable" web servers. When I got into Linux, I never expected to see IBM running TV ads about Linux, but what we see here shows me that IBM (and the rest of us) are on the right team.

Re: Open-Source Web Servers: Performance on a Carrier-Class...

Posted by Anonymous

a very useful article.
