Linux on Carrier Grade Web Servers

Ibrahim and Makan describe and test the Linux Virtual Server.
Operation of the CPUs

The CPUs run the Linux 2.2.14-5.0 kernel that comes with the Red Hat 6.2 distribution. When we start the CPUs, they boot from the LAN and broadcast a DHCP request on the network. The DHCP server (the CPU with disk) replies with a DHCP offer and sends the CPUs the information they need to configure their network settings: IP addresses (one for each interface, eth0 and eth1), gateway, netmask, domain name, the IP address of the boot server and the name of the boot file. Using bpbatch, a freeware preboot tool, the diskless CPUs then download and boot the file named in the bpbatch configuration. This boot file is a kernel image located under the /tftpboot directory on the bpbatch server. Finally, the CPUs download a RAM disk and start the Apache web server.
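
The DHCP side of this boot sequence can be sketched as an ISC dhcpd configuration along the following lines. The subnet, addresses, range and file name are placeholders rather than our lab values, and bpbatch also needs an option pointing at its boot script, which we leave out because it depends on the bpbatch version in use:

    # /etc/dhcpd.conf on the CPU with disk (all values are placeholders)
    subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers 192.168.1.1;            # gateway handed to the diskless CPUs
        option subnet-mask 255.255.255.0;
        option domain-name "cluster.test";
        range 192.168.1.10 192.168.1.17;       # addresses for the eight diskless CPUs
        next-server 192.168.1.1;               # boot (TFTP) server
        filename "bpbatch";                    # preboot loader fetched over TFTP
    }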

The Apache web server is part of the RAM disk that is downloaded by the CPUs. Because we have a homogeneous environment, all the CPUs share the same configuration files and serve the same content, which is available via NFS.
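
Because the shared configuration and content live on the CPU with disk, the NFS side amounts to one export on that machine and one mount in the RAM-disk startup scripts of every diskless CPU. The paths and addresses below are placeholders, not a transcript of our setup:

    # /etc/exports on the CPU with disk
    /home/httpd    192.168.1.0/255.255.255.0(ro)

    # on each diskless CPU, run from the RAM-disk startup scripts
    mount -t nfs 192.168.1.1:/home/httpd /home/httpd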

Benchmarking Environment

To conduct the benchmarks, we set up both a hardware and a software benchmarking environment. At the hardware level, we used 18 Intel Celeron 500MHz 1U rackmount units with 512MB of RAM each. These machines generated web traffic using WebBench, a freeware tool available from zdnet.com.

Figure 3. WebBench Architecture

WebBench is a software tool for measuring the performance of web servers. It consists of one controller and many clients (Figure 3). The controller provides the means to set up, start, stop and monitor the WebBench tests; it is also responsible for gathering and analyzing the data reported by the clients (Figure 4). The clients, in turn, execute the WebBench tests and send requests to the server.

Figure 4. WebBench Control Window with 379 Connected Clients Ready to Generate Traffic

Building the LVS-NAT Setup

To build the setup, we decided to use the CPU with disk as the Linux Virtual Server (the director) and the eight diskless CPUs, running the Apache server, as traffic processors (the real servers).

As mentioned previously, we were using the Linux kernel supplied with Red Hat 6.2. This kernel comes with LVS support, so there was no need to do any work at the kernel level; we only needed to perform the configuration. Those who like GUI configuration tools will find the LVS GUI configuration tool a definite advantage: it is a complete, easy-to-use tool for setting up and managing an LVS environment. We performed our configuration manually, however, as a matter of personal preference.

If you are not using a kernel provided with Red Hat (6.1 or later), then you need to build a new kernel and set up IP masquerading and IP chains rules manually. The kernel you build must have support for the following options: network firewalls, IP forwarding/gatewaying, IP firewalling, IP masquerading, IP: ipportfw masq and virtual server support.

At the same time, you need to choose a scheduling algorithm from among the following: weighted round-robin, least-connection or weighted least-connection.
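
On a vanilla 2.2 kernel patched for LVS, these choices correspond roughly to the kernel configuration entries below. The symbol names are our recollection of the 2.2 LVS patch and should be verified against the configuration menus of the kernel you are actually building:

    CONFIG_FIREWALL=y                  # network firewalls
    CONFIG_IP_FIREWALL=y               # IP firewalling
    CONFIG_IP_MASQUERADE=y             # IP masquerading
    CONFIG_IP_MASQUERADE_IPPORTFW=y    # IP: ipportfw masq
    CONFIG_IP_MASQUERADE_VS=y          # virtual server support
    # enable at least the scheduler you plan to use
    CONFIG_IP_MASQUERADE_VS_WRR=y      # weighted round-robin
    CONFIG_IP_MASQUERADE_VS_LC=y       # least-connection
    CONFIG_IP_MASQUERADE_VS_WLC=y      # weighted least-connection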

Once this is done, you compile the kernel, update your system and reboot. After rebooting with the new kernel, you need to set up the IP configuration for the NAT director. This includes setting an alias IP address to be used for access from outside the cluster, setting an alias for the NAT router for access from inside the cluster, and enabling ip_forward and ip_always_defrag on the NAT director.
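
On the director, that IP configuration comes down to a few commands of the following form; the interface names and addresses are placeholders for our lab values:

    # alias on the external interface: the virtual IP that clients will connect to
    ifconfig eth0:1 10.0.0.100 netmask 255.255.255.0 up
    # alias on the internal interface: the default gateway for the real servers
    ifconfig eth1:1 192.168.1.1 netmask 255.255.255.0 up
    # enable forwarding and always-defragment on the NAT director
    echo 1 > /proc/sys/net/ipv4/ip_forward
    echo 1 > /proc/sys/net/ipv4/ip_always_defrag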

When we completed these steps, we configured LVS by editing the lvs.conf file and starting the lvs binary that comes with Red Hat; it issued the necessary ipvsadm commands for the LVS server and the real servers.

Please note that if you are not using the Red Hat distribution, then you need to run the ipvsadm commands manually. After that, you need to set up ipchains rules on the forward chain with the MASQ target.
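
Done by hand, the whole LVS-NAT configuration amounts to a handful of commands of this form. The virtual IP 10.0.0.100, the real-server addresses and the weighted round-robin scheduler are illustrative choices, not a transcript of our lvs.conf:

    # on the director: define the virtual HTTP service (weighted round-robin)
    ipvsadm -A -t 10.0.0.100:80 -s wrr
    # add each real server in masquerading (NAT) mode with a weight of 1
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.10:80 -m -w 1
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.11:80 -m -w 1
    # ...repeat for the remaining diskless CPUs...
    # masquerade the real servers' replies on their way back to the clients
    ipchains -A forward -s 192.168.1.0/24 -j MASQ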

Setup Tip

If you plan to deploy LVS in a LAN environment, you have to be careful to isolate the real servers from the rest of the LAN and from the outside world. There must be no direct route from any real server to the web clients. This is very important: if the HTTP reply packets reach a client by any path other than the NAT director, they still carry the real server's private source address and the client will discard them. One easy way to achieve this is to build your own private subnet and use the NAT director as your gateway to the rest of the LAN.
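
In practice, this means the routing table on every real server points only at the director. Assuming the director's internal alias is 192.168.1.1, as in the earlier sketches, the relevant line in each real server's startup scripts would look like this:

    # on every real server: all outbound traffic goes through the NAT director
    route add default gw 192.168.1.1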
