Linux on Carrier Grade Web Servers

Ibrahim and Makan describe and test the Linux Virtual Server.
Virtual Server via NAT

The LVS-NAT implementation was necessary due to the widespread use of private addresses. For many reasons, including the shortage of IPv4 addresses and security concerns, more and more networks use private IP addresses that cannot be routed outside the network.

Network address translation relies on the fact that the headers of Internet protocol packets can be adjusted appropriately, so that clients believe they are contacting one IP address, while servers at different IP addresses believe they are being contacted directly by the clients. This feature can be used to build a virtual server, i.e., parallel services at different IP addresses can appear as a single virtual service on one IP address via NAT.

The architecture of a virtual server via NAT is illustrated in Figure 2. The load balancer and real servers are interconnected by a switch or a hub. The real servers usually run the same service, and they have the same set of contents. The contents are replicated on each server's local disk, shared on a network filesystem or served by a distributed filesystem. The load balancer dispatches requests to different real servers via NAT.
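As a concrete illustration, a virtual service of this kind is typically configured on the load balancer with the ipvsadm tool. The addresses, weights and choice of scheduler below are only examples:

    # Add a virtual HTTP service on the load balancer's external
    # address, using weighted least-connection scheduling:
    ipvsadm -A -t 203.0.113.1:80 -s wlc

    # Register the real servers on the private network; -m selects
    # masquerading (NAT) forwarding and -w assigns a weight:
    ipvsadm -a -t 203.0.113.1:80 -r 10.0.0.1:80 -m -w 4
    ipvsadm -a -t 203.0.113.1:80 -r 10.0.0.2:80 -m -w 2
    ipvsadm -a -t 203.0.113.1:80 -r 10.0.0.3:80 -m -w 1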

Figure 2. Virtual Server via NAT

The Process of Address Translation

The process of address translation is divided into several steps:

  1. The user accesses the service provided by the server cluster.

  2. The request packet arrives at the load balancer through its external IP address.

  3. The load balancer examines the packet's destination address and port number. If they match a virtual server service (according to the virtual server rule table), a real server is chosen from the cluster by a scheduling algorithm, and the connection is added to the hash table that records all established connections (a simplified sketch of these steps appears after this list).

  4. The destination address and the port of the packet are rewritten to those of the chosen server, and the packet is forwarded to the real server.

  5. When a subsequent incoming packet belongs to this connection and the established connection can be found in the hash table, the packet is rewritten and forwarded to the chosen server in the same way.

  6. When the reply packets come back from the real server to the load balancer, the load balancer rewrites the source address and port of the packets to those of the virtual service.

  7. When the connection terminates or times out, the connection record is removed from the hash table.
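The following is a simplified, user-space C sketch of steps 3 through 5, assuming a toy fixed-size hash table and a stand-in scheduler; the real implementation lives in the Linux kernel and handles hash collisions, checksums and timeouts that this sketch omits.

    #include <stdint.h>
    #include <stdio.h>

    struct packet { uint32_t saddr, daddr; uint16_t sport, dport; };

    struct conn {
        int in_use;
        uint32_t caddr; uint16_t cport;   /* client side of the connection */
        uint32_t raddr; uint16_t rport;   /* chosen real server */
    };

    #define HTAB_SIZE 256
    static struct conn htab[HTAB_SIZE];   /* the connection hash table */

    /* Toy hash over the client address and port; collisions ignored. */
    static unsigned hash(uint32_t addr, uint16_t port)
    {
        return (addr ^ port) % HTAB_SIZE;
    }

    /* Stand-in for a scheduling algorithm (round-robin here). */
    static void schedule(uint32_t *raddr, uint16_t *rport)
    {
        static const uint32_t reals[] = { 0x0A000001, 0x0A000002 };
        static int next;
        *raddr = reals[next]; *rport = 80;
        next = (next + 1) % 2;
    }

    /* Steps 3-5: find or create the connection, then rewrite the
     * destination address and port to those of the real server. */
    static void nat_in(struct packet *pkt)
    {
        struct conn *c = &htab[hash(pkt->saddr, pkt->sport)];
        if (!c->in_use) {                  /* new connection: step 3 */
            c->in_use = 1;
            c->caddr = pkt->saddr; c->cport = pkt->sport;
            schedule(&c->raddr, &c->rport);
        }
        pkt->daddr = c->raddr;             /* rewrite: steps 4 and 5 */
        pkt->dport = c->rport;
        /* ... recompute checksums and forward to the real server ... */
    }

    int main(void)
    {
        struct packet p = { .saddr = 0xC0A80001, .sport = 40000,
                            .daddr = 0xCB007101, .dport = 80 };
        nat_in(&p);
        printf("forwarded to %08x:%u\n", (unsigned)p.daddr, p.dport);
        return 0;
    }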

Scheduling Algorithms

LVS supports four scheduling algorithms: round-robin, weighted round-robin, least-connection and weighted least-connection scheduling. The round-robin algorithm directs network connections to the different servers in a round-robin manner; it treats all real servers as equals, regardless of their number of connections or response time. Weighted round-robin scheduling can handle real servers with different processing capacities. Each server can be assigned a weight, an integer value indicating its processing capacity, and the director schedules requests according to the servers' weights in a round-robin fashion.
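To make the weighted round-robin behaviour concrete, here is a minimal user-space sketch in C of the selection loop described in the LVS documentation; the server names and weights are made up.

    #include <stdio.h>

    struct server { const char *name; int weight; };

    static struct server pool[] = {
        { "rs1", 4 }, { "rs2", 2 }, { "rs3", 1 },
    };
    #define NSERVERS (int)(sizeof(pool) / sizeof(pool[0]))

    static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

    static int pool_gcd(void)
    {
        int g = pool[0].weight;
        for (int i = 1; i < NSERVERS; i++)
            g = gcd(g, pool[i].weight);
        return g;
    }

    static int pool_max(void)
    {
        int m = 0;
        for (int i = 0; i < NSERVERS; i++)
            if (pool[i].weight > m)
                m = pool[i].weight;
        return m;
    }

    /* Pick the next server; scheduling state lives in the statics. */
    static struct server *wrr_next(void)
    {
        static int i = -1;   /* index of the last chosen server */
        static int cw = 0;   /* current weight threshold */

        for (;;) {
            i = (i + 1) % NSERVERS;
            if (i == 0) {
                cw -= pool_gcd();
                if (cw <= 0) {
                    cw = pool_max();
                    if (cw == 0)
                        return NULL;   /* all weights are zero */
                }
            }
            if (pool[i].weight >= cw)
                return &pool[i];
        }
    }

    int main(void)
    {
        for (int k = 0; k < 7; k++)
            printf("%s\n", wrr_next()->name);
        return 0;
    }

With weights 4, 2 and 1, one full cycle dispatches four connections to rs1, two to rs2 and one to rs3.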

The least-connection scheduling algorithm directs network connections to the server with the least number of established connections. It is a dynamic scheduling algorithm, because it needs to count live connections for each server dynamically. For a virtual server made up of real servers with similar performance, least-connection scheduling smooths the distribution well when the request load varies a lot, because long requests will not all end up directed to a single server.

Weighted least-connection scheduling is a superset of least-connection scheduling in which a performance weight can be assigned to each real server. Servers with a higher weight receive a larger percentage of live connections at any one time. The virtual server administrator assigns a weight to each real server, and network connections are scheduled so that each server's share of the current live connections is kept in proportion to its weight. The default weight is one.
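A corresponding sketch of weighted least-connection selection follows. Comparing conns(a) * weight(b) against conns(b) * weight(a) avoids division, and with all weights set to one it reduces to plain least-connection scheduling; the names, weights and connection counts here are hypothetical.

    #include <stdio.h>

    struct server { const char *name; int weight; int conns; };

    static struct server pool[] = {
        { "rs1", 4, 12 }, { "rs2", 2, 3 }, { "rs3", 1, 2 },
    };
    #define NSERVERS (int)(sizeof(pool) / sizeof(pool[0]))

    /* Pick the server with the smallest conns/weight ratio. */
    static struct server *wlc_next(void)
    {
        struct server *best = &pool[0];
        for (int i = 1; i < NSERVERS; i++)
            if (pool[i].conns * best->weight <
                best->conns * pool[i].weight)
                best = &pool[i];
        return best;
    }

    int main(void)
    {
        struct server *s = wlc_next();
        s->conns++;   /* account for the newly dispatched connection */
        printf("dispatch to %s\n", s->name);
        return 0;
    }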

Testing Environment

For the purpose of testing and evaluating LVS's performance, we set up a powerful testing environment at the Ericsson Research Lab in Montréal. It consisted of eight CompactPCI diskless Pentium III CPU cards running at 500MHz with 512MB of RAM. Each CPU card has two on-board Ethernet ports and is paired with a four-port ZNYX Ethernet card. Our setup also included a CPU card with the same configuration as the others, except that it has three 18GB SCSI disks configured in RAID-1 and RAID-5 setups. This card acts as the NFS, DHCP, Bpbatch and TFTP server for the other CPUs.
