Overcoming Asymmetric Routing on Multi-Homed Servers
Asymmetric TCP/IP routing causes a common performance problem on servers that have more than one network connection. These atypical network flows occur most often in server environments where one interface is used to send traffic and a different interface is used to receive it. The flows are considered unusual because traffic from one end of the connection (A→B) travels over a different set of links than does traffic moving in the opposite direction (B→A). Asymmetric routes have legitimate uses, such as taking advantage of high-bandwidth but unidirectional satellite links, but more often they are a source of performance problems.
These abnormal packet flows interact poorly with TCP's congestion control algorithm. TCP sends packets in both directions even when the data flow, or goodput, is unidirectional. TCP's congestion control algorithm assumes that data packets experience delay and loss characteristics similar to those experienced by their corresponding acknowledgment and control packets traveling in the reverse direction. When the two kinds of packets travel across physically different paths, this assumption is unlikely to hold. The resulting mismatch generally leads to suboptimal TCP performance (see Resources).
A more serious problem occurs when asymmetric routing introduces artificial bandwidth bottlenecks. A server with two interfaces of equal capacity can develop a bottleneck if it receives traffic on both interfaces but always responds through only one. Servers commonly add multiple interfaces, even multiple interfaces connected to the same switch, in order to increase the aggregate transmission capacity of the server. Asymmetric routing is a common but unanticipated outcome of this configuration, and it arises because traditional routing is wholly destination-based.
Destination-based routing uses only some leading prefix of the packet's destination IP address when selecting the interface on which to send the packet. Each entry in the routing table contains the IP address of the next-hop router (if a router is necessary) and the interface through which that packet should be sent. The entry also contains a variable-length IP address prefix against which candidate packets are matched. That prefix could be as long as 32 bits for an IPv4 host route or as short as 0 bits for a default route that matches everything. If more than one routing table entry matches, the entry with the longest prefix is used.
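The longest-prefix-match lookup described above can be sketched in a few lines of Python using the standard ipaddress module. The routing table, next-hop addresses and interface names here are hypothetical, chosen only to illustrate how a /32 host route, a /24 subnet route and a /0 default route compete for the same packet:

```python
import ipaddress

# Hypothetical routing table: (prefix, next_hop, interface).
# A next_hop of None means the destination is directly connected.
ROUTES = [
    (ipaddress.ip_network("192.168.16.0/24"), None, "eth0"),
    (ipaddress.ip_network("10.0.0.0/8"), "10.0.0.1", "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.16.1", "eth0"),  # default route
]

def lookup(dst):
    """Return (next_hop, interface) for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in ROUTES if addr in r[0]]
    if not matches:
        raise ValueError("no route to host")
    # Longest prefix wins: a /24 subnet route beats the /0 default.
    best = max(matches, key=lambda r: r[0].prefixlen)
    return best[1], best[2]

print(lookup("192.168.16.42"))  # (None, 'eth0')         - directly connected
print(lookup("10.1.2.3"))       # ('10.0.0.1', 'eth1')   - subnet route
print(lookup("8.8.8.8"))        # ('192.168.16.1', 'eth0') - default route
```

Note that the destination address alone determines the outcome; nothing about the packet's source address or arrival interface is consulted, which is exactly what leads to the asymmetry discussed below.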
A typical server not participating in a dynamic routing protocol, such as OSPF or BGP, has a simple routing table. It contains one entry for each interface on the server and one default route for reaching all the hosts not directly connected to its interfaces. This simple approach, which relies heavily on a single default route, results in a concentration of outgoing traffic through a single interface without regard to the interface through which the request originally was received.
A good illustration of this situation is a Web server equipped with two 100Mb full-duplex interfaces, both configured on the same subnet. This setup should provide 200Mb/sec of bandwidth for both incoming and outgoing traffic if it is attached to a full-duplex switch with a multi-gigabit backplane. This arrangement is an attractive server design because it allows the server to exceed 100Mb of capacity without having to upgrade to gigabit network infrastructure. It also is cost-effective: even though copper-based gigabit NICs are becoming inexpensive, the switch ports needed to use them still cost significantly more than even several 100Mb ports.
Typically, clients connecting to this Web server first would encounter some kind of load balancer, either DNS-based or perhaps a Layer-4 switching appliance, that would direct half of the requests to one interface and half to the other. Listing 1 shows what the default routing table might look like on that Web server if it had two interfaces, both configured on the 192.168.16.0/24 subnet.
Listing 1. Typical Routing Table
Destination     Gateway        Genmask          Flags   Iface
192.168.16.0    *              255.255.255.0    U       eth0
192.168.16.0    *              255.255.255.0    U       eth1
127.0.0.0       *              255.0.0.0        U       lo
default         192.168.16.1   0.0.0.0          UG      eth0
In this circumstance incoming load is distributed evenly, thanks to the load balancer. However, the response traffic all goes out through eth0 because, by default, the server uses destination-based routing.
Most of the traffic volume on a Web server is outgoing, because HTTP responses tend to be much larger than requests. Therefore, the effective bandwidth of this server still is limited to 100Mb/sec, even though it has two load-balanced interfaces. Load balancing the requests alone does not help, because the bottleneck is on the response side. Packets either use the default route through eth0 or, if they are destined for the local subnet, must choose between two equally weighted routes; in that case the first route (again through eth0) is selected. The end result is that the Web requests are balanced evenly across eth0 and eth1, but the larger and more important responses are all funneled through a bottleneck on eth0.
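A small simulation, using hypothetical client addresses, makes the funneling effect in Listing 1 concrete. The table below mirrors Listing 1 in order; because the two 192.168.16.0/24 entries have equal prefix lengths, the tie is broken by table order, so eth0 always wins:

```python
import ipaddress

# The routing table from Listing 1, in kernel order.  The two identical
# /24 entries differ only in their interface.
LISTING_1 = [
    (ipaddress.ip_network("192.168.16.0/24"), "eth0"),
    (ipaddress.ip_network("192.168.16.0/24"), "eth1"),
    (ipaddress.ip_network("127.0.0.0/8"), "lo"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),  # default via 192.168.16.1
]

def egress(dst):
    """Pick the egress interface: longest prefix wins, ties go to the
    first matching entry in the table."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in LISTING_1 if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Responses to local and remote clients alike leave through eth0,
# regardless of which interface the request arrived on.
print(egress("192.168.16.77"))  # eth0 (equal routes, table order wins)
print(egress("203.0.113.9"))    # eth0 (default route)
```

Because the lookup never considers the source address or the interface on which the request arrived, no destination-based table can split the responses across both interfaces.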