Overcoming Asymmetric Routing on Multi-Homed Servers

Did you pay for two network interfaces and find your server is using only one? Balance the traffic with some simple routing hints.

Asymmetric TCP/IP routing causes a common performance problem on servers that have more than one network connection. The atypical network flows created by asymmetric routes occur most often in server environments where a different interface is used for sending traffic than for receiving it. The flows are considered unusual because traffic from one end of the connection (A→B) travels over a different set of links than traffic moving in the opposite direction (B→A). Asymmetric routes have legitimate uses, such as taking advantage of high-bandwidth but unidirectional satellite links, but more often they are a source of performance problems.

These abnormal packet flows interact poorly with TCP's congestion control algorithm. TCP sends packets in both directions even when the data flow, or goodput, is unidirectional. TCP's congestion control algorithm assumes that the data packets experience delay and loss characteristics similar to those of their corresponding acknowledgment and control packets traveling in the reverse direction. When the two types of packets travel across physically different paths, this assumption is unlikely to hold, and the resulting mismatch generally leads to suboptimal TCP performance (see Resources).

A more serious problem occurs when the asymmetric routing introduces artificial bandwidth bottlenecks. A server with two interfaces of equal capacity can develop a bottleneck if it receives traffic on both interfaces but always responds through only one. Servers commonly add multiple interfaces, even multiple interfaces connected to the same switch, in order to increase the server's aggregate transmission capacity. Asymmetric routing is an often unanticipated outcome of this configuration, and it arises because traditional routing is wholly destination-based.

Destination-based routing uses only some leading prefix of the packet's destination IP address when selecting the interface on which to send the packet. Each entry in the routing table contains the IP address of the next-hop router (if a router is necessary) and the interface through which the packet should be sent. The entry also contains a variable-length IP address prefix against which candidate packets are matched. That prefix can be as long as 32 bits for an IPv4 host route or as short as 0 bits for a default route that matches everything. If more than one routing table entry matches, the entry with the longest prefix is used.
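You can watch longest-prefix matching in action with ip route get, which asks the kernel which routing table entry it would use for a particular destination. The addresses and output below are purely illustrative:

# ip route get 192.168.16.7
192.168.16.7 dev eth0  src 192.168.16.41
# ip route get 203.0.113.9
203.0.113.9 via 192.168.16.1 dev eth0  src 192.168.16.41

The first destination matches the 24-bit route for the locally attached subnet, so it is sent directly out eth0; the second matches only the 0-bit default route and is handed to the gateway.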

A typical server not participating in a dynamic routing protocol, such as OSPF or BGP, has a simple routing table. It contains one entry for each interface on the server and one default route for reaching all the hosts not directly connected to its interfaces. This simple approach, which relies heavily on a single default route, results in a concentration of outgoing traffic through a single interface without regard to the interface through which the request originally was received.

A good illustration of this situation is a Web server equipped with two 100Mb full-duplex interfaces, both configured on the same subnet. This setup should provide 200Mb/sec of bandwidth for both incoming and outgoing traffic if it is attached to a full-duplex switch with a multi-gigabit backplane. This arrangement is an attractive server design because it allows the server to exceed 100Mb of capacity without having to upgrade to gigabit network infrastructure. It also is cost effective: even though copper-based gigabit NICs are becoming inexpensive, the gigabit switch ports needed to use them still cost significantly more than even several 100Mb ports.

Typically, clients connecting to this Web server first would encounter some kind of load balancer, either DNS-based or perhaps a Layer-4 switching appliance, that would direct half of the requests to one interface and half to the other. Listing 1 shows what the default routing table might look like on that Web server if it had two interfaces, both configured on the 192.168.16.0/24 subnet.
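Although the exact output varies, on a Linux host with two interfaces on that subnet the table printed by ip route show generally looks something like the following; the addresses and the choice of eth0 for the default route are illustrative:

192.168.16.0/24 dev eth0  proto kernel  scope link  src 192.168.16.41
192.168.16.0/24 dev eth1  proto kernel  scope link  src 192.168.16.42
default via 192.168.16.1 dev eth0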

In this circumstance incoming load is distributed evenly, thanks to the load balancer. However, the response traffic all goes out through eth0 because, by default, the server uses destination-based routing.

Figure 1. An Imbalanced Server

Figure 2. To use both interfaces effectively, we need to use policy-based routing.

Most of the traffic volume on a Web server is outgoing, because HTTP responses tend to be much larger than requests. Therefore, the effective bandwidth of this server still is limited to 100Mb/sec, even though it has two load-balanced interfaces. Load balancing the requests alone does not help, because the bottleneck is on the response side. Packets either use the default route through eth0 or, if they are destined for the local subnet, must choose between two equally weighted routes; in that case, the first route (again through eth0) is selected. The end result is that the Web requests are balanced evenly across eth0 and eth1, but the larger and more important responses all are funneled through a bottleneck on eth0.
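Figure 2 points to the cure: policy-based routing, which consults the packet's source address, not only its destination, when choosing a route. A minimal sketch of that fix with the iproute2 tools, assuming eth0 is 192.168.16.41, eth1 is 192.168.16.42 and the gateway is 192.168.16.1 (all hypothetical values), looks like this:

# ip route add default via 192.168.16.1 dev eth0 table 1
# ip route add default via 192.168.16.1 dev eth1 table 2
# ip rule add from 192.168.16.41 table 1
# ip rule add from 192.168.16.42 table 2
# ip route flush cache

With rules like these in place, replies sourced from each interface's address leave through that same interface, so both links carry outgoing traffic.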

______________________

Comments

Local Traffic

Carson Gee

I just ran into this today when fixing some routes. If you want those two interfaces to send traffic normally on their local network (192.168.16.0/24) without going through the gateway and forming an asymmetric route with hosts on that network, you'll need to add:

# ip route add 192.168.16.0/24 dev eth0 tab 1
# ip route add 192.168.16.0/24 dev eth1 tab 2

to use link routing on the local subnet.

Ubuntu ip route commands - what file do I put them in?

tc0nn

So, I tried /etc/network/if-up.d/ip and /etc/rc.local, but all routing breaks when the box reboots. Where should I put these? Currently, I let the box boot up, then run the commands manually and everything works great. Any suggestions?

1. vi

Anonymous

1. vi /etc/init.d/iproutes-asym and add the commands you need in there
   chmod 755 /etc/init.d/iproutes-asym
2. cd /etc/rc3.d
   ln -s ../init.d/iproutes-asym S99z-iproutes-asym

this is what my iproutes-asym file looks like

ip route add default via 10.53.1.252 dev eth0 tab 1
ip route add default via 10.53.1.252 dev eth1 tab 2
ip rule add from 10.53.1.55/32 tab 1 priority 500
ip rule add from 10.53.1.54/32 tab 2 priority 600
ip route flush cache

Muchas gracias

anomie

Thanks for putting this together. Proper routing on a multi-homed server is poorly documented by my Linux distro vendor. Your article was a great help in understanding iproute2 (in this context) and getting things working properly.

solutions

Bgs

The network interface-level problem can be solved with bonding too, and that's easier to manage. iproute2 still can be used when you have multiple load balancers and/or gateways, though.
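For reference, a hypothetical sketch of such a bonding setup, using the balance-alb mode (which does not require switch-side support) and illustrative names and addresses, might look like this:

# modprobe bonding mode=balance-alb miimon=100
# ifconfig bond0 192.168.16.40 netmask 255.255.255.0 up
# ifenslave bond0 eth0 eth1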

Need some HELP for linux asymmetric routing

Michael Rack

Hello Friends! I have two ISP links from the same service provider. I got an IP address for each link on a /30 subnet. eth0 runs on x.x.24.66, and eth1 on x.x.24.234.

The default route is set to x.x.24.233 dev eth1. Now, when an ICMP ping arrives for x.x.24.234 on eth1, the ping is answered. When a ping arrives for x.x.24.66 on eth0, nothing happens.

The ICMP echo request comes in on the eth0 interface, but no response goes out via eth1 (the default route). When I listen on eth1 with tcpdump, there are no outgoing packets carrying the ICMP responses.

What's the problem?

Thanks, Mike.
http://www.michaelrack.de

Thank you! Also..

Nathan Dornquast

Patrick,

Thank you! I have been struggling with this for weeks. I wish I had found this article first. This is the first time I have found a good explanation of rules and tables and their relationship in the same place.

Regarding SNAT: I listed two source addresses in my iptables firewall, and it mostly works well. However, some outbound connections fail - most notably, SSH, Yahoo IM and IRC all reset after a short time (though Web traffic seems OK). I can SNAT to one of my outbound addresses and use an ip rule to designate a single gateway. This works, but then I am no longer NAT load balancing over my two WAN links. Anyone know a solution?

-Nathan

Thank you

atrix

Thank you very much for this excellent article
Best wishes

Super

Cristian C.

Very nice and educative article. Good reading.

Re: Overcoming Asymmetric Routing on Multi-Homed Servers

Anonymous

Minimalist load balancer. From lartc.org section 4.2.2

# ip route add default nexthop via gw_1 nexthop via gw_2

Mohammad Bahathir Hashim
Malaysia.

rules vs. nat

Anonymous

What about the SNAT target in iptables? It modifies the source IP address of the packet, but applies only in the POSTROUTING chain. Are the rules (the policy) evaluated *after* that again? The name POSTROUTING makes me think the routing part is already over...

Re: rules vs. nat

SeanW

If it's anything like a Cisco router, outbound NAT happens after policy routing, and doesn't get another chance at the policy engine.

Sean

Re: rules vs. nat

Anonymous

The SNAT target allows you to specify multiple source IPs, and they will be used one after the other. That would probably give you simple outbound load balancing.

From the iptables man page:

You can add several --to-source options. If you specify more than one source address, either via an address range or multiple --to-source options, a simple round-robin (one after another in cycle) takes place between these addresses.
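A hypothetical rule using the address-range form of that syntax, with illustrative public addresses, would be:

# iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10-203.0.113.11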

L2 vs L3...

SeanW

Great article on policy routing, but isn't this problem what bonding was designed to solve?

http://linux-ip.net/html/ether-bonding.html
/usr/src/linux-2.4/Documentation/networking/bonding.txt

Sean

Re: L2 vs L3...

Anonymous

From the link you posted:

" Bonding for link aggregation must be supported by both endpoints."

"Bonding for link aggregation

p5k

"Bonding for link aggregation must be supported by both endpoints."

Sounds like something our marriage therapist once told my (now ex-) wife and I... ;) Needless to say, it was NOT supported by *both* endpoints!
