Critical Server Needs and the Linux Kernel

A discussion of four of the kernel features needed for mission-critical server environments, including telecom.
Networking Features

Routers are core elements of modern telecom networks. They propagate and direct billions of data packets from their sources to their destinations using air transport devices or high-speed links. Routers must operate as fast as the medium they use in order to deliver the best quality of service and have a negligible effect on communications. To give some figures, it is common for routers to manage between 10,000 and 500,000 routes. In these situations, good performance is achievable by handling around 2,000 routes/second.

Challenges and Proposed Solutions to the Linux Networking Stack

The current implementation of the IP stack in Linux works fine for home or small-business routers. However, given the high expectations of telecom operators and the capabilities of new telecom hardware, it is barely possible to use Linux as an efficient forwarding and routing element of a high-end router for large networks (core/border/access routers) or as a high-end server with routing capabilities.

Two problems with the networking stack in Linux are the lack of support for multiple forwarding information bases (multi-FIB) with overlapping interface IP addresses and the lack of appropriate interfaces for addressing a specific FIB. Another problem with the current implementation is the limited scalability of the routing table.

The solution to these problems is to provide support for multi-FIB with overlapping IP addresses. As such, we can have different VLANs or different physical interfaces forming independent networks in the same Linux box. A good reason to separate VLANs is security through separation of services. For instance, a GSN node having multiple company networks connected to it could use VLANs for separation, but that might not hold on the other side of the node. The only way to keep separation (and security) would be to have multiple FIBs.

Consider the example (see Figure 2) of having two HTTP servers serving two different networks with potentially the same IP address. One HTTP server serves the network/FIB 10, while the other HTTP server serves the network/FIB 20. The advantage gained is to have one Linux box serving two different customers using the same IP address. ISPs adopt this approach by providing services for multiple customers sharing the same server (server partitioning), instead of using a server per customer.

Figure 2. Example of Usage

The way to achieve this is to have an ID (an identifier of the customer or user of the service) that separates the routing tables completely in memory. Two approaches to doing this exist. The first is to have separate routing tables; each routing table is looked up by its ID, and within that table the lookup is done by the prefix. The second approach is to have one table, in which the lookup is done on the combined key = prefix + ID.
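The two approaches can be contrasted with a minimal sketch. All names and types here are hypothetical stand-ins for illustration, not the kernel's actual data structures; real FIBs hold variable-length prefixes and far richer route data.

```c
/* Sketch of the two multi-FIB lookup approaches described above.
 * Approach 1: one routing table per FIB ID, selected by ID first.
 * Approach 2: one shared table, keyed on the combination prefix + ID.
 * Hypothetical types and trivial hashing, for illustration only. */
#include <stdint.h>

#define MAX_FIBS   4
#define TABLE_SIZE 256

struct route {
    uint32_t prefix;   /* destination prefix (host byte order) */
    uint32_t fib_id;   /* owning FIB; used only by approach 2 */
    int      valid;
    int      nexthop;  /* stand-in for the real next-hop data */
};

/* Approach 1: separate tables, one per FIB. */
static struct route fib_tables[MAX_FIBS][TABLE_SIZE];

static void add_per_fib(uint32_t fib_id, uint32_t prefix, int nh)
{
    unsigned i = prefix % TABLE_SIZE;               /* trivial hash */
    fib_tables[fib_id][i] = (struct route){ prefix, fib_id, 1, nh };
}

static int lookup_per_fib(uint32_t fib_id, uint32_t prefix)
{
    struct route *r = &fib_tables[fib_id][prefix % TABLE_SIZE];
    return r->valid && r->prefix == prefix ? r->nexthop : -1;
}

/* Approach 2: one shared table, hashed on (prefix, fib_id). */
static struct route shared_table[TABLE_SIZE];

static void add_combined(uint32_t fib_id, uint32_t prefix, int nh)
{
    unsigned i = (prefix ^ fib_id) % TABLE_SIZE;    /* combined key */
    shared_table[i] = (struct route){ prefix, fib_id, 1, nh };
}

static int lookup_combined(uint32_t fib_id, uint32_t prefix)
{
    struct route *r = &shared_table[(prefix ^ fib_id) % TABLE_SIZE];
    return r->valid && r->prefix == prefix && r->fib_id == fib_id
               ? r->nexthop : -1;
}
```

Either way, the same prefix can be installed in FIB 10 and FIB 20 with different next hops, which is exactly what the overlapping-address scenario requires.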

A different kind of problem arises because access time is unpredictable with the chaining in the hash tables of the routing cache and FIB. This problem is of particular interest in an environment that requires predictable performance.

Another aspect of the problem is that the route cache and the routing table are not kept synchronized most of the time (path MTU, to name one example). The route cache is flushed regularly; therefore, any updates on the cache are lost. After a routing cache flush, every route currently in use has to be rebuilt: each lookup first goes to the routing cache and, on a miss, falls back to the hash/trie table to rebuild the cached information. This process is slow and unpredictable, because the hash/trie table is implemented with linked lists and the potential for collisions is high when a large number of routes is present. This design is suitable for a home PC with a few routes, but it does not scale to a large server.
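The two-step lookup flow and the cost of a flush can be sketched as follows. This is an illustrative model with invented names, not the kernel's actual route cache; the single linked chain stands in for a hash bucket whose length, and therefore walk time, is unpredictable.

```c
/* Sketch of the lookup flow described above: fast path through the
 * route cache, slow path through a chained FIB on a miss, and a
 * flush that throws away all cached state.  Illustrative only. */
#include <stdint.h>
#include <stdlib.h>

struct fib_entry {
    uint32_t dst;
    int nexthop;
    struct fib_entry *next;   /* chain walk length is unpredictable */
};

#define CACHE_SIZE 64
static struct {
    uint32_t dst;
    int nexthop;
    int valid;
} route_cache[CACHE_SIZE];

static struct fib_entry *fib_head;   /* single chain, for simplicity */

static void fib_add(uint32_t dst, int nexthop)
{
    struct fib_entry *e = malloc(sizeof *e);
    e->dst = dst;
    e->nexthop = nexthop;
    e->next = fib_head;
    fib_head = e;
}

static int route_lookup(uint32_t dst)
{
    unsigned i = dst % CACHE_SIZE;

    /* Fast path: hit in the route cache. */
    if (route_cache[i].valid && route_cache[i].dst == dst)
        return route_cache[i].nexthop;

    /* Slow path: walk the FIB chain; cost grows with chain length. */
    for (struct fib_entry *e = fib_head; e; e = e->next) {
        if (e->dst == dst) {
            route_cache[i].dst = dst;        /* repopulate the cache */
            route_cache[i].nexthop = e->nexthop;
            route_cache[i].valid = 1;
            return e->nexthop;
        }
    }
    return -1;
}

static void cache_flush(void)   /* every cached route must be rebuilt */
{
    for (unsigned i = 0; i < CACHE_SIZE; i++)
        route_cache[i].valid = 0;
}
```

After `cache_flush()`, every active destination takes the slow path once before it is cached again, which is why regular flushes hurt on a box with hundreds of thousands of routes.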

To support the various routing requirements of server nodes operating in high-performance, mission-critical environments, Linux should support the following:

  • An implementation of multi-FIB using trees (radix, Patricia and so on). It is important to have predictable performance for insert/delete/lookup operations across 10,000 to 500,000 routes. In addition, it is preferable to use the same data structure for both IPv4 and IPv6.

  • Socket and ioctl interfaces for addressing multi-FIB.

  • Multi-FIB support for neighbors (ARP).
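To illustrate the predictability argument behind the first item, here is a minimal binary trie: lookup cost is bounded by the key length (at most 32 steps for IPv4) regardless of how many routes are installed, unlike a chained hash table. This is a toy sketch with hypothetical names; it stores exact /32 keys only, whereas a real radix or Patricia implementation compresses paths and performs longest-prefix match.

```c
/* Toy binary trie: lookup walks at most 32 bits, independent of the
 * number of routes -- the predictable-performance property the text
 * asks for.  Exact-match /32 keys only; illustrative, not a real FIB. */
#include <stdint.h>
#include <stdlib.h>

struct trie_node {
    struct trie_node *child[2];
    int nexthop;               /* -1 when no route terminates here */
};

static struct trie_node *node_new(void)
{
    struct trie_node *n = calloc(1, sizeof *n);
    n->nexthop = -1;
    return n;
}

static void trie_insert(struct trie_node *root, uint32_t key, int nexthop)
{
    struct trie_node *n = root;
    for (int bit = 31; bit >= 0; bit--) {
        int b = (key >> bit) & 1;
        if (!n->child[b])
            n->child[b] = node_new();
        n = n->child[b];
    }
    n->nexthop = nexthop;
}

static int trie_lookup(struct trie_node *root, uint32_t key)
{
    struct trie_node *n = root;
    for (int bit = 31; bit >= 0 && n; bit--)
        n = n->child[(key >> bit) & 1];   /* at most 32 steps, always */
    return n ? n->nexthop : -1;
}
```

Multi-FIB support then amounts to keeping one trie root per FIB ID and selecting the root before the walk; the per-lookup bound is unchanged. The same structure extends naturally to 128-bit IPv6 keys, which is why a single data structure for both families is feasible.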

Providing these implementations in Linux affects a large part of net/core, net/ipv4 and net/ipv6; these subsystems, mostly the network layer, will need to be rewritten. Other areas will see minimal impact at the source-code level; most of that impact will be at the transport layer (socket, TCP, UDP, RAW, NAT, IPIP, IGMP and so on).

As for the availability of an open-source project that can provide these functionalities, an existing project, Linux Virtual Routing and Forwarding, may be able to help. This project aims at implementing a flexible and scalable mechanism for providing multiple routing instances within the Linux kernel. The project has some potential for providing the needed functionalities; however, no progress has been made since 2002, and the project now appears to be inactive.




Re: Critical Server Needs and the Linux Kernel

OscarHinostroza writes:

The next generation Linux Server Based Telecom

Thank you, Ibrahim Haddad.

Re: Critical Server Needs and the Linux Kernel

smurfix writes:

Multi-FIB is already possible, in that you can mimic most of its effects with iptables; the kernel has supported multiple routing tables for some time now.

However, I do question the idea of doing this in the first place. This use case can only arise when two separate customers insist on using overlapping RFC-internal IPv4 address spaces for their servers and you need to put both of them onto one host. Better solutions to this problem exist (remap in the external load balancer, use IPv6, use virtual servers, use separate physical servers, ...).