Critical Server Needs and the Linux Kernel

A discussion of four of the kernel features needed for mission-critical server environments, including telecom.

This article provides some examples of features and mechanisms needed in the Linux kernel for server nodes operating in mission-critical environments, such as telecom, where reliability, performance, availability and security are extremely important. Here, we discuss four such features: a cluster communication protocol, support for multiple FIBs (forwarding information bases), a module to verify digital signatures of binaries at run time and an efficient low-level asynchronous event mechanism. For some of these features, open-source projects already exist to provide implementations; for others, no open-source implementation currently exists. For each of our four example features, we discuss the feature itself, its importance, the advantages it provides, its implementation when available and the status of its integration with the Linux kernel.

Today's computing and telecommunication environments increasingly are adopting clustered servers to gain benefits in performance, availability and scalability. The benefits of a cluster are greater, and often more cost-efficient, than what a single server offers. In the telecommunication industry in particular, the interest in clustering stems from the fact that clusters address carrier-grade characteristics--guaranteed service availability, reliability and scaled performance--using cost-effective hardware and software. Without being absolute about these requirements, they can be divided into three categories: short failure detection and recovery times, guaranteed availability of service and short response times. The most widely adopted clustering technique is the use of multiple interconnected, loosely coupled nodes to create a single highly available system.

The direct advantages of clustering in telecom servers include:

  1. High availability through redundancy and failover techniques, which isolate or reduce the impact of a failure in the machine, resources or device.

  2. Manageability through appropriate system management facilities that reduce system management costs and balance loads for efficient resource utilization.

  3. Scalability and performance through expanding the capacity of the cluster by adding more servers or, within each server, adding more processors, memory, storage or other resources to support growth and to achieve higher levels of performance.

In addition, using commercial off-the-shelf building blocks in clustered systems offers a number of advantages, including a better price/performance ratio when compared to specialized parallel supercomputers; deployment of the latest mass-market technology as it becomes available at low cost; and added benefits from the latest standard operating system features, as they become available.


One feature missing from the Linux kernel in this area is a reliable, efficient and transparent interprocess and interprocessor communication protocol that we can use to build highly available Linux clusters. Transparent interprocess communication (TIPC) is a suitable open-source implementation that fills this gap and provides an efficient cluster communication protocol, leveraging the particular conditions present within loosely coupled clusters.

Figure 1. Functional View of TIPC

TIPC is unique because no other protocol seems to provide a comparable combination of versatility and performance. It includes some original innovations, such as functional addressing, topology subscription services and a reactive connection concept. Other important TIPC features include full location transparency, support for lightweight connections, reliable multicast, a signaling link protocol and more.

TIPC should be regarded as a useful toolbox for anyone wanting to develop or use carrier-grade or highly available Linux clusters. It provides the necessary infrastructure for cluster, network and software management functionality, as well as a good support for designing site-independent, scalable, distributed, high-availability and high-performance applications.

It also is worth mentioning that the ForCES working group within the IETF has agreed that it must be possible to carry its router-internal protocol (the ForCES protocol) over different types of transport protocols. There is consensus that TCP is the protocol to be used when ForCES messages are transported over the Internet, while TIPC is the protocol to be used in closed environments (LANs), where special characteristics such as high performance and multicast support are desirable. Other protocols also may be added as options.

In addition, TIPC meets several priority level 1 and 2 requirements, as defined in the OSDL Carrier Grade Linux Requirements Definition, Versions 2.0 and 3.0, providing an implementation for the various protocols under the Cluster Communication Service requirements.

TIPC is a contribution from Ericsson to the Open Source community. It has undergone a significant redesign over the past two years and now is available as a portable source code package of about 12,000 lines of C code. The code implements a kernel driver, a design that has made it possible to boost performance--35% faster than TCP--and minimize the code footprint. The current version is available under a dual BSD and GPL license. It runs on Linux 2.4 and 2.6 and was announced on LKML (see Resources). Several proprietary ports to other operating systems (OSE, Tru64, Dicos, VxWorks) exist, and more are planned before the end of 2004.




Re: Critical Server Needs and the Linux Kernel


The next generation of Linux server-based telecom.

Thank you, Ibrahim Haddad.

Re: Critical Server Needs and the Linux Kernel


Multi-FIB is already possible, in that you can mimic most of its effects with iptables; the kernel has supported multiple routing tables for some time now.
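The multiple routing tables the kernel already supports can be driven with the stock iproute2 tools; the interface names (eth1/eth2) and table numbers (101/102) below are purely illustrative, and the commands need root:

```shell
# Two tenants with overlapping 10.0.0.0/8 space, separated by policy routing.
ip route add 10.0.0.0/8 dev eth1 table 101   # tenant A's forwarding table
ip route add 10.0.0.0/8 dev eth2 table 102   # tenant B's forwarding table

# Pick a table by ingress interface, not just by destination address.
ip rule add iif eth1 lookup 101
ip rule add iif eth2 lookup 102

# Inspect the result.
ip rule show
ip route show table 101
```

Each `ip rule` entry steers lookups into a per-tenant table, which is most of what a multi-FIB patch would provide.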

However, I do question the idea of doing this in the first place. This use case only arises when two separate customers insist on using overlapping RFC 1918 private IPv4 address space for their servers and you need to put both of them onto one host. Better solutions to this problem exist (remap in the external load balancer, use IPv6, use virtual servers, use separate physical servers, ...).