Security Distribution for Linux Clusters

Here are the kernel mechanisms used in DSM to embed security information into IP messages in a transparent way.

This article is a follow-up to previous articles in LJ that discuss the Distributed Security Infrastructure (DSI) and the Linux Distributed Security Module (DSM) [see “Linux Distributed Security Module”, LJ, October 2002, available at /article/6215, and “DSI: a New Architecture for Secure Carrier-Class Linux Clusters”, available at www.linuxjournal.com/article/6053]. In this article, we focus on how we used IP options in DSM to send security information in a distributed environment for the process level of security. We discuss network buffer handling, adding hooks into the kernel, IP options and modifying IP headers. We then cover the network hooks in DSM and present some early performance results.

The DSI Project

The Open System Lab at Ericsson Research started the Open Source DSI Project to design and develop a cluster security infrastructure targeted at soft, real-time telecom applications running on Linux carrier-grade clusters. These clusters are expected to operate nonstop, regardless of any hardware or software errors. They must allow operators to upgrade hardware and software, kernel and applications, during normal operations, without any scheduled downtime and without affecting the offered services.

DSI originally was designed to offer carrier-grade characteristics, such as reliability, scalability, high availability and efficient performance. Furthermore, it supports several other important features, including a coherent framework, a process-level approach and support for both preemptive security and dynamic security policies.

One important feature of DSI is its process-level access control. Commonly implemented security mechanisms are based on user privileges and perform no authentication checks for interactions between two processes belonging to the same user, even when those processes are created on remote processors. In telecom applications, only a few users run the same application for long periods without interruption. Under a user-based model, all processes created on different nodes therefore receive the same security privileges, so many actions across the distributed system undergo no security checks at all. In other words, the basic entity of such security control is the user. For carrier-class applications, this granularity is not sufficient; a finer-grained basic entity, the individual process, is required and thus supported in DSI.

The Distributed Security Module

The DSM is a core component of DSI; it implements mandatory access control within a Linux cluster. The DSM is responsible for enforcing access control and for labeling IP messages with the security attributes of the sending process and node as they travel across the nodes of the cluster.

The DSM is implemented as a Linux module using the Linux Security Module (LSM) hooks. The development started using Linux kernel 2.4.17 along with the appropriate LSM kernel patch. The implementation was based on CIPSO and FIPS 188 standards, which specify the IP header modification.
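As a concrete illustration of the header labeling these standards describe, the sketch below builds a minimal CIPSO IP option in the FIPS 188 layout: option type 134, a total-length octet, a 32-bit Domain of Interpretation (DOI), then one tag. It emits a single bare tag of type 1 carrying only a sensitivity level and no category bitmap; the function name and the one-tag choice are our own illustration, not DSM's actual code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Build a minimal CIPSO option (FIPS 188): type 134, length, 32-bit
 * DOI in network byte order, then one tag of type 1 (restrictive
 * bitmap) holding only a sensitivity level.  Returns the option
 * length written into buf (10 bytes here). */
static size_t build_cipso_option(uint8_t *buf, uint32_t doi, uint8_t level)
{
    size_t n = 0;
    buf[n++] = 134;                   /* CIPSO option type */
    buf[n++] = 0;                     /* total length, patched below */
    buf[n++] = (uint8_t)(doi >> 24);  /* DOI, network byte order */
    buf[n++] = (uint8_t)(doi >> 16);
    buf[n++] = (uint8_t)(doi >> 8);
    buf[n++] = (uint8_t)doi;
    buf[n++] = 1;                     /* tag type 1: restrictive bitmap */
    buf[n++] = 4;                     /* tag length */
    buf[n++] = 0;                     /* alignment octet */
    buf[n++] = level;                 /* sensitivity level */
    buf[1] = (uint8_t)n;              /* patch total option length */
    return n;
}
```

A sending node would fill the DOI and level from the sending process' security context; the receiving node interprets them under the same DOI.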

One important aspect of the DSM implementation is its distributed nature. Access control in a cluster can be performed from a subject located on one node to a resource located on another node. Therefore, a need exists to transfer the security information between the nodes in the same cluster. The distributed nature of DSM provides location transparency of the security resources in the cluster from the security point of view.

Network Buffer Handling

Here, we briefly discuss network buffer handling to provide a better understanding of how security information is embedded into the network packet. We describe how the kernel handles network buffers from the application layer down to the hardware layer and vice versa.

Figure 1. Network Packet Flow in the Kernel

Figure 1 shows the flow of a network packet in the kernel. Packet handling occurs in two cases: the incoming packet and the outgoing packet. An outgoing network packet is handled as follows, starting from the application layer:

1. The application prepares the data to be sent on the network.

2. The application issues a system call to the kernel to send the packet.

3. The packet, in the form of an sk_buff structure, goes through the filter and routing functions inside the kernel.

4. The packet is passed to the network driver, which sends it to the network card (DMA).
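Labeling a packet on this outgoing path comes down to header bookkeeping: make room in the IP header for the option, raise the header length (IHL) and total length fields, and refresh the header checksum. The sketch below does this in plain user-space C over a byte buffer; it mirrors what a kernel hook must do to sk_buff data, but the function names are ours and the buffer handling is simplified (the caller must provide room for the extra bytes).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One's-complement checksum over an IPv4 header (RFC 791). */
static uint16_t ip_checksum(const uint8_t *hdr, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]);
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Insert a 4-byte-aligned option block into an IPv4 packet held in
 * pkt, which must have room for optlen more bytes.  Shifts the
 * payload, grows IHL and total length, recomputes the checksum and
 * returns the new packet length. */
static size_t insert_ip_option(uint8_t *pkt, size_t pktlen,
                               const uint8_t *opt, size_t optlen)
{
    size_t ihl = (size_t)(pkt[0] & 0x0f) * 4;      /* header length */
    memmove(pkt + ihl + optlen, pkt + ihl, pktlen - ihl);
    memcpy(pkt + ihl, opt, optlen);
    pkt[0] = (uint8_t)((pkt[0] & 0xf0) | ((ihl + optlen) / 4));
    uint16_t tot = (uint16_t)(((pkt[2] << 8) | pkt[3]) + optlen);
    pkt[2] = (uint8_t)(tot >> 8);                  /* new total length */
    pkt[3] = (uint8_t)tot;
    pkt[10] = pkt[11] = 0;                         /* zero old checksum */
    uint16_t csum = ip_checksum(pkt, ihl + optlen);
    pkt[10] = (uint8_t)(csum >> 8);
    pkt[11] = (uint8_t)csum;
    return pktlen + optlen;
}
```

After insertion, a checksum computed over the whole header, stored checksum included, folds to zero, which is the standard validity check a receiver performs.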

An incoming network packet starts at the network card:

1. The network card captures packets carrying either its own address or the broadcast address, reads them into the card memory and generates an interrupt.

2. The interrupt service routine, which is triggered by the hardware interrupt and is part of the network card driver running inside the kernel, allocates an sk_buff and moves the data from the card memory into this buffer (DMA).

3. The packet is put on the CPU queue for upper-layer processing, which is deferred to a later time when interrupts are enabled.

4. The packet goes through the filter and routing functions and is passed to the application layer.
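On the receive side, the security label must be recovered from the header options before any access-control decision can be made for the upper layers. The user-space sketch below walks the options area of a received IPv4 header looking for a CIPSO option; the function name and the tag-1-only layout it expects are our assumptions for illustration, not DSM's actual receive hook.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scan the options area of an IPv4 header for a CIPSO option
 * (type 134) and return its sensitivity level, or -1 when no such
 * label is present or an option is malformed. */
static int extract_cipso_level(const uint8_t *pkt)
{
    size_t ihl = (size_t)(pkt[0] & 0x0f) * 4;
    size_t off = 20;                        /* options start here */
    while (off < ihl) {
        uint8_t type = pkt[off];
        if (type == 0)                      /* end-of-option list */
            break;
        if (type == 1) {                    /* NOOP: single octet */
            off++;
            continue;
        }
        if (off + 1 >= ihl)
            break;
        uint8_t len = pkt[off + 1];
        if (len < 2 || off + len > ihl)     /* malformed option */
            break;
        if (type == 134 && len >= 10)       /* CIPSO with a tag-1 body */
            return pkt[off + 9];            /* sensitivity level octet */
        off += len;
    }
    return -1;
}
```

A kernel hook at this point would compare the recovered label against the receiving process' security context before allowing delivery.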

With this generic picture of how network buffers are handled in the Linux kernel, we now demonstrate how that machinery can be used to extend kernel security. We look into the hooks added to the IP routing functions that allow us to manipulate IP packets and add extra security information to IP messages.
