DSI: A New Architecture for Secure Carrier-Class Linux Clusters

A distributed security architecture that supports advanced security mechanisms for current and future security needs, targeted at telecom applications running on carrier-class Linux clusters.
Security Manager

The security manager enforces security on each node. It is primarily a lookup service to register different security services and service providers and connect them together. All communications between security managers and security servers pass through the secure communication channel.

The security manager is instantiated at boot time and verified with digital signatures to ensure it has not been replaced by a malicious security manager. Upon its creation, it joins the DSI framework and exchanges keys with the security server. Each security manager must publish any change to the security contexts of its local entities involved with remote entities and subscribe to changes in the security contexts of remote, related entities (see Section 8).

The primary tasks for security managers include key management, access control, process authentication, audit levels management, alarm publication, as well as maintenance and update of the locally stored distributed security policy.
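The lookup role described above can be sketched as a simple service registry. This is an illustrative sketch only; the class and method names are assumptions, not DSI's published interface.

```python
# Minimal sketch of the security manager's lookup service: security
# service providers register themselves by name, and consumers resolve
# them through the manager. All names here are hypothetical.

class SecurityManager:
    """Registers local security services and connects consumers to providers."""

    def __init__(self, node_id):
        self.node_id = node_id
        self._services = {}  # service name -> provider callable

    def register(self, name, provider):
        # A service provider (e.g., an access-control service) registers once.
        self._services[name] = provider

    def lookup(self, name):
        # Consumers resolve a provider by name; None if not yet registered.
        return self._services.get(name)


mgr = SecurityManager(node_id=7)
mgr.register("access_control", lambda sid, op: op == "read")
check = mgr.lookup("access_control")
```

In a real deployment, registration and lookup would travel over the secure communication channel rather than in-process calls.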

Secure Communication Channel (SCC)

The secure communication channel provides secure communications for the security components inside and outside the cluster. Within the cluster, it provides authenticated and encrypted communications among security components (Figure 4). It supports priority queuing to send and receive out-of-band alarms and is coupled to the security manager by an event dispatching mechanism.

Figure 4. SCC is based on event-driven logic and different channels.

For large-scale clusters, an event-driven approach based on subscription to events from defined channels reduces the system load compared to polling mechanisms. Furthermore, the benefits of this approach are:

  • It does not present a single point of failure.

  • It allows event filtering, which reduces bandwidth use and avoids spending time processing irrelevant information before discarding it.

The secure communication channel provides channels for alarms and warnings, security management, service discovery and distribution of the security policy. It also provides a portability layer to avoid dependency on low-level communication mechanisms.
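The channel model above, with priority queuing so that out-of-band alarms are delivered ahead of ordinary management traffic, can be sketched as follows. The channel names and the API are illustrative assumptions, not DSI's real interface, and authentication/encryption are omitted.

```python
# Sketch of SCC-style subscription channels with priority queuing:
# alarms (priority 0) are dispatched before ordinary events.

import heapq
import itertools

class SecureChannel:
    def __init__(self):
        self._subs = {}                 # channel name -> list of callbacks
        self._queue = []                # priority heap of pending events
        self._seq = itertools.count()   # tie-breaker keeps FIFO order per priority

    def subscribe(self, channel, callback):
        self._subs.setdefault(channel, []).append(callback)

    def publish(self, channel, event, priority=10):
        # Lower number = higher priority; alarms would use priority 0.
        heapq.heappush(self._queue, (priority, next(self._seq), channel, event))

    def dispatch(self):
        # Deliver queued events to subscribers, highest priority first.
        while self._queue:
            _, _, channel, event = heapq.heappop(self._queue)
            for cb in self._subs.get(channel, []):
                cb(event)


scc = SecureChannel()
received = []
scc.subscribe("alarms", received.append)
scc.subscribe("management", received.append)
scc.publish("management", "policy-update")
scc.publish("alarms", "intrusion", priority=0)
scc.dispatch()
# The alarm is delivered first even though it was published second.
```

Because subscribers only receive events on channels they subscribe to, irrelevant traffic is filtered out before it consumes processing time, matching the bandwidth argument above.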

Security Context

For efficiency, a security identifier (SID) is defined as an integer that corresponds to a security context. All entities in the cluster have a SID. This SID is added at kernel level and cannot be tampered with by users. It can be transferred across processors by security managers and interpreted throughout the whole cluster. When the security context of a subject is needed outside the local processor (for instance, if a process accesses a remote object), its SID is sent to the security manager of the node containing the object. SID propagation inside the cluster is based on the SelOpt open source [6]. To avoid retransmissions, security managers rely on caching mechanisms. The security manager of the accessed node subscribes through the SCC to the event of a possible change in the security context of the access-initiating entity.
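The caching-plus-subscription scheme just described can be sketched as a small cache on the accessed node that is invalidated by SCC context-change events. The class, key layout, and context fields are assumptions for illustration.

```python
# Illustrative sketch of SID caching on an accessed node: a remote
# subject's security context is fetched once, cached, and invalidated
# when a context-change event arrives over the SCC subscription.

class SidCache:
    def __init__(self):
        self._cache = {}  # (node, sid) -> security context

    def resolve(self, node, sid, fetch):
        # Return the cached context, contacting the remote security
        # manager (via `fetch`) only on a cache miss.
        key = (node, sid)
        if key not in self._cache:
            self._cache[key] = fetch(node, sid)
        return self._cache[key]

    def on_context_change(self, node, sid):
        # Invoked via the SCC subscription when the subject's context changes.
        self._cache.pop((node, sid), None)


calls = []
def fetch(node, sid):
    # Stand-in for a request to the remote node's security manager.
    calls.append((node, sid))
    return {"sid": sid, "role": "telecom-app"}

cache = SidCache()
cache.resolve("node-1", 42, fetch)
cache.resolve("node-1", 42, fetch)   # served from cache, no second fetch
cache.on_context_change("node-1", 42)
cache.resolve("node-1", 42, fetch)   # refetched after invalidation
```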

A Coherent Vision: Security Contexts and the Distributed Security Policy (DSP)

Security configuration must be kept simple. Following this approach, DSI relies on a centralized security policy stored and managed on the security server. However, to maintain the cluster's scalability, read-only copies of the policy are pushed from the security server to the individual security managers through the SCC. This Distributed Security Policy (DSP) is an explicit set of rules that governs the configurable behavior of DSI. Each node, at secure boot time, relies on a minimal security policy that is either stored in Flash memory or downloaded along with its digital signature. As soon as the DSP becomes available on a node, it prevails.

Many DSI services and subsystems benefit from configurable behavior and can rely on the DSP. These include, primarily, access control, as well as authentication, confidentiality and integrity, and packet filtering. The DSI administrator (a human being) manipulates the primary copy of the DSP that resides on the security server; thus, it must be represented in a human-readable format. The basic update mechanism for the DSP is to push a full copy of each new version of the policy through the SCC. However, given the sheer size the policy can reach, an incremental update mechanism will be made available.
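The full-push and incremental-update paths can be sketched as a versioned, read-only policy replica on each node. The rule representation, version chaining, and fallback-to-full-push behavior are assumptions for illustration, not DSI's specified mechanism.

```python
# Sketch of DSP distribution: the security server pushes either a full
# copy of the policy or an incremental delta keyed by version numbers.

class PolicyReplica:
    """Read-only DSP copy held by a node's security manager."""

    def __init__(self):
        self.version = 0
        self.rules = {}  # rule id -> rule body

    def apply_full(self, version, rules):
        # Full push: replace the entire local copy.
        self.version = version
        self.rules = dict(rules)

    def apply_delta(self, base_version, version, added, removed):
        # Incremental push: reject a delta that does not chain from our
        # current version; the server should then fall back to a full push.
        if base_version != self.version:
            return False
        for rid in removed:
            self.rules.pop(rid, None)
        self.rules.update(added)
        self.version = version
        return True


node = PolicyReplica()
node.apply_full(1, {"r1": "allow read", "r2": "deny exec"})
ok = node.apply_delta(1, 2, added={"r3": "allow bind"}, removed=["r2"])
```

Version chaining matters here: applying a delta against the wrong base version would silently corrupt the replica, so a mismatch forces a full resynchronization instead.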

The security policy rules can originate from several sources. Manual configuration by the DSI administrator allows the most flexibility but rapidly becomes cumbersome. Thus, default policy rules are inferred from the nature of the various software packages installed and running on the system. These default rules codify good security practices. The DSP should only need to be updated on events such as the installation of new software components, not on ordinary recurring events.

A security session manager handles this kind of event by updating the security context repository. A security context defines the privileges associated with each entity. It is defined uniquely throughout the whole cluster, but it remains the responsibility of the security manager that created it.

