DSI: A New Architecture for Secure Carrier-Class Linux Clusters

An approach for distributed security architecture that supports advanced security mechanisms for current and future security needs, targeted at telecom applications running on carrier-class Linux clusters.
Security Services

In this section, we detail the access control, authentication and auditing services.

Access Control Service (ACS)

Access control can be defined as the prevention of unauthorized use of a resource [2]. It relies on the notions of subject (or access request initiator), object (or target), environment, decision and enforcement. The Access Control Service (ACS) assumes that subjects have been properly authenticated (see the Authentication Service). DSI can verify access control privileges even when subjects and objects are located on different nodes of the cluster. To simplify, we handle access control at two levels: local, when the subject and object are on the same node, and remote, when they are on different nodes. For local access control, the access rights are a function of the security IDs of the subject (SSID) and the object (TSID).

For remote access control, we extend the local mechanisms by adding a new parameter: the security node ID (SNID). The access rights are then a function not merely of the subject and target security IDs but also of the SNID. The SSID, along with the SNID, is sent to the node containing the object. The security manager of that node makes the access control decision based on the SSID, SNID and TSID.
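The two-level decision described above can be sketched as a table lookup keyed on security IDs. The rule table, ID values and wildcard below are illustrative assumptions for this sketch, not the actual DSP format:

```python
# Illustrative sketch of DSI's two-level access decision (not the real
# DSP format): access rights are a function of (SSID, TSID) for local
# requests, and of (SSID, SNID, TSID) for remote requests.

GRANT, DENY = "GRANT", "DENY"
ANY = "*"  # "any node" wildcard (an assumption for this sketch)

# Hypothetical rule table: (SSID, SNID, TSID) -> decision.
RULES = {
    ("ssid_web", ANY,      "tsid_log"):  GRANT,
    ("ssid_web", "node_2", "tsid_conf"): GRANT,
}

def decide(ssid, tsid, snid=None):
    """Return GRANT or DENY; snid=None means a local request."""
    if snid is None:
        # Local access: the decision depends only on SSID and TSID.
        key = (ssid, ANY, tsid)
    else:
        # Remote access: the node owning the object also checks the SNID.
        key = (ssid, snid, tsid)
        if key not in RULES:
            key = (ssid, ANY, tsid)  # fall back to a node-independent rule
    # Default deny, in line with the least-privilege principle.
    return RULES.get(key, DENY)
```

For example, `decide("ssid_web", "tsid_conf", "node_3")` is denied because the only matching rule is bound to `node_2`, illustrating how the SNID restricts remote access.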

ACS Architecture

The ACS that runs on the cluster's processors consists of two parts:

  • A kernel-space part: Responsible for implementing both the enforcement and the decision-making tasks of access control. These two responsibilities are separated, as advocated by [1]. The kernel-space part maintains an internal representation of the information upon which it bases its decisions. On Linux, this part is implemented as a Linux Security Module (LSM).

  • A user-space part: This part has several responsibilities. It takes the information from the Distributed Security Policy and from the Security Context Repository, combines them and feeds them to the kernel-space part in an easily usable form. It also propagates alarms from the kernel-space part back to the security manager, which feeds them to the Auditing and Logging Service and, if necessary, propagates them to the security server through the SCC.

Both parts are started and monitored by the local Security Manager (SM). The SM also introduces them to other services and subsystems of the infrastructure with which they need to interact.
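The user-space part's job of merging policy and context into a kernel-usable form can be sketched as follows; the record layouts, labels and numeric IDs are assumptions made for illustration:

```python
# Sketch of the ACS user-space part: combine the Distributed Security
# Policy (DSP) with contexts from the Security Context Repository (SCR)
# and flatten them into numeric rules the kernel-space part can index
# quickly. All names and formats here are illustrative assumptions.

# DSP rules are written against symbolic labels...
dsp_rules = [
    {"subject": "web_server", "target": "log_file", "access": "append"},
    {"subject": "web_server", "target": "config",   "access": "read"},
]

# ...while the SCR maps labels to the numeric security IDs used in-kernel.
scr_contexts = {"web_server": 101, "log_file": 7, "config": 8}

def compile_rules(rules, contexts):
    """Flatten symbolic DSP rules into (SSID, TSID, access) tuples."""
    table = []
    for r in rules:
        table.append((contexts[r["subject"]],
                      contexts[r["target"]],
                      r["access"]))
    return table
```

The design point is that symbol resolution happens once, in user space, so the kernel-space part only ever compares small integers on the fast path.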

ACS Principles of Operation

The ACS aims to provide fine-grained access control (at a sub-system call level). It respects the minimization principle of least privilege, limiting the propagation and damage caused by any security breach. As such, it provides defense in depth.

The ACS running on a processor must make as few assumptions as possible about other processors, including whether they have been compromised. For that reason, an ACS instance is always the one making access decisions about resources that are local to its processor.

For the purpose of access control, system activities are categorized in distinct phases, each having its own set of permissions. These phases include software installation, software activation, software configuration and software execution.
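Phase-dependent permissions can be sketched as separate rule sets selected by the current phase. The phase names come from the text; the permission values are assumed for illustration:

```python
# Sketch: each phase of a software component's life cycle carries its
# own permission set (phase names from the text; permissions assumed).

PHASE_PERMS = {
    "installation":  {"write_binaries"},
    "activation":    {"start_process"},
    "configuration": {"write_config", "read_config"},
    "execution":     {"read_config", "open_socket"},
}

def allowed(phase, operation):
    """Grant only the operations listed for the current phase (default deny)."""
    return operation in PHASE_PERMS.get(phase, set())
```

Separating the phases means, for instance, that a component in its execution phase can no longer rewrite its own binaries, even if it was allowed to do so during installation.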

For the initial design of the ACS, only grant/deny decisions are considered. More involved decisions, such as rate limiting and total usage limiting, are left for later. Actions other than the access control decision, such as interposition and active reactions, are not implemented either.

Authentication

For now, the authentication standard is authentication by assertion: the program accessing resources on remote processors asserts that it does so on behalf of a user. Neither the user schema nor the assertion itself can be seriously trusted in an environment exposed to external attacks.

The authentication service is based on public key mechanisms and on the SSL/TLS protocol. The public key infrastructure relies on a root certification authority, accessed through the security server, and on secondary certification authorities running on every node, accessed through the security managers.

The certificates are generated and signed locally on each node by the security manager. Certificates are stored not in directories but in access-controlled zones of memory, and a process can access only its own certificate. A process does not access its private key directly but uses a cryptographic API instead. Processes inside the cluster are authenticated through their corresponding certificates.

We detail the different steps of the authentication below. Upon the first request to open a connection, the local SM intercepts the call; then:

  1. The SM verifies, using its local copy of the DSP and the SID of the process, whether the process has the privilege to access the network.

  2. If so, the SM asks the key management service of the security service provider to generate a key pair and the corresponding certificate. Then, acting as the secondary certification authority, the SM signs the public key with its private key and adds its own certificate as a chain certificate to the certificate of the process.

  3. The SM puts the certificate in a defined, shared memory zone, then returns a pointer to the certificate to the requesting process. Notice that the shared memory zone where the certificates are stored is itself subject to access control. When a process dies, its certificates are cleared.

  4. The process proceeds with a normal SSL/TLS connection using its certificate.

  5. The SM on the target node checks the certificate and verifies it through the chain of certificates. Notice that the SM obtained the public key of the SS during secure boot.
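The certificate chain of steps 2-5 can be simulated with a toy signature scheme. Real DSI uses X.509 certificates over SSL/TLS; the HMAC "signatures", key material and field names below are stand-ins for illustration only (HMAC is symmetric, so it deliberately conflates public and private keys):

```python
import hmac, hashlib

# Toy simulation of the DSI certificate chain (steps 2-5). HMAC stands
# in for real public key signatures purely to show the chain structure.

def sign(signer_key, payload):
    return hmac.new(signer_key, payload, hashlib.sha256).digest()

# Secure boot: every SM holds the security server (SS) root key.
ss_key = b"ss-root-key"          # assumed key material

# Step 2: the local SM (secondary CA) holds a key endorsed by the SS...
sm_key = b"sm-node1-key"
sm_cert = {"subject": b"sm-node1", "key": sm_key,
           "sig": sign(ss_key, b"sm-node1" + sm_key)}

# ...and issues a certificate for the process, appending its own
# certificate as the chain certificate.
proc_cert = {"subject": b"proc-42",
             "sig": sign(sm_key, b"proc-42"),
             "chain": sm_cert}

# Step 5: the target SM verifies the chain up to the SS root key it
# obtained during secure boot.
def verify(cert, root_key):
    ca = cert["chain"]
    ca_ok = hmac.compare_digest(
        ca["sig"], sign(root_key, ca["subject"] + ca["key"]))
    leaf_ok = hmac.compare_digest(
        cert["sig"], sign(ca["key"], cert["subject"]))
    return ca_ok and leaf_ok
```

A certificate signed by an unendorsed key fails verification, which is exactly the property the target SM relies on in step 5.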
