Linux Distributed Security Module
The distributed security module (DSM) is part of DSI. Its purpose is to enforce access control and to label IP messages traveling between the nodes of the cluster with the security attributes of the sending process and node.
We started the development of DSM using Linux kernel 2.4.17 and the appropriate security patch for that kernel version (lsm-full-2002_01_15-2.4.17.patch). The implementation of DSM was based on CIPSO and FIPS-188 standards, which specify the IP header modification (adding options to IP header), and on hooks added to the IP stack.
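As a rough illustration of the labeling idea, the following user-space sketch encodes a CIPSO-style IP option. The 0x86 option type comes from the CIPSO draft; the tag layout carrying the node and process IDs is an assumption for illustration, not DSM's actual wire format.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout of a CIPSO-style IP option carrying DSM labels.
 * The 0x86 option type follows the CIPSO draft; the (SnID, SID) tag
 * contents are an illustrative assumption, not the real DSM format. */
#define IPOPT_CIPSO 0x86

/* Serialize the label into an option buffer; returns bytes written.
 * buf must hold at least 8 bytes. */
static size_t dsm_build_option(uint8_t *buf, uint32_t doi,
                               uint8_t snid, uint8_t sid)
{
    buf[0] = IPOPT_CIPSO;
    buf[1] = 8;                   /* type + len + doi + snid + sid */
    buf[2] = (doi >> 24) & 0xff;  /* domain of interpretation,     */
    buf[3] = (doi >> 16) & 0xff;  /* network byte order            */
    buf[4] = (doi >> 8) & 0xff;
    buf[5] = doi & 0xff;
    buf[6] = snid;                /* sending node's security ID    */
    buf[7] = sid;                 /* sending process's security ID */
    return 8;
}
```

In the kernel, an option like this would be inserted and stripped by hooks in the IP stack, so the labeling stays transparent to applications.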
Because DSM is a component of DSI, let's briefly look at DSI itself. As part of a carrier-grade Linux cluster, DSI must comply with carrier-grade requirements such as reliability, scalability and high availability. Furthermore, DSI supports the following requirements: a coherent framework, a process-level approach, preemptive security, dynamic security policy, transparent key management and minimal impact on performance.
The security server is the central point of management. It is the central security authority for all other security components in the system, and it is responsible for the distributed security policy. It also defines the dynamic security environment of the whole cluster by broadcasting changes in the distributed policy to all security managers.
Security managers enforce security locally at each node of the cluster. They are responsible for enforcing changes in the security environment. Security managers only exchange security information with the security server. For a detailed description of DSI, see Resources.
Designing an efficient solution for cluster mandatory access control is a complex task. Many factors are involved in defining access rights, because subjects and resources can be located on different nodes of the cluster. To simplify the relationships, we handle access control at two levels: at the local level, the subject and the resource are located on the same node; at the remote level, they are located on different nodes.
For local access control, access rights are a function of the security IDs of the subject (SSID) and the resource (TSID); see Figure 1. This equation is based on the FLASK architecture: Access = Function (SSID, TSID).
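In user-space terms, the local check can be sketched as a lookup keyed by the two security IDs. The matrix below is an illustrative stand-in for the policy that the security server distributes; the real DSM computes access decisions from its policy, not from a hand-written table.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal sketch of the local FLASK-style check,
 * Access = Function (SSID, TSID). The four-ID universe and the
 * matrix contents are illustrative assumptions. */
#define DSM_MAX_SID 4

/* access_matrix[ssid][tsid] == true  =>  access granted */
static const bool access_matrix[DSM_MAX_SID][DSM_MAX_SID] = {
    /* tsid:  0      1      2      3          */
    { true,  false, false, false },  /* ssid 0 */
    { true,  true,  false, false },  /* ssid 1 */
    { false, false, true,  false },  /* ssid 2 */
    { true,  true,  true,  true  },  /* ssid 3 */
};

static bool dsm_local_access(uint8_t ssid, uint8_t tsid)
{
    if (ssid >= DSM_MAX_SID || tsid >= DSM_MAX_SID)
        return false;            /* unknown IDs: deny by default */
    return access_matrix[ssid][tsid];
}
```

Denying unknown IDs by default keeps the check fail-safe when a label arrives that the local policy has never seen.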
The FLASK architecture works well as a single-node solution. When nodes are combined into a cluster, however, the security problem becomes more complicated. In this case, we extend the FLASK architecture to a cluster remote-access model (Figure 3).
One of the new parameters is the security node ID (SnID), an ID that, as the name implies, identifies the node in terms of security. Access rights are no longer a function of the subject and target security IDs alone, but of the security node IDs as well.
The architecture of distributed access control is illustrated in Figure 3. Its equation can be written as Access = Function (SnID2, SID2, SnID1, SID1).
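The extended equation can be sketched as a two-stage lookup: the node pair first gates whether the two nodes may interact at all, and the SID pair is then checked as in the local case. Both tables below are illustrative assumptions; a real implementation would receive them from the security server as part of the distributed policy.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of Access = Function (SnID2, SID2, SnID1, SID1).
 * Table contents are assumed for illustration only. */
#define DSM_MAX_SNID 2
#define DSM_MAX_SID  2

/* Which node pairs may exchange labeled messages at all. */
static const bool node_pair[DSM_MAX_SNID][DSM_MAX_SNID] = {
    { true,  false },   /* node 0 may not send to node 1 */
    { true,  true  },
};

/* FLASK-style subject/target check, as in the local case. */
static const bool sid_pair[DSM_MAX_SID][DSM_MAX_SID] = {
    { true,  false },
    { true,  true  },
};

static bool dsm_remote_access(uint8_t snid_src, uint8_t sid_src,
                              uint8_t snid_dst, uint8_t sid_dst)
{
    if (snid_src >= DSM_MAX_SNID || snid_dst >= DSM_MAX_SNID ||
        sid_src >= DSM_MAX_SID || sid_dst >= DSM_MAX_SID)
        return false;                     /* deny unknown IDs    */
    if (!node_pair[snid_src][snid_dst])   /* nodes may interact? */
        return false;
    return sid_pair[sid_src][sid_dst];    /* then check the SIDs */
}
```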
An important part of any distributed system is the network that spans the nodes of the cluster. To apply the access control functions across the cluster, there must be a way to pass the security parameters between nodes in a transparent fashion. Our prototype of distributed mandatory access control is implemented in the Linux kernel, which gives us security transparency and better performance.
Figure 3 illustrates an example of how distributed access control works. The security server is responsible for passing the security policy to the security module and for providing the security node ID to each node of the cluster (1). Let's assume subject 2 on node SnID2 tries to access a resource (a file) on node SnID1 (2). In this case, subject 2 first must get access to the local communication resources (3) and then obtain a pair of identifiers (SnID2, SID2) that is passed to the remote node (4). These identifiers are validated on the remote node, SnID1. When access is granted, the message can be passed to process 1 (5). Process 1 then performs the access on behalf of process 2 (6).
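The receive-side part of this flow, extracting the label on the remote node and validating it before delivery, can be sketched as follows. The 8-byte option layout and the toy validation predicate are assumptions for illustration, not the actual DSM code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Label extracted from an incoming message's IP option. */
struct dsm_label {
    uint8_t snid;   /* sending node's security node ID    */
    uint8_t sid;    /* sending process's security ID      */
};

/* Parse an 8-byte CIPSO-style option (type 0x86, fixed length);
 * returns false if the option is malformed. */
static bool dsm_parse_option(const uint8_t *opt, size_t len,
                             struct dsm_label *out)
{
    if (len < 8 || opt[0] != 0x86 || opt[1] != 8)
        return false;
    out->snid = opt[6];
    out->sid  = opt[7];
    return true;
}

/* Toy policy: only labels from node 2 with SID 2 may reach the
 * target process. A real check would consult the cached policy
 * pushed out by the security server. */
static bool dsm_validate(const struct dsm_label *l)
{
    return l->snid == 2 && l->sid == 2;
}
```

Rejecting malformed options before the policy check means an unlabeled or corrupted message never reaches the target process.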