DSI: A New Architecture for Secure Carrier-Class Linux Clusters
In this section, we go into further detail on the access control, authentication and auditing services.
Access control can be defined as the prevention of unauthorized use of a resource. It relies on the notions of subject (or access request initiator), object (or target), environment, decision and enforcement. The Access Control Service (ACS) assumes that the subjects have been properly authenticated (see the Authentication Service). DSI can verify access control privileges even when subjects and objects are located on different nodes of the cluster. To simplify matters, we handle access control at two levels: local, when subject and object are on the same node, and remote, when they are on different nodes. For local access control, the access rights are a function of the security IDs of the subject (SSID) and the object (TSID).
For remote access control, we extend the local access control mechanisms by adding a new parameter: the security node ID (SNID). The access rights are therefore not merely a function of the subject and target security IDs, but also of the SNID. The SSID, along with the SNID, is sent to the node containing the object. The security manager of that node makes the access control decision based on the SSID, SNID and TSID.
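The two-level decision described above can be sketched as a lookup keyed by (SSID, SNID, TSID). This is a minimal illustrative model, not the DSI implementation: the table contents, the wildcard convention and all identifiers are invented for the example.

```python
# Hypothetical model of DSI's two-level access decision. All names and
# policy entries are illustrative, not taken from the DSI implementation.

GRANT, DENY = "GRANT", "DENY"

# Policy table: (SSID, SNID, TSID) -> decision. A wildcard SNID ("*")
# stands for rules that apply regardless of the subject's node.
POLICY = {
    ("web_srv", "node1", "config_file"): GRANT,
    ("web_srv", "*", "log_file"): GRANT,
}

LOCAL_SNID = "node1"  # this node's security node ID

def access_decision(ssid, tsid, snid=None):
    """Local requests omit snid; remote requests carry the sender's SNID."""
    snid = LOCAL_SNID if snid is None else snid
    if (ssid, snid, tsid) in POLICY:
        return POLICY[(ssid, snid, tsid)]
    if (ssid, "*", tsid) in POLICY:
        return POLICY[(ssid, "*", tsid)]
    return DENY  # default-deny, matching the grant/deny-only initial design
```

With this table, `access_decision("web_srv", "config_file")` is granted locally, while the same request arriving with SNID `"node2"` is denied because no rule covers that node.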
The ACS that runs on the cluster's processors consists of two parts:
A kernel-space part: responsible for implementing both the enforcement and the decision-making tasks of access control. These two responsibilities are kept separate, as advocated by . The kernel-space part maintains an internal representation of the information on which it bases its decisions. On Linux, this part is implemented as a Linux Security Module (LSM).
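The separation of enforcement from decision-making can be illustrated with a small user-space model; the real kernel-space part is a C Linux Security Module, so the classes and names below are assumptions made for the sketch.

```python
# Illustrative user-space model of the enforcement/decision split in the
# kernel-space part. The real code is a C Linux Security Module; these
# class and method names are invented for the example.

class DecisionEngine:
    """Holds the internal policy representation and only answers queries."""
    def __init__(self, rules):
        self._rules = set(rules)  # {(ssid, tsid, operation), ...}

    def decide(self, ssid, tsid, operation):
        return (ssid, tsid, operation) in self._rules

class EnforcementHook:
    """Sits on the access path (like an LSM hook) and enforces decisions.

    It contains no policy logic of its own, so the policy representation
    can change without touching the enforcement code."""
    def __init__(self, engine, alarm_sink):
        self._engine = engine
        self._alarm_sink = alarm_sink  # e.g. the user-space part's alarm queue

    def check(self, ssid, tsid, operation):
        if self._engine.decide(ssid, tsid, operation):
            return True
        self._alarm_sink.append((ssid, tsid, operation))  # report the denial
        return False
```

The denial alarms queued here correspond to the alarms that the user-space part propagates to the security manager.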
A user-space part: this part has several responsibilities. It takes information from the Distributed Security Policy (DSP) and from the Security Context Repository, combines it, and feeds it to the kernel-space part in an easily usable form. It also propagates alarms from the kernel-space part back to the security manager, which feeds them to the Auditing and Logging Service and, if necessary, propagates them to the security server through the SCC.
Both parts are started and monitored by the local Security Manager (SM). The SM also introduces them to other services and subsystems of the infrastructure with which they need to interact.
The ACS aims to provide fine-grained access control (at a sub-system-call level). It respects the principle of least privilege to limit the propagation of, and damage caused by, eventual security breaches. As such, it provides defense in depth.
The ACS running on a processor must make as few assumptions as possible about other processors, including whether they have been compromised. For that reason, an ACS instance is always the one making access decisions about resources that are local to its processor.
For the purpose of access control, system activities are categorized into distinct phases, each having its own set of permissions. These phases include software installation, software activation, software configuration and software execution.
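A per-phase permission set can be modeled as a simple mapping. The phase names come from the text; the operations listed are invented for the illustration.

```python
# Hypothetical per-phase permission sets. The phases are those named in
# the text; the operations are examples, not DSI's actual permission set.

PHASE_PERMISSIONS = {
    "installation":  {"write_binary", "create_dirs"},
    "activation":    {"start_process", "open_ports"},
    "configuration": {"write_config"},
    "execution":     {"read_config", "open_sockets"},
}

def permitted(phase, operation):
    """An operation is allowed only during phases that list it."""
    return operation in PHASE_PERMISSIONS.get(phase, set())
```

Under this model, writing a binary is permitted during installation but denied during execution, which is exactly the kind of phase-scoped restriction the categorization enables.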
For the initial design of the ACS, only grant/deny decisions are considered. Other, more involved decisions would include rate limiting and total usage limiting. Actions other than access control decisions, such as interposition and active reactions, are not implemented either.
For now, the authentication scheme is authentication by assertion: a program accessing resources on remote processors asserts that it does so on behalf of a user. Neither the user schema nor the assertion itself can seriously be trusted in an environment exposed to external attacks.
The authentication service is based on public key mechanisms and on the SSL/TLS protocol. The public key infrastructure is based on a root certification authority, accessed through the security server, and secondary certification authorities running at every node and accessed through the security managers.
The certificates are generated and signed locally on each node by the security manager. They are not stored in directories but in access-controlled zones of memory, and a process can access only its own certificate. The process does not access its private key directly, but uses a cryptographic API instead. Processes inside the cluster are authenticated through their corresponding certificates.
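The "no direct private-key access" rule can be sketched as an opaque-handle API: the process only ever holds a handle, and all cryptographic operations happen inside the key store. HMAC stands in here for DSI's real public-key operations, and all names are assumptions for the sketch.

```python
# Toy model of private keys accessed only through a cryptographic API.
# HMAC stands in for the real public-key signatures; the key material
# never crosses the API boundary to the calling process.
import hashlib
import hmac
import os

class KeyStore:
    def __init__(self):
        self._keys = {}  # handle -> secret key, private to the store

    def create_key(self):
        handle = os.urandom(8).hex()
        self._keys[handle] = os.urandom(32)
        return handle  # the process only ever sees this opaque handle

    def sign(self, handle, data):
        return hmac.new(self._keys[handle], data, hashlib.sha256).digest()

    def verify(self, handle, data, tag):
        return hmac.compare_digest(self.sign(handle, data), tag)
```

The design point this models is containment: even a compromised process can request signatures, but it cannot exfiltrate the key itself.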
We detail the different steps of the authentication below:
1. Call interception: upon the first request to open a connection, the local SM intercepts the request.
2. Privilege check: using its local copy of the DSP and the SID of the process, the SM verifies that the process has the privileges to access the network.
3. Key and certificate generation: if so, the SM asks the key management service of the security service provider to generate a key pair and the corresponding certificate. Then, through the secondary certification authority, the SM signs the public key with its private key and adds its own certificate to the process's certificate chain.
4. Certificate delivery: the SM puts the certificate in a defined, shared memory zone, then returns a pointer to the certificate to the requesting process. Note that the shared memory zone where the certificates are stored is itself subject to access control. When a process dies, its certificates are cleared.
5. Connection: the process proceeds with a normal SSL/TLS connection using its certificate.
6. Verification: the SM on the target node checks the certificate and verifies it through the chain of certificates. Note that the SM holds the public key of the security server (SS), obtained through the secure boot.
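The chain verification in the last step can be sketched as walking each certificate up to the SS root key learned at secure boot. A hash stands in for real public-key signatures, so this models only the chain-walking logic, not the cryptography; all identifiers are invented for the example.

```python
# Simplified model of certificate-chain verification: the target node's
# SM walks from the process certificate up to the trusted security
# server (SS) root. toy_sign is symmetric and forgeable; it is a
# stand-in for real public-key signatures, used only to show the walk.
import hashlib

def toy_sign(issuer_key, subject, subject_key):
    payload = f"{issuer_key}|{subject}|{subject_key}".encode()
    return hashlib.sha256(payload).hexdigest()

def make_cert(subject, subject_key, issuer, issuer_key):
    return {"subject": subject, "key": subject_key, "issuer": issuer,
            "sig": toy_sign(issuer_key, subject, subject_key)}

def verify_chain(chain, trusted_root, trusted_root_key):
    """chain[0] is the process cert; each cert is signed by the next one,
    and the last cert in the chain must be signed by the trusted root."""
    issuers = chain[1:] + [{"subject": trusted_root, "key": trusted_root_key}]
    for cert, issuer in zip(chain, issuers):
        if cert["issuer"] != issuer["subject"]:
            return False  # broken chain: wrong issuer
        if cert["sig"] != toy_sign(issuer["key"], cert["subject"], cert["key"]):
            return False  # signature does not verify
    return True
```

In DSI's terms: the SS root key comes from the secure boot, the node's SM certificate is signed by the SS, and the process certificate is signed by the SM, so a two-link walk suffices.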
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.