Linux Distributed Security Module
In greater detail, this section explains what happens on a single node of a cluster. Access control on any node of the cluster consists of two parts (Figure 4):
Kernel space: implements the enforcement and decision-making tasks of access control as separate responsibilities. The kernel space maintains the security policy on which it bases its decisions; the policy is supplied by the security server and stored in local memory (a hash table) for fast access.
User space: its many responsibilities (Figure 4) include taking the information from the distributed security policy (DSP) (1) and from the security context repository, combining them and feeding them to the kernel-space part in an easily usable form (2, 3 and 4). It also propagates alarms from kernel space back to the security manager, which feeds them to the auditing and logging services and, if necessary, forwards them to the security server through the security communication channel (see Figure 2).
Both kernel space and user space are started and monitored by the local security manager (SM) on each node. The SM also introduces them to other services and subsystems of DSI with which they need to interact. When a user process tries to access a system resource (5), the system call is forwarded to DSM (6), where the decision is made based on the internal representation of the DSP (7).
All the subjects and resources must be labeled. Because the security module can be loaded at runtime, we distinguish two modes of subject labeling:
Before the module is loaded, no labels are attached to any subject or resource in the system. At module initialization time, all the running tasks are scanned, and labels are attached to them.
When a new process is created after the security module is loaded, the security hooks do the labeling.
Since Linux stores the process descriptor and the kernel-mode process stack in a single 8KB memory area, we can exploit this fact and avoid allocating extra memory for labeling the subjects (Figure 5).
The other labels are attached to resources lazily at runtime: the module checks whether a label is present and, if it is not, creates a new one.
Because a subject located on one node can access a resource located on another node (Figure 3), such remote accesses also must be controlled.
When a process on one node accesses a resource on another node, local access to the communications resources (socket, network interface) is checked first. When local access is granted, the message can be sent to the remote location.
In order to identify the sending subject, the Security Node ID (SnID) and the subject's Security ID (SID) are added to the IP packet. For this implementation, we use the IP protocol for the security-information transfer: based on hooks in the IP protocol stack, a new option is added after the IP header. On the receiving side, this information (SnID and SID) is extracted (again via hooks in the IP stack) and used to build the network security ID (NSID): NSID = Function(SnID, SID).
This function can be specified by the security server in the form of a conversion table (the current implementation uses a simple mathematical function). The receiving side obtains the NSID by looking up the SnID and SID in the table; the NSID then can be used as a local label for all access controls.
You need to follow several steps to compile, load and experiment with DSM. For illustration purposes, we assume that your machine runs Red Hat 7.2 with Linux kernel 2.4.17 (from kernel.org).
Here are the main steps involved (they are explained in detail in the following sections):
Apply the LSM patch for kernel 2.4.17.
Modify the kernel options and rebuild the kernel with the new options.
Update the boot options in /etc/lilo.conf.
Reboot the machine with the new kernel.
Compile and load the security module.
Perform some testing to validate that the module works correctly.