MinorFs is a set of cooperating user-space filesystems that work with AppArmor to provide a flexible form of discretionary access control that operates at the process level. This type of process-level authority restriction is roughly equivalent to that seen in object-oriented programming, providing least-authority restrictions by parameter passing without requiring the administrative overhead of policy controls seen in mechanisms like SELinux. Least authority is also known as least privilege or POLA (the Principle Of Least Authority).
In Linux, access to filesystem data is managed by two different access-control mechanisms. First, there is the basic and familiar UNIX discretionary access-control system. The DoD document “Trusted Computer System Evaluation Criteria” (aka the “Orange Book”) defines discretionary access control as “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control)”.
Linux also provides access control through the Linux Security Module (LSM) interface. LSM provides hooks for additional access-control mechanisms, such as mandatory access controls, while leaving the base UNIX discretionary access-control mechanisms untouched. The Orange Book defines mandatory access controls as “a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity”.
These two constructs are combined restrictively: if either one denies access, access is denied. Well-known users of the LSM interface are Security-Enhanced Linux (SELinux), used in Debian and Red Hat, and AppArmor, used in SUSE and Ubuntu.
Although the UNIX discretionary access control for filesystem access has remained at the same (simple user-level) granularity for decades, mandatory access control has become more fine-grained (process-level). This granularity, however, comes at a relatively large administrative cost. SELinux, for example, is known among many administrators for the large amount of overhead involved in maintaining profiles.
When designing and writing object-oriented (OO) programs, avoiding global variables, using data hiding, passing references between objects and using established design patterns (like proxies and factories) are concepts we are used to and comfortable with, and most of us have come to appreciate the many advantages these techniques offer. What many of us fail to realize when working with these concepts, however, is the fact that part of what we are doing can be considered access control.
If we look at the OO paradigms from an access-control viewpoint, it is easy to see that the model used by OO programs is both discretionary and suitable for the highest granularity. Therefore, you could say that OO programs internally use an extremely fine-grained form of discretionary access control. We must note, however, that this form of access control is actually older than the whole concept of object-oriented programming. The access-control mechanism used implicitly by OO programmers is, in fact, to a large extent equivalent to the access-control mechanisms in use in so-called capability-based systems. A capability, often called a key, is an unforgeable authority token that can be passed between programs. In capability-based systems, having a capability gives you the right to use the referenced object within the boundaries specified by the rights associated with the capability. With capabilities, there is no need to check other access-control mechanisms (for example, ACLs); the capability itself contains all the necessary information.
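To make the parallel concrete, here is a minimal sketch in Python of capability-style access control as it appears implicitly in everyday OO code. The class names are illustrative only (they are not part of MinorFs): holding a reference is the authority, and an "attenuated" wrapper passes on a weaker authority without any ACL ever being consulted.

```python
class PrivateStore:
    """Holds private state; anyone holding a reference can read and write."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]


class ReadOnlyFacet:
    """An attenuated capability: wraps a store but exposes only reads."""
    def __init__(self, store):
        self._store = store

    def get(self, key):
        return self._store.get(key)


store = PrivateStore()           # the full capability
store.put("secret", 42)

reader = ReadOnlyFacet(store)    # hand this to a less-trusted object
print(reader.get("secret"))      # reading works: prints 42
# reader has no put() method, so its holder cannot modify the store.
# The reference itself carries all the authority there is.
```

Passing `reader` instead of `store` to a collaborator is exactly the kind of least-authority parameter passing the article describes, done with nothing but ordinary object references.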
So, why not use this same form of discretionary access control at a slightly coarser level of granularity for access to files and directories by processes? MinorFs aims to do just that, with a lot of help from AppArmor.
First, let's look at how classes, objects and member data, as used in OO design and programming, compare to programs, processes and filesystem data. There are clear indications that we could be dealing with the same set of abstractions at a different granularity level.
You could look at a program the same way you look at a class. A process is an instance of a program (the disk image), the same way that an object is an instance of a class. Most objects have state, in the same way that most processes have state. You could say the same abstractions are there both at the object level of granularity and at the process level of granularity.
Next, we need to map the persistent on-disk directory structures to the same OO model that we just used to model programs and processes. A couple of hurdles need to be overcome to accomplish this. First, there is process persistence, which is to say that processes are not persistent, so how do they fit the model?
Second, there is pass by reference. If an object wants to share part of its private state with another object that it knows, the object can pass either a copy of or a reference to a part of its internal state. Processes, however, to a great extent are confined to passing copies, not references.
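One of the few genuine pass-by-reference mechanisms processes do have is sending an open file descriptor over a UNIX-domain socket (the kernel's SCM_RIGHTS facility). The sketch below uses Python's `socket.send_fds()`/`socket.recv_fds()` (available since Python 3.9); for brevity both ends of the socket live in one process, where normally they would be a parent and child or two unrelated processes.

```python
import os
import socket
import tempfile

# Create a connected pair of UNIX-domain sockets.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

with tempfile.TemporaryFile() as f:
    f.write(b"shared state")
    f.flush()

    # "Pass by reference": the open descriptor, not the file's
    # contents, travels across the socket as ancillary data.
    socket.send_fds(sender, [b"x"], [f.fileno()])

    msg, fds, flags, addr = socket.recv_fds(receiver, 16, 1)

    # The received descriptor refers to the very same open file.
    shared = os.fdopen(fds[0], "rb")
    shared.seek(0)
    data = shared.read()
    print(data)   # b'shared state'
    shared.close()

sender.close()
receiver.close()
```

This is the exception that proves the rule: outside of descriptor passing and shared memory, processes largely exchange copies, which is the gap MinorFs sets out to close for filesystem data.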