PAM—Securing Linux Boxes Everywhere
For each service (such as login or SSH), you must define which checks will be done for each management group (auth, account, password and session). That list of actions is called a stack. Depending on the results of the actions in each stack, users will succeed or fail, and whatever they attempted to do will be allowed or rejected. You can specify each action in the stack for each service using a specific file in /etc/pam.d (the more current method) or by editing the single, catchall file /etc/pam.conf (the older method); in this article, we use the former method.
Remember that playing with configuration files can be dangerous to your health! A particularly nasty mistake is accidentally removing the configuration files, because then you won't be able to log back in again. Make sure to back up all files before you start experimenting, and have a live CD available just in case.
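As a minimal sketch (assuming the standard /etc/pam.d location; the function name and its optional overrides are illustrative, not a standard tool), you could archive the PAM configuration before experimenting:

```shell
# Sketch: archive the PAM configuration before experimenting, so a
# broken stack can be restored from a live CD. Both arguments are
# optional overrides; with none given, run as root.
backup_pam() {
    src=${1:-/etc/pam.d}
    dest=${2:-/root/pam.d-backup-$(date +%F).tar.gz}
    tar czf "$dest" -C "$(dirname "$src")" "$(basename "$src")" &&
        echo "PAM configuration saved to $dest"
}
```

Called with no arguments, this writes a dated tarball under /root; from a live CD, extracting it over / restores the original stacks.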
Each stack is built out of modules, executed sequentially in the given order. For each module, you can specify whether it's necessary (failure automatically denies access), sufficient (success automatically grants access) or optional (allowing for alternative checks). Table 2 shows the actual control flags. The file for each service consists of a list of rules, each on its own line. (Longer lines can be split by ending them with a \, but this is seldom required.) Lines that start with a hash character (#) are comments and, thus, are ignored. Each rule contains three fields: the context area (Table 1), the control flag (Table 2) and the module that will be run, along with possible (optional) extra parameters. Thus, the specification of the PAM checks for login would be found in the /etc/pam.d/login file.
The control flag field actually can be more complicated, but I won't cover all the details here. See Resources if you are interested. Also, you can use include, as in auth include common-account, which pulls in rules from another file.
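To illustrate the three-field rule format, a file such as /etc/pam.d/login might contain lines like the following (a sketch only; the module arguments shown are examples, not a recommended configuration):

```
# context  control-flag  module [arguments]
auth       required      pam_unix.so nullok
auth       include       common-account
```

The first rule runs the standard UNIX password check in the auth context; the second shows the include form, pulling rules in from the common-account file.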
There is a special, catchall service called other, which is used for services without specific rules. A good start, from a security point of view, would be creating /etc/pam.d/other as shown in Listing 2. All attempts are denied, and a warning is sent to the administrator. If you want to be more forgiving, substitute pam_unix2.so for pam_deny.so, and the standard Linux authentication method will be used, although a warning will still be sent (Listing 3). If you don't care about security, substitute pam_permit.so instead, which allows entry to everybody, but don't say I didn't warn you.
Finally, give the files in /etc/pam.d a quick once-over. If you find configuration files for applications you don't use, simply rename the files, so PAM will fall back to your “other” configuration. Should you discover later that you really needed the application, change the configuration file back to its original name, and everything will be okay again.
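A sketch of the rename-and-restore idea, assuming the standard /etc/pam.d layout (the function names and the directory override are hypothetical, added here only so the snippet is self-contained):

```shell
# Sketch: rename a service's PAM file so PAM falls back to the
# "other" rules; the second argument is an optional directory
# override for illustration. Run as root on a real system.
pam_disable() {
    dir=${2:-/etc/pam.d}
    mv "$dir/$1" "$dir/$1.disabled"
}

# Undo the rename if the application turns out to be needed.
pam_enable() {
    dir=${2:-/etc/pam.d}
    mv "$dir/$1.disabled" "$dir/$1"
}
```

For example, pam_disable rlogin parks the rlogin rules out of the way, and pam_enable rlogin puts them back.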
Listing 2. A safe “other” definition forbids all generic access in the absence of specific rules. The pam_deny.so module always returns failure, so all access attempts will be rejected, and pam_warn.so sends a warning to the sysadmin.
#
# default; deny all accesses
#
auth     required pam_deny.so
auth     required pam_warn.so
account  required pam_deny.so
password required pam_deny.so
password required pam_warn.so
session  required pam_deny.so
Listing 3. A PAM definition equivalent to the standard UNIX security rules. Note: on some distributions, you might need to use pam_unix.so instead.
#
# standard UNIX minimalistic rules
#
auth     required pam_unix2.so
account  required pam_unix2.so
password required pam_unix2.so
session  required pam_unix2.so
Listing 4. The /etc/pam.d/sshd file specifies security rules for SSH connections. The pam_access.so module was added to the standard configuration to provide further checks.
auth     required  pam_unix2.so
auth     required  pam_nologin.so
account  required  pam_unix2.so
account  required  pam_access.so
session  required  pam_limits.so
session  required  pam_unix2.so
session  optional  pam_umask.so
password requisite pam_pwcheck.so cracklib
password required  pam_unix2.so use_authtok