Paranoid Penguin - Single Sign-on and the Corporate Directory, Part I
Even though passwords aren't stored in the LDAP directory, a lot of sensitive information is. Your users probably don't want the whole Internet to know their phone numbers, e-mail addresses or employee IDs. Once you've read “OpenLDAP Everywhere” and have a working LDAP server, you need to secure both the transport of directory data and access to the directory itself.
The first step is to secure the data transport using OpenSSL. First, let's copy our certificate and key we signed previously to /etc/openldap/ssl/slapd-cert.pem and /etc/openldap/ssl/slapd-key.pem, respectively. We need to provide five options in slapd.conf: TLSCipherSuite (optional), TLSCACertificatePath, TLSCertificateFile, TLSCertificateKeyFile and TLSVerifyClient. The slapd.conf(5) man page has good definitions of these options.
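As a sketch, the TLS section of slapd.conf might look like the following. The certificate and key paths match the copies made above; the CA path, cipher list and verify policy are assumptions you should adjust for your site:

```
# TLS settings in slapd.conf (cipher list and CA path are examples)
TLSCipherSuite        HIGH:MEDIUM
TLSCACertificatePath  /etc/openldap/ssl/
TLSCertificateFile    /etc/openldap/ssl/slapd-cert.pem
TLSCertificateKeyFile /etc/openldap/ssl/slapd-key.pem
TLSVerifyClient       never
```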
Having secured the data on the wire, we now secure authentication using the Kerberos KDC. OpenLDAP is Kerberized and uses SASL for authentication negotiation. We first must tell slapd how to find its Kerberos keytab file. We do this by editing /etc/conf.d/slapd or by defining KRB5_KTNAME prior to starting slapd in its init script. Two options in slapd.conf also must be defined: sasl-secprops and sasl-regexp.
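A minimal sketch of both pieces, assuming the keytab lives at /etc/openldap/slapd.keytab and that user entries sit under ou=people,dc=example,dc=com (both assumptions):

```
# /etc/conf.d/slapd -- point slapd at its Kerberos keytab
KRB5_KTNAME=/etc/openldap/slapd.keytab
export KRB5_KTNAME
```

```
# slapd.conf -- refuse anonymous and plaintext SASL mechanisms,
# and map authenticated Kerberos principals onto directory entries
sasl-secprops noanonymous,noplain
sasl-regexp
    uid=(.*),cn=GSSAPI,cn=auth
    uid=$1,ou=people,dc=example,dc=com
```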
Right now, TLS and SASL can be used but aren't required. Two more options in slapd.conf, security and allow, are used to specify the security methods and encryption strength needed for certain operations to take place. And, be sure to set up access control lists (ACLs) properly—refer to slapd.access(5).
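For example (the exact strength factors below are illustrative site policy, not a recommendation):

```
# slapd.conf -- require a minimum security strength factor for
# all operations, a higher one for updates, and protect simple binds
security ssf=56 update_ssf=128 simple_bind=64
allow bind_v2
```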
We start by replicating our Kerberos database from kdc.example.com to ldap.example.com, so that if kdc.example.com fails, ldap.example.com will pick up the slack. One important fact to remember is that only one kadmin server can be on the network for a realm at any time. Otherwise, there is no authoritative source for updates to the database. Kerberos comes with kprop and kpropd to propagate the Kerberos database securely. First we must identify kpropd as a known service. Add the following to /etc/services:
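The standard entry for the Kerberos propagation service is:

```
krb5_prop    754/tcp    # Kerberos slave propagation
```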
We need to define an ACL file, /etc/krb5kdc/kpropd.acl, that tells kpropd what hosts are allowed to propagate. All that is really needed in this file is the master KDC's principal name, but it doesn't hurt to have all KDCs in here so that if a failure occurs, we can choose a new master, start the kadmin service on it and propagate from that host to the other slaves.
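Assuming the realm EXAMPLE.COM, kpropd.acl might simply list the host principal of every KDC:

```
host/kdc.example.com@EXAMPLE.COM
host/ldap.example.com@EXAMPLE.COM
```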
We now create an xinetd service definition, /etc/xinetd.d/kpropd, on our slaves; (re)start xinetd; dump the database on kdc.example.com; and propagate it to the slaves so they have an initial configuration:
# /usr/sbin/kdb5_util dump /etc/krb5kdc/slavedump
# /usr/sbin/kprop -f /etc/krb5kdc/slavedump \
    ldap.example.com
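The xinetd service definition referenced above can be sketched as follows; the service name must match the /etc/services entry:

```
# /etc/xinetd.d/kpropd
service krb5_prop
{
    disable     = no
    socket_type = stream
    protocol    = tcp
    user        = root
    wait        = no
    server      = /usr/sbin/kpropd
}
```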
Finally, we create a stash file on each slave using the master key defined when setting up kdc.example.com's database, and then start the kdc service:
# /usr/sbin/kdb5_util stash
# /etc/init.d/mit-krb5kdc start
To propagate out the KDC database periodically, we define a cron job on kdc.example.com. Thanks to Jason Garman and the O'Reilly book Kerberos: The Definitive Guide for the original cron job.
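A sketch of such a script follows; the slave list, dump path and error handling are assumptions, and Garman's original differs in detail:

```
#!/bin/sh
# Dump the KDC database and push it to each slave KDC.
SLAVES="ldap.example.com"
DUMPFILE=/etc/krb5kdc/slavedump

/usr/sbin/kdb5_util dump $DUMPFILE
for slave in $SLAVES; do
    /usr/sbin/kprop -f $DUMPFILE $slave > /dev/null || \
        echo "kprop to $slave failed" | \
        mail -s "KDC propagation error" root
done
```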
A sensible time frame to run this script is hourly or from /etc/cron.hourly. Our Kerberos database is now being replicated securely from the master to any number of slaves. If the master fails, we have a way to switch to a slave machine quickly and with minimal data loss, if any. Now that we're propagating Kerberos changes, we can add the slave server to the krb5.conf file as a valid KDC.
Enough critical information will be stored in your LDAP directory that you probably don't want a single point of failure. After all, if your LDAP directory is unavailable, your users won't be able to log in, check e-mail or do numerous other daily tasks. Replicating your LDAP directory helps ensure there is no single point of failure.
Let's replicate the LDAP directory from ldap.example.com to kdc.example.com. OpenLDAP has a dæmon called slurpd that is responsible for this. Unfortunately, slurpd has no configuration directive telling it which Kerberos keytab to use, so there's a bit of work required. First, we edit slapd.conf on ldap.example.com, adding the options replogfile and replica, and then we restart slapd.
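A sketch of the two directives on the master; the replication log path and the bind identity are assumptions:

```
# slapd.conf on ldap.example.com
replogfile /var/lib/openldap-data/replog
replica    uri=ldap://kdc.example.com
           bindmethod=sasl saslmech=GSSAPI
           authcid=host/ldap.example.com
```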
We need to create a Kerberos ldap service principal and SSL certificate and key for kdc.example.com, as we did for ldap.example.com. We also must create a slapd.conf file for kdc.example.com. This file is almost identical to the one on ldap.example.com, with a few key differences. For the same reason we have only one Kerberos admin server, we want only one LDAP directory being updated and changed. The only identity able to write to a slave's directory should be uid=host/ldap.example.com,cn=GSSAPI,cn=auth (the Kerberos principal of the master), so our ACLs on the slaves are much more restrictive. Also, slapd needs to know who will be sending updates via slurpd, as defined by the updatedn and updateref options.
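On the slave, the update directives and a restrictive ACL might look like this sketch:

```
# slapd.conf on kdc.example.com
updatedn  "uid=host/ldap.example.com,cn=GSSAPI,cn=auth"
updateref ldap://ldap.example.com

# only the master's principal may write; authenticated users read
access to *
    by dn="uid=host/ldap.example.com,cn=GSSAPI,cn=auth" write
    by users read
```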
Now we switch our focus back to ldap.example.com for a bit. We need to create an /etc/conf.d/slurpd file or make sure that KRB5CCNAME is set before slurpd is started from the init script.
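For example:

```
# /etc/conf.d/slurpd -- credential cache for slurpd's GSSAPI binds
KRB5CCNAME=/var/run/slurpd.krb5cache
export KRB5CCNAME
```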
Next, we get some initial Kerberos credentials:
# KRB5CCNAME=/var/run/slurpd.krb5cache /usr/bin/kinit -k
And then we dump the directory to a file:
ldap# /etc/init.d/slapd stop
ldap# /usr/sbin/slapcat -l /tmp/slavedump.ldif
ldap# /etc/init.d/slurpd start
Because slurpd transfers only changes to the master directory, we need to populate the slave directory with the current state of the master. We do this by copying the dump we created above, /tmp/slavedump.ldif, to kdc.example.com, importing it and starting slapd:
kdc# /usr/sbin/slapadd -l slavedump.ldif
kdc# /etc/init.d/slapd start
ldap# /etc/init.d/slapd start
We need to test that the slave has a sane directory:
# ldapsearch -H ldap://kdc.example.com -ZZ
To test that replication is happening, we can make a modification or addition to the directory on ldap.example.com and then search on kdc.example.com to make sure that change propagated.
Once we've verified that slurpd is working, we create a cron job on ldap.example.com to keep the credentials from expiring. The default time limit for credential validity is ten hours, so if we define a cron job to run every eight hours, we should be safe.
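A crontab sketch; the principal name is an assumption, and kinit -k reads its key from the keytab rather than prompting for a password:

```
# /etc/crontab on ldap.example.com -- renew slurpd's credentials every 8 hours
0 */8 * * * root KRB5CCNAME=/var/run/slurpd.krb5cache /usr/bin/kinit -k host/ldap.example.com
```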
Last, we add kdc.example.com into our rotation of valid LDAP servers for nss_ldap. That is, we append kdc.example.com to the list of servers specified by the host option in /etc/ldap.conf.
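For example, in /etc/ldap.conf:

```
# try the master first, then fall back to the slave
host ldap.example.com kdc.example.com
```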
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.