LDAP for Security, Part I
Suppose you have an Internet Message Access Protocol (IMAP) server and a bunch of users, but you don't want to give each user a shell account on the server. You'd rather use some sort of central user-authentication service that can serve other tasks as well. While you're at it, you also need an on-line address book for your organization's e-mail and groupware applications. And suppose that, in addition to all that, you need to provide your users with encryption tools that use X.509 certificates, and to manage digital certificates for your entire organization.
Would you believe that one service can address all four scenarios? The Lightweight Directory Access Protocol (LDAP) does all of this and more. And wouldn't you know it, the Open Source community is blessed with a free, stable and fully functional LDAP server and client package that is already part of most Linux distributions: OpenLDAP.
The only catch is that LDAP is a complicated beast. To make sense of it, you're going to have to add still more acronyms and some heavy-duty abstractions to your bag of UNIX tricks. But armed with the next few months' Paranoid Penguin columns and a little determination, you'll have the mighty LDAP burro pulling several large plows simultaneously, making your network both more secure and easier to use. In my experience, “more secure” and “simpler for end users” rarely go hand in hand, so I'm excited finally to be covering OpenLDAP in this column.
In a nutshell, LDAP provides directory services, a centralized database of essential information about the people, groups and other entities that comprise an organization. As every organization's structure and its precise definition of essential information may be different, a directory service must be highly flexible and customizable. It's therefore an inherently complex undertaking.
The X.500 protocol for directory services is a case in point. It was designed to provide large-scale directory services for large and complex organizations. Accordingly, X.500 is itself a large and complex protocol, so much so that a lightweight version of it was created: the Lightweight Directory Access Protocol. LDAP, originally described in RFC 1777 (LDAP version 3 is described in RFC 2251), is essentially a subset of the X.500 protocol, and it's been implemented far more widely than X.500 itself has been.
X.500 and LDAP are open protocols, like TCP/IP; neither is a standalone product. A protocol has to be implemented in some sort of software, such as a kernel module, a server dæmon or a client program. Also like TCP/IP, not all implementations of LDAP are alike or even completely interoperable (without modification). The particular LDAP implementation we cover here is OpenLDAP, but you should be aware that other software products provide alternative implementations. These include Netscape Directory Server, Sun ONE Directory Server and even, in a limited way, Microsoft Active Directory in Windows 2000 Server.
Luckily, LDAP is designed to be extensible. Creating an LDAP database on one platform that is compatible with other LDAP implementations is usually a simple matter of adjusting the database's record formats, or schema, which we'll discuss next month. Therefore, it's no problem to run an OpenLDAP server on a Linux system that can provide address book functionality to users running, say, Netscape Communicator on Macs.
Being such a useful and important tool, OpenLDAP is included in most major Linux distributions. Generally, it's split across multiple packages: server dæmons in one package, client programs in another, development libraries in still another. This article is about building an LDAP server, so naturally you'll want to install your distribution's OpenLDAP server package, plus OpenLDAP runtime libraries if they aren't included in the server package.
You might be tempted to forgo installing the OpenLDAP client commands on your server if it will have no local user accounts and you expect all LDAP transactions to occur over the network. However, these client commands are useful for testing and troubleshooting, so I strongly recommend you install them.
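For instance, a one-line ldapsearch query is a quick way to confirm that your new server is answering at all. The search base dc=example,dc=com and the localhost hostname below are placeholder values, not anything your server will have by default; substitute your own:

```shell
# Quick smoke test of a local slapd, using a simple (-x) anonymous bind.
# The search base and hostname are example values -- substitute your own.
base="dc=example,dc=com"
host="localhost"

# Assemble the query first so it can be inspected or logged:
query="ldapsearch -x -h $host -b $base -s base (objectClass=*)"
echo "$query"

# Run it (without the quotes) once slapd is up; a working server
# returns the base entry, an unreachable one returns "Can't contact
# LDAP server".
```

If the anonymous bind is refused, that isn't necessarily a problem; it may simply mean your server's access controls are already locked down.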
The specific packages comprising OpenLDAP in Red Hat are openldap (OpenLDAP libraries, configuration files and documentation); openldap-clients (OpenLDAP client software/commands); openldap-servers (OpenLDAP server programs); and openldap-devel (headers and libraries for developers). Although these packages have a number of fairly mundane dependencies, including glibc, two required packages you may not have installed already are cyrus-sasl and cyrus-sasl-md5, which help broker authentication transactions with OpenLDAP.
In SuSE, OpenLDAP is provided in the following RPMs: openldap2-client (in section n1 of SuSE versions 7.3 and 8.0); openldap2 (includes both the OpenLDAP libraries and server dæmons and is found in section n2); and openldap2-devel (found in section n2 for SuSE 7.3 and n4 for SuSE 8.0). As with Red Hat, be sure to install the package cyrus-sasl, located in SuSE's sec1 directory.
In both the 7.3 and 8.0 distributions, SuSE provides packages for OpenLDAP versions 1.2 and 2.0. Be sure to install the newer 2.0 packages listed in the previous paragraph, unless you have a specific reason to run OpenLDAP 1.2. This guideline does not apply to Red Hat or Debian, both of which are standardized on OpenLDAP 2.0 in their current distributions.
For Debian 3.0 (Woody), the equivalent deb packages are: libldap2 (OpenLDAP libraries, in Debian's libs directory); slapd (the OpenLDAP server package, found in the net directory); and ldap-utils (OpenLDAP client commands, also found in the net directory). You'll also need libsasl7 from the Debian libs directory.
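On a Woody system with access to a Debian archive, a single apt-get invocation (run as root) pulls in all four packages. This is a sketch rather than a transcript; apt-get would resolve libldap2 and libsasl7 automatically as dependencies of slapd and ldap-utils, but listing them explicitly does no harm:

```shell
# The four Woody packages named above, in one install command.
debs="slapd ldap-utils libldap2 libsasl7"

# As root:
# apt-get update
# apt-get install $debs
echo "apt-get install $debs"
```

Note that Debian's slapd package prompts you for basic configuration (including the directory suffix) at install time via debconf.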
If your distribution of choice doesn't have binary packages for OpenLDAP, if your distribution's packages lack a specific feature of the latest version, or if you need to customize OpenLDAP at the binary level, you always can compile it yourself from source downloaded from the official OpenLDAP web site at www.openldap.org.
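The build itself follows the usual configure-and-make routine. The version number below is illustrative only (check the OpenLDAP site for the current release), and configure accepts many options; run ./configure --help before settling for the defaults:

```shell
# Illustrative source build of OpenLDAP; the version is an example only.
ver="2.0.27"
tarball="openldap-$ver.tgz"

if [ -f "$tarball" ]; then
    tar xzf "$tarball"
    cd "openldap-$ver"
    ./configure              # see ./configure --help for backend options
    make depend && make
    make test                # optional but worthwhile self-test suite
    make install             # as root; installs under /usr/local by default
else
    echo "$tarball not found; download it from www.openldap.org first"
fi
```

Building from source also lets you keep the installation out of your package manager's way entirely, under /usr/local, which some administrators prefer on a dedicated directory server.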