If you have followed my articles on LDAP, you know we began looking at objectClasses in the last installment back in March. Since that time, I haven't written much more about directory servers. I began contemplating whether to continue the LDAP series, because things have changed. Let me explain:
When I began this series in September 2006, I wanted to convey the approach I used in Linux System Administration (LSA). As O'Reilly, our book publisher, stated:
The ingredients for this book had been scattered throughout mailing lists, forums, and discussion groups, as well as books, periodicals, and the experiences of colleagues.
In September, LDAP documentation, though somewhat scattered and in need of repair, was more plentiful than documentation for the other topics in LSA. As the lead author, I wanted LDAP included in our book. My editor thought differently.
The book addressed experienced system administrators of any operating system and seasoned Linux power users needing complete documentation to advance to sysadmins.
LDAP did not seem like a subject for the faint of heart. I felt that traditional authors and LDAP team members refused to address beginners. The series we began last September addressed beginners. It filled the hole I saw in the existing documentation.
As one of the authors of The Book of Postfix suggested, one needed a deep understanding of LDAP to build a company mail server. I wondered why. Then I remembered how I struggled with the subject myself when I began working with directories. Beginners need help. They need a decent introduction, or their eyes glaze over and they conclude that LDAP isn't for them.
After considered thought about the subject of LDAP today, I believe you can pick up Gerald Carter's book, LDAP System Administration, and it can take you the rest of the way. Aside from that, the Fedora Directory Server documentation project now does a first-class job of getting you over the LDAP hump.
Have you worked in an environment where directory services exist? Then you, more likely than not, understand how LDAP makes the IT world a better place. I have worked with infrastructures where Novell eDirectory and Identity Management System, OpenLDAP, and Active Directory existed. Currently, my employer uses Active Directory.
I do not think LDAP is an intuitive technology. You need to focus and read repeatedly to grasp the subject matter. If you want to become proficient, expect it to take time.
I have gone from working as a system administrator to working as a full-time technical writer and system analyst. I no longer build web sites, commerce-enable them, build complex networks or lead development projects. I document development processes, watch the customer service department, test products, write Sarbox documents and user manuals. It's a complete change from my previous life.
Still, I continue to study the writing craft. I read The Elements of Style repeatedly. I have not memorized it, but I may accomplish that in the near future. Why would I do that?
Regardless of one's discipline, he or she needs to keep after it, from theory through practical application. With LDAP, if you want to master it, then read about it, practice it and put it to work for you, even in a home network. You'll find ways to deploy it.
I doubt that you will see much about LDAP from me in the future. I might gig you every once in a while to remind you to keep your eye on the ball, but it's time to set out on your own.