Fedora Directory Server: the Evolution of Linux Authentication
Looking at the Administration console, you will see server information on the Default tab (Figure 2) and a search utility on the second tab. On the Servers and Applications tab, expand your server name to display the two server consoles that are accessible: the Administration Server and the Directory Server. Double-click the Directory Server icon, and you are taken to the Tasks tab of the Directory Server (Figure 3). From here, you can start and stop directory services, manage certificates, import/export directory databases and back up your server. The backup feature is one of the highlights of the FDS console. It lets you back up any directory database on your server without any knowledge of the CLI. The only caveat is that you still need to use the CLI to schedule backups.
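Although the console cannot schedule backups, a cron entry calling the instance's db2bak script closes that gap. This is a sketch: the instance directory name (slapd-yourserver) and backup path are placeholders for your own installation.

```shell
# /etc/crontab entry (one line): nightly backup of the directory
# databases at 2:00 AM using the db2bak script shipped with the
# FDS instance. Note that % must be escaped as \% inside crontab.
0 2 * * * root /opt/fedora-ds/slapd-yourserver/db2bak /var/backups/fds/$(date +\%Y\%m\%d)
```

Rotate or prune the dated backup directories separately; db2bak itself only writes the backup.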
On the Status tab, you can view the appropriate logs to monitor activity or diagnose problems with FDS (Figure 4). Simply expand one of the logs in the left pane to display its output in the right pane. Use the Continuous Refresh option to view logs in real time.
From the Configuration tab, you can define replicas (copies of the directory) and synchronization options with other servers, edit schema, initialize databases and define options for logs and plugins. The Directory tab is laid out by the domains for which the server hosts information. The Netscape folder is created automatically by the installation and stores information about your directory. The config folder is used by FDS to store information about the server's operation. In most cases, it should be left alone, but we will use it when we set up replication.
Before creating your directory structure, you should carefully consider how you want to organize your users in the directory. There is not enough space here for a decent primer on directory planning, but I typically use Organizational Units (OUs) to segment people grouped together by common security needs, such as by department (for example, Accounting). Then, within an OU, I create users and groups for simplifying permissions or creating e-mail address lists (for example, Account Managers). FDS also supports roles, which essentially are templates for new users. This strategy is not hard and fast, and you certainly can use the default domain directory structure created during installation for most of your needs (Figure 5). Whatever strategy you choose, you should consult the Red Hat documentation prior to deployment.
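The same OU-and-group layout can be created from the command line with ldapadd. In this sketch, the suffix dc=example,dc=com, the example user DN in uniqueMember and port 3891 (the port used in this article's setup) are assumptions; substitute your own values.

```shell
# Create an Accounting OU and an Account Managers group under it.
# The suffix and the member DN are placeholders.
ldapadd -x -h localhost -p 3891 -D "cn=Directory Manager" -W <<EOF
dn: ou=Accounting,dc=example,dc=com
objectClass: organizationalUnit
ou: Accounting

dn: cn=Account Managers,ou=Accounting,dc=example,dc=com
objectClass: groupOfUniqueNames
cn: Account Managers
uniqueMember: uid=jsmith,ou=Accounting,dc=example,dc=com
EOF
```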
To create a new user, right-click an OU under your domain and click on New→User to bring up the New User screen (Figure 5). Fill in the appropriate entries. At minimum, you need a First Name, Last Name and Common Name (cn), which is created from your first and last name. Your login ID is created automatically from your first initial and your last name. You also can enter an e-mail address and a password. From the options in the left pane of the User window, you can add NT User information, add Posix Account information and activate or deactivate the account. You can implement a Password Policy from the Directory tab by right-clicking a domain or OU and selecting Manage Password Policy. However, I could not get this feature to work properly.
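An equivalent user entry can be added from the command line. Everything in this sketch is a placeholder (name, uid, numeric IDs, suffix); the password is stored in the clear here and hashed by the server according to its password storage scheme.

```shell
# Create a user with both standard and Posix account attributes,
# roughly what the console's New User screen produces.
ldapadd -x -h localhost -p 3891 -D "cn=Directory Manager" -W <<EOF
dn: uid=jsmith,ou=Accounting,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
givenName: John
sn: Smith
cn: John Smith
uid: jsmith
mail: jsmith@example.com
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/jsmith
userPassword: changeme
EOF
```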
Setting up replication in FDS is a relatively painless process. In our scenario, we configure two-way multi-master replication, but Red Hat supports up to four-way. Because we already have one server (server one) in operation, we need another system (server two) configured the same way (Fedora, FDS) with which to replicate. Use the same settings used on server one (other than hostname) to install and configure Fedora 6/FDS. With both servers up, verify time synchronization and name resolution between them. The servers must have synced time or be relatively close in their offset, and they must be able to resolve the other's hostname for replication to work. You can use IP addresses, configure /etc/hosts or use DNS. I recommend having DNS and NTP in place already to make life easier.
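A few pre-flight checks catch most replication failures before they happen. These are interactive commands, not a script; fds2.example.com is a placeholder for the other server's hostname.

```shell
# Confirm this server can resolve the peer's hostname
# (via DNS or /etc/hosts)
getent hosts fds2.example.com

# Compare clocks: run on both servers and check the offset is small
date -u

# If NTP is running, verify the daemon sees its peers
ntpq -p
```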
The next step is creating a Replication Manager account to bind one server's directory to the other and vice versa. On the Directory tab of the Directory Server console, create a user account under the config folder (off the main root, not your domain), and give it the name Replication Manager, which should create a login ID (uid) of RManager. Use the same password for the RManager on both servers/directories. On server one, click the Configuration tab and then the Replication folder. In the right pane, click Enable Changelog. Click the Use Default button to fill in the default path under your slapd-servername directory. Click Save. Next, expand the Replication folder and click on the userroot database. In the right pane, click Enable Replica, and select Multiple Master under the Replica Role section (Figure 6). Enter a unique Replica ID. This number must be different on each server involved in replication. Scroll down to the update section below, and in the Enter a new supplier DN: textbox, enter the following:

uid=RManager,cn=config
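The Replication Manager account also can be created from the command line. This is a sketch of the entry the console step produces; run it on both servers with the same password (shown here as the placeholder "secret").

```shell
# Create the Replication Manager entry under cn=config.
# The bind DN, port and password are placeholders.
ldapadd -x -h localhost -p 3891 -D "cn=Directory Manager" -W <<EOF
dn: uid=RManager,cn=config
objectClass: inetOrgPerson
uid: RManager
givenName: Replication
sn: Manager
cn: Replication Manager
userPassword: secret
EOF
```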
Click Add, and then the Save button at the bottom. On server two, perform the same steps as just completed for creating the RManager account in the config folder, enabling changelogs and creating a Multiple Master Replica (with a different Replica ID).
Now, you need to set up a replication agreement back on server one. From the Configuration tab of the Directory Server, right-click the userroot database, and select New Replication Agreement. A wizard then guides you through the process. Enter a name and description for the agreement (Figure 7). On the Source and Destination screen under Consumer, click the Other button to add server two as a consumer. You must use the FQDN and the correct port (3891 in our case). Use Simple Authentication in the Connection box, and enter the full context (uid=RManager,cn=config) of the RManager account and the password for the account (Figure 8).

On the next screen, check the box to enable Fractional Replication, which sends only deltas rather than the entire directory database when changes occur and is very useful over WAN connections. On the Schedule screen, select Always keep directories in sync. You also could choose to schedule your updates. On the Initialize Consumer screen, use the Initialize Now option. If you experience any difficulty in initializing a consumer (server two), you can use the Create consumer initialization file option, copy the resulting LDIF file to server two, then import and initialize it from the Directory Server Tasks pane. When you click Next, you will see the replication taking place. A summary appears at the end of the process. To verify replication took place, check the Replication and Error logs on the first server for success messages.

To complete MMR, set up a replication agreement on server two, listing server one as the consumer, but do not initialize at the end of the wizard. Doing so would overwrite the directory on server one with the directory on server two. With our setup complete, we now have a redundant authentication infrastructure in place, and if we choose, we can add another two Read-Write replicas/Masters.
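Beyond the logs, a quick spot check confirms that changes flow both ways: search both masters for the same entry and compare the results. The hostnames, suffix and uid below are placeholders.

```shell
# Query the same entry on each master; matching results indicate
# the entry has replicated.
ldapsearch -x -h fds1.example.com -p 3891 \
    -b "dc=example,dc=com" "uid=jsmith" uid
ldapsearch -x -h fds2.example.com -p 3891 \
    -b "dc=example,dc=com" "uid=jsmith" uid
```

For a two-way test, modify an attribute on server two and repeat the searches to confirm the change appears on server one.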
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Google's SwiftShader Released
- Parsing an RSS News Feed with a Bash Script
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide