Using the Amd Automounter

How to use the Amd automounter to provide uniform and easily controlled access to all your file servers from anywhere.
Amd Startup Configuration File

Amd uses a configuration file, often stored in /etc/amd.conf. The syntax of this file is similar to that of Samba's smb.conf file. Consider:

[global]
log_file = /var/log/amd
debug_options = all,noreaddir

[/net]
map_type = file
map_name = /etc/amd.net
mount_type = nfs

[/home]
map_type = nis
map_name = amd.users
mount_type = autofs

This amd.conf file first specifies global options that are applicable to all automounted directories. All options are simple key=value pairs. The first global option (log_file) specifies the pathname to a file for Amd to log information such as errors and trace activity. The second global option (debug_options) asks to turn on all verbose debugging other than the debugging associated with directory-reading operations. Next, we define two automounted directories. Here, Amd attaches and manages the directory /net, the entries for which come from the file /etc/amd.net. Amd also manages a /home automounted directory whose entries are read from the site's NIS (YP) server.

The mount_type parameter requires some background explanation. By default, Amd appears to the kernel as a user-level NFSv2/UDP server. That is, when the kernel has to inform Amd that a user has asked to look up an entry (for example, /src/kernel), the kernel sends RPC messages to Amd, encoding the NFS_LOOKUP request in the same manner it would use to contact any other remote NFS server. The only differences are that Amd is a user-level process, not an in-kernel NFS server, and that Amd runs on the local host, so the kernel sends its NFS RPCs to 127.0.0.1. As a user-level NFS server, Amd is portable and works the same on every UNIX host. However, user-level NFS servers incur extra context switches and data copies between user space and the kernel, slowing performance. Worse, if the Amd process dies unexpectedly, it can hang every process on the host that accesses an automounted directory, sometimes requiring a system reboot to clear.
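This loopback arrangement is visible with ordinary tools. Here is a sketch of starting Amd with the configuration file above and inspecting the result; the mount output shown is illustrative, and its exact form varies by system:

# Start Amd with an explicit configuration file.
amd -F /etc/amd.conf

# The automount points appear as NFS filesystems whose
# server is the local Amd process itself -- note the
# pid@hostname server name.
mount -t nfs
#   pid1234@myhost:/net on /net type nfs (rw,...)

# amq, Amd's query tool, asks the running daemon over RPC
# what it currently has mounted.
amq -m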

A decade ago, Sun Microsystems realized these automounter deficiencies and devised a special in-kernel automounter helper filesystem called Autofs. Autofs provides most of the critical functionality that an automounter needs in the kernel, where the work can be done more reliably and quickly. Autofs often works in conjunction with a user-level automounter whose job is reduced to map lookup and parsing. Amd is flexible enough, as you can see from the above amd.conf example, to work concurrently as both a user-level NFS server and an Autofs-compliant automounter. All you have to do is set the mount_type parameter to the right value. So why not use Autofs all the time? Autofs unfortunately is not available on all operating systems, and on those systems where it is available (Linux, Solaris and a handful of others), it uses incompatible implementations that behave differently. For those reasons, not all administrators like to use Autofs. Nevertheless, with Amd you have the choice of which one to use.

User Home Directories

In almost every large site, user home directories are distributed over multiple file servers. Users find it particularly annoying when their home directories first exist in, say, /u/staff/serv1/ezk, and then—when new file servers are installed or data is migrated—the directories are moved to, say, /u/newraid3/ezk. A much better approach is to provide a uniform naming convention for all home directories, such that /home/ezk always points to the most current location of the user's home directory. Administrators could migrate a user's home directory to a new, larger file server and simply change the definition of the ezk entry in the amd.users map. Here's an example of a small amd.users map that mounts three users' home directories from two different servers:

# amd.users map
/defaults  type:=nfs
ezk  rhost:=serv1;rfs:=/staffdisk/ezk
joe  rhost:=raid3;rfs:=/newdisk/joe
dan  rhost:=raid3;rfs:=/newdisk/dan

This example starts with a special entry called /defaults that defines values common to all entries in the map; here, all mounts in this map are NFS mounts. The subsequent three lines specify the user's name, plus the remote host and partition to mount to resolve the user's home directory. Although the pathname for each user's home directory, such as /home/joe, can remain fixed for a long time, the actual remote host and remote filesystem for Joe's home directory can change often without inconveniencing Joe.
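To make this concrete, here is roughly what Joe sees once Amd resolves his entry. This transcript assumes Amd's default auto_dir of /a; the real target pathname depends on your configuration:

$ ls -ld /home/joe
lrwxrwxrwx  1 root  root  20 ... /home/joe -> /a/raid3/newdisk/joe

Amd returns a symlink pointing into its private mount directory, where the actual NFS mount from raid3 lives. When the map entry changes, only the symlink target changes; the /home/joe pathname users see stays the same.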

As with Perl, there are several ways in Amd to achieve the same goal, and some are better than others. The above map is not optimal, for several reasons, so here are a couple of tips for improving Amd maps. First, consider what happens when you access /home/dan while running on host raid3: Amd performs an NFS mount of raid3 (acting as an NFS client) from raid3 acting as an NFS server. This is rather silly: it goes through the entire networking stack and the overhead of the NFS protocol simply to reach a pathname local to the host. For that reason, Amd defines a different mount type, the link type, which uses a symlink. Dan's map entry thus can be rewritten as:

dan  -rhost:=raid3;rfs:=/newdisk/dan \
      host!=${rhost};type:=nfs \
      host==${rhost};type:=link

This revised map entry introduces several new features of Amd maps. First, notice that the backslashes are preceded by whitespace. Amd ignores whitespace after the backslash but not before it; Dan's map entry thus is broken into three distinct whitespace-delimited components called locations. The first location starts with a hyphen and defines defaults for this map entry, overriding anything in /defaults. The second and third locations start with selectors. Amd map selectors are dynamic variables whose values Amd compares at run time. As you might expect from the mother of all automounters, Amd supports dozens of selectors. Amd evaluates Dan's map entry one location at a time until it finds one whose selectors all evaluate to true; Amd then mounts that location. Here, Amd first checks whether the name of the host it is running on differs from the value of rhost. On any host other than raid3, then, Amd performs an NFS-type mount. On raid3 itself, Amd uses the faster and simpler symlink-type mount.
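Selectors also let one map serve heterogeneous hosts. As a purely hypothetical sketch, an entry could choose an exported tree by operating system using the os selector (the host buildhost and the paths here are invented for illustration):

src  -rhost:=buildhost;type:=nfs \
      os==linux;rfs:=/export/src-linux \
      rfs:=/export/src

Amd tries the locations in order: on Linux hosts the first selector matches and /export/src-linux is mounted; elsewhere, the final, selector-less location acts as a catch-all.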

The amd.users map contains a second inefficiency: it mounts /newdisk/joe and /newdisk/dan from the same NFS server as two separate mounts, although they most likely are subdirectories of the same physically exported filesystem. This is slow and wastes kernel resources. A better way uses the same rfs for both entries and relies on the sublink parameter, a relative path that Amd automatically appends to the mounted filesystem when resolving each entry:

/defaults  type:=nfs;sublink:=${key}
joe  rhost:=raid3;rfs:=/newdisk
dan  rhost:=raid3;rfs:=/newdisk
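With this version, both home directories resolve through a single NFS mount of raid3:/newdisk. Conceptually, and following Amd's default ${autodir}/${rhost}${rfs} layout (actual paths vary by site):

/home/joe -> /a/raid3/newdisk/joe     (sublink joe appended)
/home/dan -> /a/raid3/newdisk/dan     (same underlying mount)

One mount now serves every user whose home directory lives on that filesystem, and the kernel keeps a single NFS mount entry instead of one per user.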
