The standard protocol for sharing files between Linux boxes is the Network File System (NFS). This protocol, which originated with Sun in the mid-1980s, does the job, but it has many deficiencies that can cause trouble for a systems administrator. Though there are alternatives, such as the Andrew File System (AFS), that are much nicer, most of us are stuck with NFS at this time: it is standard, available on every platform under the sun, and free. Fortunately, the program AMD (AutoMount Daemon) exists to make living with NFS much easier.
AMD is an automounter: it maintains a cache of mounted file systems. At a minimum, AMD should be used wherever you would use a normal NFS mount, since AMD makes your network more reliable. Because of the stateless design of NFS, any process trying to access data on an NFS partition will block if the partition's server goes down. AMD improves the situation by keeping track of which servers are up and which are down, and by declining to mount from unreachable ones. Since AMD doesn't mount every partition immediately or keep it mounted, as a static NFS mount does, you avoid the kernel overhead and network traffic generated by unused partitions, and thus improve machine performance.
Configuration and administration become much easier with AMD. Instead of requiring a different fstab file on each host, you can have a single, centrally maintained AMD map, which can be distributed as a file with rdist, as an NIS map, or even via Hesiod. As an example, we have over 100 machines with one centrally maintained AMD map. One map file is certainly easier to edit than 100.
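For instance, pushing one map file out to every client could be sketched with an rdist Distfile along these lines; the host names and the map path here are hypothetical, not taken from our setup:

```
# Hypothetical Distfile: copy the shared AMD map to all clients
# and mail root on each host whenever the file changes.
HOSTS = ( pc1 pc2 pc3 )
MAPS  = ( /etc/amd.home )

${MAPS} -> ${HOSTS}
	install ;
	notify root ;
```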
Another convenient feature of AMD is dynamic maps that change depending on any number of criteria. A single map can point to multiple places, allowing you to do operations unavailable with normal NFS. For instance, if you have multiple replicated servers, you can set up a map so that if one server goes down, AMD will automatically mount files using one of the others.
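A replicated mount can be expressed with a map entry listing several locations; AMD mounts from whichever server is responding. The server names and export path below are hypothetical, and the exact syntax can vary between AMD versions:

```
# Hypothetical entry for a replicated /usr/man.  The leading "-" sets
# defaults shared by each location; AMD picks a live server at mount time.
man  -type:=nfs;rfs:=/usr/man \
     rhost:=man1 \
     rhost:=man2 \
     rhost:=man3
```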
AMD operates by mimicking an NFS server. When a file is accessed, AMD consults its map to decide where that file actually resides, mounts that partition over regular NFS if necessary, and presents a symlink to the actual location. All of this happens transparently, so that from the user's point of view she is simply following an ordinary Unix symlink to an ordinary file. AMD keeps its NFS mounts beneath a temporary directory, /a by default, a choice of name that can cause confusion. For example, the actual physical path of the directory /home/crosby is /a/home/crosby, but /a/home/crosby exists only if someone has recently accessed /home/crosby (or some other path on the same partition). Therefore, users should never access files through /a explicitly.
Diagram 1 demonstrates the three types of mounts involved: the native partition, the AMD pseudo partition, and the behind-the-scenes NFS partition.
[Diagram 1: directory trees for the native partition (/a, /bin, /home), the AMD pseudo partition (/home), and the behind-the-scenes NFS partition mounted under /a/home.]
AMD does a few other things behind the scenes to keep operations healthy. First of all, it sends out RPC requests at regular intervals to all the servers it knows about, to see if they are alive. If one isn't, AMD will not try to mount from it. This checking is also what allows AMD to offer access to replicated file systems; that is, you can set up multiple redundant servers, and if one goes down, AMD will try to mount from another.
To use AMD, you must first of all build one or more AMD maps. These maps are the configuration files that tell AMD exactly what to do. Many tasks can be done from an AMD map, and documenting them all would take more than one article. Listing 1 provides a sample AMD map with some common tasks, and with comments under each entry to explain it. In general, a map consists of two fields: the name, which is translated to the path name underneath the AMD mount point, and the options, which specify what to do with this path name.
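In that spirit, a minimal /home map might look like the sketch below; the host names (charlie, delta) and export paths are illustrative assumptions, not entries from Listing 1:

```
# Hypothetical /home map.  The /defaults entry supplies options
# inherited by every other entry in the map.
/defaults  type:=nfs;opts:=rw,intr,nosuid

# /home/crosby comes from a specific export on server "charlie".
crosby     rhost:=charlie;rfs:=/export/home/crosby

# Wildcard: any other /home/<name> maps to the matching export on
# "delta"; ${key} expands to the name being looked up.
*          rhost:=delta;rfs:=/export/home/${key}
```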
I have merely touched the surface of AMD features in Listing 1. The uses of AMD are almost endless—as the man page says, “A weird imagination is most useful to gain full advantage of all the features.” The documentation that comes with the package gives complete instructions for writing a map.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful ones, such as a tool that finds all the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
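The find-plus-grep combination just described can be sketched as a short script; the default directory and pattern are placeholders you would replace with your own:

```shell
#!/bin/sh
# List every .log file under a directory tree that contains a pattern.
# Defaults below are illustrative; pass your own directory and pattern.
dir=${1:-/home}
pattern=${2:-ERROR}

# -exec ... {} + batches file names into as few grep invocations as
# possible; grep -l prints only the names of files that match.
find "$dir" -name '*.log' -exec grep -l -- "$pattern" {} +
```

Because find hands grep the file list directly, this works even with thousands of logs, where a naive `grep pattern /home/*/*.log` would overflow the shell's argument limit.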
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
- SUSE LLC's SUSE Manager
- Google's SwiftShader Released
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- SuperTuxKart 0.9.2 Released
- Interview with Patrick Volkerding
- Parsing an RSS News Feed with a Bash Script
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.