Part III: AFS—A Secure Distributed Filesystem

Make your single sign-on infrastructure complete using a secure cross-platform distributed filesystem.

The Andrew File System (AFS) is a secure distributed global filesystem that provides location independence, scalability and transparent migration capabilities for data. AFS works across a multitude of operating systems and is used at many large sites that have run it in production for many years.

AFS provides unique features that are not available with other distributed filesystems, even though AFS is almost 20 years old. This age might make it less appealing to some, but IBM's release of AFS as open source in 2000 sparked new interest in its use and development. This article discusses the rich features AFS offers and invites readers to play with it.

Features and Benefits of AFS

AFS client software is available for Linux and for UNIX flavors from HP, Compaq, IBM, Sun and SGI. It also is available for Microsoft Windows and Apple's Mac OS X. This makes AFS the ideal filesystem for data sharing between platforms across local and wide area networks.

All AFS client machines have a local cache. A cache manager keeps track of users on a machine and handles the data requests coming from them. Data caching happens in chunks of files, which are copied from an AFS file server to local disk. The cache is shared between all users of a machine and persists over reboots. This local caching reduces network traffic and makes subsequent access to cached data much faster.
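
The cache can be inspected and tuned from the client with the fs command. A brief sketch follows; the block counts shown are illustrative, and changing the cache size with fs setcachesize requires root privileges on the client:

% fs getcacheparms
AFS using 38642 of the cache's available 100000 1K byte blocks.
% fs setcachesize 200000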

AFS is organized in a globally unique namespace. A global view of the AFS file space is shown in Figure 1. Pathnames leading to files are not only the same wherever the data is accessed, they also contain no server information. In other words, the AFS user does not know on which file server the data is located. To make this work, AFS has a replicated data location database that a client has to contact in order to find data. This is unlike the Network File System (NFS), in which the client knows which file server hosts a particular part of the NFS filesystem.
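
Although users never need to know where their data lives, the client can query the location database on their behalf. The fs whereis command names the hosting file server; the server name shown here is made up for illustration:

% fs whereis /afs/cern.ch/user/a/alf
File /afs/cern.ch/user/a/alf is on host afsserv1.cern.ch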

Figure 1. The AFS file space is the same anywhere and does not require clients to know which directory is on which server.

The different independent AFS domains are called cells and correspond to Kerberos realms. A typical AFS pathname looks like this: /afs/cern.ch/user/a/alf/Projects/. This pathname contains the AFS cell name but not the file server name.
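A client knows its own home cell and which other cells it can reach; both can be listed with the fs command. The database server names shown here are illustrative:

% fs wscell
This workstation belongs to cell 'cern.ch'
% fs listcells
Cell cern.ch on hosts afsdb1.cern.ch afsdb2.cern.ch.
Cell ir.stanford.edu on hosts afsdb1.stanford.edu afsdb2.stanford.edu.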

This location independence allows AFS administrators to move data from one AFS server to another without any visible changes to users. It also makes AFS easily scalable. If you run out of space or network capacity on your AFS file servers, simply add another one and migrate data to the new server. Clients do not notice this location change. AFS also scales well in terms of the number of clients per file server. On modern hardware, one AFS file server can serve up to about 1,000 clients without any problems.
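
Capacity planning is simple, too: free space on a file server's data partitions (described below) can be checked from any client with vos partinfo. The server name and sizes here are hypothetical:

% vos partinfo afs1.cern.ch
Free space on partition /vicepa: 8734568 K blocks out of total 70563840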

For users, the AFS file space looks like any other filesystem they have used. With the proper Kerberos credentials, they can access their AFS data from all over the world, thanks to the globally unique namespace. Here is an example: to be able to copy data from my home directory at CERN in Switzerland to my home directory at SLAC in California, I first need to authenticate myself against the two different AFS cells:

% kinit --afslog alfw@ir.stanford.edu
alfw@ir.stanford.edu's Password:
% kinit -c /tmp/krb5cc_5828_1 --afslog alf@cern.ch
alf@cern.ch's Password:

AFS comes with a command, tokens, to show AFS credentials:

% tokens
Tokens held by the Cache Manager:

User's (AFS ID 388) tokens for afs@cern.ch [Expires Apr  2 10:30]
User's (AFS ID 10214) tokens for afs@ir.stanford.edu [Expires Apr  2 09:49]
--End of list--

Now that I am authenticated, I can access my two AFS home directories:


% cp /afs/cern.ch/user/a/alf/Projects/X/src/hello.c \
   /afs/ir.stanford.edu/users/a/l/alfw/Projects/Y/src/.

On an AFS file server, the AFS data is stored on special partitions, called /vicepXX, with XX ranging from a–zz, allowing for a total of 256 partitions per server. Each of these partitions can hold data containers called volumes. A volume is the smallest entity that can be moved around, replicated or backed up. Volumes, in turn, contain the directories and files. A volume needs to be mounted inside the AFS file space to become visible; such mount points look exactly like directories.
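
Creating a new volume and making it visible is a two-step process: create the volume on a server partition with vos create, then attach it to the tree with fs mkmount. The server, partition and volume names below are made up for illustration:

% vos create -server afs1.cern.ch -partition /vicepa -name user.alf
% fs mkmount /afs/cern.ch/user/a/alf user.alf
% fs lsmount /afs/cern.ch/user/a/alf
'/afs/cern.ch/user/a/alf' is a mount point for volume '#user.alf'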

AFS is particularly well suited to serve read-only data such as the /usr/local/ tree because AFS clients cache accessed data. To make this work even better and more robustly, AFS allows read-only clones of data to be kept on different AFS file servers. If one server hosting such a clone goes down, clients transparently fail over to another server holding a read-only copy of the same data. This replication technique also can be used to clone data across servers that are geographically far apart. Clients can be configured to prefer the nearby copy and fall back to the more distant one. The openafs.org AFS cell, for example, is hosted on a server at Carnegie Mellon University in Pittsburgh, Pennsylvania, and on a server at the Royal Institute of Technology (KTH) in Stockholm, Sweden.
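
Replication is managed per volume: each read-only site is registered with vos addsite, and vos release pushes the current read-write contents out to all of them. The volume and server names in this sketch are hypothetical:

% vos addsite -server afs1.cern.ch -partition /vicepa -id sw.local
% vos addsite -server afs2.cern.ch -partition /vicepb -id sw.local
% vos release sw.local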

AFS offers a snapshot mechanism for backups. Snapshots are generated at a configurable time, say 2am, and work on a per-volume basis. They then can be backed up to tape without interfering with user activities. They also can be exposed to users by way of a simple mount point in their respective AFS home directories. This simple trick eliminates many user backup/restore requests, because the files from last night's snapshot are still visible in this special subdirectory, the mount point of the backup volume, in users' home directories.
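
A snapshot is created with vos backup (or vos backupsys for many volumes at once) and shows up as a .backup volume, which can then be mounted in the owner's home directory. The volume names are illustrative, and the OldFiles directory name is only a convention:

% vos backup user.alf
% vos backupsys -prefix user
% fs mkmount /afs/cern.ch/user/a/alf/OldFiles user.alf.backup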

The AFS communication protocol was designed for wide area networking. It uses its own remote procedure call (RPC) implementation, called Rx, which works over UDP. The protocol retransmits only the single bad packet in a batch of packets, and it allows a higher number of outstanding, unacknowledged packets than many other protocols do.

AFS administration can be done from any AFS client; there is no need to log on to an AFS server. This allows administrators to lock down the AFS server tightly, which is a big security plus. The location independence of AFS data also improves manageability. An AFS file server can be evacuated completely by moving all volumes to other AFS file servers. These moves are not visible to users. The empty file server then can undergo its maintenance, such as an OS upgrade or a hardware repair. Afterward, all volumes can be moved transparently back to the server.
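
To evacuate a server, an administrator first lists the volumes it hosts and then moves each one; both commands run on an ordinary AFS client holding admin tokens. The server and volume names are again hypothetical:

% vos listvol afs1.cern.ch
% vos move -id user.alf -fromserver afs1.cern.ch -frompartition /vicepa \
     -toserver afs2.cern.ch -topartition /vicepa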

Internally, AFS makes use of Kerberos to authenticate users. Out of the box this is Kerberos 4, but all major Kerberos 5 implementations are able to serve as a more secure substitute. AFS provides access control lists (ACLs) to restrict access to directories. Only Kerberos principals, or groups of principals, can be put in ACLs. This is unlike NFS, in which only UNIX user IDs are used for authorization. An additional authorization service, the protection service (PTS), is used to keep track of individual Kerberos principals and groups of principals.
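
ACLs are managed per directory with fs setacl and fs listacl, and PTS groups let users maintain their own access lists without administrator help. The group and user names below are made up; the rl rights grant read and lookup access:

% pts creategroup alf:collaborators
% pts adduser -user rmeier -group alf:collaborators
% fs setacl -dir /afs/cern.ch/user/a/alf/Projects/X -acl alf:collaborators rl
% fs listacl /afs/cern.ch/user/a/alf/Projects/X
Access list for /afs/cern.ch/user/a/alf/Projects/X is
Normal rights:
  alf:collaborators rl
  alf rlidwka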
