Part III: AFS—A Secure Distributed Filesystem
The -noauth option is used because this command is run without any credentials for this cell.
Special administrative privileges are necessary to explore the authentication part of AFS, which is standard Kerberos, so I skip it here.
Now, find out where the current directory physically is located:
% fs whereis .
File . is on hosts andrew.e.kth.se VIRTUE.OPENAFS.ORG
This shows that two copies of this directory are available, one from andrew.e.kth.se and one from VIRTUE.OPENAFS.ORG.
% fs lsmount /afs/openafs.org/software/openafs/v1.2/1.2.10/binary/fedora-1.0
/afs/openafs.org/software/openafs/v1.2/1.2.10/binary/fedora-1.0 is a mount point for volume #openafs.1210.f10
The output shows that this directory is actually a mount point for an AFS volume named openafs.1210.f10.
Another AFS command allows us to inspect volumes:
% vos examine openafs.1210.f10 -cell openafs.org -noauth
This command examines the read-write version of volume openafs.1210.f10 in AFS cell openafs.org. The output should look like this:
openafs.1210.f10    536871770 RW    25680 K On-line
    VIRTUE.OPENAFS.ORG /vicepb
    RWrite 536871770 ROnly 536871771 Backup 0
    MaxQuota 0 K
    Creation Fri Nov 21 17:56:28 2003
    Last Update Fri Nov 21 18:05:30 2003
    0 accesses in the past day (i.e., vnode references)

    RWrite: 536871770    ROnly: 536871771
    number of sites -> 3
       server VIRTUE.OPENAFS.ORG partition /vicepb RW Site
       server VIRTUE.OPENAFS.ORG partition /vicepb RO Site
       server andrew.e.kth.se partition /vicepb RO Site
The output shows that this volume is hosted on server VIRTUE.OPENAFS.ORG in disk partition /vicepb. The next line shows the numeric volume IDs for the read-write and the read-only volumes. It also shows some statistics. The last three lines show where the one read-write (RW Site) and the two read-only (RO Site) copies of this volume are located.
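If you want to pull the numeric volume IDs out of this output programmatically, a little awk goes a long way. The sketch below works on a saved copy of the output rather than a live cell; the file name /tmp/vos-examine.txt and the here-document sample (taken from the output shown above) are assumptions for illustration. In a real cell you would pipe vos examine directly into awk instead:

```shell
# Sketch: extract the RW and RO volume IDs from saved "vos examine" output.
# The sample below mirrors the output shown above; with a live cell you could
# run: vos examine openafs.1210.f10 -cell openafs.org -noauth | awk ...
cat > /tmp/vos-examine.txt <<'EOF'
openafs.1210.f10    536871770 RW    25680 K On-line
    RWrite: 536871770    ROnly: 536871771
EOF

# On the "RWrite:" line, field 2 is the RW ID and field 4 is the RO ID.
awk '/RWrite:/ { print "RW volume ID:", $2; print "RO volume ID:", $4 }' \
    /tmp/vos-examine.txt
```

This prints the two IDs (536871770 and 536871771 for the sample above) on separate lines, ready for use in a script.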
To find out how many other AFS disk partitions are on the server VIRTUE.OPENAFS.ORG, use the command:
% vos listpart VIRTUE.OPENAFS.ORG -noauth
We learn that the partitions on the server are:
/vicepa
/vicepb
/vicepc
Total: 3
which shows a total of three /vicep partitions. To see which volumes are located in partition /vicepa on this server, execute:
% vos listvol VIRTUE.OPENAFS.ORG /vicepa -noauth
This command takes a while and eventually returns a list of 275 volumes. The first few lines of output look like this:
Total number of volumes on server VIRTUE.OPENAFS.ORG partition /vicepa: 275
openafs.10.src            536870975 RW      11407 K On-line
openafs.10.src.backup     536870977 BK      11407 K On-line
openafs.10.src.readonly   536870976 RO      11407 K On-line
openafs.101.src           536870972 RW      11442 K On-line
openafs.101.src.backup    536870974 BK      11442 K On-line
openafs.101.src.readonly  536870973 RO      11442 K On-line
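With 275 volumes in the listing, a summary is often more useful than the raw list. The sketch below tallies volumes by type (RW, RO, BK) from a saved copy of the listvol output; the file name /tmp/listvol.txt and the six-line sample are assumptions standing in for the real command's output:

```shell
# Sketch: tally volumes by type from saved "vos listvol" output.
# Stand-in for: vos listvol VIRTUE.OPENAFS.ORG /vicepa -noauth > /tmp/listvol.txt
cat > /tmp/listvol.txt <<'EOF'
openafs.10.src            536870975 RW      11407 K On-line
openafs.10.src.backup     536870977 BK      11407 K On-line
openafs.10.src.readonly   536870976 RO      11407 K On-line
openafs.101.src           536870972 RW      11442 K On-line
openafs.101.src.backup    536870974 BK      11442 K On-line
openafs.101.src.readonly  536870973 RO      11442 K On-line
EOF

# Field 3 is the volume type; count each type and print the totals.
awk '$3 ~ /^(RW|RO|BK)$/ { n[$3]++ }
     END { for (t in n) print t, n[t] }' /tmp/listvol.txt | sort
```

For the sample above this reports two volumes of each type; on the full listing it gives a quick picture of how a partition's read-write, read-only and backup volumes are balanced.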
Another command, bos, communicates with a cell's basic overseer server and reports the status of that cell's AFS server processes. Many more subcommands are available for the fs, pts, vos and bos commands. All of these AFS commands understand the help option (no dash in front of help) to show all available subcommands. Use fs <subcommand> -help (with the dash) to see the syntax for a specific subcommand.
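The help conventions described above can be summarized in a short loop. The snippet below merely prints the invocations you would type, so it runs anywhere; the commands themselves of course require an AFS client installation:

```shell
# Quick reference for the AFS help conventions described above:
# "help" (no dash) lists a command's subcommands; "-help" (with the dash)
# shows the syntax of one specific subcommand.
for cmd in fs pts vos bos; do
    echo "$cmd help              # list all $cmd subcommands"
done
echo "fs lsmount -help          # syntax for one specific subcommand"
```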
Several enhancement projects for AFS currently are underway. The most important project right now is to make AFS work with the 2.6 Linux kernels. These kernels no longer export their syscall table. Another project is to provide a disconnected mode that allows AFS clients to go off the network and continue to use AFS. Once they reconnect, the content of files in AFS space is re-synchronized.
Although all the different aspects of AFS can be overwhelming at first and the learning curve for setting up your own AFS cell is steep, the reward for using AFS in your infrastructure can be significant. Secure, platform-independent world-wide file sharing is a concept as attractive as serving your /usr/local/ area and all your UNIX home directories. And, all this comes with only minimal long-term administrative costs.
Resources for this article: /article/8079.
Alf Wachsmann, PhD, has been at the Stanford Linear Accelerator Center (SLAC) since 1999. He is responsible for all areas of automated Linux installation, including farm nodes, servers and desktops. His work focuses on AFS support, migration to Kerberos 5, a user registry project and user consultants.