Managing AFS: Andrew File System
Author: Richard Campbell
Publisher: Prentice Hall
Price: $45 US
Reviewer: Daniel Lazenby
Managing AFS: Andrew File System provides a practical UNIX system administration view of the AFS file system. From this book, the reader may gain an appreciation of the technical issues, skills and knowledge required to install, configure and manage an AFS environment.
Most vendor documentation focuses on how to install and configure the product and spends little time on the “why,” “when,” “where” or “what” of the product. A third-party book is never meant to replace the vendor's documentation, but it can often fill many of the gaps that documentation leaves. Managing AFS spends considerable time on why one would want to use AFS, what benefits can be derived from using it, where one might use it, what components make up an AFS file system and how those components relate, and what must be done to install, configure and manage an AFS Cell. Mr. Campbell also addresses advanced AFS administration and how to debug an AFS installation.
Managing AFS is divided into 12 chapters and an appendix of AFS commands. The first two chapters provide an architectural and technical overview of AFS. Chapter 11 provides several AFS implementation case studies. A strategy and some tips for making a business case to support the use of AFS are provided in Chapter 12. The 50 or so AFS commands are briefly described in the Appendix. The sections in between the first two and last two chapters discuss setting up and managing an AFS Cell.
Chapters 3, 4 and 5 provide an introduction to AFS. These chapters cover setting up an AFS server, performing AFS operations on volumes and files, and setting up and administering an AFS client platform.
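The volume operations covered in these chapters are performed with the `vos` and `fs` command suites. As a rough illustration (the server name, partition, volume name and cell path below are invented, not taken from the book), creating and mounting a user volume looks something like this:

```shell
# Create a volume named user.jdoe on a file server's /vicepa partition
vos create fs1.example.com /vicepa user.jdoe

# Graft the volume into the AFS namespace at a mount point
fs mkmount /afs/example.com/user/jdoe user.jdoe

# Set a quota (in kilobytes) and inspect the volume's status
fs setquota /afs/example.com/user/jdoe -max 50000
vos examine user.jdoe
```

The separation between `vos` (volume placement on servers) and `fs` (the client's view of the namespace) is central to how AFS decouples physical storage from the directory tree.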
The focus of Chapters 6 and 7 shifts from system administration to AFS user account administration and security. Chapter 6, “Managing Users”, describes how to establish AFS user accounts using Transarc's implementation of Kerberos, and how to administer Transarc's Kerberos database. AFS user login, authentication, groups and directory/file access controls are addressed in Chapter 7, “Using AFS”. Transarc ships its own implementations of several conventional UNIX user commands, programming interfaces and daemons with AFS; examples of commands and programs modified by Transarc include chmod, df, close, lockf, ftpd, login and inetd. The differences between the standard and AFS versions are described.
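AFS access control works on directories rather than individual files, via ACLs managed with `fs setacl` and protection groups managed with `pts`. A hedged sketch (the cell path, user and group names are invented for illustration):

```shell
# Grant user jdoe read and lookup rights on a directory
fs setacl /afs/example.com/proj/doc jdoe rl

# Create a protection group owned by jdoe and give it full rights
# ("all" expands to rlidwka: read, lookup, insert, delete, write,
# lock and administer)
pts creategroup jdoe:writers -owner jdoe
fs setacl /afs/example.com/proj/doc jdoe:writers all

# Display the resulting access control list
fs listacl /afs/example.com/proj/doc
```

This directory-level ACL model is one of the places where AFS semantics diverge from the UNIX mode bits that the modified chmod has to reconcile.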
Chapter 8, “Archiving Data”, provides a momentary break from the other AFS administration concepts and tasks. As stated earlier, AFS supports the global distribution of files. With global distribution comes the challenge of file restoration. In addition to the user's data, data describing the AFS implementation and configuration must also be backed up. Challenges, tools and strategies used to back up and restore an AFS file system are presented here.
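The backup primitives the chapter builds on are exposed through `vos`. As a rough sketch of the mechanics (volume, server and file names here are illustrative):

```shell
# Create a .backup clone of a volume: a cheap, consistent snapshot
vos backup user.jdoe

# Dump the clone in full (-time 0) to a file for archiving
vos dump user.jdoe.backup -time 0 -file /archive/user.jdoe.dump

# Restore the dump later to a server partition
vos restore fs1.example.com /vicepa user.jdoe -file /archive/user.jdoe.dump
```

Dumping the clone rather than the live read-write volume is what lets backups run without locking users out of their files.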
With the basics behind you, Chapter 9, “More AFS Administration”, explores the finer details of AFS administration. Server management, updating AFS binaries, job notification, changing the cell name, adding and removing database servers, adding and removing file servers, multi-homed servers and NFS-AFS gateways are just a few of the topics discussed.
Even a well-designed and implemented product will have problems. Chapter 10, “Debugging Problems”, offers a set of strategies for debugging an AFS installation. An explanation about when and how to use the available debugging tools is provided. This chapter also offers a set of typical AFS administration tasks that should be regularly performed and tested.
The original Andrew File System was created by a group of researchers at Carnegie Mellon University (CMU). They were striving to overcome the challenges associated with providing centralized file services in a distributed environment. Their AFS solution worked so well that many of the original researchers left CMU and formed the Transarc Corporation. AFS is now a registered trademark used by the Transarc Corporation to identify the commercial packaging of the Andrew File System. The AFS model was used as the basis for the Open Software Foundation's (OSF) Distributed File System (DFS) specification. Transarc has ensured there is a migration path from AFS to DFS.
A small shop with few workstations or shared files may have little need for AFS, whereas a large shop with many workstations, servers and the need to globally share files may have a greater need for it. In addition to being able to manage AFS servers and clients from a single workstation, AFS reportedly provides several other performance and financial benefits.
The book refers to “published” data on how AFS can support five to ten times more end users per server than other file systems. This increased user-to-server ratio translates into a need for fewer servers and fewer file storage administrators. An AFS file system can be made highly available using two or more AFS servers. This means that the loss of a server will not translate into a user being denied access to the file system.
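The high-availability claim rests on AFS read-only replication: a read-write volume can be released to read-only copies on several servers, and clients fail over among the copies. A hedged sketch of the mechanism (server and volume names are invented):

```shell
# Register read-only replication sites on two file servers
vos addsite fs1.example.com /vicepa proj.doc
vos addsite fs2.example.com /vicepa proj.doc

# Push the current read-write contents out to every read-only site
vos release proj.doc

# Clients now transparently switch servers if one copy is unreachable
```

Note that replication protects read access to relatively static data; the single read-write copy remains the point of update.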
One set of tests cited, from an organization that had been using NFS file sharing, found that switching to AFS brought several performance improvements: for the same workload, network traffic decreased by 60%, server load by 80% and task execution time by 30%.