Managing AFS: Andrew File System
Author: Richard Campbell
Publisher: Prentice Hall
Price: $45 US
Reviewer: Daniel Lazenby
Managing AFS: Andrew File System provides a practical UNIX system administration view of the AFS file system. From this book, the reader may gain an appreciation of the technical issues, skills and knowledge required to install, configure and manage an AFS environment.
Most vendor documentation focuses on how to install and configure the product and spends little time explaining the “why,” “when,” “where” or “what” of the product. A third-party book is never meant to replace the vendor's documentation. Still, a third-party book can often fill in many of the gaps in the vendor's documentation. Managing AFS spends considerable time describing “why one would want to use AFS”; “what benefits can be derived from using AFS”; “where might one use AFS”; “what components comprise an AFS file system and the relationship of those components” and “what must be done to install, configure and manage an AFS Cell”. Advanced AFS administration and how to debug an AFS installation are also addressed by Mr. Campbell.
Managing AFS is divided into 12 chapters and an appendix of AFS commands. The first two chapters provide an architectural and technical overview of AFS. Chapter 11 provides several AFS implementation case studies. A strategy and some tips for making a business case to support the use of AFS are provided in Chapter 12. The 50 or so AFS commands are briefly described in the Appendix. The sections in between the first two and last two chapters discuss setting up and managing an AFS Cell.
Chapters 3, 4 and 5 provide an introduction to AFS. These chapters cover setting up an AFS server, performing AFS operations on volumes and files, and setting up and administering an AFS client platform.
The focus of Chapters 6 and 7 shifts from system administration to AFS user account administration and security. Chapter 6, “Managing Users”, describes how to establish AFS user accounts using Transarc's implementation of Kerberos. Administration of Transarc's Kerberos database is also discussed. AFS user login, authentication, groups and directory/file access controls are addressed in Chapter 7, “Using AFS”. Transarc ships its own implementations of several conventional UNIX user commands, programming interfaces and system programs with AFS. Examples of UNIX commands and programs that Transarc has modified include chmod, df, close, lockf, ftpd, login and inetd. Differences between the standard and AFS implementations are described.
Chapter 8, “Archiving Data”, provides a momentary break from the other AFS administration concepts and tasks. As stated earlier, AFS supports the global distribution of files. With global distribution comes the challenge of file restoration. In addition to the user's data, data describing the AFS implementation and configuration must also be backed up. Challenges, tools and strategies used to back up and restore an AFS file system are presented here.
With the basics behind you, Chapter 9, “More AFS Administration”, explores the finer details of AFS administration. Server management, updating AFS binaries, job notification, changing the cell name, adding and removing database servers, adding and removing file servers, multi-homed servers and NFS-AFS gateways are just a few of the topics discussed.
Even a well-designed and implemented product will have problems. Chapter 10, “Debugging Problems”, offers a set of strategies for debugging an AFS installation. An explanation about when and how to use the available debugging tools is provided. This chapter also offers a set of typical AFS administration tasks that should be regularly performed and tested.
The original Andrew File System was created by a group of researchers at Carnegie Mellon University (CMU). They were striving to overcome the challenges associated with providing centralized file services in a distributed environment. Their AFS solution worked so well that many of the original researchers left CMU and formed the Transarc Corporation. AFS is now a registered trademark used by the Transarc Corporation to identify the commercial packaging of the Andrew File System. The AFS model was used as the basis for the Open Software Foundation's (OSF) Distributed File System (DFS) specification. Transarc has ensured there is a migration path from AFS to DFS.
A small shop with few workstations or shared files may have little need for AFS, whereas a large shop with many workstations, servers and the need to globally share files may have a greater need for it. In addition to being able to manage AFS servers and clients from a single workstation, AFS reportedly provides several other performance and financial benefits.
The book refers to “published” data on how AFS can support five to ten times more end users per server than other file systems. This increased user-to-server ratio translates into a need for fewer servers and fewer file storage administrators. An AFS file system can be made highly available using two or more AFS servers. This means that the loss of a server will not translate into a user being denied access to the file system.
One set of tests cited, from an organization using NFS file sharing, found that switching to AFS yielded several performance improvements: for the same NFS-style workload, network traffic fell by 60%, server load by 80% and task execution time by 30%.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
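The find-plus-grep combination described above can be sketched in a few lines of shell. This is a minimal, self-contained example: the directory layout and the search string “ERROR” are made-up stand-ins for a real /home tree and log entry.

```shell
# Build a small sandbox of log files to search (stand-in for /home).
logdir=$(mktemp -d)
mkdir -p "$logdir/app" "$logdir/db"
printf 'INFO start\nERROR disk full\n' > "$logdir/app/app.log"
printf 'INFO ok\n'                     > "$logdir/db/db.log"

# Chain the two tools: find locates every .log file, grep -l prints
# only the names of the files that contain the entry we care about.
matches=$(find "$logdir" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$logdir"
```

Running this prints only the path of app.log, since db.log contains no matching entry; pointing the same pipeline at /home is a one-word change.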
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to recognize when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model; it doesn't account for the total cost of ownership, nor for the advantages of real processing power, high availability and aggressive multithreading.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide.