UNIX Backup and Recovery
Author: W. Curtis Preston
Publisher: O'Reilly & Associates
Price: $36.95 US
Reviewer: Charles Curley
Buy this book. Now. Do not pass “Go”, do not let your hard drive crash. As soon as you have the book in hand (espresso optional), read Section I, which consists of Chapter 1, “Preparing for the Worst”, and Chapter 2, “Backing It All Up”. Break out the CD-ROM and see what's on it. Skim Section II, “Freely Available File System Backup & Recovery Utilities”. Read Chapter 8, “Bare-Metal Backup & Recovery Methods for Linux”. Then, do what the man says. Implement at least a minimal backup system, and build the tools you need to perform a bare metal recovery. Boot the bare metal recovery diskette, and make sure you can read the ZIP disk or whatever you used in its stead.
That was the short review. Any questions?
Most of what I know about backup and recovery I know from having done it. A stint at Colorado Memory Systems writing backup and recovery software for MS-DOS, Windows and various flavors of UNIX didn't hurt. Since then, I've been responsible for backup and recovery at a number of shops. Not least among those shops is my own home network, which consists of five computers, two of which are Linux boxes. Since my lady and I earn our livings with our computers, my ability to restore data may well be our ability to earn our livings.
The key lessons I have learned when it comes to backups are: (1) Murphy was an optimist, and (2) when you find out that Murphy was correct, it's usually too late to do anything about it.
Apparently, I'm not the only person who has learned these two lessons. Some people learn them the hard way, and Curtis Preston provides us with plenty of anecdotes about how he and some of his colleagues learned them the hard way. Now, you can read these horror stories and learn from someone else's mistakes. Believe me, that's a much smarter way to learn.
Curtis Preston knows the subject matter at hand with expertise that comes from years of experience in shops large and small. According to the biography on O'Reilly's web site:
The first environment that Curtis was responsible for went from seven small servers to 250 large servers in just over two years, running Oracle, Informix, and Sybase databases and five versions of UNIX. He started managing this environment with homegrown utilities and eventually installed the first of many commercial backup utilities. His passion for backup and recovery began with managing the data growth of this 24 x 7, mission-critical environment.
This book also draws on a network of about 400 experienced consultants called the “Collective Intellect®” to fill in gaps in Preston's own knowledge. The result is an excellent book that seamlessly covers several major UNIX versions, including Linux.
The writing style is informal, nonacademic and results-oriented. For example, if you want to use the find command to specify files to tar for backup, you can do this via a named pipe. Preston not only tells you this but shows you the commands necessary to make the pipe and use it. He then gives you the three variants in syntax for the various UNIX systems the book covers. It is detail like this that makes the book an excellent reference work as well as a textbook on the subject.
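As a taste of the technique, here is a minimal sketch of the named-pipe approach (the file names are illustrative, and the -T flag shown is GNU tar usage; as noted above, the book gives the equivalent flags for the other UNIX tars):

```shell
# Create some sample data to back up (illustrative paths).
mkdir -p demo && echo "important data" > demo/file.txt

# Make the named pipe that will carry the file list.
mkfifo backup.pipe

# find writes the list of files into the pipe in the background...
find demo -type f -print > backup.pipe &

# ...while tar reads the file names from the pipe via -T
# (GNU tar; other tars spell this flag differently).
tar -cf backup.tar -T backup.pipe

# Clean up the pipe when done.
rm backup.pipe
```

The point of the pipe is that the file list never has to land on disk, which matters when you are backing up a nearly full or read-only file system.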
Some of the subheadings indicate the informal style: “My Dad Was Right”; “Test, Test, Test”; “Don't Skip This Chapter!”; “The Muck Stops Here: Databases in Plain English”; “Trust Me about the Backups”; and (unfortunately inevitable in a book on backup and recovery) “An Ounce of Prevention...”.
The book is divided into six sections, each containing one or more of the 19 chapters. The Introduction contains chapters 1 and 2 and deals with general thinking about backups, such as how to organize your backup and recovery plans, what to back up and what not to back up. The introductory chapters emphasize planning and documenting, and rightly so.
The second section concentrates on backup and recovery tools you may already have, like tar and cpio, and tools you can readily get, like Amanda. Amanda may be worth the price of admission all by itself. It is designed for backing up and restoring multiple hosts over a TCP/IP network and has provisions for defining data sets and scheduling backups. It is in the same league as Arkeia and Quick Restore, but comes with source and is free of charge.
Section III delves into commercial backup utilities. As there are many excellent backup tools available for Linux, this is worth a read. Preston does not recommend any specific programs but does give an excellent overview of features to look for and “features” to avoid. The next chapter deals with High Availability. It starts with a definition of the term and then explains why even a highly available system still needs to be backed up.
Then, we get into the really fun stuff: bare metal backup and recovery. You've just had a fire, your computer now looks like something Salvador Dalí would paint and runs about as well as it looks. Now what?
There are chapters on SunOS/Solaris, Compaq Tru64 UNIX, HP-UX, IRIX, AIX, and, of course, Linux. The Linux chapter uses the tomsrtbt mini-distribution of Linux and an Iomega parallel port ZIP drive to recover an Intel architecture system. Preston also gives pointers on how to handle SPARC and Alpha systems. Like the man says, make sure you test this procedure before you use it. I'll add, don't try this on your production computer the first time. Use a sacrificial machine. But do it.
The next section is "Backing up Databases". The first chapter is an overview and general discussion. You can apply this to your MySQL or PostgreSQL installation. If you are running Informix, Oracle or Sybase, mine the appropriate chapter for useful information.
The final section, "Backup & Recovery Potpourri", covers ClearCase backup and recovery and miscellanea. In between those is a guided tour of backup hardware, which you should read when contemplating buying new hardware. However, you should also read the comments on media life and the care and feeding of backup media.
While Preston's experience is mostly in medium to large shops, and the book has a wealth of information for such shops, rest assured you can use this book even if all you have to back up is your own desktop computer. The basic concepts and techniques are the same regardless of the size of the shop.
Highly recommended. The job you save may be your own.
Charles Curley (firstname.lastname@example.org) lives in Wyoming, where he rides horses and herds cattle, cats and electrons. Only the last of those pays well, so he also writes documentation for a small software company headquartered in Redmond, Washington.