Bare Metal Recovery, Revisited
Imagine your disk drive has just become a very expensive hockey puck. Imagine you have had a fire, and your computer case now looks like something Salvador Dali would like to paint. Now what?
That's the way I started an article on this subject in the November 2000 issue of Linux Journal. The article described a process for backing up a computer and subsequently restoring it to the bare metal. The article described a suite of scripts that were part of both the backup process and the recovery process. Readers can find the article at www.linuxjournal.com/article/4175.
Since then I have added some scripts to the suite. Most of the new scripts are designed for network backups and take advantage of Secure Shell (SSH). (For more information on SSH, see Mick Bauer's “The 101 Uses of OpenSSH” in the January and February 2001 issues of LJ.) I've also made some changes to the scripts introduced in the original article. The suite of revised scripts is available at my home page (see Resources).
The biggest problem with my November 2000 article and the process it described is that the process required a lot of typing at the beginning of the recovery process. You have to enter partition boundaries and other data into fdisk manually, then check the results against your printout. (Printout!? for Murphy's sake!) Then you manually create the appropriate filesystems for each partition. Then you get to mount them, again manually.
This is a lot of typing. I don't know how many times I did test backups and restores on my test computer while I was writing the article. More than I ever want to do again, that's for sure. It's also error prone. After a while all those numbers start to blur together.
The obvious solution is a script or two. What we need is a script that will restore the partition information to a hard drive, then build the filesystems and mount them so that you can run the first stage restoration.
My first pass at this script is the script make.partitions, which is available in the tarball of scripts on my home page. It has two problems: first, it does not rebuild the partitions, so you still have to run fdisk manually; and second, it has to be created by hand for each computer. Add, delete, reformat or otherwise modify a partition, and you have to edit the script. That's not good enough. The script, which is GPLed, should look somewhat like Listing 1.
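To make the shape of such a script concrete, here is a minimal sketch of the kind of first-stage script Listing 1 describes. The device names and partition layout (/dev/hda1 as /boot, /dev/hda2 as swap, /dev/hda3 as root) are hypothetical, and a DRY_RUN guard is added so the commands merely print unless you ask otherwise:

```shell
#!/bin/sh
# Sketch of a make.dev.x-style first-stage script (cf. Listing 1).
# The partition layout below is hypothetical; adjust for your disk.
DRY_RUN=${DRY_RUN:-yes}    # set DRY_RUN=no to actually format and mount

run() {
    if [ "$DRY_RUN" = yes ]; then
        echo "$@"          # dry run: show what would be done
    else
        "$@"
    fi
}

# Rebuild the filesystem on each partition.
run mke2fs /dev/hda1       # /boot
run mkswap /dev/hda2       # swap
run mke2fs /dev/hda3       # /

# Mount the filesystems under /mnt so the first-stage
# restoration has somewhere to put the files.
run mkdir -p /mnt
run mount /dev/hda3 /mnt
run mkdir -p /mnt/boot
run mount /dev/hda1 /mnt/boot
```

The dry-run guard is not in the original suite; it is only there so the sketch is safe to run on a live machine.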
The second solution is a lot smarter. Why not automate the process? We use gcc to compile gcc. Heck, you can use gcc to compile Perl. Why not a script that creates the script that make.partitions should be? Why not a script-writing script?
make.fdisk parses the output from fdisk -l and mount -l and creates a new script for restoring a given hard drive.
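The parsing step can be sketched in a few lines of awk. This is not the actual make.fdisk code, just an illustration of the idea, simplified to primary partitions and ignoring partition types: read the partition lines of fdisk -l, account for the boot-flag column, and emit the keystrokes fdisk would need to recreate each partition:

```shell
#!/bin/sh
# Hypothetical sketch of the core of a make.fdisk-style generator:
# turn 'fdisk -l' partition lines into an fdisk keystroke stream.
# Simplified: primary partitions only, partition types not restored.
emit_fdisk_cmds() {
    awk '
    $1 ~ /^\/dev\// {
        num = $1; gsub(/[^0-9]/, "", num)        # partition number
        if ($2 == "*") { start = $3; end = $4 }  # boot flag shifts columns
        else           { start = $2; end = $3 }
        print "n"; print "p"; print num          # new primary partition
        print start; print end                   # cylinder boundaries
    }
    END { print "w" }                            # write table and quit fdisk
    '
}

# Example: feed it one captured fdisk -l line.
emit_fdisk_cmds <<'EOF'
/dev/hda1   *         1        26    208813+  83  Linux
EOF
# prints: n p 1 1 26 w (one keystroke per line)
```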
The first problem we face is one I mentioned in the original article: fdisk does not export partition information in a manner that allows it to be re-imported later on. While other versions of fdisk do allow exporting, tomsrtbt (the floppy-based distribution I recommend for bare metal restore) comes with a version of fdisk that doesn't, and I don't want to rebuild the tomsrtbt disk. We can handle this with something all well-behaved Linux programs have: I/O redirection. Given a program foo and a file of commands for foo called bar, we can feed the commands to foo by redirecting foo's input from the keyboard to bar, like this:
foo < bar
So what we want to be able to do is:
fdisk /dev/x < dev.x
where x is the name of the hard drive to be rebuilt.
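As an illustration, a dev.hda command file for a hypothetical two-partition drive might look like the here-document below: the same keystrokes you would type at fdisk's prompt, one per line, ending with w to write the table. (The layout is invented for this example; your file will reflect your own partitions.)

```shell
#!/bin/sh
# Hypothetical example of the kind of command file make.fdisk writes:
# fdisk keystrokes, one per line, for a two-partition drive.
cat > dev.hda <<'EOF'
n
p
1
1
26
n
p
2
27
1027
w
EOF

# Then the partition table can be rebuilt non-interactively.
# Commented out because it is destructive on real hardware:
# fdisk /dev/hda < dev.hda
```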
make.fdisk creates two files. One is an executable script, called make.dev.x, like Listing 1. The other, dev.x, contains the commands necessary for fdisk to build the partitions. You specify which hard drive you want to build scripts for (and so the filenames) by naming the associated device file as the argument to make.fdisk. For example, on a typical IDE system,
make.fdisk /dev/hda
spits out the make.dev.hda script and the input file for fdisk, dev.hda.