Bare Metal Recovery

Most of us don't take the time to plan for disaster recovery. One excuse is not wanting to figure out what to do. Consider that excuse gone: this article gives you the step-by-step.

Imagine your disk drive has just become a very expensive hockey puck. Imagine you have had a fire, and your computer case now looks like something Salvador Dalí would like to paint. Now what?

Bare metal recovery is the process of rebuilding a computer after a catastrophic failure. This article is a step-by-step tutorial on how to back up a Linux computer so that a bare metal recovery is possible, and how to perform that recovery.

The normal bare metal restoration process is: install the operating system from the product disks, install the backup software (so you can restore your data), and then restore your data. Then, you get to restore functionality by verifying your configuration files, permissions, etc.

The process here will save you from installing the operating system from the product disks. It will also restore only the files that were backed up from the production computer, so your configuration will be intact when you restore the system. This should save you hours of verifying configurations and data.

The target computer for this article is a Pentium computer with a Red Hat 5.2 Linux server installation on one IDE hard drive. It does not have vast amounts of data because the computer was set up as a “sacrificial” test bed. That is, I did not want to test this process with a production computer and production data. Also, I did a fresh “server” install before I started the testing so that I could always reinstall if I needed to revert to a known configuration.

The target computer does not have any other operating systems on it. While this simplifies the exercise at hand, it also means that if you have a dual-boot system, you will have to experiment to get the non-Linux OS to restore.

The process shown below is not easy. Practice it before you need it! Do as I did, and practice on a sacrificial computer.

Nota Bene: The sample commands will show, in most cases, what I had to type to recover the target system. You may have to use similar commands, but with different parameters. For example, below we show how to make a swap device on /dev/hda9. It is up to you to be sure you duplicate your setup, and not the test computer's setup.

The basic procedure is set out by W. Curtis Preston in Unix Backup & Recovery (http://www.ora.com/, http://www.oreilly.com/catalog/unixbr/), which I favorably reviewed in Linux Journal, October 2000. However, the book is a bit thin on specifics. For example, exactly which files do you back up? What metadata do you need to preserve, and how?

We will start with the assumption that you have backed up your system with a typical backup tool such as Amanda, Bru, tar, Arkeia or cpio. The question, then, is how to get from toasted hardware to the point where you can run the restoration tool that will restore your data.
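For illustration only, such a regular backup might be as simple as a GNU tar run to a SCSI tape drive; the device name /dev/st0 and the excluded directories are assumptions, and your own tool and options will almost certainly differ:

mt -f /dev/st0 rewind                                # rewind the tape
tar cvpf /dev/st0 --exclude=/proc --exclude=/mnt /   # full backup, preserving permissions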

Users of Red Hat Package Manager (RPM)-based Linux distributions should also save RPM metadata as part of their normal backups. Something like:

rpm -Va > /etc/rpmVa.txt

in your backup script will give you a basis for comparison after a bare metal restoration.
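After the restoration, you can rerun the same report and compare the two; the path for the second listing is just an example:

rpm -Va > /tmp/rpmVa-after.txt
diff /etc/rpmVa.txt /tmp/rpmVa-after.txt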

To get to this point, you need to have:

  • Your hardware up and running again, with replacement components as needed. The BIOS should be correctly configured, including time, date and hard drive parameters.

  • A parallel port Iomega Zip drive or equivalent. You will need at least 30MB of space.

  • Your backup media.

  • A minimal Linux system that will allow you to run the restoration software.

To get there, you need at least two stages of backup, and possibly three. Exactly what you back up and in which stage you back it up is determined by your restoration process. For example, if you are restoring a tape server, you may not need networking during the restoration process, so you can leave the networking configuration to your regular backups rather than to the first-stage backup.

You will restore in stages as well. In stage one, we build partitions, file systems, etc., and restore a minimal file system from the Zip disk. The goal of stage one is to be able to boot a running computer with a network connection, tape drives, restoration software or whatever we need for stage two.
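As a rough sketch only, stage one on a machine like the test box might run along these lines; the root partition (/dev/hda8), the Zip device (/dev/sda4) and the mount points are assumptions from my setup, so substitute your own layout:

fdisk /dev/hda                                  # recreate the partition table by hand
mke2fs /dev/hda8                                # make the root file system
mkswap /dev/hda9                                # make the swap partition
swapon /dev/hda9
mkdir /mnt/root /mnt/zip
mount /dev/hda8 /mnt/root                       # mount the new root file system
mount /dev/sda4 /mnt/zip                        # mount the Zip disk
(cd /mnt/zip; tar cf - .) | (cd /mnt/root; tar xpf -)   # copy the minimal system across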

The second stage, if it is necessary, consists of restoring backup software and any relevant databases. For example, suppose you use Arkeia and build a bare metal recovery Zip disk for your backup server. Arkeia keeps a huge database on the server's hard drives. You can recover the database from the tapes if you want. Instead, why not tar and gzip the whole Arkeia directory (at /usr/knox) and save that to another computer over NFS? Stage one, as we have defined it, does not include X, so if your backup program needs X, you will have some experimenting to do to back up X as well.
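A minimal sketch of that idea, assuming the other computer's NFS export is already mounted at /mnt/backup (the mount point and archive name are assumptions, not anything Arkeia requires):

tar cf - /usr/knox | gzip -c > /mnt/backup/arkeia-usr-knox.tar.gz   # tar and gzip the Arkeia directory to the NFS mount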

Of course, if you are using some other backup program, you may have some work to do. You will have to find out which directories and files it needs to run. If you use tar, gzip, cpio, mt or dd for your backup and recovery tools, they will be saved to and restored from our Zip disk as part of the stage one process described below.

The last stage is a total restoration from tape or other media. After you have done that last stage, you should be able to boot to a fully restored and operational system.
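What that last stage looks like depends entirely on your backup tool. As a hedged illustration, if your regular backups were plain tar archives written to a SCSI tape drive at /dev/st0 (an assumption about your setup), the final pass might be roughly:

mt -f /dev/st0 rewind        # rewind the tape
tar xvpf /dev/st0 -C /       # restore everything, preserving permissions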
