Bare Metal Recovery
Imagine your disk drive has just become a very expensive hockey puck. Imagine you have had a fire, and your computer case now looks like something Salvador Dalí would like to paint. Now what?
Bare metal recovery is the process of rebuilding a computer after a catastrophic failure. This article is a step-by-step tutorial on how to back up a Linux computer to be able to make a bare metal recovery, and how to make that bare metal recovery.
The normal bare metal restoration process is: install the operating system from the product disks, install the backup software (so you can restore your data), and then restore your data. Then, you get to restore functionality by verifying your configuration files, permissions, etc.
The process here saves you from installing the operating system from the product disks. It also restores only the files that were backed up from the production computer, so your configuration will be intact when you restore the system. This should save you hours of verifying configurations and data.
The target computer for this article is a Pentium computer with a Red Hat 5.2 Linux server installation on one IDE hard drive. It does not have vast amounts of data because the computer was set up as a “sacrificial” test bed. That is, I did not want to test this process with a production computer and production data. Also, I did a fresh “server” install before I started the testing so that I could always reinstall if I needed to revert to a known configuration.
The target computer does not have any other operating systems on it. While that simplifies the exercise at hand, it also means that if you have a dual-boot system, you will have to experiment to get the non-Linux OS to restore.
The process shown below is not easy. Practice it before you need it! Do as I did, and practice on a sacrificial computer.
Nota Bene: The sample commands will show, in most cases, what I had to type to recover the target system. You may have to use similar commands, but with different parameters. For example, below we show how to make a swap device on /dev/hda9. It is up to you to be sure you duplicate your setup, and not the test computer's setup.
The basic procedure is set out by W. Curtis Preston in Unix Backup & Recovery, ( http://www.ora.com/, http://www.oreilly.com/catalog/unixbr/), which I favorably reviewed in Linux Journal, October 2000. However, the book is a bit thin on some details. For example, exactly which files do you back up? What metadata do you need to preserve, and how?
We will start with the assumption that you have backed up your system with a typical backup tool such as Amanda, Bru, tar, Arkeia or cpio. The question, then, is how to get from toasted hardware to the point where you can run the restoration tool that will restore your data.
Users of Red Hat Package Manager (RPM)-based Linux distributions should also save RPM metadata as part of their normal backups. Something like:
rpm -Va > /etc/rpmVa.txt
in your backup script will give you a basis for comparison after a bare metal restoration.
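As a sketch of how that comparison might work after a restoration (the path /tmp/rpmVa-after.txt is just an example name, not anything the article prescribes):

```shell
# Re-verify every installed package against the RPM database after the restore:
rpm -Va > /tmp/rpmVa-after.txt

# Lines present only in the "after" file point at files whose size, checksum,
# permissions or ownership changed during the recovery.
diff /etc/rpmVa.txt /tmp/rpmVa-after.txt | less
```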
To get to this point, you need to have:
- Your hardware up and running again, with replacement components as needed. The BIOS should be correctly configured, including time, date and hard drive parameters.
- A parallel port Iomega Zip drive or equivalent. You will need at least 30MB of space.
- Your backup media.
- A minimal Linux system that will allow you to run the restoration software.
To get there, you need at least two stages of backup, and possibly three. Exactly what you back up and in which stage you back it up is determined by your restoration process. For example, if you are restoring a tape server, you may not need networking during the restoration process, so only back up networking in your regular backups.
You will restore in stages as well. In stage one, we build partitions, file systems, etc., and restore a minimal file system from the Zip disk. The goal of stage one is to be able to boot a running computer with a network connection, tape drives, restoration software or whatever we need for stage two.
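In outline, stage one might look something like the following. This is only a sketch: the device names match the sacrificial test machine (root on /dev/hda1, swap on /dev/hda9, a parallel-port Zip disk that typically appears as /dev/sda4), and the archive name stage1.tar.gz is hypothetical. Substitute your own layout.

```shell
# WARNING: destructive commands; run only on the machine being rebuilt.
fdisk /dev/hda                 # recreate the partition table by hand
mke2fs /dev/hda1               # rebuild the root file system
mkswap /dev/hda9               # make the swap device, as in the example above
swapon /dev/hda9               # and start using it

mount /dev/hda1 /mnt/target    # mount the new, empty root
mount /dev/sda4 /mnt/zip       # parallel-port Zip disks usually show up here

# Unpack the minimal system, preserving permissions (-p):
(cd /mnt/target && tar xzpf /mnt/zip/stage1.tar.gz)
```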
The second stage, if it is necessary, consists of restoring backup software and any relevant databases. For example, suppose you use Arkeia and build a bare metal recovery Zip disk for your backup server. Arkeia keeps a huge database on the server's hard drives. You can recover the database from the tapes, if you want. Instead, why not tar and gzip the whole Arkeia directory (at /usr/knox), and save that to another computer over nfs? Stage one, as we have defined it, does not include X, so you will have some experimenting to do to back up X as well as your backup program.
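That tar-over-NFS step could be sketched like this. The host name backuphost, the export /export/safe and the archive name are assumptions for illustration; /usr/knox is the Arkeia directory named above.

```shell
# Mount scratch space exported by another machine (hypothetical host and export):
mkdir -p /mnt/safe
mount -t nfs backuphost:/export/safe /mnt/safe

# Archive and compress the whole Arkeia tree, database included:
tar czf /mnt/safe/arkeia-usr-knox.tar.gz /usr/knox
```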
Of course, if you are using some other backup program, you may have some work to do. You will have to find out which directories and files it needs to run. If you use tar, gzip, cpio, mt or dd for your backup and recovery tools, they will be saved to and restored from our Zip disk as part of the stage one process described below.
The last stage is a total restoration from tape or other media. After you have done that last stage, you should be able to boot to a fully restored and operational system.