Bare Metal Recovery, Revisited
As you look at the script make.fdisk shown in Listing 2 [available at ftp.linuxjournal.com/pub/lj/listings/issue100/5484.tgz], keep in mind when each thing happens. As with C source code, some things happen later, at runtime; others happen when the program is compiled, such as the evaluation of defines and the inclusion of header files.
On examining make.fdisk, the first thing we see is that it is a Perl script. Next is a brief description of what the script does. This is followed by a timestamp and two copyright statements. Then we see the usual announcement that the code is free software and distributed under the General Public License. Next is a detailed description of the problem with fdisk we've already seen—and the solution. It is good coding practice to document a program in this manner; it makes the program almost self-documenting.
Now we get to actual Perl code. The subroutine cut2fmt takes a series of column numbers and calculates a format string for later use with unpack. Right after the subroutine we use it to create a format string to unpack the output from fdisk.
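The fixed-column slicing that cut2fmt's unpack template encodes is the same idea behind the Unix cut command. As a rough illustration (the sample line and its column positions are hypothetical, not taken from any particular fdisk version):

```shell
# A hypothetical line of fdisk output; column positions are illustrative.
# cut2fmt turns a list of column numbers into an unpack template; the
# shell analogue of that fixed-column slicing is cut -c:
line='/dev/hda1   *        1      255  2048256   83  Linux'
device=$(printf '%s\n' "$line" | cut -c1-9)    # columns 1-9: device name
boot=$(printf '%s\n' "$line" | cut -c13)       # column 13: bootable flag
echo "$device $boot"   # prints "/dev/hda1 *"
```

The Perl version does the same thing in one unpack call, once cut2fmt has computed the template from the column numbers.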
After that is a series of definitions of the columns in fdisk's output. With these, we can index into the array created with unpack by name rather than by column number. This should make the script easier to read and more maintainable.
The directory where the rebuilt hard drive will be mounted is named $target so that the first stage restore can find it. Make sure this agrees with the definition of $target in your copy of the script restore.metadata.
Next, the code massages the device name to produce the filenames where we will send our output. Then we define the path to the directory where we will place the output files.
Labels are a tool Linux uses to abstract partitions. The problem with using device filenames in fstab is that adding or removing a hard drive can change the device file under which another partition shows up. Labels travel with the partition, so mounting by label always gets you the correct partition. Labels are a problem for us because tomsrtbt doesn't handle them.
The next section of code executes mount with a command-line switch to make it show the labels. If there is a label in any given line, we save the label and the device filename in a hash. That way, later on when we make the filesystem in the partition, we can assign the label. Also, we need to mount the partition by a device filename so that we can restore to it. We make a hash mapping from device filename to mountpoint so that later on we can build the mountpoint directories and mount the partitions.
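The extraction the script performs can be sketched in shell. The sample line below is hypothetical (the label, when one is present, is appended in square brackets by mount's label-showing switch):

```shell
# A hypothetical line of labeled mount output; the label appears in
# square brackets at the end:
line='/dev/hda3 on /home type ext2 (rw) [home]'
device=$(printf '%s\n' "$line" | awk '{print $1}')                # device filename
mtpt=$(printf '%s\n' "$line" | awk '{print $3}')                  # mountpoint
label=$(printf '%s\n' "$line" | sed -n 's/.*\[\(.*\)\]$/\1/p')    # label, if any
echo "$device $mtpt $label"   # prints "/dev/hda3 /home home"
```

The Perl code stores these pairs in two hashes: label keyed by device filename, and mountpoint keyed by device filename.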
Next is a typical Perl command to spawn a process and put the results into a filehandle, in this case FDISK. It is complete with error checking. Then we open our output file, which will eventually be redirected as input to fdisk.
Now we begin a loop to parse each line of the output from the system call to fdisk. We are interested in any line that has the device in it. If we find one, we massage it a bit, unpack it into the array @_ and further massage the array members.
If a partition number is less than five, it is either a primary partition, meaning it can have a filesystem in it, or an extended partition, meaning it can have a number of logical partitions in it. In either case, we write the commands to build the partition to the output file. If it is a Linux swap partition, we have to tell fdisk to change its partition type.
If we see a primary partition that is either FAT (but, for now, not FAT32), Linux or Linux swap, we append the appropriate command to $format to make the partition a FAT filesystem, an ext2 filesystem or a swap partition. Later on, we'll use $format to create the output script.
A partition number of five or greater can only be a logical partition, that is, one contained within an extended partition. As far as we are concerned, these are either Linux ext3fs, Linux swap partitions, FAT or anything else. As above, appropriate fdisk commands are sent to the output file and appropriate commands to create filesystems are appended to $format.
We look to see if there is a label for each ext2 partition. If there is, we use a command that will recreate that label on the new partition, otherwise we use the same command without a label.
You will notice that there are two commands to make each filesystem, with one commented out. The one commented out makes the filesystem with no bad-block checking. If I were installing to a brand-new hard drive, I would consider using this. The other does bad-block checking. I prefer to check for bad blocks when reusing a hard drive. The bad-block check is a simple read-only test, which is reasonable most of the time. You can add a write test, which is much more thorough but takes longer, by adding -w to the command-line options for bad blocks. The write test is destructive, but since you will be building a new filesystem in the partition, you don't care.
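The choice the script makes can be sketched as shell logic. The device and label below are hypothetical; -c asks mke2fs for the read-only bad-block scan, and the destructive write test comes from badblocks -w. We only build the command string here rather than run it against a disk:

```shell
# Hypothetical device and label; the real script takes these from the
# hashes it built while parsing mount and fdisk output.
dev=/dev/hda3
label=home
if [ -n "$label" ]; then
    # A label was found earlier, so recreate it on the new filesystem.
    cmd="mke2fs -c -L $label $dev"
else
    # No label: same command, minus the -L option.
    cmd="mke2fs -c $dev"
fi
echo "$cmd"   # prints "mke2fs -c -L home /dev/hda3"
```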
At the end of our line-parsing loop, if any partition is marked “bootable” (typically a MS-DOS, Windows or Windows NT partition because LILO ignores the bootable flag), we send the commands to make it bootable to the command file.
The last thing we do for the command file is send a “v”, which will have fdisk verify the newly built partition table. Then we send a “w”, which will cause fdisk to write the partition table to the hard drive and then exit. We then close our two files.
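Put together, the command file looks something like the sketch below. The partition numbers and cylinder counts are hypothetical; 82 is fdisk's type code for Linux swap, "a" toggles the bootable flag, and the file always ends with "v" then "w":

```shell
# Write a sample fdisk command file of the shape the script generates.
# Geometry and partition layout here are invented for illustration.
cat > /tmp/mkfdisk.example <<'EOF'
n
p
1
1
255
n
p
2
256
261
t
2
82
a
1
v
w
EOF
verify=$(tail -n 2 /tmp/mkfdisk.example | head -n 1)   # next-to-last: verify
write=$(tail -n 1 /tmp/mkfdisk.example)                # last: write and exit
echo "$verify $write"   # prints "v w"
```

Fed to fdisk's standard input, this creates the partitions, sets the swap type, marks partition 1 bootable, verifies and writes the table.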
Next, we open the file that will become our script and send an appropriate header to the script, similar to the header for this script. The first thing the script actually will do is use dd to write zeros over the first 1,024 bytes of the hard drive. This will clobber any existing master boot record (MBR) so that we don't have to worry about deleting partitions before creating the new ones.
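That first step of the generated script looks like the following. The generated script aims dd at the real disk (e.g., /dev/hda); here we aim it at a scratch file so the sketch is safe to run:

```shell
# Zero the first 1,024 bytes, clobbering the MBR and partition table.
# A temporary file stands in for the hard drive in this illustration.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=1 2>/dev/null
size=$(wc -c < "$img")
echo $size   # prints 1024
rm -f "$img"
```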
The next step is to create the command that will partition the hard drive, using the command file we've already created. Then the code walks through the hash of mountpoints, creating a comment line, a command to create the directory and then a command to mount the device filename to the directory.
We have to mount starting at the root partition so that mountpoints are created in the correct partition. For example, suppose /usr/local is on its own partition; we have to mount /usr before we build /usr/local. To ensure that is done, we sort the keys of the hash and process the hash in that order.
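Why a plain sort suffices: a parent directory's path is always a prefix of its children's paths, so it sorts first lexically. A quick check with hypothetical mountpoints:

```shell
# Lexical sort puts each parent before its children, so processing
# the mountpoint hash in sorted key order mounts /usr before /usr/local.
order=$(printf '%s\n' /usr/local / /usr /home | sort | tr '\n' ' ')
echo "$order"   # prints "/ /home /usr /usr/local "
```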
The last thing we do is change the mode of the files we've just created. Since paranoids live longer, we disallow anyone but root from even reading the script, and make it executable.
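In shell terms, that mode change amounts to the following (a scratch file stands in for the generated script):

```shell
# Owner (root, when the script runs) may read, write and execute;
# group and others get nothing.
f=$(mktemp)
chmod 0700 "$f"
mode=$(stat -c '%a' "$f")   # GNU stat: print the octal mode
echo "$mode"   # prints 700
rm -f "$f"
```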