Recovery of RAID and LVM2 Volumes
The combination of Linux software RAID (Redundant Array of Inexpensive Disks) and LVM2 (Logical Volume Manager, version 2) found in modern Linux distributions offers both robustness and flexibility, but at the cost of complexity should you ever need to recover data from a drive formatted with software RAID and LVM2 partitions. I found this out the hard way when I recently tried to mount a system disk created with RAID and LVM2 on a different computer. The first attempts to read the filesystems on the disk failed in a frustrating manner.
I had attempted to put two hard disks into a small-form-factor computer that was really designed to hold only one, running the disks as a mirrored RAID 1 volume. (I refer to that system as raidbox for the remainder of this article.) This attempt did not work, alas. After running for a few hours, the machine would power off with an automatic thermal shutdown failure. I had already taken the system apart and started reinstalling with only one disk when I realized there were some files on the old RAID volume that I wanted to retrieve.
Recovering the data would have been easy if the system had not used RAID or LVM2. The steps would have been simply to connect the old drive to another computer, mount the filesystem and copy the files off the failed volume. I tried exactly that first, using a computer I refer to as recoverybox, but the attempt met with frustration.
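Had the disk held an ordinary partition, the whole recovery might have been a short session along these lines (a sketch only; the /dev/hdb device name, partition number and paths are assumptions for illustration, not values from the actual recovery):

dmesg | grep hd                     # see where the kernel registered the old drive
/sbin/sfdisk -l /dev/hdb            # find the partition holding the root filesystem
mkdir -p /mnt/olddisk
mount -t ext3 -o ro /dev/hdb2 /mnt/olddisk   # mount read-only to avoid disturbing the data
cp -a /mnt/olddisk/home /root/rescued        # copy the wanted files off
umount /mnt/olddisk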
Getting to the data proved challenging, both because the data was on a logical volume hidden inside a RAID device, and because the volume group on the RAID device had the same name as the volume group on the recovery system.
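In outline, that meant first assembling the degraded RAID array from the single surviving disk, and only then untangling the LVM name clash. A sketch of the first step, assuming the old disk shows up as /dev/hdb on recoverybox (an assumed device name):

# Assemble md2 from the one surviving mirror half; --run starts
# the array even though its second member is missing
mdadm --assemble /dev/md2 /dev/hdb3 --run
cat /proc/mdstat                    # confirm md2 came up as a degraded RAID 1
# LVM can now see the physical volume inside md2, but the duplicate
# VolGroup00 name still blocks activating its logical volumes
/sbin/pvscan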
Some popular modern operating systems (for example, Red Hat Enterprise Linux 4, CentOS 4 and Fedora Core 4) can partition the disk automatically at install time, setting up the partitions using LVM for the root device. Generally, they set up a volume group called VolGroup00, with two logical volumes, LogVol00 and LogVol01, the first for the root directory and the second for swap, as shown in Listing 1.
Listing 1. Typical LVM Disk Configuration
[root@recoverybox ~]# /sbin/sfdisk -l /dev/hda

Disk /dev/hda: 39560 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was
made for C/H/S=*/255/63 (instead of 39560/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *     0+     12      13-    104391   83  Linux
/dev/hda2        13    2481    2469   19832242+  8e  Linux LVM
/dev/hda3         0       -       0          0    0  Empty
/dev/hda4         0       -       0          0    0  Empty

[root@recoverybox ~]# /sbin/pvscan
  PV /dev/hda2   VG VolGroup00   lvm2 [18.91 GB / 32.00 MB free]
  Total: 1 [18.91 GB] / in use: 1 [18.91 GB] / in no VG: 0 [0   ]

[root@recoverybox ~]# /usr/sbin/lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [512.00 MB] inherit
The original configuration for the software RAID device had three RAID 1 devices: md0, md1 and md2, for /boot, swap and /, respectively. The LVM2 volume group was on the biggest RAID device, md2. The volume group was named VolGroup00. This seemed like a good idea at the time, because it meant that the partitioning configuration for this box looked similar to how the distribution does things by default. Listing 2 shows how the software RAID array looked while it was operational.
Listing 2. Software RAID Disk Configuration
[root@raidbox ~]# /sbin/sfdisk -l /dev/hda

Disk /dev/hda: 9729 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *     0+     12      13-    104391   fd  Linux raid autodetect
/dev/hda2        13      77      65     522112+  fd  Linux raid autodetect
/dev/hda3        78    9728    9651   77521657+  fd  Linux raid autodetect
/dev/hda4         0       -       0          0    0  Empty

[root@raidbox ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc3[1] hda3[0]
      77521536 blocks [2/2] [UU]

md1 : active raid1 hdc2[1] hda2[0]
      522048 blocks [2/2] [UU]

md0 : active raid1 hdc1[1] hda1[0]
      104320 blocks [2/2] [UU]
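For reference, an array laid out this way could have been built by hand with commands roughly like the following (a sketch; the installer on raidbox actually created the array, so these exact invocations are assumptions):

# Mirror each partition pair across the two disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3
pvcreate /dev/md2                # put LVM on the largest mirror
vgcreate VolGroup00 /dev/md2     # the name that later caused the clash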
If you ever give two volume groups the same name and something then goes wrong, you may face this same problem. Unfortunately, creating conflicting names is easy to do, because the installer's default name for the primary volume group is always VolGroup00.
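LVM2 distinguishes volume groups internally by UUID rather than by name, so a duplicate name can be resolved by renaming one group via its UUID and then activating it under the new name. A sketch (the UUID shown is a made-up placeholder, not one from either machine):

vgdisplay                       # lists both VolGroup00 groups, with their UUIDs
# Rename the group on the old disk by UUID (placeholder UUID shown)
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 VolGroupOld
vgchange -ay VolGroupOld        # activate it under the new, unique name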