Recovery of RAID and LVM2 Volumes
The combination of Linux software RAID (Redundant Array of Inexpensive Disks) and LVM2 (Logical Volume Manager, version 2) found in modern Linux distributions offers both robustness and flexibility, but at the cost of complexity should you ever need to recover data from a drive formatted with software RAID and LVM2 partitions. I found this out the hard way when I recently tried to mount a system disk created with RAID and LVM2 on a different computer. My first attempts to read the filesystems on the disk failed in a frustrating manner.
I had attempted to put two hard disks into a small-form-factor computer that really was designed to hold only one, running the disks as a mirrored RAID 1 volume. (I refer to that system as raidbox for the remainder of this article.) This attempt did not work, alas. After running for a few hours, the machine would power off with an automatic thermal shutdown failure. I had already taken the system apart and started re-installing with only one disk when I realized there were some files on the old RAID volume that I wanted to retrieve.
Recovering the data would have been easy if the system did not use RAID or LVM2. The steps would have been to connect the old drive to another computer, mount the filesystem and copy the files from the failed volume. I first attempted to do so, using a computer I refer to as recoverybox, but this attempt met with frustration.
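On an ordinary disk without RAID or LVM2, that procedure amounts to only a few commands. The sketch below assumes the old drive appears on the recovery machine as /dev/sdb with its root filesystem on /dev/sdb2; the device names, mountpoint and paths are illustrative placeholders, not taken from the systems described here:

```shell
# Identify the newly attached drive and its partitions
# (/dev/sdb is an assumption -- check dmesg to see where
# the kernel actually registered the disk).
/sbin/sfdisk -l /dev/sdb

# Mount the filesystem read-only so nothing on the failed
# volume can be modified during recovery.
mkdir -p /mnt/recovery
mount -o ro /dev/sdb2 /mnt/recovery

# Copy the files off, preserving ownership and timestamps,
# then unmount. The source path is a hypothetical example.
cp -a /mnt/recovery/home/important-files /root/rescued/
umount /mnt/recovery
```

With RAID and LVM2 in the picture, as described below, several extra layers have to be peeled back before a mount command like this can succeed.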
Getting to the data proved challenging, both because the data was on a logical volume hidden inside a RAID device, and because the volume group on the RAID device had the same name as the volume group on the recovery system.
Some popular modern operating systems (for example, Red Hat Enterprise Linux 4, CentOS 4 and Fedora Core 4) can partition the disk automatically at install time, setting up the partitions using LVM for the root device. Generally, they set up a volume group called VolGroup00, with two logical volumes, LogVol00 and LogVol01, the first for the root directory and the second for swap, as shown in Listing 1.
Listing 1. Typical LVM Disk Configuration
[root@recoverybox ~]# /sbin/sfdisk -l /dev/hda

Disk /dev/hda: 39560 cylinders, 16 heads, 63 sectors/track
Warning: The partition table looks like it was made
         for C/H/S=*/255/63 (instead of 39560/16/63).
For this listing I'll assume that geometry.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *     0+     12      13-    104391    83  Linux
/dev/hda2        13    2481    2469   19832242+   8e  Linux LVM
/dev/hda3         0       -       0          0     0  Empty
/dev/hda4         0       -       0          0     0  Empty
[root@recoverybox ~]# /sbin/pvscan
  PV /dev/hda2   VG VolGroup00   lvm2 [18.91 GB / 32.00 MB free]
  Total: 1 [18.91 GB] / in use: 1 [18.91 GB] / in no VG: 0 [0   ]
[root@recoverybox ~]# /usr/sbin/lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [18.38 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [512.00 MB] inherit
The original configuration for the software RAID device had three RAID 1 devices: md0, md1 and md2, for /boot, swap and /, respectively. The LVM2 volume group was on the biggest RAID device, md2. The volume group was named VolGroup00. This seemed like a good idea at the time, because it meant that the partitioning configuration for this box looked similar to how the distribution does things by default. Listing 2 shows how the software RAID array looked while it was operational.
Listing 2. Software RAID Disk Configuration
[root@raidbox ~]# /sbin/sfdisk -l /dev/hda

Disk /dev/hda: 9729 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/hda1   *     0+     12      13-    104391    fd  Linux raid autodetect
/dev/hda2        13      77      65     522112+   fd  Linux raid autodetect
/dev/hda3        78    9728    9651   77521657+   fd  Linux raid autodetect
/dev/hda4         0       -       0          0     0  Empty
[root@raidbox ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 hdc3 hda3
      77521536 blocks [2/2] [UU]
md1 : active raid1 hdc2 hda2
      522048 blocks [2/2] [UU]
md0 : active raid1 hdc1 hda1
      104320 blocks [2/2] [UU]
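With the surviving disk attached to recoverybox, the md2 array holding the volume group has to be re-assembled in degraded mode before LVM2 can see the physical volume inside it. The sketch below assumes the old drive shows up as /dev/hdb on the recovery machine; substitute whatever device name the kernel actually assigns:

```shell
# Assemble the RAID 1 array from its single surviving member.
# --run forces md to start the array even though it is degraded
# (only one of the two mirror halves is present).
mdadm --assemble --run /dev/md2 /dev/hdb3

# Confirm the array came up; it should show as active with a
# [2/1] member count, indicating one missing mirror half.
cat /proc/mdstat
```

Only once /dev/md2 is active can pvscan find the physical volume it contains.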
If you ever give two volume groups the same name and something goes wrong, you may face this same problem. Conflicting names are easy to create, unfortunately, because the installer defaults to naming the primary volume group VolGroup00.
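Even with identical names, LVM2 can still tell the two volume groups apart by UUID, so the conflict can be resolved by renaming the one from the old disk. The following is a sketch, not the exact procedure from the systems above; the UUID is a placeholder you would replace with the value vgdisplay reports for the foreign volume group, and VolGroup01 is an arbitrary new name:

```shell
# Scan for volume groups; LVM will complain about two groups
# both named VolGroup00 once the RAID device is assembled.
vgscan

# Show the UUIDs so the two groups can be distinguished.
vgdisplay

# Rename the foreign volume group by its UUID to a new,
# non-conflicting name (placeholder UUID shown).
vgrename <UUID-of-old-VolGroup00> VolGroup01

# Activate the renamed group and mount its root logical
# volume read-only for recovery.
vgchange -ay VolGroup01
mkdir -p /mnt/raid
mount -o ro /dev/VolGroup01/LogVol00 /mnt/raid
```

vgrename accepts a UUID in place of a name precisely for this situation, where the name alone is ambiguous.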