Recovery of RAID and LVM2 Volumes

RAID and logical volume managers are great, until you lose data.

The combination of Linux software RAID (Redundant Array of Inexpensive Disks) and LVM2 (Logical Volume Manager, version 2) found in modern Linux distributions offers both robustness and flexibility, but at the cost of complexity should you ever need to recover data from a drive formatted with software RAID and LVM2 partitions. I found this out the hard way when I recently tried to mount a system disk created with RAID and LVM2 on a different computer. The first attempts to read the filesystems on the disk failed in a frustrating manner.

I had attempted to put two hard disks into a small-form-factor computer that was really designed to hold only one, running the disks as a mirrored RAID 1 volume. (I refer to that system as raidbox for the remainder of this article.) This attempt did not work, alas. After running for a few hours, it would power off with an automatic thermal shutdown failure. I already had taken the system apart and started reinstalling with only one disk when I realized there were some files on the old RAID volume that I wanted to retrieve.

Recovering the data would have been easy if the system had not used RAID or LVM2: the steps would have been to connect the old drive to another computer, mount the filesystem and copy the files from the failed volume. I first attempted to do just that, using a computer I refer to as recoverybox, but the attempt met with frustration.

Why Was This So Hard?

Getting to the data proved challenging, both because the data was on a logical volume hidden inside a RAID device, and because the volume group on the RAID device had the same name as the volume group on the recovery system.

Some popular modern operating systems (for example, Red Hat Enterprise Linux 4, CentOS 4 and Fedora Core 4) can partition the disk automatically at install time, setting up the partitions using LVM for the root device. Generally, they set up a volume group called VolGroup00, with two logical volumes, LogVol00 and LogVol01, the first for the root directory and the second for swap, as shown in Listing 1.
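For instance, on a freshly installed system of that kind, lvscan typically reports something along these lines (the sizes here are purely illustrative):

# lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [73.44 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.94 GB] inherit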

The original configuration for the software RAID device had three RAID 1 devices: md0, md1 and md2, for /boot, swap and /, respectively. The LVM2 volume group was on the biggest RAID device, md2. The volume group was named VolGroup00. This seemed like a good idea at the time, because it meant that the partitioning configuration for this box looked similar to how the distribution does things by default. Listing 2 shows how the software RAID array looked while it was operational.
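While such an array is healthy, /proc/mdstat typically reads roughly as follows (the device names and block counts here are only illustrative, not raidbox's actual values):

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
md1 : active raid1 hdb2[1] hda2[0]
      1048512 blocks [2/2] [UU]
md2 : active raid1 hdb3[1] hda3[0]
      77023232 blocks [2/2] [UU]
unused devices: <none>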

If you ever name two volume groups the same thing, and something goes wrong, you may be faced with the same problem. Creating conflicting names is easy to do, unfortunately, as the operating system has a default primary volume group name of VolGroup00.
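One cheap precaution before attaching a foreign disk is to note which volume group names the recovery system already uses, so you know in advance whether the old disk's group will clash. A quick check might look like this:

# vgs -o vg_name,vg_uuid     # list the volume groups (and their UUIDs) already on the recovery box
# pvs                        # and the physical volumes backing them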

______________________

Comments


Cannot Mount it

SirLouen's picture

I'm trying to mount it with the message:

mount: wrong fs type, bad option, bad superblock on /dev/mapper/datavg-datalv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try

Any ideas??

Regards!

RAID1 overwritten by LVM, Ubuntu 10.04 alternate installer

Andrew A's picture

Help, Help

I reinstalled my Linux box using the Ubuntu 10.04 alternate installer, as I was setting up LVM and RAID.
Before the upgrade I copied data to a degraded 500GB RAID1 drive with ext4; it was all working fine. I had even run fsck before the upgrade. The data was there.
I should have unplugged this drive before the reinstall.
I manually partitioned the drives, but did not change the drive with my data on it. I did not add this RAID drive to any LVM groups.
But when I rebooted and looked at the degraded RAID1 (yes, it did mount), there was no partition! I installed and ran gparted (GUI) and found the drive marked with the lvm flag!

Is there any way to rebuild the partition on the degraded RAID1 drive?
I have been using a Clonezilla CD to image the disk to another one the same size, so I can try various things.
I have tried testdisk with no luck; it can see the partition but not the files.
I tried photorec, but all I got was files with random names and no directories.
What does LVM write to a volume? Does it destroy the whole FAT?

some more info.

Anonymous's picture

This is very interesting. However, I see that fdisk or sfdisk was used; I would strongly suggest using only GNU parted. For example, "parted -l" produces the same kind of output as "fdisk -l" or "sfdisk -l", but parted also fully supports Intel EFI/GPT partition tables.

If I may, I can also suggest some reading that I wrote about a similar issue.
It's a howto on creating a RAID5 under Linux RHEL 5.4 using md, LVM and the ext4 filesystem (see http://panoramicsolution.com/blog/?p=92).

I also wrote up my experience testing for a RAID5 failure (with LVM and md) on Linux RHEL (http://panoramicsolution.com/blog/?p=118).

Rejean

Too fancy for our own good?

Anonymous's picture

In an enterprise production environment, perhaps there are situations where LVM over RAID is desirable, even essential. But a lot of the posts here seem to reference home users trying to recover their personal photos and stuff like that.

The redundancy of Raid is only a benefit if you can figure out how to get your data when a drive fails. The flexibility of LVM has to be weighed against the increased complexity of recovering from a failure.

I guess my bottom line is, don't make things more complicated than they need to be.

And when you're building your new system, before you put any vital data on it, break it and see if you can recover. Take notes about how you did it, and save those notes somewhere OTHER than on the system you're working on! You'll thank yourself later. Perhaps you've already found some useful links to help on the subject. But in 5 years, when a drive fails, will those links still be active? Don't count on it. Make a recovery plan, and keep it somewhere safe.

If you break it and you can't fix it, then you either need to learn more or simplify your system configuration.

And, of course, back up your data. And back it up onto more than one device (you can alternate daily/weekly, whatever is comfortable for your situation). If the lightning strike picks the moment you're doing backups, and it hoses the "if" AND the "of", what are you going to do then?

If possible, for critical situations, in addition to backups for the data, have backups for the HARDWARE as well. These fancy NAS RAID devices are great, until they fail. If all you have is the NAS box, and some consumer type PCs, exactly what are you going to stick those 3, 4, or 5 drives into? Rest assured that by the time your NAS box fails, you won't be able to buy an exact replica and just stick the drives into it -- that model will be in the pages of history. Then you will get to have a "learning experience".

Good luck to all.

Nice Work!

Scott Benninghoff's picture

Worked great. Saved me a bunch of time and frustration. Thanks.

Thanks! A few odds and ends

abject's picture

Just had to recover and/or reorganize a bunch of RAID (but not LVM) drives from a dead box myself. Your post was freaking invaluable.

To older sibling:

What if the machine you're using for recovery has RAID itself? When you append to mdadm.conf, can md0, 1 and 2 be renumbered to 3, 4 and 5?

Yes, exactly. For example, my box already has /dev/md0 through /dev/md3.
Scanning the raid drive to be repaired/recovered/reorganized gives:

# mdadm --examine --scan  /dev/sdc1 /dev/sdc2 /dev/sdc5 /dev/sdc6 /dev/sdc7 /dev/sdc8 /dev/sdc9
ARRAY /dev/md4 UUID=e79bfe6d:cd4689cc:506f7400:8254c421                                                           
ARRAY /dev/md3 UUID=18710190:b3bff9bb:0cea5505:846c322f                                                           
ARRAY /dev/md2 UUID=0373bb2d:c659367c:ca5df4d6:18fa15ee                                                           
ARRAY /dev/md1 UUID=ead88911:87e5b554:4dafeabe:7cd1121f                                                           
ARRAY /dev/md0 UUID=6be22f49:27fcc412:519c776e:f901e6f9

So, when editing /etc/mdadm/mdadm.conf (that's where Debian keeps it), I actually added:

ARRAY /dev/md8 UUID=e79bfe6d:cd4689cc:506f7400:8254c421
ARRAY /dev/md7 UUID=18710190:b3bff9bb:0cea5505:846c322f
ARRAY /dev/md6 UUID=0373bb2d:c659367c:ca5df4d6:18fa15ee
ARRAY /dev/md5 UUID=ead88911:87e5b554:4dafeabe:7cd1121f
ARRAY /dev/md4 UUID=6be22f49:27fcc412:519c776e:f901e6f9

Also, when you're finished with the "visiting" RAID devices, unmount them, of course, and then:

# mdadm --stop /dev/md8
mdadm: stopped /dev/md8

... and so on, until you've stopped all the visiting RAID drives (careful not to stop your real local RAID devices! :o)

Thanks again!

You people are gods

Dash Rendar's picture

Hi All,

Thanks to both the author and this commenter, I was able to successfully mount one of my old RAID drives in degraded mode and copy across the data I needed.

I just needed to edit the mdX number in mdadm.conf to something the host system was not using.

Also, in the LVM config, I had to change the physical volume device field to match, and change the section name to something unique (i.e., I changed "main{" to "oldmain{").
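For anyone else making the same edits, the relevant part of the recovered LVM configuration file ends up looking roughly like this (every name and value below is a placeholder, and "..." marks lines left exactly as they were):

oldmain {                             # section name changed from "main {" so it no longer clashes
        id = "..."                    # leave the UUID and the rest of the metadata untouched
        ...
        physical_volumes {
                pv0 {
                        device = "/dev/md5"    # updated to match the md number used on the host
                        ...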

Once I'd done this, I was able to run through the rest of the process without any trouble at all!!

Thank you very much!

Dash

Fantastic, thank you so

Anonymous's picture

Fantastic, thank you so much. A failed upgrade resulted in my RAID array not restarting, and this article pointed me just right to sort out the problem. Saved me much head-scratching.

Greaaaaaat !!

gaiol's picture

Hey man, you saved my life ! I love you !

Stuck at Listing 4

Anonymous's picture

Hello, I got to Listing 4 and am now stuck. It might have to do with my output from Listing 3:

sfdisk -l /dev/sdb
/dev/sdb1 Linux raid autodetect
/dev/sdb2 Linux LVM
/dev/sdb3 Linux raid autodetect
/dev/sdb4 empty

mdadm --examine --scan /dev/sdb1 /dev/sdb2 /dev/sdb3
ARRAY /dev/md1 level=raid1 num-devices=4 UUID=76a49610:8be3458e:6a4c59a2:58407b7e

mdadm --examine --scan /dev/sdb1 /dev/sdb2 /dev/sdb3 >> /etc/mdadm.conf

When I got to the next line, I noticed that there was nothing inside the mdadm.conf file EXCEPT the output of the mdadm command. So now my mdadm.conf file contains ONLY the following:
ARRAY /dev/md1 level=raid1 num-devices=4 UUID=76a49610:8be3458e:6a4c59a2:58407b7e devices=/dev/sdb1,missing

When I try the next two commands, I get:
mdadm -A -s
mdadm: no devices found for /dev/md1

cat /proc/mdstat
Personalities:
md_d1: inactive sdb1[3](S)
500352 blocks

unused devices:

Any ideas on what's wrong? Please let me know if I need to provide additional info. Thanks.

mdadm.conf

Mitch Frazier's picture

What's in your mdadm.conf?

Did you do this part:

  mdadm --examine --scan  /dev/sda1 /dev/sda2 /dev/sda3 >> /etc/mdadm.conf
  vi /etc/mdadm.conf

Edit /etc/mdadm.conf so that the devices statements are on the same lines as the ARRAY statements, as they are in Listing 4...
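For reference, once merged, each entry should sit on a single line of the general form below (the UUID and device names here are placeholders, not values from your system):

  ARRAY /dev/md1 level=raid1 num-devices=4 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx devices=/dev/sdb1,missing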

The error appears to indicate that there's no useful information in your mdadm.conf file.

Mitch Frazier is an Associate Editor for Linux Journal.

Thanks!

ephemere's picture

You are the man. Thank you so much. I searched for hours for a way to recover the data from my lvm2/raid1 disks and this worked perfectly. Cheers!

w00t!

chunkhead's picture

My main (4-year-old) desktop computer popped its motherboard, and I'd just received the machine I bought to replace it.

There wasn't a spare power connector in a location where I could put my old drive, but I figured it was no great loss - the new drive was larger, and I'd just rely on the AMANDA backups I'd been running to an outboard USB drive.

Loaded up CentOS, attached the USB drive I'd been using for backups, manually restored the AMANDA configuration, started extracting the data I needed and - poof. After a few hours of messing about, I determined that my backup drive had picked that time to die, and it wasn't just a matter of it spinning down when it wasn't supposed to (Seagate FreeAgent ICAC).

So it was out to the local store to pick up an external SATA dock to put the old drive in, and I was presented with the problem of getting access to my data. Thank you, you just saved me untold hours of sussing all that out via the man pages!

When I go to "Recovering and

csyckad's picture

When I go to "Recovering and Renaming the LVM2 Volume"
and do "dd if=/dev/md0 bs=512 count=255 skip=1 of=/tmp/md2-raw-start"

I cannot find the information about the volume group;
even when I pvscan and lvscan, it shows nothing.

Any ideas? Thanks
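(For reference, one quick way to check whether that dd dump caught any LVM metadata at all is to search it for readable text; this is only a sketch, so adjust the path and volume group name to your own setup:)

# strings /tmp/md2-raw-start | less       # page through any readable text in the dump
# grep -a -c VolGroup /tmp/md2-raw-start  # count occurrences of the volume group name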

Worked!

Anonymous's picture

Thanks for the tutorial - worked great

Recovering array after computer dies

maxxjr's picture

My computer was zapped by a lightning strike. It was powered off at the time, and the surge protector apparently just wasn't up to the task.

I have confirmed that the motherboard and power supply died. I am unsure of the hard disks. I had one system drive and 4 drives in a software raid5.

I have another PC with another HDD that I can use for recovery. How do I do this?

Do I plug in the old system drive and the raid5 drives into the new system, and boot from the old system drive, and let the existing configuration 'rediscover' where the raid5 drives have gone?

Can I just plug the raid5 drives into the new PC, do a new Linux installation, and have it automatically recognize the existing raid5 array, even if the SATA controller hardware is different?

Do I plug the raid5 drives into an existing linux installation, and work by command line to figure out which drive is which array member and get things going? Any pointers on this route?

Can I use an Ubuntu live cd on the spare PC with the drives connected to recover?

I have done "tests" to recover from a single drive failure, but haven't even thought about how to move the drives on a dead pc to a new one.

Thanks!

Raid Recovery sans md device? UGH!

denali206's picture

So I've been trying to recover my LVM2 over RAID 5 off and on for about a year now, yeah, tell me about it! It seems when I decided to upgrade my Ubuntu distro, I didn't think to back up and, of course, I completely lost my LVM2/RAID 5. I've tried many processes and pieces of software to recover it, but nothing has proved worthy. I landed on this great tutorial in hopes of recovery, only to be stopped dead in my tracks, as it seems I no longer have an md RAID device (/dev/md0). I followed this tutorial diligently and stopped at Listing 3, for obvious reasons. To sum it up, I'm hoping I can receive some direction as to how I may be able to recover my data in my particular situation.

The following is the output of the commands leading up to my dilemma as well as a few more that may help. A huge thanks in advance!

Listing 1. LVM Disk Configuration

ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sdb

Disk /dev/sdb: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ sudo /sbin/pvscan
PV /dev/sdb1 VG fileserver lvm2 [232.88 GB / 61.53 GB free]
PV /dev/sdc1 VG fileserver lvm2 [232.88 GB / 172.88 GB free]
PV /dev/sdd1 VG fileserver lvm2 [232.88 GB / 0 free]
PV /dev/sde1 VG fileserver lvm2 [232.88 GB / 0 free]
PV /dev/sdf1 VG fileserver lvm2 [232.88 GB / 0 free]
Total: 5 [1.14 TB] / in use: 5 [1.14 TB] / in no VG: 0 [0 ]
ubuntu@ubuntu:~$ sudo lvscan
inactive '/dev/fileserver/share' [40.00 GB] inherit
inactive '/dev/fileserver/backup' [60.00 GB] inherit
inactive '/dev/fileserver/media' [830.00 GB] inherit

Listing 2. RAID Disk Configuration

ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sdb

Disk /dev/sdb: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sdc

Disk /dev/sdc: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdc1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sdc2 0 - 0 0 0 Empty
/dev/sdc3 0 - 0 0 0 Empty
/dev/sdc4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sdd

Disk /dev/sdd: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdd1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sdd2 0 - 0 0 0 Empty
/dev/sdd3 0 - 0 0 0 Empty
/dev/sdd4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sde

Disk /dev/sde: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sde1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sde2 0 - 0 0 0 Empty
/dev/sde3 0 - 0 0 0 Empty
/dev/sde4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ sudo /sbin/sfdisk -l /dev/sdf

Disk /dev/sdf: 30401 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdf1 0+ 30400 30401- 244196001 8e Linux LVM
/dev/sdf2 0 - 0 0 0 Empty
/dev/sdf3 0 - 0 0 0 Empty
/dev/sdf4 0 - 0 0 0 Empty
ubuntu@ubuntu:~$ cat /proc/mdstat
Personalities :
unused devices:

Listing 3. Scanning a disk for RAID array members

ubuntu@ubuntu:~$ mdadm --examine --scan /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
ubuntu@ubuntu:~$

ubuntu@ubuntu:~$ sudo mdadm -E /dev/sdb
mdadm: No md superblock detected on /dev/sdb.

ubuntu@ubuntu:~$ sudo blkid
/dev/sdb1: UUID="Jppgxo-zjR2-IgZD-qD7T-800y-y46t-g0vyU8" TYPE="lvm2pv"
/dev/sdc1: UUID="3eVlZN-PoI1-H8yB-aSGt-k3z2-LYwx-1bHu55" TYPE="lvm2pv"
/dev/sdd1: UUID="cJEYms-03vV-agy8-JWWh-5kCH-Y61W-s9LFse" TYPE="lvm2pv"
/dev/sde1: UUID="67i1d6-B6LC-YGdy-6Zcv-aBJq-K0UY-Op1PWB" TYPE="lvm2pv"
/dev/sdf1: UUID="X6I6jt-A3R7-Qbcu-rzjk-aH4t-IRBw-mwcOux" TYPE="lvm2pv"

ubuntu@ubuntu:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name fileserver
PV Size 232.88 GB / not usable 673.00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 59618
Free PE 15752
Allocated PE 43866
PV UUID Jppgxo-zjR2-IgZD-qD7T-800y-y46t-g0vyU8

--- Physical volume ---
PV Name /dev/sdc1
VG Name fileserver
PV Size 232.88 GB / not usable 673.00 KB
Allocatable yes
PE Size (KByte) 4096
Total PE 59618
Free PE 44258
Allocated PE 15360
PV UUID 3eVlZN-PoI1-H8yB-aSGt-k3z2-LYwx-1bHu55

--- Physical volume ---
PV Name /dev/sdd1
VG Name fileserver
PV Size 232.88 GB / not usable 673.00 KB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 59618
Free PE 0
Allocated PE 59618
PV UUID cJEYms-03vV-agy8-JWWh-5kCH-Y61W-s9LFse

--- Physical volume ---
PV Name /dev/sde1
VG Name fileserver
PV Size 232.88 GB / not usable 673.00 KB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 59618
Free PE 0
Allocated PE 59618
PV UUID 67i1d6-B6LC-YGdy-6Zcv-aBJq-K0UY-Op1PWB

--- Physical volume ---
PV Name /dev/sdf1
VG Name fileserver
PV Size 232.88 GB / not usable 673.00 KB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 59618
Free PE 0
Allocated PE 59618
PV UUID X6I6jt-A3R7-Qbcu-rzjk-aH4t-IRBw-mwcOux

ubuntu@ubuntu:~$ sudo vgdisplay
--- Volume group ---
VG Name fileserver
System ID
Format lvm2
Metadata Areas 5
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 5
Act PV 5
VG Size 1.14 TB
PE Size 4.00 MB
Total PE 298090
Alloc PE / Size 238080 / 930.00 GB
Free PE / Size 60010 / 234.41 GB
VG UUID kkDoou-3xxF-P1zZ-vAKX-3rlp-PZ2j-Rph3th

ubuntu@ubuntu:~$ sudo lvdisplay
--- Logical volume ---
LV Name /dev/fileserver/share
VG Name fileserver
LV UUID aNKpVU-0TPC-vTvP-Cf4e-ZqNh-ILPY-ZgaTBF
LV Write Access read/write
LV Status NOT available
LV Size 40.00 GB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/fileserver/backup
VG Name fileserver
LV UUID 18TTgL-TGz2-rxgP-MyT5-dVsD-mylO-is7VKv
LV Write Access read/write
LV Status NOT available
LV Size 60.00 GB
Current LE 15360
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/fileserver/media
VG Name fileserver
LV UUID VC4KPD-Z4O5-rSGM-P3LR-S1Xu-Ho1Y-9l1dO1
LV Write Access read/write
LV Status NOT available
LV Size 830.00 GB
Current LE 212480
Segments 4
Allocation inherit
Read ahead sectors auto

Did you LVM the RAID? Or RAID the LVM?

abject's picture

IIRC (and I might not), I think you can either:

  1. Make RAID devices (/dev/mdn's)
  2. Use the RAID devices as physical volumes to use with LVM

OR

  1. Use real hard drives/partitions as the physical volumes for LVM
  2. Slice, dice, mix and match these into LVM logical volumes
  3. Combine the logical volumes into RAID sets, making /dev/mdn's out of 2 or more LVM logical volumes.

So, is it LVM2 over RAID5? Or maybe RAID5 over LVM2 (with maybe some LVM2 on top...)
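(For reference, the first arrangement, LVM on top of md RAID, which is what the article's raidbox used, is typically built along these lines; the device names and sizes are only examples:)

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3
# pvcreate /dev/md2                        # turn the md device into an LVM physical volume
# vgcreate VolGroup00 /dev/md2             # build the volume group on top of the array
# lvcreate -L 70G -n LogVol00 VolGroup00   # carve out a logical volume for the root filesystem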

HTH.

- Ab.

p.s. LVM is great and all, but I really like the idea of being able to pull a physical disk drive out of the wreckage and having it full of complete files and stuff. So, for me, it's RAID1 and no LVM. I guess it's 'cause I'm old school and never really trust my backups (As if!) to be current at the moment of truth. Or something.

need some help

joenoob's picture

Hi. I'm a Linux noob trying to recover from fatal yum updates on an RHEL 5 box with a 2-disk Intel Matrix RAID 1. I'm stuck at "Listing 3. Scanning a Disk for RAID Array Members".

sh-3.2# mdadm --examine --scan /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5 ARRAY /dev/md2 level=raid1 num-devices=2
does nothing

sh-3.2# mdadm --examine --scan /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5 ARRAY /dev/md2 --level=raid1 --num-devices=2
says "mdadm:option --level not valid in misc mode

Also, how/why would I know to try md2?

Please help!

Your command should stop

spong's picture

Your command should stop before ARRAY.

Your command can be:
mdadm --examine --scan /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 /dev/sda5

That's all.

Best regards

Thanks! I'll try again

joenoob's picture

Thanks! I'll try again shortly.

Include field(s) after the timestamp

Alex Thomas's picture

Very useful, thanks a lot!

I did a dd copy of the start of the physical volume as suggested and found that some identification fields after the timestamp were needed to make a workable config file for vgcfgrestore, i.e. I needed the descriptive lines from:

MyVolName{
id = "xxxx..."
seqno = ...

down to

# Generated by ...

followed by

contents = ...
version = ...

(I also included description, creation_host and creation_time - these fields probably aren't required).
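In other words, the hand-built file ends up shaped roughly like this (the values are placeholders, and "..." marks material copied straight from the dump):

MyVolName {
id = "xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx"
seqno = 2
...
}
# Generated by LVM2 ...
contents = "..."
version = ...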

Thanks

lsdhfkgoadh's picture

Add one more to the "Thanks, you saved me!" pile.

Get these pages formatted

Anonymous's picture

Get these pages formatted with fully BLACK text, not gray. Why reduce the contrast and make the text hard to read?

Thanks! I had just installed

Gr8ful's picture

Thanks! I had just installed RAID 1, but before putting my data on it, I wanted to do a recovery test to make sure everything was working. I had no clue, and your guide helped me all the way.

People like you make the world a better place, thanks man!

Thank You

John Doyle's picture

Thank you for this incredibly helpful post. With 6 years of kids' pictures locked up in this situation, you can count me as another marriage saved.

Thanks

LVM recovery on LaCie ethernet disk 1T

Carlo's picture

My LaCie Ethernet Disk (with disks in a RAID 5 configuration) doesn't boot anymore. I need to recover some data, so I connected the three hard disks that form the array to my PC. Booting from a System Rescue live CD and following the guidelines of this article, I was able to rebuild the array, but I had to stop at point 6, when trying to recover the LVM volumes, as I get the following string from the first sectors of the disk and I don't know how to proceed:

[IPStorPartition version="3.0" size="592" owner="INFMNetDisk" checksum="" signature="IpStOrDyNaMiCdIsK" dataStartAtSectorNo="16128" logvol="0" category="Virtual Device"/]
[PhysicalDev guid="0181916a-a8bc-4d21-d1d3-000046239e03" Comment="" WorldWideID="FALCON LVMDISK-M09N01 v1.0-0-0-00"/]

[DynamicDiskSegment guid="0d947a14-47de-5ef8-0f5c-000046239e1b" firstSector="16128" lastSector="22271" owner="INFMNetDisk" dataset="1176739355" seqNo="0" isLastSegment="true" sectorSize="512" type="Umap" lunType="0" timestamp="1176739355" umapTimestamp="0" deviceName="NASDisk-00002" fileSystem="XFS"/]

[DynamicDiskSegment guid="0d947a14-47de-5ef8-0f5c-000046239e1b" firstSector="22272" lastSector="629167871" owner="INFMNetDisk" dataset="1176739355" seqNo="1" isLastSegment="true" sectorSize="512" type="NAS" lunType="0" timestamp="1176739355" umapTimestamp="0" deviceName="NASDisk-00002"/]

Please help me
Thank you very much

Recovering data from an LaCie Ethernet Disk

Flavio's picture

Today a customer brought in a LaCie Ethernet Disk 2G that didn't boot, just like yours. I connected the disks to a spare computer and booted with a Knoppix CD, and after some hacking I finally managed to get to the data. Not sure if it actually uses LVM or what, but I couldn't recover a valid LVM configuration, so I had to do without.

The XML-like data at the beginning of the volume is actually very useful because it tells you where the data partition is located within the md device. The first sector is 22272, i.e. 11403264 bytes from the beginning. To reach it, create a loop device:

# losetup -o 11403264 -r /dev/loop1 /dev/md1

(I assume you already assembled the RAID device)

Then just mount the loop device:

# mount -o ro -t xfs /dev/loop1 /mnt/

(note that everything is mounted read-only, just in case)

That's it, mount a USB disk or a network share and get that stuff to a safe place!

Lacie Raid1 recovery

Christian Mozetic's picture

I'm having similar trouble with my LaCie 2Big Network Drive, which was configured as RAID1.
It would be wonderful if the loop device solution were to work for me.
However, my understanding of Linux is very limited and, apparently, just copy-pasting the above into my terminal window and editing the (obvious) volume name is not enough.

sfdisk -l shows:

Disk /dev/sda: 60801 cylinders, 255 heads, 63 sectors/track
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sda1 0+ 124 125- 1004031 5 Extended
/dev/sda2 125 60800 60676 487379970 83 Linux
/dev/sda3 0 - 0 0 0 Empty
/dev/sda4 0 - 0 0 0 Empty
/dev/sda5 0+ 15 16- 128457 82 Linux swap / Solaris
/dev/sda6 16+ 16 1- 8001 83 Linux
/dev/sda7 17+ 17 1- 8001 83 Linux
/dev/sda8 18+ 39 22- 176683+ 83 Linux
/dev/sda9 40+ 123 84- 674698+ 83 Linux
/dev/sda10 124+ 124 1- 8001 83 Linux

I'm entering the following but I get the error message you see after I try to mount:

losetup -o 11403264 -r /dev/loop1 /dev/sda

mount -o ro -t xfs /dev/loop1 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
missing codepage or helper program, or other error

Also tried sda1 and sda2 which is the main data partition

Questions:
Should I be using sda, sda1, sda2 or some other sda?
How do I access the xml like data you mentioned to check which is the first sector on my drive?

Thanks,
Christian.

Solution Found

Christian Mozetic's picture

OK, the solution to my LaCie RAID failure can be found here:

http://www.linuxforums.org/forum/peripherals-hardware/126831-reseting-la...

It consists of:

1) installing mdadm to administer the Linux RAID partitions
2) Executing these commands to mount the data partition

mdadm --assemble --run /dev/md0 /dev/sda2
mount /dev/md0 /mnt/lacie2

where sda is the drive name I get and 2 is the data partition.

What the linked post doesn't tell you is that you need to create the /mnt/lacie2 directory beforehand. It also doesn't tell you how to install mdadm.
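On a Debian- or Ubuntu-based rescue system, the whole sequence would therefore look roughly like this (the device and partition names are the ones from this particular LaCie unit; yours may differ):

# apt-get install mdadm                        # the md administration tool is not always preinstalled
# mkdir -p /mnt/lacie2                         # create the mount point first
# mdadm --assemble --run /dev/md0 /dev/sda2    # start the (possibly degraded) array from the data partition
# mount /dev/md0 /mnt/lacie2                   # mount the recovered filesystem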

Thanks, this helped a lot.

cactuz's picture

The system disk in my file server crashed the other day, and as I was installing it all over again tonight, I had no problem figuring out how to mount my old LVMs that weren't in a RAID configuration... but for my precious backup volume for documents/photos I was using RAID, and I couldn't figure out how to get it back online until I googled and found this article.

So thank you for saving me many hours of work!

Excellent Article - Another system saved!

Anonymous's picture

Excellent stuff! I had attached two VolGroup00s to one system. Realizing that I could not access the data on the second one, I removed it following instructions at http://www.linuxtopia.org/online_books/linux_lvm_guide/removepvsfromvg.html. OOPS! This article saved me.

Another "life" saved!

Michael Hill's picture

While juggling drives and trying to fix an annoying boot problem, I managed to overwrite the MBR of one of the drives. I had unwisely chosen to use the entire device as an LVM PV (instead of a partition spanning the whole drive), so that whacked the PV metadata. Many thanks to Richard for writing the original article, and in particular to Toby Fruth, whose reply led me through the steps to recover my PV and all the LVs on it. I was fortunate in not having to reconstruct the VG config file from raw sectors; LVM made backup copies of the VG configs every time I made a change, so I had a recent backup copy at hand.

Thanks again for helping me recover access to my data!

vgrename with UUID?

Elrond's picture

Many thanks for the insight into LVM2's internal workings for metadata. I always like to have an idea of how stuff is laid out on disk, so I can *worst case* do dmsetup myself.

The subject line pretty much gives my question:

I found in vgrename(8) that it seems to support vgrename VG-UUID NewName. This looks like the perfect way to rename conflicting VG names. Did anyone try this?
(Yes, modulo all the md trouble.)

vgrename UUID NewVolumeGroup

Zultron's picture

Works great, and it's easier and less dangerous than editing the header block.
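For anyone else wanting to try it, the sequence is roughly as follows (the new name is just an example; read the real UUID out of vgs first):

# vgs -o vg_name,vg_uuid             # note the UUID of the conflicting volume group
# vgrename <VG-UUID> recoveredvg00   # rename it by UUID so the name no longer clashes
# vgchange -a y recoveredvg00        # then activate it under the new name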

Thanks so much

JR Peck's picture

I've spent this morning trying to mount a 2.5" drive from a failed laptop that I had placed in a USB enclosure. No joy until this article got me going, and I can't say how much I appreciate it.

You are welcome

Richard Bullington-McGuire's picture

I am glad you were able to retrieve your data.

Many of the people who left comments on this article had helpful suggestions that are even simpler than the methods I outlined in the article.

THANKS!

Anonymous's picture

I ran into this issue while trying to recover the original drive for my home server, and a round of Google searches led me to this thread. Thank you so much for the excellent explanation of this issue!

I took a different approach, however, once I understood what was in conflict. I popped in a new drive that I wanted to recover the data to, and just reinstalled my OS but did NOT use LVM this time - just a good old-fashioned swap/boot/root partition scheme. Then I re-ran linux rescue, which mounted the old filesystem easily and made it very easy for me to mount the secondary disk. I copied all the files over and put back my configs. All said and done, just a couple of hours and my system was back to normal after a drive failure. AWESOME!

Understanding is half the battle

Richard Bullington-McGuire's picture

It sounds as if you have another good solution to the basic problem, as long as RAID is not involved.

LVM recovery

srinivas Chamarthi's picture

Hey! Thanks a lot for clearing up the confusion regarding recovery of LVM2 volumes. I got a successful recovery! Hats off to you.

Am I missing something?

Anonymous's picture

After you installed the failed RAID disk into the recovery box (or hooked it up via USB), couldn't you have booted the recovery box with a Live CD and simply mounted only the drive partitions you needed?

In other words, just don't mount the drive in the recovery box that has the equivalent volume group. That way there would have been no conflict, right?

If I understand the problem correctly, the problem is NOT that the RAID drive does NOT HAVE AN LVM CONFIG (or that it was damaged); it's just that it's the SAME as the recovery box's LVM config (e.g., it has the same volume group name), which prevents it from being seen (I think?).

Another way of asking the question is this: if the recovery box did NOT have any LVM partitions or LVM config native to it, could I simply plug the RAID drive in and have the recovery box automagically find the RAID LVM partitions, or would I still have to do something else to make it work? If I do have to do something else, I'd totally appreciate it if you could explain what I would need to do (either a subset of the above article steps or just a streamlined set of guidelines).

That would help me understand this topic completely, because I imagine at some point, if I have a system just like this, I'm going to need to recover it someday. And it would be pretty easy for me to NOT use LVM on the target recovery box.

thanks

Missing bits found

Richard Bullington-McGuire's picture

> ... couldn't you have booted the recovery box with a Live CD and simply mounted only the drive partitions you needed?

That was what I was originally hoping to do, but that did not work automatically. RAID arrays on USB-connected drives are not available to the system when it does its first scan for RAID arrays. Also, if the recovery box has a volume group with the same name, it will not recognize the newly-attached volume group.

I have used USB RAID arrays in production, and you have to take some extra steps to activate them late in the boot process. I typically use a script similar to this to do the job:


#!/bin/sh
#
# Mount a USB raid array
#
# Call from /etc/rc.d/rc.local

DEVICE=/dev/ExampleVolGroup/ExampleVol00
MOUNTPOINT=/mnt/ExampleVol00

# Activate the array. This assumes that /etc/mdadm.conf has an entry for it already
/sbin/mdadm -A -s
# Look for LVM2 volume groups on all connected partitions, including the array
/sbin/vgscan --mknodes
# Activate all LVM partitions, including that on the array
/sbin/vgchange -a y
# Make sure to fsck the device so it stays healthy long-term
fsck -T -a $DEVICE
mount $DEVICE $MOUNTPOINT

> In other words, just don't mount the drive in the recovery box that has the equivalent volume group. That way there would have been no conflict, right?

That's mostly right. You'd still need to scan for the RAID arrays with 'mdadm --examine --scan $MYDEVICENAME', then activate them after creating /etc/mdadm.conf.

If you had other md software RAID devices on the system, you might have to fix up the device numbering on the md devices.

> If the recovery box did NOT have any LVM partitions or LVM config native to it, could I simply plug the RAID drive in and have the recovery box automagically find the RAID LVM partitions, or would I still have to do something else to make it work?

On a recovery box without any software RAID or LVM configuration, if you plugged the RAID drive directly into the IDE or SATA connector, it might automagically find the RAID array and LVM volume. I have not done that particular experiment; you might try it and let me know how it goes.

If the drive was attached to the recovery box using a USB enclosure, the RAID and LVM configurations probably won't be autodetected during the early boot stages, and you'll almost certainly have to do a scan / activate procedure on both the RAID and LVM layers.

You might have to scan for RAID partitions, build an /etc/mdadm.conf file, and then scan for volume groups and activate them in either case.
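Put concretely, that manual sequence boils down to something like the following (substitute your own partition names for the sdb ones):

# mdadm --examine --scan /dev/sdb1 /dev/sdb2 /dev/sdb3 >> /etc/mdadm.conf
# vi /etc/mdadm.conf     # merge any devices= lines onto their ARRAY lines; renumber mdN if it clashes
# mdadm -A -s            # assemble the arrays listed in /etc/mdadm.conf
# vgscan --mknodes       # find the volume groups on the newly assembled arrays
# vgchange -a y          # activate them so their logical volumes can be mounted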

The most difficult part of the recovery outlined in the article was pulling the LVM configuration out of the on-disk ring buffer. You can avoid that by making sure you have a backup of the LVM configuration for that machine stored elsewhere.

LVM, This Article, the Author, and Success!

Toby Fruth's picture

I emailed Mr. Bullington-McGuire, for I had created a self-inflicted dilemma. I had run the following command:

pvremove /dev/sdb2 -f

Why? Because I thought I needed to remove LVM data from a drive in order to mount it under a new install, which had been on a different drive. I could have done it this way:

mount /dev/VolGroup00/LogVol00 /mnt

assuming that another LogVol00 was not already mounted and that a /dev/VolGroup00/LogVol00 did not already exist. Of course, they originally did exist under the new install on the new drive, so I did another new install, using different LVM names on the new drive.

So, I managed to recover from the pvremove by doing a pvcreate, using a restore file created with the instructions in this article.

lvm> pvcreate --restorefile /tmp/VolGroup00 --uuid O3tLZO-ZvUq-oggv-yuIZ-kEtv-eAMi-zgN0aB /dev/sdb2
Couldn't find device with uuid 'O3tLZO-ZvUq-oggv-yuIZ-kEtv-eAMi-zgN0aB'.
Physical volume "/dev/sdb2" successfully created

lvm> vgcfgrestore --file /tmp/VolGroup00 VolGroup00
Restored volume group VolGroup00

Once this was done, I was able to use the mount command I listed earlier in this post to mount up my old drive's LVM group.

After the mount command, I issued the following commands:

df -h

ls -l /mnt

I can now see all my old data, which I am promptly copying to the new drive, as soon as I make a backup of the LVM data!

Glad to help

Richard Bullington-McGuire's picture

Thank you for contacting me regarding your problem. I am glad you managed to recover your data. It looks as if the procedure I sent you worked.

:0) Saved my marriage!

Anonymous's picture

Just making a backup and poof, the power goes :( On reboot I can't get to my LVM, and my 50-gig backup is AWOL!!
Your ickle guide saved my life, as the missus' Sims 2 data was on there, and it's more than my life is worth to lose that.

Another day, another marriage saved

Richard Bullington-McGuire's picture

Thank you for your kind words. I am glad you were able to recover.

Bacon saved!

Jason's picture

Just another "saved me" comment! Thanks! I thought I was hosed, but this article pointed me in the direction I needed to go to recover my essential data. Yes, yes, I do backups monthly and archive media every 6 months, but now I have learned: always RAID1 or RAID5, no LVM, test UPS control regularly, and invest in large external eSATA/USB/FireWire drives to do nightly incrementals, keeping them unmounted when not in use.

Oh, and avoid drawing power from BG&E if at all possible...they suck.

You are welcome

Richard Bullington-McGuire's picture

If you live in the Mid-Atlantic as I do, your real enemy may be the trees.
