Guard Against Data Loss with Mondo Rescue
To install the program, go to www.microwerks.net/~hugo and download Mondo and Mindi. Mindi started life as part of Mondo but was split into its own package because it also is useful on its own for creating standalone boot disks based on your kernel, modules, tools and libraries. Installation instructions for both tools are provided on the download web page.
RPM users have it easy; they simply need to download Mindi's RPM to /tmp, download Mondo's RPM to /tmp and then type the following:
rpm -Uvh /tmp/mondo-1.13-1.i386.rpm /tmp/mindi-0.39-1.i386.rpm
Tarball users have a slightly harder time; they must download Mindi's tarball to /tmp, download Mondo's tarball to /tmp and then type the following:
cd /tmp
tar -zxvf mindi-0.39.tgz
cd mindi-0.39
./install.sh
cd ..
tar -zxvf mondo-1.13.tgz
cd mondo-1.13
./install.sh
Some distributions lack certain crucial packages. The ones most often missing are afio, cdrecord, bzip2, libnewt0.50, libslang1 and mkisofs. Some users also may have to create a gawk-to-awk shortcut. You can find these tools on your distribution vendor's web site.
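It can save time to check for these helper tools up front. A minimal sketch, assuming the tools appear on your PATH under their usual upstream binary names:

```shell
# Check for the helper tools Mondo and Mindi commonly rely on.
# Package names vary by distribution; these are the upstream binary names.
for tool in afio cdrecord bzip2 mkisofs gawk; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "MISSING: $tool"
    fi
done
```

Anything reported MISSING usually can be installed from your distribution's package repository; if one of awk/gawk is present but the other is expected, a symlink between the two is the shortcut mentioned above.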
Making a test CD is a good idea: a new user can try Mondo without messing up his or her system. First, make sure Linux knows how to use your CD writer. Then, run mondo-archive.
To find your CD writer, type
dmesg | grep CD
If your CD writer is an IDE device, it will show up here as /dev/hdX, X being a letter between a and h. If SCSI emulation is properly configured, you will see your CD writer listed when you type
cdrecord -scanbus
If your CD writer is properly installed, you will see
0,0,0 --- JoeCamel 4x CD Writer
or something similar. The 0,0,0 number to the left of the device description is the SCSI address where the writer can be found. Write this number down.
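If cdrecord -scanbus reports nothing for an IDE writer, SCSI emulation usually is the missing piece; 2.4-era kernels used the ide-scsi layer for this. A hedged sketch of the usual checks (the hdX=ide-scsi parameter is an example of what a lilo.conf append line would pass; substitute your own device letter):

```shell
# Was an ide-scsi parameter passed to the kernel at boot
# (e.g. append="hdc=ide-scsi" in lilo.conf)?
grep -o 'hd[a-h]=ide-scsi' /proc/cmdline || echo "no ide-scsi boot parameter"
# Is the emulation module loaded?
grep -q ide-scsi /proc/modules && echo "ide-scsi loaded" || echo "ide-scsi not loaded"
```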
If you want your rescue CD to include certain special programs, e.g., your copy of BRU, add the file and its config files to /usr/share/mindi/deplist.txt by hand. Mindi will find the libraries and add them for you.
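For example, you could stage the extra entries in a scratch file and review them before merging into the real list. The paths below are placeholders for whatever program you want on the rescue CD:

```shell
# Build the extra deplist entries in a scratch file first, then review and merge.
EXTRA=$(mktemp)
echo "/usr/local/bin/bru" >> "$EXTRA"   # the program itself (placeholder path)
echo "/etc/brutab"        >> "$EXTRA"   # its configuration file (placeholder path)
cat "$EXTRA"
# When satisfied:
#   cat "$EXTRA" >> /usr/share/mindi/deplist.txt
```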
Run Mindi to create some boot disks just to make sure Mindi works properly on your system. Type
cd /usr/share/mindi
./mindi
If your kernel is too large (more than about 900KB), you cannot make boot floppies, although you still can make a bootable CD image. The easiest way to test Mindi in either case is to press N to “Create boot floppies?” and Y to “Create iso image?” Then use cdrecord to make a bootable CD-R or CD-RW. Type
cd /root/images/mindi
Then choose one of the following calls to write the CD, depending on whether the disc in the drive is a CD-R or a CD-RW. Replace x,x,x with your writer's SCSI node. For CD-RW, type
cdrecord blank=fast dev=x,x,x speed=2 mindi.iso
and for CD-R, type
cdrecord dev=x,x,x speed=2 mindi.iso
Close all applications and reboot from the CD instead of the hard disk. (You may have to edit your BIOS settings to make your computer try to boot from the CD before the hard drive.) If your computer boots okay from the CD, then you know Mondo also will generate a bootable rescue CD reliably. Of course, an ideal rescue CD will use your own kernel. I recommend that you use your own kernel if possible, to minimize the risk that the boot CD won't support your hardware or filesystems, etc.
Finally, to do a complete backup, type
cd /home
mondo-archive --burn-cds 2 0,0,0 --comp-level 9
The 2 indicates that you are writing at 2x speed. If you are burning to CD-RWs, type
mondo-archive --burn-cds 2 0,0,0 cdrw --comp-level 9
After running the command, insert a blank CD-R(W) into the drive and leave the PC running. That's all.
I always choose the maximum compression level (9) because I start Mondo and then go to work. When I come home, I insert the second CD-RW and wait half an hour. That is a day's backup.
The default compression level is 3. If you are in a hurry, use --comp-level 1 to speed up the backup process. You will use more CDs that way, but it should take less time to run.
If Mondo does not find a CD in the drive when it tries to write files to the CD, it will pause with a Retry/Fail/Abort message. If you insert a CD and choose Retry, it will retry as if nothing had gone wrong. If you choose Abort, the program will stop. If you choose Fail, the program will skip that CD but continue the backup process. Mostly, you should choose Retry.
If there are specific paths that you do not wish to back up, you may exclude them with
--exclude-paths /foo /bar /xanadu
If you want to include only certain paths, use --bkpath /home. So, if you want to back up only your home and boot directories but exclude the communal MP3 folder, use this:
mondo-archive --burn-cds 2 0,0,0 cdrw --bkpath /home /boot --exclude-paths /home/MP3s /home/WAVs /home/secret
If you do not want to burn the CDs immediately but would rather create ISO disk images to be burned later, do this:
mondo-archive --isodir /root --bkpath /home /boot --exclude-paths /home/MP3s /home/WAVs /home/secret
This will create 1.ISO, 2.ISO, etc., and save the files to the /root directory.
Before running mondo-archive, be sure to add any extra files to /usr/share/mindi/deplist.txt, run mount to make sure you have mounted the partitions that you want to be backed up and run df to determine the backup size and, therefore, the compression level and number of CDs needed.
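The two checks can be as simple as the following (standard mount and df, nothing Mondo-specific; the 650MB figure assumes ordinary CD-R capacity):

```shell
# Confirm the partitions you intend to back up are mounted...
mount
# ...and see how much data there is. Used space divided by roughly 650MB
# per disc (less whatever compression buys you) is the number of CDs needed.
df -h
```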
You can speed up the compare process by switching to another virtual terminal after booting and running ide-opt: press Alt and the left cursor key, then type ide-opt. This enables DMA and other good stuff.
To compare the backup against your live filesystem, boot from the CD and choose compare mode (type compare and then press Enter). Check /tmp/mondo-restore.log after the compare cycle to see which files do not match.
Aside from the initial teething troubles you might encounter with making boot disks from your kernel (some kernels are not appropriate for boot disks and have to be recompiled), you are likely to discover that Mondo is quite boring. It does what it says it does. It squeezes all your files onto your CD-R(W)s, and it restores them again if necessary. It partitions your drives, formats them, restores the data and runs LILO to set up your boot sector.