Bootable Restoration CDs with Mondo
mondoarchive can back up many types of systems in a variety of ways. Here we describe only the scenarios laid out earlier: backing up our servers and creating clone systems.
In our environment, many servers perform different tasks, and each is configured differently. Some have multiple IDE or SCSI hard disks for massive storage, some have only a single IDE or SCSI drive, and a few use RAID. On some servers the data changes constantly; on others it hardly ever changes. mondoarchive can be used to clone all of these systems.
First, it is a good idea to look at disk usage on a per-server basis. Pay close attention to what is being mounted, where and when. There is no need to back up noncritical information if it can be avoided; if you have large directories that do not contain critical data, consider excluding them. For example, we share data between servers over NFS and automount, and many of the same shares are mounted on each server. You do not want mondoarchive to back up that remote data along with each server's own filesystems. After you have identified the unnecessary mounted partitions or shares, exclude them with the -E option, which takes a space-separated list of directories, such as -E "/a /b /c" where /a, /b and /c are directories. This ensures that data is excluded.
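Identifying exclusion candidates and assembling the -E argument can be sketched as follows. The mount points /data and /shared are hypothetical; substitute the NFS or automounted shares from your own environment:

```shell
#!/bin/sh
# List currently mounted NFS filesystems -- these are the usual
# candidates for exclusion (prints nothing if no NFS shares are up):
mount -t nfs

# Build a quoted, space-separated exclusion list for mondoarchive's
# -E option. /data and /shared are hypothetical mount points:
EXCLUDES="/data /shared"
echo mondoarchive -Oi -d /home/mondo -E "$EXCLUDES"
```

Quoting the list is important; without the quotes, the shell would pass only the first directory to -E and treat the rest as stray arguments.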
Now that you know exactly what you want to back up, let's examine the mondoarchive command and a few of its options. You have the ability to back up to CD, ISO images and an NFS share. In this article we discuss only how to back up to ISO images for burning to CD at a later time. For complete details on mondoarchive and its usage, read its man page.
Before you run the mondoarchive command, choose a place on your drive that has plenty of room to store a large ISO image file. Say we pick /home/mondo, and /home is a 6GB partition. The command to use looks like this:
# mondoarchive -Oi -d /home/mondo -E "/home/mondo"
The -Oi option tells mondoarchive to back up the filesystem to an ISO image. Next, -d /home/mondo tells mondoarchive to put the resulting ISO images in the /home/mondo directory; depending on the size of your system, more than one image may be created. Finally, -E "/home/mondo" excludes the destination directory itself, which could come to contain other massive ISO images and cause your backup to grow unnecessarily.
In cases where disk space is low, you need to specify a scratch directory. This is a temporary directory that mondoarchive uses to build its ISO images before they are archived. In this situation, it is wise to tell mondoarchive to put its scratch directory in a large partition. Otherwise, mondoarchive most likely will fail when it runs out of room. In the example below, pretend /var/local/data is a large partition on your disk. To specify the scratch directory, run the mondoarchive command adding an -S option:
# mondoarchive -Oi -d /home/mondo \
    -S /var/local/data -E "/home/mondo"
After you run the command, mondoarchive checks your system, makes sure everything is okay and begins its backup process. It continuously shows its progress (Figure 1) and may take a while to complete. When it is finished, it asks whether you want to create boot disks. You can answer no, because the CD you burn will be bootable; if you want or need the disks, say yes.
When it's complete, you'll have ISO images in /home/mondo, or wherever you specified, from which you can burn CDs. You can burn them in many different ways, including with Xcdroast, Webmin or the cdrecord command. To burn from the command line, first run cdrecord -scanbus to discover your CD writer's bus, target and logical unit number (LUN), which is usually 0,0,0. Then burn each image:
# cdrecord dev=0,0,0 speed=xx /home/mondo/1.iso
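Before feeding images to cdrecord, it can be worth recording a checksum for each one, so a burned disc can be verified later against the original image. A minimal sketch; the /tmp demo directory and empty placeholder image stand in for /home/mondo and the real ISOs written by mondoarchive:

```shell
#!/bin/sh
# IMG_DIR stands in for /home/mondo; the empty 1.iso is a placeholder
# for a real image produced by mondoarchive:
IMG_DIR=/tmp/mondo-demo
mkdir -p "$IMG_DIR"
: > "$IMG_DIR/1.iso"

# Record a checksum for every image, stored alongside the images:
md5sum "$IMG_DIR"/*.iso > "$IMG_DIR/checksums.md5"

# Later, the list can be re-checked with: md5sum -c checksums.md5
cat "$IMG_DIR/checksums.md5"
```

Keeping checksums.md5 with the images costs almost nothing and makes it easy to spot a corrupt download or a bad burn before you need the disc in an emergency.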
mondoarchive also can be run automatically at a time of your choosing by setting it up as a cron job. To set this up, first create a script similar to the following and place it in /etc/cron.daily/:
#!/bin/sh
mkdir -p /home/mondo/`date +%A` && \
mondoarchive -Oi -d /home/mondo/`date +%A` \
    -E /home/mondo
When placed in /etc/cron.daily/, this script runs every day at the same time. Upon execution, it creates a folder in /home/mondo corresponding to the day. If you run the cron job seven days a week, there will be seven folders in /home/mondo, each named for a day of the week and containing the ISO images for that day's backup. Of course, if you want to have these on CD, you can use the cdrecord command again.
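The weekly rotation the script produces can be sketched safely like this. The /tmp base directory is a stand-in for /home/mondo, so you can try it without touching real backups:

```shell
#!/bin/sh
# `date +%A` expands to the weekday name (Monday, Tuesday, ...), so
# the cron script cycles through seven directories, overwriting each
# one a week later. BASE is a stand-in for /home/mondo:
BASE=/tmp/mondo-demo-days
DAY=`date +%A`
mkdir -p "$BASE/$DAY"
echo "Today's images would land in $BASE/$DAY"
ls "$BASE"
```

One directory per weekday gives you a rolling seven-day history on disk without any explicit cleanup step, since each day's run simply replaces last week's images of the same name.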