Reliable, Inexpensive RAID Backup
As a topic, backups is one of those subjects likely to elicit as many answers as there are people you ask. It is as personal a choice as your desktop configuration or your operating system. So in this article I am not even going to attempt to cover all the options. Instead, I describe the methods I use for building a reliable, useful backup system. This solution is not the right answer for everyone, but it works well for my situation.
Everyone knows they should be doing backups. But do you? How many times have you started a backup schedule only to let it slide after a few weeks? Sounds a bit like an exercise or diet regime, doesn't it?
I had several goals when designing a new backup system for my home and colocated web server: reliability of stored data, automation of the backup process and relatively low cost. Human error is the weakest element of any backup system, so a 100% hands-off system was my goal.
In "Scary Backup Stories", Paul Barry discusses failed backups. The common thread of his stories was somewhere in the chain of events a person had forgotten a very important step. The first story he tells highlights how one team forgot to format the tapes. They had religiously followed their backup plan, backing up onto the unformatted tapes, only to discover the tapes were useless.
I did some reading and settled on a RAID-5 array of hard drives as the most reliable way to store data. It can survive a single drive failure and recover from it when you replace the failed drive. Unlike tape, CD-R or DVD backups, it doesn't need someone to swap media or format and rotate tapes. A simultaneous two-drive failure still destroys the array, but among the RAID levels practical for a small setup like this, RAID-5 is as good as it gets.
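As a sketch of what the replace-and-rebuild step looks like with Linux software RAID, here is the general shape of it. The device names (/dev/md0, /dev/hdg1) are placeholders for illustration; a raidtools-era system such as Red Hat 8 would use raidhotadd, while mdadm provides the equivalent shown here:

    # check array health and watch a rebuild in progress
    cat /proc/mdstat

    # after physically replacing the failed drive, partition it to
    # match the others, then return it to the array; md rebuilds the
    # missing data from parity automatically
    mdadm /dev/md0 --add /dev/hdg1      # mdadm
    # raidhotadd /dev/md0 /dev/hdg1     # raidtools equivalent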
RAID-5 achieves its reliability by striping the data across a number of disks, along with parity information. The information is spread in such a way that no single-disk failure can destroy the archive. And when you replace the failed drive, the array automatically rebuilds the data that was on that section of the RAID.
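The parity scheme is simple XOR, which you can demonstrate in a few lines of shell. This toy example, with made-up byte values standing in for three data drives, shows how any one lost "drive" can be rebuilt from the survivors plus the parity:

    # three data bytes and their XOR parity, as RAID-5 would store them
    a=$(( 0x5A )); b=$(( 0x3C )); c=$(( 0xF0 ))
    parity=$(( a ^ b ^ c ))

    # "lose" drive b, then rebuild it from the other drives plus parity
    b_rebuilt=$(( a ^ c ^ parity ))
    printf 'original b: 0x%02X  rebuilt b: 0x%02X\n' "$b" "$b_rebuilt"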
The base system would be my recently retired colocated web server box. It has a nice rackmount case, a 400MHz AMD processor and 768MB of RAM. I added a beefier power supply (Antec 350W from Best Buy) to replace the 250W unit that came with the case. The system already had a SCSI controller and a 5GB SCSI drive that I'd be using for the root filesystem. Yes, 5GB is small by today's standards, but this system was built and installed in 1999. It ran without failure until it was removed in December 2002, because the ISP went out of business. The minimal install of Red Hat 8 takes about 400MB, so this drive works just fine for its new purpose.
SCSI usually is the first choice for reliable RAID hardware, but it is expensive--not only the drives but the controllers, too. Another consideration is speed: SCSI handles multiple simultaneous accesses more efficiently than IDE does. But for my application, speed wasn't a deciding factor.
IDE RAID controllers are becoming more affordable but are still in the $200+ price range as of this writing. A less expensive alternative is to add several IDE controller cards to the system and put one drive per channel (two drives per card) on them. These PCI IDE cards cost less than $25 each, and they support the newer ATA/133 transfer rates.
I chose to install two PCI cards for use as RAID controllers. This left the IDE controllers on the motherboard free for adding other drives at a later time. They also could be used to quickly back up a drive that I didn't want to copy over the network.
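For that kind of one-off, local drive-to-drive copy, a raw dd between two equal-size drives is the quickest route. This is a generic sketch rather than part of my backup scripts; double-check the device names, because reversing if= and of= destroys the source:

    # raw copy of an entire drive to a second, equal-size drive
    # attached to the motherboard's IDE controllers
    dd if=/dev/hda of=/dev/hdb bs=64k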
There are two good reasons for limiting the array to a single drive per channel. First, if one drive fails, it can disrupt the other drive on the channel, causing a catastrophic two-drive failure. The other reason is speed: only one device on an IDE channel can transfer data at a time, so two busy drives end up sharing the channel's bandwidth. An argument also can be made for using only one drive per controller card. At that point, though, you might as well invest in a dedicated RAID card.
My drive choice had already been made. For some time, I'd been using a second Maxtor drive in each of my systems as a backup drive, mirroring the live filesystem to it with rsync. And I have been using Maxtor drives for years without a single failure, unlike Fujitsu drives, which seem to drop dead within a year (I have three of them in the junk box). I suppose this means that as soon as this article is published, all of my reliable drives will fail at the same time.
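That mirroring step is a one-line cron job. The paths here are placeholders, but the idea is simply to keep an exact copy of the live filesystem on the second drive:

    # nightly mirror of the live filesystem to the backup drive;
    # --delete removes files on the mirror that no longer exist live
    rsync -a --delete /home/ /backup/home/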
You need at least three drives for a minimum RAID-5 system. The drives all should be the same size, because the total capacity is the smallest drive size multiplied by the number of drives minus one. So, three 30GB drives yield a RAID-5 array of about 60GB of storage. At the time, I had two 40GB drives and one 30GB drive on hand, so I wasted about 20GB of space in building this system in the interest of getting it up and running as quickly as possible.
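The arithmetic is easy to get wrong, so here is a throwaway shell function (my own illustration, not part of any RAID tool) that computes usable RAID-5 capacity from a list of drive sizes in GB:

    # usable RAID-5 capacity = smallest drive x (number of drives - 1)
    raid5_capacity() {
        local smallest=$1 n=$#
        for size in "$@"; do
            (( size < smallest )) && smallest=$size
        done
        echo "$(( smallest * (n - 1) ))GB usable from $n drives"
    }

    raid5_capacity 40 40 30    # prints: 60GB usable from 3 drives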
It may be possible to resize the array by adding more drives at a later time, but unless you have a second backup of the data, you probably don't want to try this. Instead I'd recommend buying a larger drive, copying the RAID to it and rebuilding the RAID filesystem from scratch.
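If you do end up rebuilding from scratch, creating the new array is a single command. This sketch uses mdadm with assumed device names, one partition per IDE channel; a Red Hat 8-era system using raidtools would instead describe the array in /etc/raidtab and run mkraid:

    # create a three-drive RAID-5 array from one partition per channel
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/hde1 /dev/hdg1 /dev/hdi1

    # then make an ext3 filesystem on the new array and mount it
    mke2fs -j /dev/md0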