High Availability Linux with Software RAID
RAID, redundant array of independent (or inexpensive) disks, is a system that employs two or more disk drives in combination, through hardware or software, to improve performance, provide fault tolerance, or both. RAID has a number of different configurations referred to as levels. The most common RAID levels and their functions are:
Level 0: data striping, no redundancy.
Level 1: disk mirroring.
Level 3: data striping as in Level 0, but with one dedicated disk for parity.
Level 5: block-level data striping with parity (error correction) distributed across all disks.
For more information, refer to www.acnc.com/04_01_00.html for a thorough discussion on the various RAID levels.
Data striping is the ability to spread disk writes across multiple disks. This alone can result in improved performance, as well as the ability to create one large volume from multiple disks. For instance, if you had nine 6GB drives, you ordinarily would be forced to create at least nine partitions when configuring your system. This partitioning scheme, however, may not make sense for your situation. If you created a RAID 0 out of the nine drives, it would appear to the system as one 54GB drive, which you could then partition as you saw fit. In this scenario, though, if one disk fails, the entire array fails.
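On Red Hat 8.0, software RAID arrays are described in /etc/raidtab and built with the raidtools package. As a rough sketch of the nine-drive scenario above (the device names are assumptions for illustration only), a RAID 0 definition would look something like this:

    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         9
        persistent-superblock 1
        chunk-size            64
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
        device                /dev/sdc1
        raid-disk             2
        device                /dev/sdd1
        raid-disk             3
        device                /dev/sde1
        raid-disk             4
        device                /dev/sdf1
        raid-disk             5
        device                /dev/sdg1
        raid-disk             6
        device                /dev/sdh1
        raid-disk             7
        device                /dev/sdi1
        raid-disk             8

Running mkraid /dev/md0 then initializes the array, and /dev/md0 can be formatted and mounted as if it were a single 54GB drive.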
Disk mirroring uses two drives at a time, with one drive maintained as an exact duplicate of the other. This duplication provides hardware redundancy; if one drive fails, the other can continue to operate independently. Software errors can propagate across the mirror, however, corrupting both disks.
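The same /etc/raidtab format describes a mirror; only the level and disk count change from the previous sketch (device names again hypothetical):

    raiddev /dev/md0
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1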
Level 3 assigns one disk in the array to be used for parity (error correction), and the data is then striped across all the other disks in the array. The advantage here is that any one disk in the array can fail without any data loss. However, you must give up one disk's worth of space for error correction. Level 3 does not work well with a software RAID solution, and it also has performance drawbacks compared to Level 5.
Level 5 stripes both the data and the error-correcting parity information across all the disks in the array. As a result, any one disk can fail without loss of data; when this happens, the RAID is said to be operating in degraded mode. If more than one disk fails at the same time, though, the entire array fails.
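The parity RAID 5 stores is simply the XOR of the data blocks in each stripe, which is what makes single-disk recovery possible. Here is a quick illustration with made-up byte values, runnable in bash:

    # Hypothetical byte values from two data disks in one stripe
    d1=$(( 0xA5 )); d2=$(( 0x3C ))
    # The parity block written to a third disk is the XOR of the data blocks
    parity=$(( d1 ^ d2 ))
    printf 'parity:       0x%02X\n' "$parity"
    # If the disk holding d2 fails, XOR the survivors to reconstruct it
    printf 'recovered d2: 0x%02X\n' "$(( d1 ^ parity ))"

The same property holds with any number of disks: XORing all the surviving blocks in a stripe reproduces the missing one, which is exactly what the kernel does on the fly while the array runs in degraded mode.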
This article focuses on using software RAID Level 5 under a fresh installation of Red Hat 8.0 and testing the fault tolerance of the RAID. RAID support for Linux has matured over the years, and the ability to install a system that can boot into a RAID-configured set of disks is standard.
Before actually rebuilding my server with RAID 5, I wanted to be able to test out the installation, tools and failure modes in a safe environment. I also wanted the tests to be as close as possible to the real configuration of my physical hardware.
I have been using VMware (www.vmware.com) since its first beta release in the late 1990s. I highly recommend it for anyone who has to develop on multiple platforms or who needs to do any type of testing on multiple platforms. Using VMware, I was able to set up a Linux virtual machine with six 9GB SCSI drives (as are found on my server) on a machine with only one real physical IDE drive.
As we will see, creating a high availability (HA) Linux server using RAID 5 is a pretty straightforward process. There is one catch, however: you must have at least one native partition that contains the /boot directory. This has to do with the kernel needing to load the drivers that support RAID from a native disk before it can actually mount the RAID. This little detail makes things interesting. Namely, it affects the way the drives in the RAID are partitioned and how you recover from a failure of the particular drive that contains the native partition.
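In practice, this means the finished system's /etc/fstab mixes one native partition with the RAID device. A minimal sketch of what that looks like, assuming hypothetical device names rather than my actual layout:

    # /boot on a native partition the kernel can read before the RAID starts
    /dev/sda1    /boot    ext3    defaults    1 2
    # the root filesystem can live on the RAID device
    /dev/md0     /        ext3    defaults    1 1

The key point is simply that /dev/sda1 here is an ordinary ext3 partition, readable by the boot loader and kernel without any RAID driver loaded.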
In order to configure my Linux VM to match my physical machine as closely as possible, I created six 9GB SCSI drives (Figure 1). One of the nice things about virtual drives is that they do not initially take up as much space as you assign to them; instead, the files grow to accommodate the data placed on them inside the VM. So, as far as the VM is concerned, it has 54GB at its disposal, but the complete test installation takes up only about 1GB of physical space on my actual hard drive.
After configuring VMware to reference the physical CD-ROM drive for the VM's CD-ROM drive, I placed the first Red Hat 8.0 disk into the drive and powered on the VM. There are a few partitioning requirements at this step. First, each partition used in a RAID volume should be the same size. Second, one partition on one of the drives should be native and should mount at /boot. Third, for RAID 5, one partition's worth of space in a RAID volume needs to be "sacrificed" to account for parity (error correction) data. Because my physical machine has 512MB of RAM, I wanted to have 1GB of swap space. Table 1 shows my partitioning scheme.
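With six 9GB drives and one partition's worth of space going to parity, the usable RAID 5 capacity works out to roughly (6 - 1) x 9GB = 45GB, less the space set aside for /boot and swap. Here is a sketch of an /etc/raidtab describing such an array (the partition names are assumptions, with the RAID partition on the first drive following the native /boot partition; the installer builds the equivalent array for you):

    raiddev /dev/md0
        raid-level            5
        nr-raid-disks         6
        nr-spare-disks        0
        persistent-superblock 1
        parity-algorithm      left-symmetric
        chunk-size            64
        device                /dev/sda2
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
        device                /dev/sdc1
        raid-disk             2
        device                /dev/sdd1
        raid-disk             3
        device                /dev/sde1
        raid-disk             4
        device                /dev/sdf1
        raid-disk             5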