High Availability Linux with Software RAID

Turn your machine into an HA server after you test it out on a VMware setup.
Table 1. Partitioning Scheme for VM Test

Partition    Size (MB)   Type (hex value)
/dev/sda1    250         Linux (83)
/dev/sda2    8750        RAID autodetect (fd)
/dev/sdb1    250         RAID autodetect (fd)
/dev/sdb2    8750        RAID autodetect (fd)
/dev/sdc1    250         RAID autodetect (fd)
/dev/sdc2    8750        RAID autodetect (fd)
/dev/sdd1    250         RAID autodetect (fd)
/dev/sdd2    8750        RAID autodetect (fd)
/dev/sde1    250         RAID autodetect (fd)
/dev/sde2    8750        RAID autodetect (fd)
/dev/sdf1    250         RAID autodetect (fd)
/dev/sdf2    8750        RAID autodetect (fd)

Once the drives are partitioned in this way, I can create the RAID 5 volumes. Table 2 shows the RAID volume configuration.
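To illustrate, here is one way the partitioning could be scripted with sfdisk (a hedged sketch; device names match Table 1, and /dev/sda would differ only in that its first partition is type 83 for /boot):

# Give each data disk a 250MB RAID partition plus the remainder
# as a second RAID partition (type fd = Linux RAID autodetect)
for disk in sdb sdc sdd sde sdf; do
    sfdisk -uM /dev/$disk <<EOF
,250,fd
,,fd
EOF
done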

Table 2. VM RAID Volume Configuration

RAID volume   Partitions                                                          Size
md0           /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, /dev/sdf1               1GB
md1           /dev/sda2, /dev/sdb2, /dev/sdc2, /dev/sdd2, /dev/sde2, /dev/sdf2    43.75GB

Because of RAID 5 parity, a volume built from n partitions has a usable size of (n-1) times the partition size: md0 spans five 250MB partitions, so 4 × 250MB = 1GB, and md1 spans six 8750MB partitions, so 5 × 8750MB = 43.75GB. Table 3 shows my final partition table.

Table 3. Final VM Partition Table

Volume       Mount Point   FS Type
/dev/md1     /             ext2
/dev/sda1    /boot         ext2
/dev/md0     swap          swap

The filesystem type used for the volumes is ext2. By default, though, Red Hat 8.0 wants to create ext3 journaling filesystems. At this time, the combination of a journaling filesystem and software RAID makes for very poor performance. There is a lot of talk about working on these performance issues, but for now, ext2 is the way to go.
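If the installer has already put ext3 on a volume, one way to fall back to ext2 without reinstalling is to remove the journal with tune2fs (a hedged sketch; the volume should be unmounted when you do this):

# Strip the ext3 journal, leaving a plain ext2 filesystem
tune2fs -O ^has_journal /dev/md1
e2fsck -f /dev/md1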

During the Red Hat 8.0 install, I used Disk Druid to set up the partitions as outlined above and illustrated in Figures 2 and 3. I used the GRUB boot loader and installed the boot image on /dev/sda. For testing purposes, I installed only about 500MB worth of packages on the VM.

Figure 2. Partitioning with Disk Druid

Figure 3. Setting Up the RAID Device

After the installation completes, inspecting /etc/fstab and /etc/raidtab confirms the partitioning scheme and RAID configuration outlined above.
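For reference, the md0 stanza in /etc/raidtab should look roughly like this (a sketch based on the layout above; md1 is analogous with six member disks, and left-asymmetric is the parity-algorithm name corresponding to the "algorithm 0" reported in /proc/mdstat):

raiddev /dev/md0
    raid-level              5
    nr-raid-disks           5
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              64
    parity-algorithm        left-asymmetric
    device                  /dev/sdb1
    raid-disk               0
    device                  /dev/sdc1
    raid-disk               1
    device                  /dev/sdd1
    raid-disk               2
    device                  /dev/sde1
    raid-disk               3
    device                  /dev/sdf1
    raid-disk               4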

Executing cat /proc/mdstat as root displays information about the RAID configuration. Here is sample output:

Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      1027584 blocks level 5, 64k chunk, algorithm 0 [5/5] [UUUUU]
md1 : active raid5 sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
      44780800 blocks level 5, 64k chunk, algorithm 0 [6/6] [UUUUUU]

This output shows each of the partitions participating in the RAID volumes and its status. The last two columns of each volume's second line display the important details: the total and active drive counts (for example, [5/5]) and the status of each drive, where U means the drive is up and in sync and _ marks a failed or missing drive.
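For comparison, if /dev/sdf1 failed, the md0 entry would change to something like this (hypothetical output; the (F) flag marks the failed member):

md0 : active raid5 sdf1[4](F) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      1027584 blocks level 5, 64k chunk, algorithm 0 [5/4] [UUUU_]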

Using this configuration, if any one of the drives from /dev/sdb through /dev/sdf fails, both RAID volumes /dev/md0 and /dev/md1 would be running in degraded mode but without any data loss. If the /dev/sda drive fails, the RAID volume /dev/md1 would be running in degraded mode without any data loss. In this scenario, however, our /boot partition and the master boot record on /dev/sda would be lost. This is where the creation of a bootable recovery CD comes in.
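Before relying on this in production, it is worth rehearsing a failure by hand. A hedged sketch using the raidtools commands (device names are illustrative):

# Mark one member of md1 as failed, then remove it from the array
raidsetfaulty /dev/md1 /dev/sdb2
raidhotremove /dev/md1 /dev/sdb2

# After replacing and repartitioning the disk (type fd), add it
# back; the array rebuilds in the background
raidhotadd /dev/md1 /dev/sdb2

# Watch the reconstruction progress
cat /proc/mdstat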

______________________

Comments


backup of /boot partition and MBR on second disk rather than CD?


Is there any reason that the following wouldn't work as an alternative to using a boot CD to back up the /boot partition and master boot record in case of a failure on the first disk:

  • Maintain a copy of the boot partition on the second disk and a copy of the boot manager MBR in the MBR area of the second disk (presumably configured to use the boot partition on the second disk).
  • In case of a failure of the first disk, use the BIOS to switch to booting from the second disk, or toggle the "bootable" bits on the partitions.

    (Can BIOSes typically boot from a second disk?)
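A sketch of how that might look with GRUB (hypothetical device names; it assumes the second disk reserves a partition for the /boot copy, unlike the article's scheme, where /dev/sdb1 belongs to md0):

# Copy the /boot partition to its twin on the second disk
# (with /boot unmounted or mounted read-only)
dd if=/dev/sda1 of=/dev/sdb1

# Install the GRUB boot image into the second disk's MBR
grub-install /dev/sdb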

edit which /etc/fstab?


In your second test and its recovery steps, you say to edit /etc/fstab to comment out the /boot entry.

Does the boot fail after the RAID drivers/modules are loaded, so that the volume containing /etc/fstab is available?

Re: High Availability Linux with Software RAID


Using software RAID for swap is a waste of CPU. Linux can do the same without software RAID: just append "priority=1" to all swap partitions, and Linux will use them as if they were part of a striped software RAID.
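A minimal sketch of that approach (hypothetical swap partitions; note that in /etc/fstab the option is spelled pri=, matching swapon -p):

# /etc/fstab: swap partitions with equal priority are striped
# across the disks by the kernel, much like RAID 0
/dev/sdb3   swap   swap   pri=1   0 0
/dev/sdc3   swap   swap   pri=1   0 0
/dev/sdd3   swap   swap   pri=1   0 0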

Re: High Availability Linux with Software RAID


In case of a _real_ drive failure, Linux can (and, IMHO, will 99.99% of the time) panic.

Why?

In our case the drive didn't respond, the stupid SCSI driver tried to reset the SCSI adapter, then the kernel died...

Certainly, this is far better than losing the _full_ filesystem, but...

Hardware RAID is the _only_ choice for servers...

Re: High Availability Linux with Software RAID


I've used RAID and forced failures in dozens of ways and NEVER had a kernel panic. This is with Adaptec and Symbios controllers, and with basically unplugging the drive from a hot-swap bay while the server was running and serving requests (in a test environment, as well as during actual failures in the real environment).

I did, however, know a coworker using a HW RAID controller that marked two disks bad because the cable to them had slipped off while the server was being moved. Guess who had to rebuild and restore his whole RAID array because his $1000 RAID card wouldn't let him restart the RAID 5 in place due to two "bad" drives.

P.S. The CPU load on my dual PIII 750 running flat out accessing its RAID arrays is about 1% of a CPU. If you have to worry about 1% of your CPU, you have a lot of things on your plate ahead of that.

Re: High Availability Linux with Software RAID


I don't think the author was intending to achieve maximum performance, but rather guaranteed availability.

If you use the partitions directly in fstab with priority=1 and a drive fails, then the machine will probably go down, since a portion of the swap space is now corrupt. However, if they are on a RAID 5 setup, the machine will just keep on humming, assuming you don't have a second drive failure.

Re: High Availability Linux with Software RAID


Yes, you could do that, but then you lose HA, because swap will fail as soon as a disk with a swap partition fails.

Performance-wise, it would be better to use RAID 1 than RAID 5 for swap.

Re: High Availability Linux with Software RAID


Does anybody have info on how to do this using User-Mode Linux?

Re: High Availability Linux with Software RAID


UML runs on top of the host kernel, so it is not affected by the RAID subsystem underneath it. You just need to set up the RAID disk system as explained, then install a UML kernel, and away you go.

Re: High Availability Linux with Software RAID


That's not totally true...
A bug in the ubd driver in UML prevents raidhotadd from working correctly. The bug is known, and a patch is available to fix it (it will be in the next UML release).

Greetings,
Frank

Re: High Availability Linux with Software RAID


If one is looking to truly run an HA server, would it not be better to make /boot a RAID 1 array and use a ramdisk to boot the machine and allow access to the software RAID? Also, for better performance of the swap partition, rather than creating a software RAID disk for swap, set all the relevant partitions to swap space and give them equal priority in /etc/fstab, so that they are used like a RAID 0 array without the overhead of the software RAID system running.

Re: High Availability Linux with Software RAID


Having swap on RAID is a good idea; otherwise, a single disk error can make your machine crash.

I would tend to disagree with


I would tend to disagree with the whole concept of placing your swap on a RAID partition.

See line #18 in the link below for more information:

http://linas.org/linux/Software-RAID/Software-RAID-8.html

We're not talking about strip


We're not talking about striping, though, but mirroring, so if one drive dies, all the data written to swap doesn't go down with it, as that would be doubleplusungood.
