The Linux RAID-1, 4, 5 Code

A description of the implementation of the RAID-1, RAID-4 and RAID-5 personalities of the MD device driver in the Linux kernel, providing users with high-performance, reliable secondary storage implemented in software.
The Extensions for RAID-1, 4, 5

The MD mapping function works fine for simple re-mapping personalities such as linear and striping (RAID-0). Unfortunately, it is not enough to support the RAID-1, RAID-4 and RAID-5 personalities: these modes cannot do their job by just re-mapping a given device/sector pair onto a new device/sector pair, because each of them may require several dependent input/output operations to fulfill a single request.

For example, the mirroring personality (RAID-1) duplicates the information onto all of the disks in the array, so that the information on each disk is identical. The size of the disk array is the size of the smallest disk (the lowest common denominator). Therefore, it is not a good idea to make a RAID-1 with a 100MB disk and a 1GB disk, since the driver will just provide 100MB on each (and ignore the fact that you have 900MB free on the second disk).
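For illustration, the size rule can be written as a short user-space sketch; the function name and types here are ours, not the driver's:

long raid1_array_size (const long *disk_size, int nr_disks)
{
    long size = disk_size[0];   /* usable size is that of the smallest disk */

    for (int i = 1; i < nr_disks; i++)
        if (disk_size[i] < size)
            size = disk_size[i];
    return size;                /* min(100MB, 1GB) == 100MB */
}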

The MD driver, when asked to write information to the device, queues an identical write request for every device in the array. With mirroring in place, read requests are balanced among the devices in the array. If any of the devices fail, the device driver marks it as non-operational and stops using it; the driver then continues working with the remaining disks (usually just one other device) as if nothing had happened. Figure 2 shows how this mode can be used; shadowed regions represent the redundant information.
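The following user-space sketch shows the two behaviors just described, duplicated writes and balanced reads. It is not the kernel code itself; the mirror_t type and all names in it are illustrative:

#include <stdio.h>

#define NR_MIRRORS 2

typedef struct {
    int operational;                /* cleared when the disk fails */
    char name[16];
} mirror_t;

static mirror_t mirrors[NR_MIRRORS] = { { 1, "disk0" }, { 1, "disk1" } };

static void raid1_write (long sector)
{
    /* queue the same write request for every working mirror */
    for (int i = 0; i < NR_MIRRORS; i++)
        if (mirrors[i].operational)
            printf ("write sector %ld to %s\n", sector, mirrors[i].name);
}

static void raid1_read (long sector)
{
    static int last;                /* simple round-robin balancing */

    for (int i = 0; i < NR_MIRRORS; i++) {
        int d = (last + 1 + i) % NR_MIRRORS;
        if (mirrors[d].operational) {
            last = d;
            printf ("read sector %ld from %s\n", sector, mirrors[d].name);
            return;
        }
    }
    printf ("no operational mirrors left\n");
}

int main (void)
{
    raid1_write (100);              /* goes to both disks */
    raid1_read (100);
    mirrors[0].operational = 0;     /* simulate a disk failure */
    raid1_read (101);               /* served by the surviving mirror */
    return 0;
}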

Next we have the more complex block-interleaved parity (RAID-4) and block-interleaved distributed-parity (RAID-5) personalities. A striping unit is a set of contiguous sectors on a single disk; the striping units at the same offset on each of the disks together form a stripe. Both the RAID-4 and RAID-5 personalities use one striping unit in each stripe to store the parity information and the remaining striping units for data.

Whenever one of the data sectors is modified, the parity block of its stripe has to be recomputed and written back to disk. To recompute the parity, the device driver reads the old contents of the data block and of the parity block, removes the old data's contribution from the parity and adds the new data's contribution (both operations are XORs). Then it writes the new contents of the data block and the parity block.
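Since the parity is the XOR of all the data blocks in a stripe, the new parity can be derived from the old data, the new data and the old parity alone, without touching the other disks. A minimal sketch of that update (the block size and names are ours):

#include <stddef.h>

#define BLOCK_SIZE 1024

/* parity holds the old parity on entry and the new parity on return */
void update_parity (const unsigned char *old_data,
                    const unsigned char *new_data,
                    unsigned char *parity)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}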

The difference between RAID-4 and RAID-5 is that the former stores the parity information on a fixed device (the last of the composing devices in our implementation), while the latter distributes the parity blocks among the composing devices.
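The placement rule can be sketched as follows; the rotation chosen for RAID-5 here is just one of several possible layouts and is not necessarily the one the driver uses:

/* RAID-4: parity always lives on the last composing device */
int raid4_parity_disk (int nr_disks, long stripe)
{
    return nr_disks - 1;
}

/* RAID-5: rotate the parity across the disks, one step per stripe */
int raid5_parity_disk (int nr_disks, long stripe)
{
    return (int) (stripe % nr_disks);
}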

Figure 3 shows the RAID-4 arrangement; shadowed regions are the redundant information, which in this case are the parity blocks. RAID-4 has a bottleneck at the parity disk: every write to the array must also update the parity, so all write activity waits on that single disk.

Figure 4 shows the RAID-5 arrangement; shadowed regions are the redundant information (the parity blocks once again), and each box in the figure represents a striping unit. In RAID-5, the RAID-4 bottleneck is eliminated by distributing the parity updates among all of the disks.

In the case of a disk failure while using the RAID-4 or RAID-5 personalities, the disk array can continue operation, since it has enough information to reconstruct the contents of any single lost disk. The MD device is slower in this degraded mode: every access to a data sector on the lost disk requires reading all of the surviving sectors in the same stripe, including the parity block, and computing the contents of the original sector from them.
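Reconstruction itself is another XOR: the missing block is the XOR of all the surviving blocks in its stripe, parity included. A sketch, with names and sizes of our choosing:

#include <string.h>
#include <stddef.h>

#define BLOCK_SIZE 1024

void reconstruct_block (unsigned char *missing,
                        unsigned char blocks[][BLOCK_SIZE],
                        int nr_surviving)
{
    memset (missing, 0, BLOCK_SIZE);
    /* XOR together every surviving block in the stripe */
    for (int d = 0; d < nr_surviving; d++)
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            missing[i] ^= blocks[d][i];
}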

Kernel Changes to Support the New Personalities

The new personalities, as we have implemented them, require a change to the ll_rw_blk routine and some extra code in the kernel's input/output end-request notification code.

The ll_rw_blk routine is modified to call md_make_request instead of the kernel's make_request when the blocks are scheduled to be sent to an MD device. The md_make_request routine in turn calls the request-making routine of the appropriate personality to carry out its job. In the case of RAID-0 and the linear mode, md_make_request calls the regular make_request routine.

The ll_rw_blk routine in the presence of the new MD driver is shown here:

ll_rw_blk (blocks)
{
    sanity-checks ();
    /* first pass: map MD blocks onto their member devices */
    for-each block in blocks {
        if (block is md-device)
            md_map (block);
    }
    /* second pass: hand each block to the right request maker */
    for-each block in blocks {
        if (request-is-for (raid1,raid4,raid5))
            md_make_request (block);
        else
            make_request (block);
    }
}

Because of the increased complexity and the error-recovery capabilities of the new RAID code, the personality code must be informed of the results of its input/output operations. If a disk fails, the personality must mark it as non-operational; in the case of the RAID-4 and RAID-5 code, it must also move between the different phases of the parity-update process. We modified the kernel input/output end-request routines to call the RAID personality's end_request routine to deal with the results.

A typical end_request routine is as follows:

raidx_end_request (buffer_head, uptodate)
{
    if (!uptodate)
        md_error (buffer_head);
    if (buffer_head->cmd is READ) {
        if (uptodate) {
            /* successful read */
            raidn_end_buffer_io (buffer_head, uptodate);
            return;
        }
        /* read error: retry the request on another disk */
        raid_reschedule_retry (buffer_head);
        return;
    }
    /* write case */
    if (finished-with buffer_head)
        raidn_end_buffer_io (buffer_head, uptodate);
}