ICP vortex GDT RAID Controllers
The first thing I had to do was add the cache SIMM to the controller card. ICP recommends 60ns FPM (Fast Page Mode) RAM or 50ns EDO RAM. I had only 60ns EDO RAM, so at first I set the jumper to claim I had 60ns FPM RAM. After talking with the technicians, I moved the jumper to EDO mode, and discovered a significant performance difference between the two settings: EDO mode increased performance by approximately 20%. Using a cheap 60ns EDO SIMM definitely made the controller unreliable; in fact, it corrupted the hard drive when I tried copying multi-megabyte files to test disk write throughput. Switching to a good-quality OEM 60ns EDO SIMM solved that problem.
I benchmarked the GDT with both 64MB and 128MB of cache and found no significant difference in performance, so I recommend 64MB of cache. Given the low price of RAM today, it does not make sense to use less than that.
Both cards work identically when you put them into your computer. They use the same driver, the same BIOS and the same utilities. The only difference is that the BIOS utility you use to set up your RAID volumes shows more channels for the 6538RD.
The BIOS setup utility allows you to select drives and then combine them into a single RAID volume. It does not allow dividing a drive between multiple RAID volumes as is possible with the software RAID driver. The setup utility writes the resulting data into a special boot sector at the start and end of the drives. Thus, you can remove the controller, put in a different (replacement) controller, and your RAID setup remains the same.
The GDT6538RD had no trouble combining drives from multiple channels and presenting them to Linux as a single SCSI hard drive. Curious, I tried putting multiple GDT controllers into a machine to see if I could combine drives which were on entirely different controllers. This did not work, though otherwise the Linux GDT driver had no trouble with handling multiple GDT cards in the same computer.
Once the array was configured, the GDT controller started building the array, i.e., building the checksum blocks. I interrupted this process to reboot into the Red Hat 5.2 installation routine. I discovered the ICP does not present a SCSI CD-ROM hooked to its Narrow SCSI port as a bootable device to the BIOS. Swapping to an IDE CD-ROM solved that problem.
The Red Hat 5.2 installer detected the ICP RAID Array Controller on my system and saw the RAID array as a single hard drive. I went ahead and installed Red Hat Linux; while I was doing this, the GDT controller continued building the disk array, transparently, in the background.
It can take quite some time for the arrays to build and become redundant. Note that you can go about the task of installing the OS, configuring your software, etc. while the array is building in the background.
Unfortunately, I was not able to do extensive benchmarks on the system with the 3-channel controller and 36GB drives. The command hdparm -t reported 28MB/sec throughput on “virgin” drives (where the OS had just been rebooted and the GDT controller reset). Using dd to write 100,000,000 bytes from /dev/zero to the disk array showed a write throughput of around 18MB/sec. One thing I did discover was that turning on write caching sped up throughput considerably; apparently, it allows the controller to re-order writes internally and combine them when possible. The 2.0.36 and 2.2.10 kernels I tested both properly flush the cache at shutdown time, so as long as you have a UPS properly configured to do a clean shutdown of the system, this is fairly safe. If you don't trust the UPS software and insist on turning off the write cache, expect write performance to suffer significantly.
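The write test above is easy to reproduce. The sketch below times a 100,000,000-byte dd write and reports rough throughput; the target path and block size are my choices, not values from the tests here, and the default target is a scratch file so the script is safe to run as-is. Point it at a raw device only if that device holds nothing you care about, since the write destroys its contents.

```shell
#!/bin/sh
# Rough write-throughput test: write 100,000,000 bytes of zeroes to TARGET.
# TARGET defaults to a scratch file; pass a device name (hypothetical
# example: /dev/sda) as the first argument to test the array directly.
TARGET=${1:-/tmp/ddtest.img}

START=$(date +%s)
# conv=fsync forces the data out to the target before dd exits, so the
# elapsed time reflects actual writes rather than buffered ones.
dd if=/dev/zero of="$TARGET" bs=1000000 count=100 conv=fsync 2>/dev/null
END=$(date +%s)

ELAPSED=$((END - START))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1    # avoid divide-by-zero on fast runs
echo "Wrote 100 MB in ${ELAPSED}s (about $((100 / ELAPSED)) MB/sec)"
```

The matching read test is simply hdparm -t on the array's device node, run as root on an otherwise idle system.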
The theoretical performance of the hardware involved was somewhat higher than the numbers I saw. I eliminated the ext2 file system as a possible factor by using dd to read and write raw partitions. Software RAID0 was faster by about 15%, but still did not approach the theoretical performance of the hardware. Speculating on the cause of this slowdown (I suspect various factors within the Linux kernel) would be interesting, but is irrelevant to this article. In any event, the GDT's RAID5 performed similarly to software RAID5, without the excessive CPU usage I saw while running software RAID5.
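For reference, the software-RAID comparison used the md driver, configured with the raidtools that ship with Red Hat of this era. A minimal /etc/raidtab for a two-disk RAID0 set looks something like the following; the device names and chunk size are illustrative, not the ones used in these tests.

```
raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
```

Running mkraid /dev/md0 then initializes and activates the array.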
If a drive fries, a RAID1, RAID4/5 or RAID10 array keeps going. The GDT then starts beeping annoyingly and sends a message to both syslog and the console.
If a hot spare is defined, the GDT will automatically mark the bad drive as failed, switch to the hot spare and transparently rebuild the array onto it. No action is needed on your part, though you will eventually want to remove the bad drive, replace it with a new one and initialize the new drive as a hot spare. Assuming you have hot-swap trays, you don't need to shut down Linux to do this: the ICP gdtmon program runs natively under Linux and will handle this situation.
If you have no hot spare, the GDT will still automatically mark the failed disk, but the array will no longer be redundant. Again, gdtmon comes to the rescue: you can use it to swap out the bad drive and swap in a replacement. No downtime is necessary, since gdtmon runs natively under Linux; the new drive will be rebuilt transparently while your system continues to run.