ICP vortex GDT RAID Controllers
The first thing I had to do was add the cache SIMM to the controller card. ICP recommends 60ns FPM (Fast Page Mode) RAM or 50ns EDO RAM. I had only 60ns EDO RAM, so at first I set the jumper to claim I had 60ns FPM RAM. After talking with technicians, I moved the jumper to EDO mode, which increased performance by approximately 20%. Using a cheap 60ns EDO SIMM made the controller unreliable; in fact, it corrupted the hard drive when I tried copying multi-megabyte files to test disk write throughput. Switching to a good-quality OEM 60ns EDO SIMM solved that problem.
I tried benchmarking the GDT with both 64MB and 128MB of cache. I found no significant difference in performance, and thus recommend 64MB for cache. Given the low price of RAM today, it does not make sense to use less than 64MB for cache.
Both cards work identically when you put them into your computer. They use the same driver, the same BIOS and the same utilities. The only difference is that the BIOS utility you use to set up your RAID volumes shows more channels for the 6538RD.
The BIOS setup utility allows you to select drives and then combine them into a single RAID volume. It does not allow dividing a drive between multiple RAID volumes as is possible with the software RAID driver. The setup utility writes the resulting data into a special boot sector at the start and end of the drives. Thus, you can remove the controller, put in a different (replacement) controller, and your RAID setup remains the same.
The GDT6538RD had no trouble combining drives from multiple channels and presenting them to Linux as a single SCSI hard drive. Curious, I tried putting multiple GDT controllers into a machine to see if I could combine drives which were on entirely different controllers. This did not work, though otherwise the Linux GDT driver had no trouble with handling multiple GDT cards in the same computer.
Once the array was configured, the GDT controller started building the array, i.e., building the checksum blocks. I interrupted this process to reboot into the Red Hat 5.2 installation routine. I discovered the ICP does not present a SCSI CD-ROM hooked to its Narrow SCSI port as a bootable device to the BIOS. Swapping to an IDE CD-ROM solved that problem.
The 5.2 installer detected the ICP RAID Array Controller in my system and presented the RAID array as a single hard drive. I went ahead and installed Red Hat Linux. While I was doing this, the GDT controller continued building the disk array transparently in the background.
It can take quite some time for the arrays to build and become redundant. Note that you can go about the task of installing the OS, configuring your software, etc. while the array is building in the background.
Unfortunately, I was not able to do extensive benchmarks on the system with the 3-channel controller and 36GB drives. The command hdparm -t reported 28MB/sec throughput on “virgin” drives (where the OS had just been rebooted and the GDT controller reset). Using dd to write 100,000,000 bytes from /dev/zero to the disk array reported a write throughput of around 18MB/sec. One thing I did discover was that turning on the write caching sped up throughput considerably. Apparently this allows the controller to do write re-ordering internally and combine writes when possible. The tested 2.0.36 and 2.2.10 kernels both properly flush the cache at shutdown time, so as long as you have a UPS that is properly configured to do a clean shutdown of the system, this is fairly safe. If you don't trust the UPS software and insist on turning off the write cache, expect the write performance to be significantly impacted.
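The two measurements above can be sketched as a small script. The device path is an assumption: by default it writes to a scratch file so the script is safe to run anywhere; point it at your array device (e.g., /dev/sda) for a real measurement. The hdparm read test is shown as a comment since it needs the real hardware.

```shell
#!/bin/sh
# Read throughput on the array (run by hand against the real device):
#   hdparm -t /dev/sda      # /dev/sda is an assumption; substitute your array
# Write throughput: 100,000,000 bytes from /dev/zero, as in the article.
TARGET=${1:-/tmp/gdt_write_test.img}   # scratch file by default, for safety
START=$(date +%s)
dd if=/dev/zero of="$TARGET" bs=1000000 count=100 conv=fsync 2>/dev/null
END=$(date +%s)
ELAPSED=$((END - START))
if [ "$ELAPSED" -eq 0 ]; then ELAPSED=1; fi   # avoid divide-by-zero on fast disks
echo "wrote 100000000 bytes in ${ELAPSED}s: $((100000000 / ELAPSED / 1000000)) MB/sec"
```

The conv=fsync flag forces dd to flush the data to the device before reporting, so the OS page cache (as opposed to the controller's write cache) does not inflate the number.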
The theoretical performance of the hardware involved was somewhat higher than the numbers seen. The EXT2 file system was eliminated as a possible factor by using dd to read and write raw partitions directly. Software RAID0 was about 15% faster, but still did not approach the theoretical performance of the hardware. Speculating on the cause of this slowdown would be interesting (I suspect various factors within the Linux kernel), but is irrelevant to this article. In any event, the GDT's RAID5 performance was similar to software RAID5, without the excessive CPU usage seen while running software RAID5.
If a drive fries, RAID1 or RAID4/5/10 keeps going. The GDT then starts beeping annoyingly. It also sends a message to both syslog and the console.
If a hot spare was defined, the GDT will automatically mark the bad drive as failed and switch to using the hot spare. It will transparently rebuild the array with the hot spare. No action is needed on your part, though you will eventually want to remove the bad drive, replace it with a new drive and initialize the new drive as a hot spare. Assuming you have hot swap trays, you don't need to shut down Linux to do this. The ICP gdtmon program runs natively under Linux and will handle this situation.
If you have no hot spare, the GDT will automatically mark the failed disk, but the array will no longer be redundant. Again, gdtmon comes to the rescue: you can use it to swap out the bad drive and swap in a replacement. No downtime is necessary, since gdtmon runs natively under Linux; the new drive will be transparently rebuilt while your system continues to run.
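Since the controller reports failures to syslog, it is worth keeping an eye on the log even when no alarm is beeping. A minimal sketch, assuming the driver's messages contain the string "GDT" (the exact wording the gdth driver prints may differ on your kernel, so check a real failure message first):

```shell
# scan_gdt_log: print the most recent GDT controller messages from a
# syslog file.  The "GDT" match string is an assumption -- adjust it to
# whatever your kernel's driver actually logs on a drive failure.
scan_gdt_log() {
    log=$1
    grep -i 'gdt' "$log" | tail -n 20
}

# Typical use on a running system:
#   scan_gdt_log /var/log/messages
```

Running this from cron and mailing yourself any output is a cheap way to notice a failed drive in a machine room you rarely visit.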