PogoLinux RAID Workstation
Price: $1,499 US
Reviewer: Choong Ng
For those who are unfamiliar with it, RAID stands for redundant array of inexpensive disks. The idea is that an array of relatively inexpensive disk drives, combined with a special hardware or software controller, can provide enhanced reliability or performance. The three most common configurations are RAID 0, RAID 1 and RAID 5. RAID 0, also known as striping, divides data between drives so that a single read or write operation can combine data from two or more hard drives, increasing the aggregate transfer rate. RAID 1, also known as mirroring, does the opposite: instead of focusing on performance, it increases reliability by storing copies of your data on multiple drives, reducing the chance that a drive failure will destroy your data. RAID 5 combines striping with distributed parity, trading some capacity for the ability to survive a single drive failure. I opted to have the workstation preconfigured with RAID 0 for the enhanced performance.
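To make the striping idea concrete, here is a toy shell sketch of where each logical block lands on a two-disk array. The single-block granularity is a simplification; real controllers stripe in multi-sector chunks, but the round-robin mapping is the same.

```shell
# Toy model of RAID 0 striping across N disks: logical block B is
# stored on disk (B mod N) at per-disk offset (B / N).
N=2
for B in 0 1 2 3 4 5; do
    echo "logical block $B -> disk $((B % N)), offset $((B / N))"
done
```

Consecutive blocks alternate between the drives, which is why one large sequential transfer can pull data from both drives at once.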
Setting up the Velocity was a breeze: five minutes to get it out of the box and onto my desk and another ten to boot it up and configure it to talk to the network. Those of you who will want to add custom hardware such as a preferred graphics board, gigabit Ethernet or a DVD drive will appreciate the Velocity's maintenance-friendly case with just two thumbscrews between you and the motherboard. Red Hat 7 will recognize most third-party hardware without problems.
There are two big issues to be aware of before you set up the system. One is that the default Red Hat 7 has a long list of known problems, and because of this you may have trouble getting some applications to run properly. The other major issue that I ran into is that the kernel needs special parameters to boot properly from the Promise RAID hardware. This only becomes an issue if you want to replace the kernel or install an alternative Linux distribution; Pogo's default configuration works fine. The usual symptom is a kernel panic when the kernel attempts to mount partitions, but the fix is simply to provide the kernel with the correct I/O addresses. For a detailed explanation of how to do this see Aaron Cline's “Unofficial Asus A7V and Linux ATA100 Quasi-Mini-Howto” at http://www.geocities.com/ender7007/. Once the machine recognizes the card no further configuration is necessary to take advantage of the RAID controller, and other than the issues just mentioned, I had no problems with setup.
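For reference, the fix usually amounts to an append line in the bootloader configuration. The fragment below is a sketch for LILO; the root device and the I/O addresses are placeholders only and must match what your particular Promise card reports (dmesg or /proc/pci will show them — see the howto cited above):

```
# /etc/lilo.conf (fragment) -- example values, adjust to your card
image=/boot/vmlinuz
    label=linux
    root=/dev/hde1
    append="ide2=0xd800,0xd402 ide3=0xd000,0xd802"
```

Remember to rerun lilo after editing the file so the change takes effect at the next boot.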
My overall first impression is that the Velocity is a very speedy machine—as one would expect from any 1GHz Athlon box—and fairly well put together. As someone who frequently swaps hardware out of my machines I also appreciate the ease of opening the case via the thumbscrew-attached side panels. Having set up the machine and played with it for a little bit, it is time to measure how fast the Velocity really is.
For these benchmarks, I compared the PogoLinux box in RAID 0 mode to a similarly configured non-RAID system: the same processor, a nearly identical motherboard (A7Pro vs. A7V), the same RAM and a similar ATA 100 drive (only one, of course, on the non-RAID system).
Using the Bonnie benchmark suite, the PogoLinux box produced some interesting results. Tests that performed large numbers of very small disk transactions did very poorly on the PogoLinux RAID box—about half as fast as the comparison system—while tests that rely on fewer transactions involving larger amounts of data did much better, right in the middle of the 40-50MB/s transfer rate quoted by PogoLinux. This information is very important when one is considering the purchase of approximately $1,578 worth of hardware. So, on to a real-world test.
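Bonnie's streaming figures are easy to sanity-check with nothing but dd. This quick sketch writes 64MB of zeros and lets you divide by the elapsed time; the path and size are arbitrary, and on a serious run the file should be several times larger than RAM so the buffer cache doesn't mask the disks:

```shell
# Rough sequential-write check: write 64MB and time it, then clean up.
time dd if=/dev/zero of=/tmp/ddtest bs=1M count=64
rm -f /tmp/ddtest
```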
For the real-world test I compiled Mozilla, using the time command for measurement; the results are shown in Table 1.
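The time command simply wraps any command line and reports real (wall-clock), user and sys CPU time when it finishes, so it is an easy way to measure a long build. A trivial demonstration (the Mozilla make invocation in the comment is the era's usual entry point and may differ for your tree):

```shell
# time reports real, user and sys times on stderr for any command.
time sleep 1
# A full build is timed the same way, e.g.:
#   time make -f client.mk build
```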
In most applications, the RAID system didn't perform noticeably differently from the single drive system. Some tasks, such as untarring Mozilla, actually came out slower on the RAID system. Compiling Mozilla was only faster by about ten seconds out of 50 minutes. This is indicative of a near-universal truth in computing: aggregation can buy you increased bandwidth, but it can't buy decreased latency. The likely explanation for why the RAID 0 system has relatively poor performance when performing many small accesses is rather lengthy, but suffice it to say that it is related to the fact that having to wait for two read/write heads instead of one increases the array's average seek time.
What this means in the real world is that a RAID 0 disk system will only accelerate tasks that transfer large amounts of contiguous data (where increased transfer rates help), not tasks that require many small disk accesses (where disk latency matters more than transfer rate). One good example is working with applications that use large data files, such as editing high-resolution photos in the GIMP, where I did indeed notice a significant improvement in loading and saving large files.
Other tasks that benefit from increased disk transfer rates include database searches, video editing or any type of high-bandwidth data capture (i.e., direct-to-disk audio and video). Tasks that won't benefit from RAID 0 (especially as opposed to having two drives operating independently) include file servers serving many concurrent users making small requests, database servers supporting similar workloads, most general-purpose applications and development software, etc.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
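The log-searching combination just described fits on one line; the search pattern here is purely illustrative:

```shell
# Find every .log file under /home and list those containing the entry.
find /home -name '*.log' -exec grep -l 'connection refused' {} \;
```

A common variant pipes find into xargs (`find /home -name '*.log' -print | xargs grep -l 'connection refused'`), which launches far fewer grep processes when many files match.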
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Managing Linux Using Puppet
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Returning Values from Bash Functions
- Rogue Wave Software's Zend Server
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide