Raising the Bar: Improving the Ultimate Linux Box
I've noted in my last two articles that the configurations I tested weren't exactly optimal. Because this is supposed to be the Ultimate Linux Box, I decided to see just how close to optimal I could get--and I'm pleased with the results.
The fine folks at Monarch Computer Systems sent me a set of four Western Digital Raptor 10kRPM serial-ATA drives, plus a set of Red Hat 8.0 CDs. The original system had three Seagate drives that spun at 7200 RPM and Red Hat 9--fewer, slower spindles for RAID 5 to work its magic, and a version of XFree86 that isn't compatible with ATI's proprietary drivers. These two changes should bump the testbed's already nice performance up to decidedly snappy.
We'll deal with the drives first. When unwrapping the Raptors, the first thing that caught my attention was the heat-sink-like design of the left side of the case, as you face the business end. I don't know whether it's functional or simply there to look cool, but it certainly caught my eye. The second thing I noticed, as I considered installing four of these hotrods in what had been a three-drive system, was that each drive has not only the standard S-ATA power connector but also an auxiliary (legacy) Molex power connector, right where it should be. This inclusion makes things easy. I extracted the drive cage (two screws in the Lian Li case), removed the Seagate drives with their horizontal-mount adapter and laid them aside. There is room enough to mount only three drives horizontally in the lower cage, but five can be mounted vertically. With a little fiddling, I got data and power to all four drives; Monarch thoughtfully included a fourth data cable.
With power on the system, I dropped into the 3Ware BIOS and built a new RAID 5 array. The array build seemed to go awfully fast. I dropped the first Red Hat 8.0 CD in the drive as the build neared completion, and the computer booted straight into the installer. I selected a nearly-everything custom install, then sat back to watch the fun. The install, however, didn't go any faster than usual; I suspect I maxed out the sustained read rate on the parallel IDE controller. Half an hour later, I saw a root prompt. Now for some fun.
Tiobench reveals some surprising numbers. While the Dell SCSI system I mentioned the last time we did drives still owns the 3Ware/Raptor combo in some areas, the marked performance improvement in adding a fourth spindle and cranking things up to 10kRPM enabled the ULB to make the SCSI box look bad in the multithreaded sequential read department. Some comparisons:
Things were similar in the random read department; I ended up with a 3.70 to 2.17 advantage at 8 threads, though at only 2 threads the advantage wasn't much. SCSI still owns the sequential write department with a steady 30-something MB/s rate--until you get to 8 threads, where the ULB edges ahead at 21.87 to 18.76; that's a drastic improvement over the 10-13 MB/s rate achieved with the old configuration. Random writes on the ULB still don't come close to SCSI, but they improved from an average 0.46 to 0.63 MB/s; SCSI hovered around 4.88. Not too many applications are heavily into random writes, however. In all other areas, as you scale up, serial ATA becomes the faster technology. Now we're getting to something we can call Ultimate. And at a street price of $159 (thanks, Froogle), perhaps you now can get good, fast and cheap in disk drives as well as you can in operating systems.
In the graphics area, I discovered a new benchmark. Chromium is an OpenGL-enabled scrolling space-shooter game that comes with Red Hat. (It's also available for Debian.) Chromium has a handy frames-per-second display much like Quake's, but Chromium's is a lot less trouble to turn on. Plus, it's licensed under the Artistic License. Chromium with the free radeon driver scored a painfully slow four frames per second. Let's see if we can improve that, shall we? I popped over to ATI's site, got into the drivers section, located the driver for XFree86 4.2 (which is what comes with Red Hat 8.0) and was greeted with a registration screen. I fed it what I considered appropriate data, and 5MB worth of RPM later, I was ready to rock. The usual rpm -Uvh was greeted with a conflict on the OpenGL library, but the README on the web site said to expect that, so I added --force and tried again. This time the install succeeded. The RPM's post-install script generated a new fglrx kernel module on the fly--NVIDIA should take notes. I then ran the fglrxconfig utility, which looks a lot like xf86config, chose the appropriate options and restarted X. The config tool does have options for Xinerama (going dual-head), but as I noted before, Xinerama and DRI are mutually exclusive. The only way I can see to run multiple screens off the same card in accelerated mode is to run two separate X sessions, each complete with mouse and keyboard. Cranking up Chromium again netted a right snappy 51fps--that's more like it! For comparison, my GeForce2 MX netted 20fps; all testing was done at 1024x768 in an X window with the eye candy set to high.
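For reference, the Device section that fglrxconfig writes into the X config ends up looking roughly like the sketch below. This is from memory, not from the article: the Identifier string and BusID are placeholders I've made up, so check lspci for your card's actual bus ID before trusting either.

```
Section "Device"
    Identifier  "ATI FireGL"     # placeholder name, pick your own
    Driver      "fglrx"          # ATI's proprietary driver
    BusID       "PCI:1:0:0"      # placeholder; adjust to match lspci output
EndSection
```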
The fglrx package also comes with a little application called fireglcontrol that lets you configure dual-headedness and the gamma of your monitor(s) from within X itself. On the other hand, redhat-config-xfree86 has no idea what you've done to your X configuration; it reports an unknown driver and an unknown monitor. That's fine if you're comfortable with fglrxconfig, but it may not go over so well if you're handing the system to someone who knows just enough to be dangerous. Because this is supposed to be a high-end workstation, though, that may not be much of an issue. It's still something a good admin or support tech should be aware of.
So, you're probably wondering, is he ever going to do the soundproofing article? Well, yes he is. In the same box with the Raptors and the Red Hat CDs was a new fan for the testbed; the back case fan had noise issues, and Monarch was happy to replace it. Unfortunately, for reasons of both space and time, I can't cover it this week. By next week, though, we should have a nice, fast, quiet testbed system. And, I'm told the Real Thing is in-house at Monarch and awaiting final configuration. I'm also told to Expect Great Things from it.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but it does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools--say, one that finds all of the .log files under /home and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
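As a minimal sketch of that find-plus-grep combination (the /home path and the ERROR search string are placeholders, not anything prescribed above):

```shell
#!/bin/sh
# Locate every .log file under /home, then have grep search each one,
# printing matching lines prefixed with the filename (-H).
# 'ERROR' stands in for whatever entry you're actually after.
find /home -type f -name '*.log' -exec grep -H 'ERROR' {} +
```

Using -exec ... + batches many filenames into one grep invocation, which is faster than spawning grep once per file with -exec ... \;.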
Cron traditionally has been considered just such a tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide