DEC AXP Review
The Alpha computer I played with for this review was loaned to SSC by DCG Computers, Inc. of Londonderry, NH. DCG can be reached via e-mail at firstname.lastname@example.org or by a visit to their web site at www.dcginc.com. DCG's goal is to produce affordable computers based on Digital's Alpha technology. The loaned AXP was a model EV5 at 333MHz with 64MB of RAM, a 1GB hard drive and a quad-speed CD-ROM drive. DCG markets it for about $7,100.
My first hands-on experience with the AXP was trying to fix it—always a good way to start. The AXP EB164 had arrived, and had been mysteriously “broken” during a second boot attempt. Evidently, the first boot had worked fine, but someone in the office had changed a BIOS setting, rebooted and “Kaboom...Alpha bits.” Now, although everything appeared normal, the system would not even display the BIOS information. The hardware was functional, and the cards, SIMMs and cables were properly seated.
The system I wanted to install was Craftworks Linux, and since no one would 'fess up to changing the BIOS setting, much less to what it had been changed, I called Craftworks' Tech Support. The Tech Support people were friendly and helpful, and even gave me the answer to the problem—always a plus—the BIOS of the AXP has hard-coded, operating-system-dependent boot logic; that is, the AXP knows how to boot only DEC UNIX and Windows NT. The “someone” had switched the BIOS setting from NT to DEC UNIX, a perfectly reasonable thing to do considering the options. Wrong! For a Linux installation, the switch must start out set to NT; then you create and format a super-tiny MS-DOS FAT partition (as partition 1), make it bootable and install the AXP equivalent of loadlin in it. This hack will get you booted okay, but it would be nice to have a Linux option to begin with.
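The bootstrap hack above can be sketched in shell. This is an illustrative reconstruction, not the article's exact procedure: the image file name and size are assumptions, and the real-machine steps (partitioning, marking partition 1 bootable, copying MILO onto it) are shown only as comments because they need real hardware and root access.

```shell
# Create a small file-backed image standing in for the tiny FAT boot
# partition (partition 1 on the real AXP disk):
dd if=/dev/zero of=milo-boot.img bs=1k count=2880   # ~3MB, floppy-sized

# On the actual machine, the remaining steps would be (illustrative):
#   fdisk /dev/sda        -> create partition 1, type 6 (FAT16), mark bootable
#   mkfs.vfat /dev/sda1   -> format it as MS-DOS FAT
#   copy MILO (the AXP's loadlin/LILO analogue) onto that partition
ls -l milo-boot.img
```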
After the initial boot problems were fixed, so was the AXP, and we started the installation of version 2.1 of Craftworks Linux. Due to the special AXP BIOS feature and the fact that MS-DOS was involved in some way, the installation was something other than “smooth and refreshing”. The first, unsuccessful installation was caused by veering from the suggested partition sizes. The second went fine, as I used all the suggested sizes and followed all the steps as given.
After completing the installation, I was able to boot the AXP from its own drive and log in seconds later. It was incredibly fast—RISC fast—333MHz fast—faster than a speeding bullet fast (well, almost)! Using the AXP was a lot of fun—no waiting for your prompt to come back; it was always there. The AXP was so fast (see below) and performed so well, it was like a dream. Then our next stumbling block appeared. Following a kernel build of 4-5 minutes, I typed make clean. Shortly after make began execution, it died with a segmentation fault. Following this failure, many of the regular, previously working utilities (e.g., ps, finger, telnet, inetd, init) began failing. The only fix I found was to reboot. Everything would run fine for hours, even through concurrent compiles of Ghostscript and the kernel, but as soon as I ran make clean, everything would quit working. Through various experiments, I found that any embedded rm command would cause a segmentation fault, after which the system was totally unstable. This turned out to be a bug in the kernel version I was using; the problem went away after an upgrade to a later version (1.3.89).
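The failure mode is easy to picture with a toy Makefile whose clean target embeds an rm—the command that triggered the segmentation fault on the buggy kernel. The directory and file names below are illustrative, not from the article:

```shell
mkdir -p /tmp/axp-clean-demo && cd /tmp/axp-clean-demo
touch a.o b.o                                   # stand-in object files
printf 'clean:\n\trm -f *.o core\n' > Makefile  # clean target with embedded rm
make clean   # on the buggy 1.3.x kernel, this rm segfaulted and left the
             # system unstable; on a fixed kernel it simply removes *.o
```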
Our systems administrator, Liem Bahneman, ran a lot of benchmarks, including Dhrystone 1.1, iozone (disk performance), bonnie (disk performance) and the BYTE UNIX benchmarks. Unfortunately, none yielded accurate results due to a known flaw in the math library (libm) as ported to Linux/AXP at that time. Another “benchmark” he ran was a render of a POV-Ray animation (i.e., ray tracing). To quote Liem, “From my estimates, the same 72-frame, 320X240 animation render that took 13 hours on a Sparc20 would have taken approximately 4 hours on the AXP as tested. This is an estimate, because the render could not complete due to math faults.”
I then installed Red Hat's version 3.0.3 of Linux, finding it quite similar to Craftworks. Again, installation required the use of MILO (similar in functionality to LILO) and a DOS FAT partition. Following the guidelines carefully, I had it up and running quickly with no problems. Well, after all, I was now an expert, having done it once before. Red Hat for the AXP looks and feels like Red Hat for the x86 except, of course, it's much faster. Red Hat's version of Linux did not have the embedded rm bug described above.
With Linux becoming more robust every day, the AXP running Linux will make a solid network daemon/file server, especially now that math-intensive applications are no longer hobbled by the libm bug (it has since been fixed), and all applications where speed matters are math-intensive. If you are shopping for the high-end performance offered by the AXP, this is the machine for you.
Bryan Phillippe (email@example.com) is a 21-year-old Linux enthusiast who also enjoys the company of his fiancee, rollerblading and street-style snowboarding.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
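The find-plus-grep combination just described can be tried on a throwaway tree (the paths under /tmp here are illustrative; the example above used /home):

```shell
# Build a tiny stand-in for the /home tree:
mkdir -p /tmp/demo/home/user
printf 'ERROR: disk full\n' > /tmp/demo/home/user/app.log
printf 'all quiet\n'        > /tmp/demo/home/user/other.log

# Find every .log file under the tree and list those containing ERROR:
find /tmp/demo/home -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/demo/home/user/app.log
```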
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
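For reference, cron's unit of scheduling is a one-line crontab entry. A minimal example (the schedule and script path here are illustrative, not from the webinar):

```
# min  hour  dom  mon  dow   command
  30   2     *    *    *     /usr/local/bin/rotate-logs.sh
```

The fields are minute, hour, day of month, month and day of week, so this entry runs the (hypothetical) rotate-logs.sh at 2:30 a.m. every day; entries are installed with crontab -e.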
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide