VA Linux Workstation VArStation XMP
VA Linux OS 6.0 is a Red Hat-based CD-ROM customized for VA machines and their hardware. Although the systems come pre-installed, installation is quite easy: all one has to do is select the VA Workstation option from the package selection menu to get the VA wares. You don't have to enter your hardware configuration, since either Red Hat's probing succeeds or VA has made sure the installer already knows.
VA Linux OS 6.0 differs from Red Hat in a number of ways and, to my mind, is an improvement. For one thing, VA's installation uses KDE instead of the other window managers, and regardless of how one feels morally about the GNOME vs. KDE debate (libQt still isn't truly free), KDE is by most accounts further along at this point and does support several languages. I especially liked VA's account of why KDE was chosen:
The primary reasons were stability and consistency of interface. This is not a judgment against GNOME or for KDE, but merely a decision based on what is best for the novice user. It's our belief most advanced users will already have a preferred environment, which would not be GNOME or KDE in any case.
VA's variant of Red Hat looks quite nice; in fact, it's my favorite of all Red Hat variants, including Red Hat itself. As far as I could tell, it's simply a highly configured setup with a custom kernel (2.2.7-1.15smp) for the VA box. The desktop looks very nice by default, with a futuristic VA background and cool blue borders and buttons that seem in tune with the whole quest to be "space age" that comes with this millennium business.
Despite everything VA has going for it, a few small problems exist in VA OS 6.0. Some kernel modules are compiled against an earlier kernel and require insmod -f to force them to load, which can lead to unresolved symbols. If you load the modules in the right sequence, you can usually work around this, although after I reinstalled the OS (the machine shipped with VA Linux OS 5.2), the sound modules remained dysfunctional without a kernel recompile. So it seems to me that the weakest link in VA systems is audio. Nevertheless, it's not so hard to recompile a kernel for sound support, and these are exceedingly fast computers, so kernel compilation goes quickly.
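For those who run into the module problem, a session along these lines shows the workaround; the module names here are illustrative, not necessarily what ships on the VA box:

```shell
# Load lower-level modules first so their exported symbols are
# available to the modules that depend on them (names illustrative;
# see /lib/modules/<version>/ for what you actually have)
insmod -f soundcore.o    # -f forces past the kernel version mismatch
insmod -f sound.o        # depends on symbols exported by soundcore
dmesg | tail             # look for "unresolved symbols" complaints
lsmod                    # confirm the modules actually stayed loaded
```

If dmesg still reports unresolved symbols no matter the order, the modules and kernel genuinely disagree, and a recompile is the clean fix.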
The Linux Journal benchmarks are still genuine vaporware, but as soon as they are developed, we'll print the results for this excellent VArStation. Computer systems are changing, moving from a single processor to multiple processors, supporting larger hard drives and much more RAM, and are designed for efficiency gains in areas often ignored by conventional benchmarks. In addition, greater customization of hardware can take the load off the CPU, while kernels are becoming more and more clever. As computers are whole systems rather than individual chips and pieces, the issue is not how well one of the processors performs a loop of floating-point operations, sets all the bits in memory, or scores on a bogomips test (547.23 and 545.59 for the processors on this particular VArStation), but how well the system as a whole performs. Overall system performance is dependent on many things, especially the operating system itself and how well it can manage various tasks and take advantage of hardware. Newer Linux kernels perform better.
Speaking of kernels, the most important activity of any Linux user is recompiling the kernel. Well, maybe not when you've already got an ideal kernel, as on a VA box, but it's often the biggest compilation many of us do. On a VArStation running egcs 2.91.66, my typically oversized kernel compiled with make bzImage in less than two minutes, with make modules requiring only three. make modules_install took two seconds and bzlilo ran in eleven. Although everyone's kernels are different, mine are usually too big, so yours would probably take even less time.
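The timings above correspond to the usual 2.2-era build sequence, which looks roughly like this (the source path and the use of time are illustrative):

```shell
cd /usr/src/linux
make menuconfig        # choose options, or reuse an existing .config
time make bzImage      # build the compressed kernel image
time make modules      # build the loadable modules
make modules_install   # copy modules under /lib/modules/<version>
make bzlilo            # install the image and rerun LILO
```

On a dual-processor box like this one, make's -j flag can spread the compile across both CPUs and shave the times further.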
The machine is a true joy to use on a daily basis. There are no lags, no waiting, and not even Netscape manages to go awry. I can do neat things like play two chess engines against each other, each on a different processor, which makes for roughly 200,000 positions per second per processor. Compiling anything is a joke: it takes more time to type tar -xzf, cd and ./configure; make. In fact, although fast machines are nice as servers, the amazing compilation speed would probably save much time and money for software development firms, not to mention improve the quality of software, as one can make far more trials in a given amount of time. (Well, I for one am a practitioner of the trial-and-error approach to programming.) Besides, it boosts morale to have fast computers.
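The build-from-source ritual mentioned above can be sketched end to end on a throwaway tarball; the package name and paths here are hypothetical, and the real ./configure step is stood in for by a one-line Makefile:

```shell
# Fabricate a tiny source tarball to stand in for a downloaded package
mkdir -p /tmp/demo/hello-1.0
echo 'all:; @echo built' > /tmp/demo/hello-1.0/Makefile
tar -C /tmp/demo -czf /tmp/hello-1.0.tar.gz hello-1.0

# The ritual itself: unpack, enter, build
tar -xzf /tmp/hello-1.0.tar.gz -C /tmp
cd /tmp/hello-1.0
make          # prints "built"
```

On a machine like the VArStation, the make step on a real package finishes almost before your fingers leave the keyboard.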
Table 1 shows the results of some common Linux benchmarks, for those interested. Keep in mind, however, that the BYTEmark tests check only a single processor, so practically speaking, the results should be doubled.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
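That find-plus-grep combination is a one-liner; here it runs against a small demo tree rather than /home, and the directory names, file contents and "ERROR" pattern are all just examples:

```shell
# Set up a tiny demo tree standing in for /home
mkdir -p /tmp/logdemo/alice /tmp/logdemo/bob
echo 'ERROR: disk full' > /tmp/logdemo/alice/app.log
echo 'all quiet'        > /tmp/logdemo/bob/app.log

# Find every .log file under the tree and search each one,
# printing matching lines prefixed with the file name (-H)
find /tmp/logdemo -name '*.log' -exec grep -H 'ERROR' {} \;
```

Swap /tmp/logdemo for /home and 'ERROR' for whatever entry you're after, and you have exactly the tool the paragraph describes.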
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor does it consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide