Letters to the Editor
I was just reading the Video section of the article “Building the Ultimate Linux Workstation” and came across a couple of quotes from Darryl Strauss that I thought shouldn't have been there. He states: “The up-and-coming board to watch is the Radeon”; and “Performance is about the same as the GeForce2, and they want to do open-source drivers. It's just not out for Linux yet.”
I would like to focus on the “they want to do open-source drivers. It's just not out for Linux yet” part. Let me spare others the hell I have gone through with ATI in the past, and ask that a little research be done on ATI specifically before the readers and subscribers of Linux Journal find themselves wondering where they got the idea to buy the damn thing in the first place.
ATI has promised 3-D drivers for their cards under Linux for almost two years now, yet an actual usable 3-D driver (and when I say usable, that's exactly what I mean: drivers that make Quake 3 actually run faster than the couple of pixels per second you get now; try it for yourself) has yet to be seen. UTAH-GLX is not developed by ATI or anyone they've hired. ATI hired the guys/gals at Precision Insight some time ago (http://www.precisioninsight.com/) and promised to have ATI 3-D drivers available in Q1 of 2000. Needless to say, Q1 of 2000 came, and they had no usable drivers; Q2 came, still no drivers; Q3 came, and you get the idea.
They consistently boast on their web site that they have 3-D drivers for the Rage 3-D Pro cards, which is correct, but those are not their drivers, and they are not supported by ATI. You can find them (UTAH-GLX) at utah-glx.sourceforge.net. The Rage 128 3-D drivers are available at dri.sourceforge.net, which happens to be Precision Insight's DRI project. Mind you, the Precision Insight drivers don't do much and aren't usable. So whatever you do, PLEASE do not buy the ATI Radeon before the drivers are completed and usable. DO NOT make the same mistake I made when I bought this ATI Rage Fury. Personally, I'll never buy another ATI card again: not because of the hardware, which is of high quality, but because they don't support their hardware with drivers.
I work with Ada quite a bit at work and enjoyed seeing the article on Ada in the latest issue of Linux Journal. I have to agree that many programmers should take a serious look at the language. It is highly structured, and I would guarantee that novice programmers will decrease their debugging time and their frustrations tremendously by using the language. Many C++ programmers could learn a great deal of discipline sorely lacking in that language by taking the time to learn Ada.
With that said, I do have to take issue with some of the statements made in the article. First, Ada does not provide a full suite of operator overloading/overriding capabilities. It does not allow overriding of the assignment operator or the array indexing operator. Also, you cannot specify return by reference in Ada. The latter two limitations can be a great hassle when developing container classes, since they make the specification used by clients clumsy. The language also does not support type promotions, which again can lead to unnecessarily clumsy interfaces.
Ada's templates (known as “generics”) are not inferred: generics must be instantiated explicitly, and each instantiation, even if based on the same parameters, represents a new class incompatible with other instantiations. This can be a roadblock to software reuse. The language also lacks a notion of “protected” in the sense that C++ and Java have it; nor does the language have a construct for declaring an object's attributes constant over the lifetime of that object. (You should see how JGNAT works around these issues when providing an Ada-equivalent API to the Java core classes; it's not pretty.)
Perhaps one of the biggest drawbacks of the language is that compiled code must be “bound” before it is linked. The binding process creates additional code needed to elaborate software modules in the correct order. This prevents dynamic loading of code at runtime; in other words, it prevents “plugins”. I believe technologies are now coming out to allow Ada plugins, but it is not clear that they faithfully adhere to the Ada standard.
In short, the article could have done a better job at pointing out some of the disadvantages as well as some of the advantages in programming in Ada. Do you have any plans for an article on Eiffel, another highly disciplined language?
—John, john.firstname.lastname@example.org, San Jose, CA
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
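That find-plus-grep combination can be sketched as a single command. The directory and search string below are illustrative, not from the article; a temporary directory stands in for /home so the demo is self-contained:

```shell
# Create a small stand-in directory for /home, with one matching
# and one non-matching log file.
logdir=$(mktemp -d)
echo "ERROR: connection refused" > "$logdir/web.log"
echo "all quiet today" > "$logdir/mail.log"

# -name filters to .log files; grep -l prints only the names of
# files that contain the entry we're looking for.
find "$logdir" -name '*.log' -type f -exec grep -l 'connection refused' {} +

rm -r "$logdir"
```

Against a real system you would simply point find at /home (or wherever your logs live) instead of the temporary directory.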
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
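For context, classic cron scheduling amounts to one line per job in a crontab; the script path and schedule here are invented for illustration:

```shell
# minute hour day-of-month month day-of-week  command
# Run a (hypothetical) log-rotation script every night at 2:30 AM,
# appending its output and errors to a log file.
30 2 * * * /usr/local/bin/rotate-logs.sh >> /var/log/rotate-logs.out 2>&1
```

Cron fires jobs purely on the clock; it has no built-in notion of job dependencies, retries or centralized monitoring, which is exactly the gap the "is it enough?" question points at.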
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Doing for User Space What We Did for Kernel Space
- Google's SwiftShader Released
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide