Letters to the Editor
The Fujitsu 400 series [Lifebook 420D Notebook Computer] that I reviewed in November has recently been discontinued in favor of the Fujitsu 700 series. I have been assured by Fujitsu Technical Support that they will continue to honor their warranties and support “legacy” systems such as these. Anyone who succeeds in picking one up extra cheap can be reasonably sure of support for the foreseeable future. Still, it's a bit alarming that a six-month-old machine can be thought of as a “legacy” system. This industry is just nuts sometimes.
—Michael Scott Shappe firstname.lastname@example.org
I read the comments about the new I2O device-driver architecture by Phil Hughes just days before my Embedded Systems Programming magazine arrived. In it was an article by Larry Mittag describing I2O in some detail. He did mention the NDA and license restrictions. He reported that the reason for these restrictions is not the I2OSIG itself, but the lawyers and owners of the software patents for this architecture. Apparently, this architecture has been used before in mainframes and the techniques were patented. Thus, it cannot be freely implemented by others unless a license fee is paid.
For this reason, an e-mail campaign to I2OSIG will not be successful. After all, they must comply with the law. However, it sure would have been nice of them to communicate with Linux Journal to clarify the situation.
The League for Programming Freedom (http://www.lpf.org/) is fighting to overturn the concept of software patents. This would not prohibit programmers from copyrighting their code; it just means that an idea—which is what an algorithm is—could not be patented. Their belief and mine is that patents are for physical devices and processes, not for mathematical algorithms.
The only way I see to create a unified driver environment is for developers to refuse to use the I2O specification.
It was nice to see an article presenting noweb, one of my favorite tools [Literate Programming with Noweb, Andrew Johnson and Brad Johnson, October 1997].
Although the article did a good job of describing the technical intricacies of creating a noweb-literate program, it did not properly present the idea behind literate programming (LP), nor did it convey the idea that LP can be used by serious software developers. Noweb does not require that the source be in a single file; my preference is to put each project component in a separate directory and to use one noweb source file per subcomponent.
One of the key LP advantages is that the documentation is next to the code, in the same file. Other LP tools that also support multiple programming languages are nuweb and FunnelWeb; check the LP FAQ.
Another noweb feature is that the tangled (extracted) code is readable: indentation and line breaks are respected, so the tangled code can be distributed as if it were the actual source.
Users unfamiliar with LaTeX might be pleased to know that there are several noweb modes for Emacs (I wrote one of them), and that with color highlighting the source file becomes quite readable.
Those interested in LP applied to software engineering can check the low-traffic, moderated newsgroup comp.programming.literate, and the following books: Knuth's The Stanford GraphBase (Addison-Wesley, 1993), Fraser and Hanson's A Retargetable C Compiler: Design and Implementation (Benjamin/Cummings, 1995) and Hanson's C Interfaces and Implementations (Addison-Wesley, 1997).
Noweb works perfectly under Linux (or any Unix variant) and there are also versions for Windows 95. Wouldn't it be nice if the full Linux kernel sources were available in noweb format and published as a book? After merging with the Kernel Hacker's Guide it could be used in Operating System courses and perhaps become the next standard format for kernel sources.
—Alexandre Valente Sousa email@example.com
I was looking forward to receiving my copy of Linux Journal Issue 43 (November 1997). After checking the LJ web site I was drooling at the prospect of reading about the GIMP and faxing from Macs. However, when my copy arrived I was disappointed at seeing no examples of the GIMP in use. How on earth can you cover a graphics program without any graphics? Sadly, this article seems to have set a precedent, as the later article on Linux as a Telephony Platform by David Sugar was also without any illustrations. I hope that this trend does not continue.
Our office FAX machine is antiquated and in need of replacement with something that we can use from our desktop Macs. So I was also expecting good things in Faxing From a Web Page (using HylaFAX on the Mac) by David Weis, but sadly the short article did not address the issues in any depth. An example of the web page used would have been preferable to reproducing just the HylaFAX logo.
They say that if you're going to criticize something or someone then you have to make three positive comments. First, the Linux Means Business article [Highway POS System, Marc L. Allen] was interesting to read. I've worked on EPOS systems, so I know some of the pitfalls, especially when trying to use MS-DOS machines. Although short, this article did capture the author's obvious enthusiasm for Linux. This series of articles has always been inspiring and thought provoking. Second, the update on IP Masquerading was very helpful. Keeping up to date with all that is happening with Linux kernel issues is not easy, so this article was a timely reminder of what is actually happening. (It also had some illustrations, examples and figures to support the text.) Third, the Take Command, ssh: Secure Shell [Alessandro Rubini] article also served as a timely reminder to be careful out there.
—Trevor Jenkins Trevor.Jenkins@suneidesis.com
Three negatives and three positives—a well-balanced letter. Michael sent in a very long article on the GIMP that just wouldn't fit in one issue, so we requested that he split it into four parts. Our November cover was built with the GIMP and used the graphic that went with this first purely introductory article. We always request graphics and images to go along with articles, but it doesn't always work out—this was the case with the two other articles you mention.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
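The find-plus-grep combination described above can be sketched in a few lines of shell. The directory layout, file names, and the "ERROR" search string below are invented for illustration; the sketch builds its own sample tree in a temporary directory so it is self-contained:

```shell
#!/bin/sh
# Build a small sample tree so the example is self-contained.
dir=$(mktemp -d)
mkdir -p "$dir/app" "$dir/db"
printf 'INFO start\nERROR disk full\n' > "$dir/app/server.log"
printf 'INFO ok\n' > "$dir/db/query.log"

# String the tools together: find locates every regular .log file,
# then grep searches each one; -l prints only names of matching files.
matches=$(find "$dir" -name '*.log' -type f -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$dir"
```

In a real deployment you would point find at /home (or wherever the logs live) instead of a temporary directory; the `-exec … {} +` form hands find's results to grep in batches, avoiding a separate process per file.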
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
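As a baseline for that question, the kind of job cron handles well looks like the following crontab fragment; the scripts, paths, and schedules are purely illustrative:

```
# min hour dom mon dow  command
# Rotate application logs every night at 2:30 AM (illustrative path):
30 2 * * *  /usr/local/bin/rotate-logs.sh /var/log/myapp
# Run a weekly report at 6:00 AM each Sunday; errors are mailed to the crontab owner:
0 6 * * 0   /usr/local/bin/weekly-report.sh
```

What cron does not give you, and what typically motivates an upgrade, is job dependencies, retries on failure, and coordination across multiple hosts.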
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Google's SwiftShader Released
- Parsing an RSS News Feed with a Bash Script
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t account for the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and the ability to multithread like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide