All Hands to the Sieve
If you want to be thoroughly depressed by the CS “state of the art”, apart from running/crashing Windows, you should check out Peter Neumann's RISKS reports and the regular lists of “vulnerabilities” issued by the SANS Institute. No OS, kernel, compiler, library, parser or command seems immune, however “mature”. Mon dieu, we are not just talking kilobillion (who knows?) spreadsheet flaws or megabillion lost spacecraft (good for some), but pandemic, apodictic, uncaught exceptions based, in spite of endless warnings, on the syntactic quirks of C arrays.
A recent FLASH alert from SANS reveals the dangers lurking in the bowels of MS Access, Word 2000 and Internet Explorer 5.00+. SANS is offering a prize for “quick” fixes, but these bugs are eminently “proprietary”. Microsoft, let's face it, has more than their unfair share of expert programmers schooled in the latest software engineering agendas. Indeed, Microsoft Press offers many bibles on how to develop provably “robust” applications. Yet, in spite of linguistic and developmental tools aimed at detecting the “obvious” errors (memory leaks and array overflows), this fine group of MS programmers lets slip the dogs of...er...dogs.
The epistemological, unanswerable challenge, of course, is to first define “bug” and then, “bug-free”. Even Stephen Hawking's “Brief...” open-ended, temporal historiography may not allow us to test every path through all the if-then-elses of our sweetly crafted code. And what does it mean to “match” a specification expressed, ultimately, in “natural” language? (As Hawking's cosmos time-reverses to the Big Crunch, will we see our GOTOs mapped as COMEFROMs?)
The success of the open-source approach depends on the critical exposure of the code to many unbiased programmers motivated only by the true target, whereas an in-house “slave”, with share options at stake, may be loath to rock the boat.
In spite of which, recent SUID security flaws in the Linux kernel indicate (more depressions) that simple code slips can escape multiple perusal. On the bright side, we have (i) prompt, open acknowledgment and fixes; (ii) no end of scapegoats (boucs émissaires)—hands up, whosoever was guilty from the 144,000 remnant. Peter Salus will record your confession/contribution.
Due soon from Prentice-Hall, Bob Toxen's Real World Linux Security (see Note 1).
Moving to another aspect of widespread open computing, GIMPS is the Great Internet Mersenne Prime Search (see Note 2). Over 8,000 individual users are engaged in finding ever-larger primes. Way back, Euclid proved that there's no end in sight, for if P is the largest prime, then either P!+1 is a bigger bugger or it must have a prime factor larger than P, since every prime up to P divides P! and therefore none of them can divide P!+1. QED! (Where P! means factorial P, not to be confused with the second “!” which expresses amazement and relief!)
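Euclid's argument can be checked mechanically for small cases. Here is a minimal Python sketch (trial division only; the function name is my own, not anything from the column): for each small prime p, no prime up to p can divide p!+1, so its smallest prime factor must exceed p.

```python
from math import factorial, isqrt

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # no divisor up to sqrt(n): n itself is prime

# Every prime <= p divides p!, so none of them can divide p! + 1;
# hence the smallest prime factor of p! + 1 is always bigger than p.
for p in [2, 3, 5, 7, 11]:
    q = smallest_prime_factor(factorial(p) + 1)
    print(p, factorial(p) + 1, q)
    assert q > p
```

For p = 5, for instance, 5!+1 = 121 = 11 × 11, and 11 is indeed larger than 5.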
They say if you don't see the beauty of this proof (suitably refined), you'll never be a mathematician.
And, a sure test of your sensayuma goes thus: the Sieve of Eratosthenes (another dead Greek) offers a slowish algorithm for listing each prime in ascending sequence. My own Kelly's Sieve lists both composites and primes with a single for loop.
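For reference, the classical sieve goes like this in a few lines of Python (an illustrative sketch of Eratosthenes' method, not the single-loop "Kelly's Sieve" teased above):

```python
def eratosthenes(limit):
    """Return all primes <= limit using the Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Cross off every multiple of n, starting at n*n
            # (smaller multiples were crossed off by smaller primes)
            for m in range(n * n, limit + 1, n):
                is_prime[m] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(eratosthenes(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

"Slowish" is relative: the sieve is fine for listing primes up to a few hundred million, but useless for hunting million-digit Mersenne candidates, which is why GIMPS uses specialized tests instead.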
The point, of course, is that Euclid guarantees a prime beyond P yet no larger than P!+1, without saying where—and in June 1999, Nayan Hajratwala of the GIMPS team hit a new (temporary) world record with:
2^6,972,593 - 1
I'll spare you the seven miles of digits and commas needed to printf() this number.
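How many digits is that, exactly? You can count them without ever materializing the number, via the base-10 logarithm; a quick Python sketch:

```python
from math import log10, floor

# Decimal digit count of 2**p - 1. Subtracting 1 never changes the
# count, because 2**p is never a power of 10 (it has no factor of 5).
p = 6972593
digits = floor(p * log10(2)) + 1
print(digits)  # 2,098,960 decimal digits
```

At ten digits to the inch, that really does run to several miles of fan-fold paper.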
Next month: how to help find alien intelligence via the SETI collaborative program at http://www.setiathome.ssl.berkeley.edu/. [Join the Linux Journal group! —Ed.] Chances are that some little green mathematicians already know the next largest prime.
Stan Kelly-Bootle (firstname.lastname@example.org) has been computing on and off since his EDSAC I (Cambridge University, UK) days in the 1950s. He has commented on the unchanging DP scene in many columns (“More than the effin' Parthenon”—Meilir Page-Jones) and books, including The Computer Contradictionary (MIT Press) and UNIX Complete (Sybex). Stan writes monthly at http://www.sarcheck.com/ and http://www.unixreview.com/.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
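That find-plus-grep combination can be sketched in a single line (the /home path and the search string "ERROR" are illustrative stand-ins for whatever you're actually hunting):

```shell
# Find every .log file under /home and list those containing "ERROR"
find /home -name '*.log' -type f -exec grep -l 'ERROR' {} +
```

grep -l prints only the names of matching files; swap it for plain grep to see the matching lines themselves, and the `{} +` form batches file names so grep is invoked as few times as possible.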
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. View Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.
Get the Guide