e-Market e-Madness, e-Nough.
There are, they say, hard and soft sciences, not to be confused with difficult vs. easy, but more related to claims of objective, measurable, repeatable precision (we stout hardies) versus subjective, hand-waving proofs-by-assertion (you big softies). The spectrum of fuzziness typically runs from pure mathematics and physics at the diamond-tipped top, descends via chemistry and biology (almost converging to synonyms), and ends with a series of “life” and “social” sciences. The latter at least co-opt the intentions and vocabulary of the scientific method and rightly escape the “beyond-the-soggy-pale” category of pseudo-science (astrology, UFOlogy, pyramidiotology, hidden-Bible cryptology ... ad astra, ad nauseam). Honest “theology” wriggles through the colander by rejecting the “scientific” model.
Of course, science, life and society being what they are (send me a $1,000 check for this month's definitions), the hard/soft debate is doomed to waffle on. The paradox is that comparing the hardnesses of any two disciplines requires a valid metric—and that metric will depend on the hardness of the hardest domain involved. You can hear the magic predicate meta creeping into the equation. It reminds us that the very foundations of the purest of pure mathematics (formal set theory) were softened (nay, Osterized and Cuisinarted) by Gödel's meta-mathematical shocks in 1931.
It's still difficult to accept that the hardest queen of the sciences proved so brittle after millennia of complacency. Harder still to note that everyday mathematics and all the dependent sciences rumbled on, regardless. However, there was a clear dent in the traditional hierarchy.
We may avoid an arithmetical “greater-than” operator with a sort of “diamond-scratches-glass” ordering, but that moves us to unprovable, contentious, domain-dependent, metaphorical comparisons (e.g., in Political Science, we readily declare that Lincoln was a better president than Clinton).
There are other parameters available for comparing sciences. Some, such as “usefulness”, are possibly more determinable and worthy of discussion. Thus, cosmology (already disputed in the hard/soft science ratings since we can't, as yet or ever, repeat the “big bang” experiment, do a tachyon glide into the local wormhole, or contact our nearest parallel universe) seems to offer useless theories and predictions: whether the cosmos expands forever or collapses after 10 billion years is, to most readers of Angela's Ashes, hardly even worth mentioning.
What of our provably most useful sciences: economics and computer science? Demi-soft economics has been dubbed the “dismal” science, while semi-hard computer science must surely be the most boring (most of its formal results, dated 1935, are proofs that “this is impossible—don't bother!”). Yet hand-in-hand, both sciences have triggered the most undismal, unboring IPO stock-market episodes since the South Sea bubble. What is anything really worth? Forget the Marxian cosource-sweat-value-added axioms. If dazed bidders offer $1 million for a single Yahoo Japan share, then that is, by definition, its current “worth”. Next question? Red Hat may or may not fairly distribute its IPO gains, but they missed out on 500% by not calling themselves e-Red e-Hat e-Linux. And just wait until I launch e-skb-gnu-e-free-hardware.com.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all the .log files in the /home directory and searches each one for a particular entry (see the sketch below). This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
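As a minimal sketch of that combined tool, assuming we're hunting for a hypothetical string such as “ERROR”, a single find-plus-grep pipeline does the job:

    # List every .log file under /home that contains the string ERROR.
    find /home -name '*.log' -exec grep -l 'ERROR' {} +

The -l flag makes grep print only the names of matching files; drop it to see the matching lines themselves.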
Cron traditionally has been considered another such tool, this time for job scheduling (a classic crontab entry is shown below for reference), but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
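For reference, a traditional crontab entry pairs a five-field time specification with a command; this hypothetical line runs a backup script at 2:00am every day (the script path is just an example):

    # min hour day-of-month month day-of-week  command
    0 2 * * * /usr/local/bin/backup.sh

Anything beyond this model, such as dependencies between jobs or scheduling across machines, is where cron starts to strain.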
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Non-Linux FOSS: Caffeine!
- Tech Tip: Really Simple HTTP Server with Python
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
- SuperTuxKart 0.9.2 Released
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide