The Meaning of Open Source
Like many Linux Journal readers, I have been upgrading my Gibbons to Herons recently. And like many readers, I imagine, I have been finding a few little challenges along the way. That was no surprise, since it's pretty much par for the course when carrying out a major upgrade. But something else did surprise me, although in retrospect I see that it shouldn't have.
To begin with, I tried on my own to fix the problems the upgrade threw up. I reasoned – somewhat optimistically, perhaps – that it was just a matter of applying some logic. But then I realised that this was a stupid thing to do – real closed-source thinking. After all, one of the central ideas behind open source is that if we collaborate and build on the work of others, less effort is required from each of us, and the end result is better. What was the point of re-inventing this particular wheel if others had been there before me?
So I began searching online for solutions to my various minor difficulties. And I was amazed at what I found. In every case, others had had similar problems, and in every case people had offered helpful suggestions as to how to fix them. It was a wonderful vindication of the entire open source way, of people helping each other by passing on their personal discoveries.
But as I searched around, something else became clear: that finding this information is not a trivial exercise. First, you need to formulate your search engine enquiry very tightly, otherwise hundreds of false positives are thrown up. And even then, the discussion threads that deal with your problem are long, winding and branching; following the right path through the explanatory labyrinth is a challenge, and it often took me a while to find the discussion that led to the suggestion I needed.
It's something of a truism that a strength of the free software community is that it will help people with the problems that they encounter. And as my own experiences showed, often it's not even necessary to seek that help, since others have already posed the same question in the past. But equally, it's clearly unrealistic to expect general users – especially newcomers to the world of free software – to take this path. It's easy to be overwhelmed by the information out there, and hard to find exactly what you're looking for.
That got me thinking: given the amazing resources that are available online, how could we make it easy for anyone to find things? One obvious solution would be to create a central database of knowledge – a kind of Wikipedia for free software – where everything is linked together logically, and laid out in a way that people can find what they're looking for with ease.
But that's neither realistic nor particularly desirable. It would require a huge amount of effort from volunteers whose skills would be more usefully applied elsewhere, solving new problems rather than rehashing old ones. Moreover, it seems a terrible waste simply to ignore the huge body of knowledge that already exists out there. What we need, rather, is a more intelligent way to search through what we've got – one that can respond to any kind of naive question posed by beginners.
There are two main issues here. One is the question of intelligence – of search engines that “understand” what we are looking for. The other is how the search is framed. At the moment, we typically type a few keywords into a search box; even posing the query as a full sentence does not make it much richer. Maybe we need new ways – new interfaces – to help users, especially non-technical ones, express themselves more usefully as far as the search engine is concerned. That, in turn, would allow exactly the right information to be pinpointed more easily, avoiding the need to sift through the hundreds of Google hits that are typically turned up.
These kinds of capabilities are typical features of what is generally called the semantic web. Despite interesting experiments like Freebase and Powerset, it's clear we're still a long way from realising that idea. Maybe the free software world should be doing more in this space, since it stands to gain more than most from the boost to search capabilities it implies. As a start, it might encourage researchers and companies active in the field of the semantic web to use the vast, chaotic body of knowledge about free software as a test-bed for their ideas and technologies. They would get the best beta-testers in the world, and we would gain better access to that rich mine of information.
Glyn Moody writes about open source at opendotdotdot.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly, while find can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together into something even more powerful – say, a command that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
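As a minimal sketch of that kind of tool-chaining, the .log example above might look like the following (the /home path and the search string "ERROR" are placeholders for illustration):

```shell
# Find every regular file ending in .log under /home, then have grep
# search each one for the placeholder string "ERROR".
# -H prefixes each match with its filename; the "{} +" form batches
# the found files into as few grep invocations as possible.
find /home -type f -name '*.log' -exec grep -H 'ERROR' {} +
```

A pipe-based variant – `find /home -name '*.log' | xargs grep -H 'ERROR'` – does the same job, though the `-exec ... +` form handles filenames containing spaces safely without any extra flags.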
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to recognise when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.