Wackypedia: the Wikipedia fork
The fork occupies an ambivalent place in the world of open source. On the one hand, it is widely perceived as the worst thing that can happen to a project, pitting hacker against hacker, and dissipating coding effort that could be more usefully applied in a united way. On the other, it is the ultimate test and guarantee of openness: if code cannot be forked, it is not truly open. Perhaps most importantly, it is the threat of the fork, hanging over projects like a digital sword of Damocles, that keeps them close to their constituencies, as free software's short history has shown time and again.
The closest that the Linux kernel has come to forking was during the famous “Linus does not scale” incident, which began on 28 September 1998 with the innocent question:
Am I the only one for whom 2.1.123 fbcon.c doesn't compile?
and culminated with Linus losing it:
Go away, people. Or at least don't Cc me any more. I'm not interested, I'm taking a vacation, and I don't want to hear about it any more. In short, get the hell out of my mailbox.
At which point senior coders like Alan Cox and David Miller feared that a fork might be necessary if it proved impossible to revise the existing patch submission system that was centred around Linus. Fortunately, that split never happened, not least because its threat was enough to concentrate people's minds on finding a mutually satisfactory solution – in this case, first, to change the way patch submissions were made, and second, to use Larry McVoy's BitKeeper for overall source management, later replaced by Linus's own Git.
Ultimately, then, the spectre of a fork proved salutary for kernel development: it forced the main players to confront the growing problems, and find a solution, rather than just carry on and hope for the best.
Given this dynamic, I wonder whether it might be time to start thinking about forking Wikipedia – purely for its own good, you understand.
Some might say that such drastic action is hardly necessary, because Wikipedia is not faced by any looming crisis; on the contrary – it is going from strength to strength, with the English-language version alone storming past the two-million article mark. But there has always been an unresolved tension at the heart of Wikipedia's mission, one that goes right back to its origins.
Before Jimmy Wales's Wikipedia, there was Jimmy Wales's Nupedia, inspired by Dmoz, a volunteer effort to create a free version of Yahoo's hierarchical listings. Dmoz began in 1998 under the name Gnuhoo – which was inspired by GNU/Linux – before turning into Newhoo. It was acquired by Netscape and released as open content. As Wikipedia's co-founder Larry Sanger explained:
Originally [Wikipedia] was the Nupedia Wiki - our idea was to use it as an article incubator for Nupedia. Articles could begin life on this wiki, be developed collaboratively and, when they got to a certain stage of development, be put into the Nupedia system.
That is, Wikipedia was originally a kind of rough-and-ready working area, where articles were knocked together and then polished for inclusion in the definitive Nupedia. The free-for-all nature of Wikipedia was one reason why Sanger later decided to set up his own Citizendium project, what he called a "progressive or gradual fork" of Wikipedia. That idea was later dropped, and Citizendium is now essentially a standalone effort to produce an online encyclopedia where the emphasis is on using the knowledge of experts to guide the creation of articles, rather than weighting contributions from anyone equally, as with Wikipedia.
Ironically, though, Wikipedia has been moving steadily towards Citizendium's philosophy. The original claim of being "an encyclopedia anyone can edit" has become less all-inclusive – not so much in terms of who may write, but rather as far as what they can write about. This involves the issue of “notability”:
Within Wikipedia, notability is an inclusion criterion based on encyclopedic suitability of a topic. The topic of an article should be notable, or "worthy of notice". This concept is distinct from "fame", "importance", or "popularity", although these may positively correlate with notability. A subject is presumed to be sufficiently notable if it meets the general notability guideline below, or if it meets an accepted subject specific standard listed in the table to the right.
That is, Wikipedia is no longer interested in accepting entries about any old thing, but requires them to be “notable” in the above sense. Now, that's all very well if you aspire to be a dead serious kind of encyclopedia along the lines of the Encyclopaedia Britannica, but given that Wikipedia offered the hope of something more than just an online version of that dead tree monument to dead knowledge, I'm not so sure this attempt at increasing the respectability of Wikipedia is right.
For a start, it means that I can't just go to Wikipedia and find entries on any old subject: if my query is about something insufficiently “notable”, then it's back to Googling for an answer. But it seems to me that part of the promise and power of Wikipedia is that it could be the ultimate repository of knowledge – all knowledge, not just the serious, notable bits.
Of course, Wikipedia is perfectly entitled to take any direction it wants to, but given that I, at least, want something more, I wonder whether there might be others out there who would also prefer a more inclusive, less picky Wikipedia – as it was in the beginning – one that routinely has entries on anything and everything. After all, the great thing about such online repositories is that nobody is forced to read stuff they don't care about. You could even offer users “notability” filters, so that you only saw the level of trivia you could stomach.
It would be easy for Wikipedia to accomplish this – it might even regard this rough-and-ready Wackypedia as the development branch for the “real”, grown-up Wikipedia – just as Wikipedia was originally meant to be for Nupedia. If it doesn't want to do that, fine – but maybe somebody else does: anyone for a fork?
Glyn Moody writes about openness at opendotdotdot.