Now that the final version of the GNU General Public Licence version 3 has been released, the in-depth analysis of its implications can begin. Two of the first commentaries to be published have come from the legal world, and there are doubtless many more being prepared for purely internal use within software companies wondering whether to adopt the new licence. But important as both the legal and commercial details are, I believe the true significance of the GPLv3 lies elsewhere.
For example, in all the high-profile excitement about whether the Linux kernel will or won't adopt GPLv3, its effect on consumer devices like those from TiVo, or whether the new licence does or doesn't block the Microsoft-Novell deal, one amazing fact has been overlooked: that the extremely slow, meticulous and obsessive revision of a legal document designed to regulate the use of a certain class of software has generated thousands of articles, many in the mainstream press. This is rather incredible: who, a few years ago, would have thought that something as archetypally dull as a software licence could elicit such passion and such interest?
In part that interest has been stoked by the manner in which the licence revision has been carried out. Whereas the first version of the GNU GPL was essentially the product of one man – Richard Stallman – and even the second involved only him and a few close collaborators, the drafting of version 3 has been opened out in an exemplary fashion to allow as wide a participation as possible. Given Stallman's close control of his GNU project and all that pertains to it, this new style of transparent, collaborative and inclusive working is a significant development for the future.
It is important not least because it is indicative of a wholly new spirit within the GNU movement. Take, for example, the following section from one of the FAQs accompanying the licence:
Some companies effectively outsource their entire IT department to another company. Computers and applications are installed in the company's offices, but managed remotely by some service provider. In some of these situations, the hardware is locked down; only the service provider has the key, and the customers consider that to be a desirable security feature.
We think it's unfortunate that people would be willing to give up their freedom like this. But they should be able to fend for themselves, and the market provides plenty of alternatives to these services that would not lock them down. As a result, we have introduced this compromise to the draft: distributors are only required to provide Installation Information when they're distributing the software on a User Product, where the customers' buying power is likely to be less organized.
Compromise? Richard Stallman has accepted a compromise? Obviously aware of the shocking nature of this confession, the FAQ hastens to add -
This is a compromise of strategy, and not our ideals
- but even a compromise of strategy is an extraordinary shift for the hitherto unbending Stallman. It may not represent any fundamental shift away from free software purism to open source pragmatism, but it is certainly indicative of a more nuanced and sensitive approach. This can also be seen from the fact that considerable efforts have been made to improve compatibility with the Apache licence, and that the threatened tough stance against companies using free software as the basis of Software as a Service (SaaS) offerings was ditched in favour of a completely separate licence, the GNU AGPLv3, still being drafted.
The other notable aspect of the GPLv3 is that discussions about its possible adoption by software houses are framed against a background where the GPL is now widely accepted as the best licence for businesses based around free software. That is, not so much the best licence for a company's coders, but the best licence for its capitalists. The announcement by Sun that it would be adopting the GPL for Java is perhaps the clearest demonstration of this, but an increasing number of open source companies are moving from other licences to “pure