My Visit to SCO
One of the last things Chris Sontag said before he left was that SCO is not against Linux. SCO likes Linux. SCO wants to get to the point where Linux can move forward.
This may be a deep misunderstanding of the free software process. If Linux becomes encumbered to the point where commercial users must pay a fee, I expect that many independent developers will stop working on it. Linux development will slow down and may eventually stagnate. The people in charge at SCO may not understand that.
On the other hand, Chris Sontag's statement may simply have been cynical and manipulative--the sort of thing that people say to make malicious statements appear fair and open-minded, as in "Joe is a bloodthirsty cannibal, but I like him as a person".
I can't help thinking that as of this writing SCO has a market cap of around $130 million and Red Hat has nearly $300 million in cash and investments. Even at an inflated price, Red Hat could afford to buy SCO and free up Unix once and for all. Live the dream.
I am not a Linux maintainer. But I would like to suggest that the Linux maintainers treat this case as a reason to take the issue of copyright paperwork seriously.
First, I think all Linux contributors should consider their own contributions. Is there any chance that they have contributed code that is copied directly from Unix or any other non-free source? Here I'm not talking about SCO's expanded sense of derived work; I'm talking about direct copying, such as may (or may not) have occurred in the one example SCO showed me. Any such directly copied code should be rewritten in a different fashion, perhaps by somebody else.
Similarly, I think all Linux maintainers should consider the code for which they are responsible and convince themselves that the contributors did not do any direct copying. I personally doubt that anybody is intentionally copying non-free code into Linux. But mistakes can happen.
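For maintainers who want a mechanical first pass, here is a minimal sketch of the kind of check one could run. It is my own illustration, not an existing tool or anything SCO or the kernel community has endorsed: normalize two source files and flag long runs of identical lines, which is roughly the signal direct copying leaves behind. The five-line threshold is an arbitrary assumption.

    import sys

    def normalized_lines(path):
        """Read a source file and normalize each line: collapse
        whitespace and drop blank lines, so trivial formatting
        differences don't hide or create matches."""
        with open(path, errors="replace") as f:
            lines = [" ".join(line.split()) for line in f]
        return [l for l in lines if l]

    def shared_runs(a_lines, b_lines, min_run=5):
        """Report runs of at least min_run consecutive identical
        lines that appear in both files -- a crude signal of
        possible direct copying."""
        positions = {}
        for j, line in enumerate(b_lines):
            positions.setdefault(line, []).append(j)
        runs = []
        i = 0
        while i < len(a_lines):
            best = 0
            for j in positions.get(a_lines[i], []):
                k = 0
                while (i + k < len(a_lines) and j + k < len(b_lines)
                       and a_lines[i + k] == b_lines[j + k]):
                    k += 1
                best = max(best, k)
            if best >= min_run:
                runs.append((i, best))
                i += best
            else:
                i += 1
        return runs

    if __name__ == "__main__":
        a, b = sys.argv[1], sys.argv[2]
        for start, length in shared_runs(normalized_lines(a),
                                         normalized_lines(b)):
            print(f"{length} identical lines starting near "
                  f"non-blank line {start} of {a}")

Common idioms and boilerplate will produce false positives, and a real audit would need to be robust against renaming and restructuring, so any hit still requires human judgment. But even a crude check like this would catch verbatim copying of the sort SCO claims to have found.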
Removal of any copied code, if there is any, won't affect the lawsuit against IBM, but it may ease legal liability concerns for Linux users.
My next suggestion is that Linus and the Linux maintainers form a foundation to hold copyright declarations for Linux. Linus has made clear in the past that he does not want all the Linux copyrights held in the same place. While that means there is no single party who can be sued over a GPL violation, my impression is that Linus thinks that is an advantage.
However, perhaps it would be okay to require all significant Linux contributors to sign papers stating they own the code they contribute and to require their employers to also sign papers. This would be along the lines of the paperwork used by the Free Software Foundation, but it wouldn't actually be a copyright assignment.
Such paperwork would not eliminate the possibility of a mistake, nor the possibility of malicious code insertion. But I think it would make such occurrences considerably less likely. It would force people to think about the issue. It also might permit moving any legal liability for copying from Linux users to Linux contributors, which would be good for users. The increased risk for contributors might make them more careful, though hopefully not too careful.
It would be necessary for somebody to monitor accepted contributions and make sure that copyright declarations are signed by all new contributors before each release. It would be unreasonable to expect Linus or the other central maintainers to do this work.
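To give a concrete sense of how mechanical that monitoring could be, here is a minimal sketch. It assumes contributors can be identified by the author email addresses in the version-control history and that signed declarations are recorded one address per line in a text file; the file name, the tag name, and the use of a git-style history tool are all my illustrative assumptions, not an existing process.

    import subprocess

    def contributors_since(tag):
        """List author emails of all commits since the given
        release tag, as reported by git. Assumes the source tree
        is kept in a git repository (an assumption; any history
        tool with equivalent output would do)."""
        out = subprocess.run(
            ["git", "log", f"{tag}..HEAD", "--format=%ae"],
            capture_output=True, text=True, check=True).stdout
        return set(line.strip().lower()
                   for line in out.splitlines() if line.strip())

    def signed_contributors(path="declarations.txt"):
        """One email address per signed copyright declaration.
        The file name and format are illustrative assumptions."""
        with open(path) as f:
            return set(line.strip().lower() for line in f
                       if line.strip())

    if __name__ == "__main__":
        # "last-release" is a placeholder tag for the previous release.
        missing = contributors_since("last-release") - signed_contributors()
        for email in sorted(missing):
            print("no declaration on file for", email)

Whoever ran such a check before a release would then chase down the missing declarations. The point is that the bookkeeping is mechanical and need not fall on Linus or the other central maintainers.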
I would be willing to help set up such a foundation, although I don't think my help is required. The FSF started requiring copyright assignments in the wake of the threats from Unipress over the Gosling Emacs code. Perhaps the SCO lawsuit means Linux needs to start tightening up its IP processes. In an ideal world this would not be necessary, but unfortunately we must all live in this world.
My plane from San Francisco left 90 minutes late. I arrived in Salt Lake City well after midnight and got lost driving to the hotel. The next morning, I locked my keys in the car. Fortunately, Avis repair service showed up in 25 minutes with a new key, but I was then 20 minutes late getting to SCO. Rather than look like a total idiot right off the bat, I told Blake Stowell that I "had trouble with my rental car." He was very nice about it.
My plane leaving Salt Lake City that afternoon hit a seagull shortly after takeoff. We returned to the airport. After landing, the pilot told us the windshield now had a small crack, and the plane wasn't going anywhere. After disembarking, we were able to look back at the plane--a rather gory sight. I have enough travel experience that I immediately used my cell phone and booked a seat on the next flight out. When that plane left, two hours later, there was still a long line of people trying to get to San Francisco that day.
All told, on the trip I spent about $350, plus 25,000 frequent flier miles, plus 24 hours away from my family. Free software has given me a lot over the years, and I can afford it. If you want to contribute in support of my trip, please make a donation to the Free Software Foundation, the Electronic Frontier Foundation or Amnesty International.