Google Adjusts WebM License
When Google announced the initial release of WebM — its collaborative drive to create a new open video format for HTML5 — there was much excitement within the Open Source community. Amid the excitement, however, was concern about the project's licensing, concern that quickly led to calls for change.
On Friday, Google announced that, in response to the community's concerns, a "small change" was made to WebM's license, restoring the "pure BSD nature" of the license as well as GPL compatibility. The change modifies the license's patent clause to mirror the patent clauses of the GPLv3 and Apache licenses, which separate patent rights from copyright rights. Under the original license, anyone bringing patent litigation against Google would have surrendered all rights granted by the license; the new language clarifies that only the patent rights would be terminated, the same penalty incurred under the GPLv3 and Apache licenses.
Google's Open Source Programs Manager Chris DiBona noted that in making the change, the company has avoided creating a new Open Source license, a practice which many consider harmful to Open Source in general. Additionally, Google updated other portions of the license and supporting documentation to clarify the license terms and the rights granted under it.
Justin Ryan is a Contributing Editor for Linux Journal.