Pass the Bug, Collect $500
Bugs are a reality of software development, and a pain for both coders and users. Security bugs are a particularly nasty variety, and in an effort to kill as many as possible, Google is now coughing up cash for catching Chrome and Chromium glitches.
The new program, modeled on Mozilla's successful Bug Bounty program, will pay rewards to bug-catchers who report "interesting and original vulnerabilities" in the code of either the Open Source Chromium browser, or Google's Chrome implementation. Google's Chris Evans, who announced the program on the official Chromium blog, described it as both a "token of our appreciation" for existing contributors and an incentive for new participation.
Only security-related bugs will be considered, with emphasis on those classified as "high" and "critical" severity, though any "clever vulnerability" could qualify. Only the first report of a particular bug will be rewarded, with the first entry in the project's bug tracker counting as the earliest report. A reward committee — composed of Adam Barth, Chris Evans, Neel Mehta, SkyLined and Michal Zalewski — will determine which bugs are eligible, as well as whether a specific report constitutes one or multiple vulnerabilities.
Both Chrome and Chromium bugs will be considered, whether in the Dev, Beta, or Stable channel, provided the glitch occurs in the project's code. Plugins, extensions, and other add-on code from third parties are ineligible. Shared components, however, could be eligible, provided they ship in the browser itself — Evans cited "WebKit, libxml, image libraries, compression libraries, etc" as examples. The post does not give a clear answer on whether advance notice before public disclosure is required, saying only that "we encourage responsible disclosure."
The standard payment for eligible bugs will be $500, with a special — and comical — reward of $1337 for "particularly severe or particularly clever" vulnerabilities. In addition to the cash, the selected individuals will be credited in Chrome's release notes and nominated for Google's "thank you" page. Contributors to the project are eligible, though those who "worked on the code or review in the area in question" are not. The standard legal disclaimers apply — no payments to U.S. export-restricted countries, no minors unless represented by an adult, individuals are responsible for taxes and other legal obligations, and so on.
No rewards have been announced thus far, though Evans indicated that the first would be prominently featured on the Chrome release blog. Whether the promise of bucks for bugs will result in an influx of security searchers remains to be seen, but anyone who happens to catch a glimpse of a glitch would do well to turn it in. After all, who couldn't do with an extra $1337?
Justin Ryan is a Contributing Editor for Linux Journal.