Is Linux now a slave to corporate masters?
Does it matter who pays the salaries of Linux kernel developers? If so, how much, and in what ways?
The threads fan out from Linux Kernel Development (April 2008) — a report by Greg Kroah-Hartman, Jonathan Corbet and Amanda McPherson. The report covers a lot of ground. Here are the subheads:
- Development Model
- Release Frequency
- Change Rates
- Source Size
- Who is Doing the Work
- Who is Sponsoring the Work
- Why Companies Support Linux Development
Guess which one has been getting the most attention?
One of the highlights: "over 70% of all kernel development is demonstrably done by developers who are being paid for their work". 14% is contributed by developers who are known to be unpaid and independent, and 13% by people who may or may not be paid (unknown), so the amount done by paid workers may be as high as 85%. The Linux kernel, then, is largely the product of professionals, not volunteers.
Tom Slee, responding to the report, writes:
So Linux has become an economic joint venture of a set of companies, in the same way that Visa is an economic joint venture of a set of financial institutions. As the Linux Foundation report makes clear, the companies are participating for a diverse set of commercial reasons. Some want to make sure that Linux runs on their hardware. Others want to make sure that the basis of their distribution business is solid. And so on, and none of these companies could achieve their goals independently. In the same way, Visa provides services in many different locations around the world in different sizes and types of stores. Some banks need their service mainly in one country, some in another, but when they work together they all get to provide their services all around the world.
...the Linux Foundation report has made clear that open source has crossed its commercial Rubicon, and there is probably no going back.
In Rough Type, Nick Carr writes:
A new report from the Linux Foundation reveals the extent to which the most famous and successful open source software project - the development of the Linux operating system - has shifted from being a volunteer effort to being a corporate initiative.
Nick goes on to quote Tom Slee's piece, and adds,
There's nothing particularly surprising in the shift from the volunteer to the corporate model - it tends to be what happens when lots of money enters the picture - but it does reveal that while Net-based "social production" efforts may be unprecedented in their scale and unusual in their technology-mediated structure, they are no more immune, or even resistant, to being incorporated into established market systems than any other type of labor that produces commercially valuable goods. The shift in Linux kernel development from unpaid to paid labor, from volunteers to employees, suggests that the Net doesn't necessarily weaken the hand of central management or repeal all the old truths about business organization.
In TechnologyOwl, Timothy Lee pushes back with The Open Source Model Is About Organization, Not Who Signs Your Paycheck. Addressing Nick specifically, Tim writes,
For starters, most of the people contributing to the kernel are professional programmers, and most professional programmers have jobs in the software industry. So it's totally unsurprising that most kernel contributors work for software companies.
But Carr's observation also misses the point in a deeper way. What makes the open source model unique isn't who (if anyone) signs the contributors' paychecks. Rather, what matters is the way open source projects are organized internally. In a traditional software project, there's a project manager who decides what features the product will have and allocates employees to work on various features. In contrast, there's nobody directing the overall development of the Linux kernel. Yes, Linus Torvalds and his lieutenants decide which patches will ultimately make it into the kernel, but the Red Hat, IBM, and Novell employees who work on the Linux kernel don't take their orders from them. They work on whatever they (and their respective clients) think is most important, and Torvalds's only authority is deciding whether the patches they submit are good enough to make it into the kernel. Carr suggests that the non-volunteer status of Linux contributors proves that the Internet "doesn't necessarily weaken the hand of central management," but that's precisely what the open source development model has done. There is no "central management" for the Linux kernel, and it would probably be a less successful project if there were.
Clay Shirky makes a similar point:
What that kind of analysis is missing is that IBM is paying engineers to work on projects that IBM doesn't own, or solely direct. You pay these engineers -- but of all the relationships between senior management and line employees, the fact you are paying them is about the least important, institutionally. The idea that the minute you pay people to do something, you have the right to manage them and the right to completely take over that work for the benefit of the company -- that's not true.
IBM is not producing that code, IBM engineers are. IBM is paying those people because it's getting value out of them -- Linux creates value for the enterprise, it lowers our cost of managing software, it increases peoples' budgets for hardware and services -- but there's this crazy middle step where Linux is not now and cannot be owned or controlled by IBM. Linux is a brutal technical meritocracy, and there is no senior manager at IBM who can say, "I don't care what the kernel engineers think, I want this." They can't put it into the product without appealing to people who don't work for them. If they announced a strategic change in the kernel they would be laughed out of the room. They have given up the right to manage the projects they are paying for, and their competitors have immediate access to everything they do. It's not IBM's product.
There is a kind of perverse misreading of the change here to suggest that as long as there are paid programmers working on the project, it's not developing in any way different from what's going on inside traditional organizations. It badly misunderstands how radical it is to have IBM and Novell effectively collaborating with no contractual agreement between them, and no right to expect that their programmers' work is going to be contributed to the kernel if people external to those organizations don't like it. And that's a huge change.
When people read those statistics, they think, "If there's a salary, then all the other trappings of management must go along with it." Not only is that not true, it actually blinds you to the fact that paying someone a salary without being able to direct their work is probably the biggest challenge to managerial culture within a business that one can imagine.
Now, my own few cents' worth.
First, if Tim and Clay are right, the language of the report needs some debugging. For example, this line:
What we see here is that a small number of companies are responsible for a large portion of the total changes to the kernel. But there is a "long tail" of companies which have made significant changes.
The operative noun there is companies, not engineers. Then there's this:
The list of companies participating in Linux kernel development includes many of the most successful technology firms in existence. None of these companies are supporting Linux development as an act of charity; in each case, these companies find that improving the kernel helps them to be more competitive in their markets.
While there is a difference between "improving the kernel" and Nick's "an economic joint venture", the stretch isn't a big one. In fact, it's one I'd expect quite a few people to make.
Second, the essential role of Linux in the growing world of utility computing — notably search and big back-end Web services such as Amazon's S3 and EC2 — is right up the alley of Nick's new book The Big Switch: Rewiring the World, from Edison to Google. There Nick describes an emerging networked computing future dominated by a few large companies providing computing and related services as pure utilities. Microsoft's Live Mesh, announced a few days ago, appears to be another one of these.
Third, in all the conversations I've had over the years with kernel developers, none has ever copped to obeying commands from corporate overlords to bias kernel development in favor of the company's own commercial ambitions. In fact, I've only heard stories to the contrary.
This is from my Geek Cruise report in November 2005, where I'm reporting a conversation with Andrew Morton:
Andrew went out of his way to make clear, without irony, that the symbiosis between large vendors and the Linux kernel puts no commercial pressure on the kernel whatsoever. Each symbiote has its own responsibilities. To illustrate, he gave the case of one large company application.
The (application) team don't want to implement (something) until it's available in the kernel. One of the reasons I'd be reluctant to implement it in the kernel is that they haven't demonstrated that it's a significant benefit to serious applications. They haven't done the work to demonstrate that it will benefit applications. They're saying "We're not going to do the work if it's not in the kernel". And I'm saying "I want to see it will benefit the kernel if we put it in".
He adds, "On the kernel team we are concerned about the long-term viability and integrity of the code base. We're reluctant to put stuff in for specific reasons where a commercial company might do that." He says there is an "organic process" involved in vendor participation in the kernel. Earlier this year I had a conversation with IBM's Dan Frye in which he said the same thing, and that it had taken IBM a number of years to learn how to adapt to the kernel development process, rather than vice versa. Andrew explains,
Look for example at the IBM engineers that do work on the kernel. They understand (how it works) now. They are no longer IBM engineers that work on the kernel. They're kernel developers that work for IBM. My theory here is that if IBM management came up to one of the kernel developers and said "Look, we need to do that", the IBM engineer would not say, "Oh, the kernel team won't accept that". He'd say "WE won't accept that". Because now they get it. Now they understand the overarching concern we have for the coherency and longevity of the code base.
Given that now these companies have been at it sufficiently long, they understand what our concerns are about the kernel code base. If IBM need a particular feature, they can get down and put it in the kernel. Just as they would for AIX. There are some constraints about how they do that, however, and they understand that.
But it has to be good for the kernel. And good for supporting, as Andrew puts it, "serious applications".
For those not involved in that process, "good for the kernel" is a hard concept to grasp. In fact, I'm not sure I would have grasped it myself if I hadn't spent a week on a boat getting schooled by Andrew, Ted Ts'o and a bunch of other kernel developers.
In that same piece, I suggested that Linux development resembles that of a species, rather than of a commercial project:
Kernel development is not about Moore's Law. It's about natural selection, which is reactive, not proactive. Every patch to the kernel is adaptive, responding to changes in the environment, as well as to internal imperatives toward general improvements on what the species is and does.
We might look at each patch, each new kernel version, even the smallest incremental ones, as a generation slightly better equipped for the world than its predecessors. Look at each patch submission -- or each demand from a vendor that the kernel adapt to suit their needs in some way -- as input from the environment to which the kernel might adapt.
We might look at the growth of Linux as that of a successful species that does a good job of adapting, thanks to a reproductive cycle that shames fruit flies. Operating systems, like other digital life forms, reproduce exuberantly. One cp command or ctrl-d and you've got a copy, ready to go -- often into an environment where the species might be improved some more, patch by patch. As the population of the species grows, and more patches come in, the kernel adapts and improves.
These adaptations are more often reactive than proactive. This is even (or perhaps especially) true for large companies such as IBM and HP, which might like to see proactive changes made to the kernel to better support their commercial applications.
Responding on his blog, Greg called that "what I think is one of the most insightful descriptions about what the Linux kernel really is".
Still, questions about corporate influence on kernel development have been raised. Such as, How do companies influence Linux kernel development, beyond paying developers?
Will the answers expose the kernel as a "corporate initiative"? I doubt it, but I'm not the one who needs to be convinced.
Doc Searls is Senior Editor of Linux Journal
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
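The find-plus-grep combination described above can be sketched as a short shell session. This is a minimal illustration, not material from the webinar: the directory layout under /tmp/demo and the search string "ERROR" are invented for the example.

```shell
# Build a small illustrative tree with two .log files.
mkdir -p /tmp/demo/home/alice
printf 'boot ok\nERROR: disk full\n' > /tmp/demo/home/alice/app.log
printf 'all quiet\n' > /tmp/demo/home/alice/quiet.log

# Find every .log file under the tree and search each for "ERROR".
# -type f restricts matches to regular files; -exec ... {} + hands
# all matched files to a single grep invocation; -H prefixes each
# matching line with the name of the file it came from.
find /tmp/demo/home -type f -name '*.log' -exec grep -H 'ERROR' {} +
```

The same job can also be strung together as a pipeline, find ... -print0 | xargs -0 grep -H, which is the classic erector-set form and copes with filenames containing spaces.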
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register now!