EOF - Getting Real about the Ideal
Solutions might be useful, but problems are what make stories interesting. That's why I like reading the Linux-Kernel Mailing List (LKML) and the Kernel Trap Web site. I'm no hacker, and most of the work that's discussed there is too arcane for me. But, there still are problems to follow, and most of them lead somewhere.
Take the thread New Kernel Bugs, started by Natalie Protasevich on November 13, 2007. Andrew Morton followed by noting “no response from developers” after most of the bugs, concluding:
So I count around seven reports that people are doing something with and 27 that have been just ignored.
Three of these reports have been identified as regressions. All three of those remain unresponded to.
After many posts about particulars, David Miller added, “I think you like just saying 'No response from developers' over and over again to make some point about how developers are ignoring lots of bugs. That's fine, but at least be accurate about it.”
Andrew replied, “Do you believe that our response to bug reports is adequate?”
David came back with:
Do you feel that making us feel and look like shit helps?
...When someone like me is bug fixing full time, I take massive offense to the impression you're trying to give, especially when it's directed at the networking.
So turn it down a notch Andrew.
That doesn't answer my question.
See, first we need to work out whether we have a problem. If we do, then we can think about what to do about it.
I tried to convince the 2006 KS attendees that we have a problem and I resoundingly failed. People seemed to think that we're doing OK.
But it appears that data such as this contradicts that belief.
This is not a minor matter. If the kernel is slowly deteriorating, then this won't become readily apparent until it has been happening for a number of years. By that stage, there will be so much work to do to get us back to an acceptable level that it will take a huge effort. And it will take a long time after that for the kernel to get its reputation back.
So it is important that we catch deterioration early if it is happening.
Ingo Molnar followed with a long post that ended with:
Paradoxically, the “end product” is still considerably good quality in absolute terms because other pieces of our infrastructure are so good and powerful, but QA is still a “weak link” of our path to the user that reduces the quality of the end result. We could really be so much better without any compromises that hurt.
Much discussion among many participants followed, about the “new development model” and about policies and practices around bug-fixing, patching and, in general, debugging the debugging process. The thread ran to more than 100 posts, near as I can bother to count, over two days.
What stands out for me is how participatory it all is. Even its disorganization has organized qualities to it. What organizes it, I think, is respect for actual contribution. If it doesn't help, the principle says, it doesn't matter. There is gravity there. It keeps conversation grounded in the realities of actual contribution.
Linus has been saying this kind of thing for years. You can hear it again in the interview excerpted in the UpFront section of this Linux Journal issue. You also hear something new concerning the social side of kernel development. Here's what Linus says:
So, the technical sides are often easier in the sense that I don't get frustrated. Okay, we've had a bug and we've hit our head against a technical bug for a couple months and, yes, that can be slightly frustrating, but at the same time, you always know it's something that you are going to solve and...I never worry about that.
The social side is maybe a bit more difficult in the sense that that can be really frustrating and sometimes you don't solve the social problems and people get upset, and I think that's very interesting too. I mean...if everybody was easy and everybody was all pulling in the same direction, it wouldn't be as fun and interesting. And it's different and also it changes from time to time. Sometimes we concentrate on technical problems and then occasionally, happily fairly seldom, there comes this perfect storm of social issues that start up, and one flame war perhaps brings out some other issues that people have had and have been kind of simmering under the surface....
Outside this small world it has become fashionable to talk about “social networks” and point to Facebook and MySpace, with their millions of users and zillions of posts, as examples of those. Perhaps they are. But there's a difference between those and the societies of constructive problem-solvers who create the infrastructure on which civilization relies. One welcomes, and even values, noise. The other one doesn't. Which would you rather build on?
The trick is knowing what goes into what you rely on. With open-source code, and open development methods—including discussion among developers themselves—you can do that. You can know. Or at least try to know.
At their best, humans are creatures that try to know what's going on. But humans also aren't perfect. No species is. Life is experimental. Behavior, like the beings that commit it, is all prototype. So are developments amidst crystals, weather, geology, stars and galaxies. All is alpha and beta, and we never get to omega. Nor should we. Getting better is far more interesting than being perfect. You can build toward the ideal. But you use what's real.
Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.