EOF - Getting Real about the Ideal
Solutions might be useful, but problems are what make stories interesting. That's why I like reading the Linux-Kernel Mailing List (LKML) and the KernelTrap Web site. I'm no hacker, and most of the work that's discussed there is too arcane for me. But there still are problems to follow, and most of them lead somewhere.
Take the thread New Kernel Bugs, started by Natalie Protasevich on November 13, 2007. Andrew Morton followed by noting “no response from developers” after most of the bugs, concluding:
So I count around seven reports that people are doing something with and 27 that have been just ignored.
Three of these reports have been identified as regressions. All three of those remain unresponded to.
After many posts about particulars, David Miller added, “I think you like just saying 'No response from developers' over and over again to make some point about how developers are ignoring lots of bugs. That's fine, but at least be accurate about it.”
Andrew replied, “Do you believe that our response to bug reports is adequate?”
David came back with:
Do you feel that making us feel and look like shit helps?
...When someone like me is bug fixing full time, I take massive offense to the impression you're trying to give, especially when it's directed at the networking.
So turn it down a notch Andrew.
That doesn't answer my question.
See, first we need to work out whether we have a problem. If we do this, we can then think about what to do about it.
I tried to convince the 2006 KS attendees that we have a problem and I resoundingly failed. People seemed to think that we're doing OK.
But it appears that data such as this contradicts that belief.
This is not a minor matter. If the kernel is slowly deteriorating, then this won't become readily apparent until it has been happening for a number of years. By that stage, there will be so much work to do to get us back to an acceptable level that it will take a huge effort. And it will take a long time after that for the kernel to get its reputation back.
So it is important that we catch deterioration early if it is happening.
Ingo Molnar followed with a long post that ended with:
Paradoxically, the “end product” is still considerably good quality in absolute terms because other pieces of our infrastructure are so good and powerful, but QA is still a “weak link” of our path to the user that reduces the quality of the end result. We could really be so much better without any compromises that hurt.
Much discussion among many participants followed, about the “new development model” and about policies and practices around bug-fixing, patching and, in general, debugging the debugging process. The thread ran to more than 100 posts, near as I can bother to count, over two days.
What stands out for me is how participatory it all is. Even its disorganization has organized qualities to it. What organizes it, I think, is respect for actual contribution. If it doesn't help, the principle says, it doesn't matter. There is gravity there. It keeps conversation grounded in the realities of actual contribution.
Linus has been saying this kind of thing for years. You can hear it again in the interview excerpted in the UpFront section of this Linux Journal issue. You also hear something new concerning the social side of kernel development. Here's what Linus says:
So, the technical sides are often easier in the sense that I don't get frustrated. Okay, we've had a bug and we've hit our head against a technical bug for a couple months and, yes, that can be slightly frustrating, but at the same time, you always know it's something that you are going to solve and...I never worry about that.
The social side is maybe a bit more difficult in the sense that that can be really frustrating and sometimes you don't solve the social problems and people get upset, and I think that's very interesting too. I mean...if everybody was easy and everybody was all pulling in the same direction, it wouldn't be as fun and interesting. And it's different and also it changes from time to time. Sometimes we concentrate on technical problems and then occasionally, happily fairly seldom, there comes this perfect storm of social issues that start up, and one flame war perhaps brings out some other issues that people have had and have been kind of simmering under the surface....
Outside this small world it has become fashionable to talk about “social networks” and point to Facebook and MySpace, with their millions of users and zillions of posts, as examples of those. Perhaps they are. But there's a difference between those and the societies of constructive problem-solvers who create the infrastructure on which civilization relies. One welcomes, and even values, noise. The other one doesn't. Which would you rather build on?
The trick is knowing what goes into what you rely on. With open-source code, and open development methods—including discussion among developers themselves—you can do that. You can know. Or at least try to know.
At their best, humans are creatures that try to know what's going on. But humans also aren't perfect. No species is. Life is experimental. Behavior, like the beings that commit it, is all prototype. So are developments amidst crystals, weather, geology, stars and galaxies. All is alpha and beta, and we never get to omega. Nor should we. Getting better is far more interesting than being perfect. You can build toward the ideal. But you use what's real.
Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.