EOF - Getting Real about the Ideal

Nothing's perfect. That's why we'll never stop debugging everything.

Solutions might be useful, but problems are what make stories interesting. That's why I like reading the Linux-Kernel Mailing List (LKML) and the KernelTrap website. I'm no hacker, and most of the work that's discussed there is too arcane for me. But there are still problems to follow, and most of them lead somewhere.

Take the thread “New Kernel Bugs”, started by Natalie Protasevich on November 13, 2007. Andrew Morton followed by noting “no response from developers” after most of the bugs, concluding:

So I count around seven reports that people are doing something with and 27 that have been just ignored.

Three of these reports have been identified as regressions. All three of those remain unresponded to.

After many posts about particulars, David Miller added, “I think you like just saying 'No response from developers' over and over again to make some point about how developers are ignoring lots of bugs. That's fine, but at least be accurate about it.”

Andrew replied, “Do you believe that our response to bug reports is adequate?”

David came back with:

Do you feel that making us feel and look like shit helps?

...When someone like me is bug fixing full time, I take massive offense to the impression you're trying to give, especially when it's directed at the networking.

So turn it down a notch Andrew.

Andrew replied:

That doesn't answer my question.

See, first we need to work out whether we have a problem. If we do, then we can think about what to do about it.

I tried to convince the 2006 KS attendees that we have a problem and I resoundingly failed. People seemed to think that we're doing OK.

But it appears that data such as this contradicts that belief.

This is not a minor matter. If the kernel is slowly deteriorating, then this won't become readily apparent until it has been happening for a number of years. By that stage, there will be so much work to do to get us back to an acceptable level that it will take a huge effort. And it will take a long time after that for the kernel to get its reputation back.

So it is important that we catch deterioration early if it is happening.

Ingo Molnar followed with a long post that ended with:

Paradoxically, the “end product” is still considerably good quality in absolute terms because other pieces of our infrastructure are so good and powerful, but QA is still a “weak link” of our path to the user that reduces the quality of the end result. We could really be so much better without any compromises that hurt.

Much discussion among many participants followed, about the “new development model” and about policies and practices around bug-fixing, patching and, in general, debugging the debugging process. The thread ran to more than 100 posts, near as I can bother to count, over two days.

What stands out for me is how participatory it all is. Even its disorganization has organized qualities to it. What organizes it, I think, is respect for actual contribution. If it doesn't help, the principle says, it doesn't matter. There is gravity there. It keeps conversation grounded in the realities of actual contribution.

Linus has been saying this kind of thing for years. You can hear it again in the interview excerpted in the UpFront section of this Linux Journal issue. You also hear something new concerning the social side of kernel development. Here's what Linus says:

So, the technical sides are often easier in the sense that I don't get frustrated. Okay, we've had a bug and we've hit our head against a technical bug for a couple months and, yes, that can be slightly frustrating, but at the same time, you always know it's something that you are going to solve and...I never worry about that.

The social side is maybe a bit more difficult in the sense that that can be really frustrating and sometimes you don't solve the social problems and people get upset, and I think that's very interesting too. I mean...if everybody was easy and everybody was all pulling in the same direction, it wouldn't be as fun and interesting. And it's different and also it changes from time to time. Sometimes we concentrate on technical problems and then occasionally, happily fairly seldom, there comes this perfect storm of social issues that start up, and one flame war perhaps brings out some other issues that people have had and have been kind of simmering under the surface....

Outside this small world it has become fashionable to talk about “social networks” and point to Facebook and MySpace, with their millions of users and zillions of posts, as examples of those. Perhaps they are. But there's a difference between those and the societies of constructive problem-solvers who create the infrastructure on which civilization relies. One welcomes, and even values, noise. The other one doesn't. Which would you rather build on?

The trick is knowing what goes into what you rely on. With open-source code, and open development methods—including discussion among developers themselves—you can do that. You can know. Or at least try to know.

At their best, humans are creatures that try to know what's going on. But humans also aren't perfect. No species is. Life is experimental. Behavior, like the beings that commit it, is all prototype. So are developments amidst crystals, weather, geology, stars and galaxies. All is alpha and beta, and we never get to omega. Nor should we. Getting better is far more interesting than being perfect. You can build toward the ideal. But you use what's real.

Doc Searls is Senior Editor of Linux Journal. He is also a Visiting Scholar at the University of California at Santa Barbara and a Fellow with the Berkman Center for Internet and Society at Harvard University.
