EOF - No 2.7 Kernel?
At the 2004 Linux Kernel Summit, the core kernel developers announced they weren't creating a 2.7 development kernel anytime soon. They said they liked the way things were going and didn't want to change things. This caused a lot of confusion, so this article is an attempt to explain.
During the 2.5 kernel development cycle, the top-level maintainers' process changed. Before Linus started using the BitKeeper version control system, kernel maintainers would send Linus 10–20 patches at once, then wait for him to release a snapshot to determine whether the patches had made it in. If not, they would try again. This worked pretty well for more than ten years.
Early in the 2.5 development cycle, a huge flame war over dropped patches ended with Linus deciding to try BitKeeper. After much hacking by the BitKeeper developers to clean up some features that kernel developers needed, Linus released the 2.5.3 kernel on February 2, 2002, and announced he was going to use BitKeeper. This really didn't change the way the majority of kernel developers worked. They still sent patches to the upper-level maintainers and waited. But for the small subset of maintainers who decided to use BitKeeper, life changed a lot. They would create a BitKeeper tree, populate it with the changes they wanted to send to Linus and then point Linus to it. He would suck the patches into his tree and merge any minor conflicts with other people's work.
BitKeeper had some unexpected consequences. First, everyone had an up-to-date view of Linus' tree at any moment. A few developers, including Peter Anvin and Jeff Garzik, set up nightly snapshots of the tree, published as patches on kernel.org. They also, with the help of the BitKeeper developers, created CVS and Subversion repositories for users of those version control systems.
Knowledge of the current state of the tree meant maintainers could start sending patches to Linus faster and see when he accepted them. Instead of waiting two weeks for a new snapshot, they could send in more changes immediately. The rate of kernel development instantly increased.
Second, every patch accepted into Linus' tree started going to a mailing list, which enabled everyone to see changes. Developers could watch what was happening in all parts of the kernel, see the reasoning as explained in changelog entries and point out problems. The list widened the peer review process, allowing new bugs to be noticed sooner, while the area of development was still fresh in the developer's mind.
On October 31, 2002, kernel developers announced the 2.6 feature-freeze. On July 7, 2003, the first 2.6.0-test1 kernel was released, and the maintainer process changed again. Andrew Morton became the funnel to Linus for almost all patches. However, maintainers who used BitKeeper continued to have their trees pulled directly by Linus. Finally, on December 17, 2003, the 2.6.0 kernel was released, representing an average of 1.66 changes per hour for the 680 days of 2.5 and 2.6 development.
The next five 2.6.x kernel releases happened about every month, with 538–1,472 changes per release. Then, with the 2.6.5 kernel, things started to move much more quickly: 2.6.6 came out with 1,757 changes, and 2.6.7 had 2,306 changes. From 2.6.0 to 2.6.7, the stable kernel, at 2.2 patches per hour, was changing at a rate greater than the "development" kernel ever had. Yet the 2.6.7 kernel was the most stable Linux kernel ever, by a wide range of benchmarks and regression tests.
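The rates quoted above are easy to sanity-check. A minimal sketch of the arithmetic, using the figures from the article; the 2.6.7 release date of June 16, 2004 is an assumption filled in for illustration:

```python
from datetime import date

# 2.5/2.6 development: 680 days at an average of 1.66 changes per hour
dev_days = 680
dev_rate = 1.66  # changes per hour, as quoted in the article
total_dev_changes = dev_days * 24 * dev_rate
print(f"~{total_dev_changes:,.0f} changes over the 2.5/2.6 development cycle")

# Stable series: 2.6.0 (December 17, 2003) through 2.6.7 at 2.2 patches/hour
# (the 2.6.7 release date below is an assumption, not from the article)
stable_days = (date(2004, 6, 16) - date(2003, 12, 17)).days
stable_changes = stable_days * 24 * 2.2
print(f"~{stable_changes:,.0f} changes in {stable_days} days of stable 2.6.x")
```

Roughly 27,000 changes over the development cycle, and a stable series that is indeed merging patches faster per hour than the development series did.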
Have the core kernel developers gone mad and started to add untested code willy-nilly? No. In 2.6, Andrew Morton continued to stage all proposed patches for testing before sending them to Linus. Maintainers using BitKeeper would check the status of their patches in Andrew's tree, and if no problems were found, they would ask Linus to accept them.
So, all changes now are being tested by users in the -mm tree. This is different from how things were done before. Now, patches are tested, built, used and abused by users around the world before being deemed acceptable. If a specific patch or set of patches is found to have problems, Andrew simply drops it from his tree and forces the original developer to fix the issues.
Because patches now receive this wider testing before they go into the tree, the development process for 2.6 will consist of the following: 1) Linus releases a 2.6 kernel. 2) Maintainers flood Linus with patches that have been proven in the -mm tree. 3) After a few weeks, Linus releases a -rc kernel snapshot. 4) Everyone recovers from the barrage of changes and starts to fix any bugs found in the -rc kernel. 5) A few weeks later, the next 2.6 kernel is released and the cycle starts all over again.
However, if a set of kernel changes ever appears that looks to be quite large and intrusive, a 2.7 kernel might be forked to handle it. Linus will do this, putting the new experimental patches into that tree. Then he will continue to pull all of the ongoing 2.6 changes into the 2.7 kernel as it stabilizes. If it turns out that the 2.7 kernel is taking an incorrect direction, the 2.7 kernel will be deleted, and everyone will continue on with the 2.6 tree. If the 2.7 tree becomes stable, it either will be merged back into the 2.6 tree, or it will be declared 2.8.
All of this is being done because kernel developers are working very well together in the current situation. Large changes that are arguably rather intrusive, like the change from 8k to 4k kernel stacks, are being merged into the “stable” kernel series. Users have access to the latest features with a greatly reduced delay time. Distributions can provide a more stable kernel to their customers, as they are not forced to backport features from the “development” kernel into their “stable” kernel, as was the case during the 2.5 development series.
Quicker development ensures that the in-kernel API will change constantly. This always has been the case for Linux, but now it is even more pronounced. Thus, any kernel modules that are not kept in the main kernel.org tree will quickly stop working. It is essential that these modules be added to the main kernel tree. That way, any API changes also are made to the affected modules, and all users benefit from the added features these modules provide.
The process really hasn't changed suddenly; it has evolved slowly into something that has been working quite well. So well, in fact, that no one outside the kernel community realized it had changed, only that they were using the best Linux kernel ever.
Greg Kroah-Hartman currently is the Linux kernel maintainer for a variety of different driver subsystems. He works for IBM, doing Linux kernel-related things, and can be reached at firstname.lastname@example.org.