The Linux Kernel Summit

At the Linux Kernel Summit, held March 28 and 29, 2001 at the Hyatt Hotel in San Jose, the programmers of the Linux kernel met face to face to determine the future directions of the kernel. The summit was structured as a number of talks separated by chat time. Attendance was strong, about 60 kernel programmers and a few carpetbaggers like myself. I was attending to provide an OSDN presence at the summit.

A webcast of the summit will be up April 10th on the OSDN web site. I recommend listening to the talks there to learn more about what went on.

Day 1

After registration and danish, Ted Ts'o opened the show by thanking the sponsors, IBM, AMD and EMC. The venue was well appointed, with a wireless 802.11b network (and cards available for attendees) and a power strip on each table for laptops--you've never seen so many laptops. The network was somewhat sporadic, but as I walked around the room I noticed a lot of compiling going on; plenty of software development was happening on the spot. While I was there, Dave Miller wrote a utility to modulate the speed of the CPU fans based on the temperature reading from the motherboard.

Each talk lasted about one hour, with ample time for questions and interruptions built in.

Talk 1: Requirements for a High Performance Database (Lance Larsh, Oracle)

The first talk, given by Lance Larsh of Oracle, covered the requirements for running a high-performance database under Linux. Its inclusion in the show alone should indicate this is an important issue for the kernel group. The back-and-forth was exciting and important for both sides. One interesting point was the statement that raw (unbuffered) device access isn't as important as previously considered, because database administrators hate it and it makes a database harder to back up and restore.

That was followed by a spirited discussion of O_SYNC and O_DSYNC (flags to the open system call to force synchronous I/O) and the changes between 2.2 and 2.4 on multiple-CPU machines running SCSI. Further discussions ranged from shared-memory paging and page sizes to the process memory consumed by page tables. The problem with shared page tables, Linus noted, is that you would need some sort of page-table lock or semaphore, and it wasn't going to happen that way.

Talk 2: SCTP Linux Kernel Implementation (La Monte H.P. Yarroll, Motorola)

SCTP is the Stream Control Transmission Protocol, and La Monte has been working on a kernel module to implement it. Standardized by the IETF, it is a peer to TCP and UDP. It's described as a reliable, message-oriented transport carrying multiple ordered message streams, with support for automatic network failover--think multihomed multimedia serving.

SCTP requires a number of changes to established interfaces, notably the bind(2) system call (for SCTP, bindx). Since an SCTP endpoint can span a number of addresses, it needs to bind to sets of addresses rather than just one. The X/Open folks were reluctant to advocate multiple bind calls for this, favoring a set passed in a single request.

A proper implementation needs some other features from the network stack as well, and some of the more "single-threaded" parts of the networking code present challenges for SCTP, too.


There was some very real action going on during the breaks. Ted Ts'o, the organizer, anticipated this and planned 30-minute breaks between talks. It's all very "old home week" for these guys, and you can tell that many haven't seen each other for some time. For myself, it was good meeting people I had previously known only through e-mail and seeing those whom I run into only at these kinds of conferences. I don't want to focus too much on personality in this article, but it's all very chummy and fun.

It's funny to think that Linux and the surrounding software was all developed over the Net and not in person. Of course, you could never bring everyone involved into one place, but it's impressive that the Open Source/Free Software movement can change the world so effectively while not being in physical proximity. I don't want to get too Jon Katzian here, but you can be assured that free software is alive, well and thriving.

Talk 3: 2.5 Block Device To-Do (Stephen Tweedie, Red Hat, and Jens Axboe)

Ted jokingly called this "a completely non-controversial and inconsequential talk". The block device layer is absolutely vital and important and, well, Stephen is the guy.

Starting with scalability, Stephen spoke about the naming of devices, the need for bounce buffering to be optional for large memory support, the 2TB device support limit, the need to drop the 1k disk alignment, and issues regarding SCSI (LUN rescanning) and SMP scalability in the SCSI layers.

On robustness, Stephen commented on the need to deal intelligently with different kinds of errors. Currently the error response can be brittle, taking a drive off-line for what may be a fairly inconsequential sector error. He also mentioned that, to date, read failures are not distinguished from write failures.

In the realm of performance, Stephen mentioned issues including buffer-efficiency problems and queuing. He feels a per-spindle approach to queuing is preferable to a per-unit one.

During his discussion of the extra features he was considering, one was the possibility of deferring atime updates on sleeping drives. This would make for more power-efficient Linux laptops and desktops, which, as a Californian, I can appreciate. Atime is set whenever a file is accessed by any program, even if the file is not changed. That record could be considered useful, but not useful enough to spin up the drive and spend vital power on. One of Nate Myers's tricks back when he ran Linux Laptops was to mount drives with atime updates shut off to save power, and it is a great trick to know.
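That trick is expressed as a mount option; a sketch of what it looks like (device and mount point here are hypothetical placeholders):

```shell
# Hypothetical /etc/fstab entry: mount the home partition with noatime,
# so reads alone never dirty the inode or spin up a sleeping drive.
/dev/hda3  /home  ext2  defaults,noatime  1  2

# The same option can be applied to an already-mounted filesystem:
#   mount -o remount,noatime /home
```

The kernel then skips the access-time update entirely on reads, which is exactly the behavior the deferred-atime proposal would approximate without giving up the timestamps.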