Interview with Ted Ts'o

Ted discusses his work on the Linux kernel, Linux International, Linux Standard Base and other areas of the Open Source community.

Don Marti and Richard Vernon recently had the rare opportunity of taking some time from Ted Ts'o's tight schedule to talk about his role with the Linux kernel, IBM and the Linux community. Ted seems to be everywhere in the Linux community—inside the kernel and out. He is currently a senior technical staff member of the Linux Base Technology Team for IBM's Linux Technology Center. He also chairs the Technical Board of Linux International, serves on the Board of Directors for the Free Standards Group, is a member of the Internet Engineering Task Force and serves on the Security Area Directorate of the IETF. Previously, he worked at MIT in Information Systems, where he was the development team leader for Kerberos. Through it all he's played a principal role in the development of the Linux kernel.

LJ Many Linux enthusiasts know you for your work on the Linux kernel, but are perhaps less familiar with your service to Linux International and the Free Standards Group. Could you talk a little about your respective capacities with those organizations?

Ted Well, I chair the Technical Board of Linux International. Linux International is a vendor group that got started back in the good old days of Linux startups—when Bob Young would personally show up at tradeshows and help hand out CDs containing the Slackware distribution. So from the beginning, Linux International had as a strong emphasis the concept that its members should band together to help “grow the pie”.

The technical board was there to help make sure the organization stayed connected to its technical roots and later picked up the responsibility to examine applications to the Linux International Development Grant Fund, which is still operating today.

Very recently, Linux International has begun considering a new program that will focus on strengthening the various local Linux Users' Groups and working with them to support people who are interested in doing various types of “Linux Advocacy” (i.e., pushing Linux to be used in local public schools or in the corporate infrastructure). This is an idea that I've been discussing with Jon “maddog” Hall, and I think it's a great initiative. I hope it works out well.

As for the Free Standards Group, I currently serve on the Board of Directors for the FSG. The FSG provides a legal and financial home for the Linux Standards Base (LSB) and the Linux Internationalization (Li18nux) efforts. I was involved with the LSB from almost the very beginning because I believe in providing a stable environment so that members of the community can release binary distributions of programs that will run on any Linux system of the same architecture, regardless of the distribution that the user chose to use.

I was a student systems programmer in MIT Project Athena during the height of the UNIX wars and saw firsthand how incompatibilities between the various UNIXes allowed Microsoft to dominate the desktop. So as a result, I've always thought that the LSB is incredibly important for the Linux community.

LJ How do you feel about the recent progress of LSB (the release of LSB 1.1), and what do you feel is the future for LSB—will it be something that evolves into a ubiquitous standard? What might be some of the advantages of LSB for developers who distribute software in source-code form?

Ted Progress on the LSB front has been slow but steady. LSB 1.1 isn't perfect, but it's at the stage where it should be possible for both distributions and independent software vendors to start implementing against it. We expect to start seeing LSB-compliant distributions and application programs within a year.

The LSB standard is working to make it possible for third-party application programs to be installed and run across multiple distributions. Initially, the majority of packages on a Linux system will still be provided by the distribution and will not be LSB-compliant packages.

Hopefully, as the distributions start seeing the advantages of the LSB, and as demand increases for more commonality between the various distributions, the LSB will help encourage distributions to start converging gradually, as new features are added. This will act to benefit all developers, even those who distribute code in source form already.

ABI compatibility, while most important to people or companies that distribute software in binary form, is also important to people who are using exclusively open-source software. For example, some library maintainers don't bother to change symbol names or, in some cases, even library version numbers when they make incompatible library changes. This can cause all sorts of headaches if two application programs installed on the same system need to reference different, incompatible versions of the same library. An extreme example of ABI instability can be found in libgal (the GNOME Applications Library), which has had 19 different, incompatible ABI changes in about as many months. Even when source is available, this kind of ABI instability is extremely inconvenient.

LJ What areas will FSG look at standardizing next?

Ted Well, there are two groups that have approached the FSG. One is interested in standardizing some kind of high-level printing libraries interface. Another group is interested in standardizing library interfaces for clusters. In general, the FSG doesn't try to find new technologies to standardize; instead it allows people who are interested in forming a workgroup to work on some standard to do so. The FSG Board simply insists that the process is open and, to the extent possible, that all interested parties are at the table while the standard is being developed.

LJ Could you briefly describe your work at IBM?

Ted Well, I'm continuing to work on the kernel, especially the ext2/ext3 filesystem work. I've also been consulting with some of the other teams at the Linux Technology Center, helping them with design issues and helping them get their contributions accepted more easily into the mainline Linux kernel.

LJ What would you consider to be some of the most significant developments of the 2.5 series kernel?

Ted It's early in the 2.5 development series, so it's really hard to say right now. I'd say that scalability to larger machines, and associated sub-goals such as reducing or removing the need for the global “big kernel lock”, is certainly going to be one of the more significant efforts in the 2.5 series. The introduction of the O(1) scheduler is also quite significant. Work to continue improving the virtual memory subsystem and the I/O subsystem is also ongoing and ultimately very important. With the exception of a few new features, such as better ACPI support and asynchronous I/O support, I suspect most of the improvements in the 2.5 kernel will be performance-related.

That being said, as Linus has said—and I very much agree—a lot of the exciting new work happening in the Linux community isn't necessarily happening in the kernel, but in user land. For example, who would have thought, five years ago, that Linux would have not one but two graphical desktop environments under development?

LJ You are the author of /dev/random. How will the Linux kernel hackers approach crypto-enabling technology in the kernel? Cautiously, or are the developers jumping in with both feet now that US export restrictions are looser?

Ted Well, Peter Anvin did some wonderful work laying the legal groundwork (thanks must also go to Transmeta for paying the legal bills) so that cryptographic software could be distributed from the kernel.org FTP distribution network. There are certainly some people who are still a bit cautious. That's understandable since many developers have lived behind the Crypto Iron Curtain for so long that they're still afraid the US government might change its mind and suddenly try to regulate cryptography again. At this point though, my belief is that the crypto genie is so far out of the bottle that this sort of nightmare scenario is very unlikely.

I think that it's only a matter of time before developers start adding more cryptography into the kernel. On the other hand, there are a lot of cryptographic solutions where the right place to put things really is outside the kernel.

LJ With all the many demanding activities with which you are involved, how do you stay organized and find sufficient time to devote to each activity?

Ted It's hard. One of the disappointing things about being involved with doing more organizational tasks, such as serving on the board of the FSG and working on the LSB, is that it means I have less time to do real kernel-level programming. But, someone has to do it, and I happen to be somewhat good at it, so....

That being said, I am hoping that I'll be adjusting my workload so that I will have more of a chance to do some real programming than I have in the past year or two.

One of the other ways that I try to find time is to pass off projects to other people. For example, I was one of the original instigators of bringing the Pluggable Authentication Modules architecture to Linux. At the time, I was working at MIT, and I visited Sun Microsystems to discuss some issues relating to Kerberos. Near the close of the meeting, the Sun engineers introduced me to this thing called PAM, and I immediately thought that it was a really great idea, and gee, wouldn't it be great if Linux could have it too. So I started suggesting that this would be a good thing to do, and the next thing I knew, Andrew Morgan had stepped forward and run with it. The funny thing about this whole story is that even though the Sun engineers had been working on PAM for at least a year or two before they introduced it to me, the Linux-PAM Project had an initial implementation working, and shipping in commercial distributions, before Sun was able to ship a version of Solaris with PAM support. That's what's so great about the open-source model.

LJ How is IBM contributing to the development of open-source software?

Ted Well, IBM started the Linux Technology Center with some 250 or so engineers, spread out across some 16 cities and six countries, all working on open-source software. And we have a mandate to try to get our changes accepted into the mainline versions of the kernel or whatever open-source project we might be working on. So we're trying very hard to work as members of the Linux and OSS community. Of course, the sort of OSS enhancements we choose to work on are also those that are important to IBM's customers, but that's true at all Linux companies. The wonderful thing is that in most cases, the interests of the global Linux community and the interests of IBM's and other Linux companies' customers are the same.

LJ What pieces of the kernel are you working on right now?

Ted Right now, I've been mainly focused on the ext2/ext3 filesystem. I'd like to work on reworking the tty layer, but there are only so many hours in a week. Maybe in a month or two, I'll have some time to actually try tackling that.

LJ How long have you been interested in amateur radio and what got you interested?

Ted I've had an amateur radio license since 1997. I got involved because I knew a lot of people at MIT who were using the MIT UHF Repeater to communicate and that sucked me in.

LJ What has been your role in developing Linux POSIX capabilities, and what is your position on the current number of 28? Do you think this should be maintained, or expanded?

Ted Like PAM, Linux POSIX capabilities is one of those things that I tried pushing, but with less success. I still think that something like POSIX capabilities is important, but I'm no longer sure that POSIX capabilities is the right way to go about solving the problem. Most system administrators have trouble dealing with the 12 UNIX permission bits per file. Adding another 3 × 28 = 84 capability bits that must be configured correctly, or the executable will either stop working or be insecure, is a nightmare.

A simpler system, where programs are still setuid root but then permanently drop all of the capabilities they won't need, is certainly a lot less flexible than the full POSIX-capabilities model. But I think it is so much easier to administer that the ease of administration outweighs the other considerations.

LJ Have you tried SELinux? If so what do you think?

Ted No, I haven't had time to actively install and play with SELinux. I think it's great that the NSA has been working on it, though.

LJ Do you see a conflict yet between optimizing Linux for throughput on mid-range or large servers and going for small size and latency on embedded-class systems?

Ted Well, I think it's a challenge to come up with algorithms that work well on both mid-range and large servers, yet are also well adapted to typical desktop machines. But, I think it's doable. In some cases, perhaps the end result won't look like what has traditionally been done to support large-scale servers or small embedded-class systems. But that's what makes working on the Linux kernel so neat! We're not always implementing things the traditional way, but trying to find new ways of skinning the proverbial cat.

So no, I don't think there will necessarily be a conflict between optimizing Linux both for large servers and small servers. I do believe that the primary tuning target will continue to be the typical desktop machine, since that's what most developers have and can afford. However, the typical desktop (as well as the typical embedded system) has been gradually becoming more and more powerful, and over time, the range of systems where Linux will have excellent performance will continue to grow with each new major version of Linux.

LJ Besides the Rubini and Corbet book, how would you recommend that people who want to contribute learn about the kernel, both for writing drivers for 2.4 and wild and woolly features for 2.5?

Ted The www.kernelnewbies.org site is definitely one of the best places to start. Beyond that, the best way to learn about the kernel is to jump in and start playing with it! Come on in! The water's fine!

LJ Anything you'd like to add?

Ted Only that I consider myself incredibly lucky. Ten years ago, Linux was just a hobby; something that I did for fun. Now it's become a major force in the computer industry, so I can work full-time on something that I once did just because I loved doing it. That's neat. That's really neat.

email: dmarti@zgp.org

Don Marti is technical editor of Linux Journal, and Richard Vernon is editor in chief of Linux Journal.

______________________

Comments

Searching for unexpected ABI breaks

ABI breaks in Linux shared libraries can be caught with tools that statically analyze library header files and shared objects:
1) ABI-compliance-checker, http://ispras.linux-foundation.org/index.php/ABI_compliance_checker
2) icheck, http://www.digipedia.pl/man/icheck.1.html

Re: Interview: Interview with Ted Ts'o

It is time that the process of Linux kernel development is formalized; soon it may be too late. Linux may lose momentum, and the current state of Linux kernel development only plays into the hands of Microsoft.

For true acceptance of Linux within the IT industry, a credible organization, an entity, must stand behind the kernel development, not an individual. At present, no sane large business enterprise would build its infrastructure on Linux. The reason for that is simple: how do I trust my multi-billion-dollar business to an OS whose development is controlled by a single guy in his spare time? What if he gets sick, or tired, or simply goes nuts? No large enterprise would take such a stupid step.

Heavyweights of the industry should come together and invest resources in a body that would coordinate Linux kernel development full-time, an organization initially headed by Linus himself. Such a body could plan Linux's long-term future, release new kernel versions, and employ the best Linux kernel developers full-time on a competitive basis. Such a body would ensure that Linux does not fork and would give the real world a lot more confidence about Linux. It would be a rock-solid guarantee of the continuity and stability of the Linux kernel. Otherwise, once the Linux hype is over, we may wake up in a world still dominated by Microsoft; if they survive the Linux wave, nothing will put an end to their madness and arrogance.

Re: Interview: Interview with Ted Ts'o

Any plans to go to a microkernel, a la QNX's neutrino for example? This seems to be the logical next step in the layering approach that UNIX initiated.

Re: Interview: Interview with Ted Ts'o

Linux, to my understanding, is quite efficiently modular as is. But out of curiosity, what advantages would the pure micro-kernel approach bring to the present Linux marketplace?

Re: Interview: Interview with Ted Ts'o

>...or simply goes nuts...

...or bananas, from all the excited big infrastructure manufacturers jostling to get their favorite toys into the official Linux kernel tree!

libgal is not supposed to be a stable library

libgal is not an official GNOME library. It was created only to share code between Evolution and Gnumeric, and it was never supposed to be stable, nor to be part of the platform. It is more of a playground and a code-sharing device than a system library. That's why its ABI is unstable; it was never advertised as stable either.

So I don't think using it as an example of lack of ABI stability is fair. ;-)

It's time for the Linux Community Process

A formalization of the Linux kernel development process, akin to Java's JCP (www.jcp.org), is necessary at this time. Linux will never be truly open otherwise, IMHO.

Re: An informed view from the outside

I think everything is just about fine as it is. Maybe there could be a bit more formalism in the process, but I think this should come from new tools or basic organization rather than yet another "commercialish official organization".

The use of BitKeeper instead of relying on CVS and/or straight source has probably sped things up for Linus. And he had a bit of a "slump" in "throughput and scheduling" before and after the transition occurred.

2.5 looks like a major, major step forward compared to the previous kernel generations; there are a lot of issues and a lot of new technologies being incorporated. This is not a time to put too much haste into development: a lot of far-reaching decisions are being made, and they should not be rushed. Patches that are not ripe will be dropped.

The rejection of patches is a two-way thing: incorporating too many patches against time leads to kernel bloat. So some people get a bit narked at not having their work included in "the kernel", but so what? This is just a good argument for forming more "rings of conformance and testing" around the kernel proper. The tendency to need this is seen as forking, but if things are done right, with the right mechanisms put in place ("the organization"), then forking becomes just special-interest groups of kernel hackers doing more work and testing on individual patches, or sets of patches, before putting them up for acceptance into the kernel proper.

This is where maybe a bit of organization is needed: perhaps a web space devoted to allowing individuals or parties to "formally" express interest in a given area, topic, or set of topics under development. Developers could then search a database of kernel development "initiatives" and either join an existing one working on the area or form a new interest party for others to join, on the coding and/or testing fronts.

There I will leave it.

Re: An informed view from the outside

Still, there will always be that question of undue influence, rightly or wrongly.

Re: It's time for the Linux Community Process

I am amazed at how easy the process of getting patches included in the stock kernel is. I don't think it could get much easier.

Perhaps it's because my changes were so small and obvious, or perhaps I've just been lucky. But I've had no problem getting patches accepted.

Re: It's time for the Linux Community Process

>A formalization of the Linux kernel development process, akin to Java's JCP (www.jcp.org), is necessary at this time. Linux will never be truly open otherwise, IMHO.

Troll. But I'll bite. The development process has nothing to do with how "open" it is; Linux's "openness" is described precisely in the GPL. Besides, what makes you think adding a layer of bureaucracy will improve kernel development in any way? Perhaps if the developers weren't volunteers, they would tolerate having a political, bureaucratic process interfering with their hacking. Maybe.

Re: It's time for the Linux Community Process

APIs in the kernel do tend to be a bit unstable, but at the same time they are worked in only after being "reviewed" by the community. For instance, async I/O is not in the kernel because Linus rejected it, since there was no feedback on it.

If there is no feedback, it means that no one uses it, and it is by no means perfect, so this argument makes sense.

So Linux, as a community project, is indeed already following a community process. And everything that is not new follows POSIX, which has been a standard for ages.

So try and think: what issue are you trying to solve? If you can't come up with a real-world problem you saw, maybe there is no problem at all :O)

emmanuel.

Re: It's time for the Linux Community Process

POSIX is a formal process, is it not?

------

www.unix-systems.org/version3/

Re: It's time for the Linux Community Process

You're a ***** idiot.
