Interview with Ted Ts'o
Don Marti and Richard Vernon recently had the rare opportunity of taking some time from Ted Ts'o's tight schedule to talk about his role with the Linux kernel, IBM and the Linux community. Ted seems to be everywhere in the Linux community—inside the kernel and out. He is currently a senior technical staff member of the Linux Base Technology Team for IBM's Linux Technology Center. He also chairs the Technical Board of Linux International, serves on the Board of Directors for the Free Standards Group, is a member of the Internet Engineering Task Force and serves on the Security Area Directorate of the IETF. Previously, he worked at MIT in Information Systems, where he was the development team leader for Kerberos. Through it all he's played a principal role in the development of the Linux kernel.
LJ Many Linux enthusiasts know you for your work on the Linux kernel, but are perhaps less familiar with your service to Linux International and the Free Standards Group. Could you talk a little about your respective capacities with those organizations?
Ted Well, I chair the Technical Board of Linux International. Linux International is a vendor group that got started back in the good old days of Linux startups—when Bob Young would personally show up at tradeshows and help hand out CDs containing the Slackware distribution. So from the beginning, Linux International had as a strong emphasis the concept that its members should band together to help “grow the pie”.
The technical board was there to help make sure the organization stayed connected to its technical roots and later picked up the responsibility to examine applications to the Linux International Development Grant Fund, which is still operating today.
Very recently, Linux International has begun considering a new program that will focus on strengthening the various local Linux Users' Groups and working with them to support people who are interested in doing various types of “Linux Advocacy” (i.e., pushing Linux to be used in local public schools or in the corporate infrastructure). This is an idea that I've been discussing with Jon “maddog” Hall, and I think it's a great initiative. I hope it works out well.
As for the Free Standards Group, I currently serve on the Board of Directors for the FSG. The FSG provides a legal and financial home for the Linux Standards Base (LSB) and the Linux Internationalization (Li18nux) efforts. I was involved with the LSB from almost the very beginning because I believe in providing a stable environment so that members of the community can release binary distributions of programs that will run on any Linux system of the same architecture, regardless of the distribution that the user chose to use.
I was student systems programmer in MIT Project Athena during the height of the UNIX wars and saw firsthand how incompatibilities between the various UNIXes allowed Microsoft to dominate the desktop. So as a result, I've always thought that the LSB is incredibly important for the Linux community.
LJ How do you feel about the recent progress of the LSB (the release of LSB 1.1), and what do you feel is its future—will it evolve into a ubiquitous standard? What might be some of the advantages of the LSB for developers who distribute software in source-code form?
Ted Progress on the LSB front has been slow but steady. LSB 1.1 isn't perfect, but it's at the stage where it should be possible for both distributions and independent software vendors to start implementing against it. We expect to start seeing LSB-compliant distributions and application programs within a year.
The LSB standard is working to make it possible for third-party application programs to be installed and run across multiple distributions. Initially, the majority of packages on a Linux system will still be provided by the distribution and will not be LSB-compliant packages.
Hopefully, as the distributions start seeing the advantages of the LSB, and as demand increases for more commonality between the various distributions, the LSB will help encourage distributions to start converging gradually, as new features are added. This will act to benefit all developers, even those who distribute code in source form already.
Ted ABI compatibility, while most important to people or companies that distribute software in binary form, also matters to people who use exclusively open-source software. For example, some library maintainers don't bother to change symbol names or, in some cases, even library version numbers when they make incompatible library changes. This can cause all sorts of headaches if two application programs installed on the same system need different, incompatible versions of the same library. An extreme example of ABI instability can be found in libgal (the GNOME Applications Library), which has had 19 different, incompatible ABI changes in about as many months. Even when source is available, this kind of ABI instability is extremely inconvenient.
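The versioning discipline Ts'o is describing can be made concrete with the GNU libtool current:revision:age convention (used here purely as an illustration): a shared library advertises the range of interface versions it implements, and a client built against interface n keeps working only while n stays inside that range. A minimal sketch:

```python
def compatible(current: int, age: int, needed: int) -> bool:
    """A library versioned current:revision:age implements interfaces
    (current - age) .. current.  A client built against interface
    `needed` remains compatible only inside that range."""
    return current - age <= needed <= current

# A compatible addition bumps both current and age, so old clients
# built against interface 4 still work:
assert compatible(current=5, age=2, needed=4)

# An incompatible change bumps current and resets age to 0,
# orphaning every previously built client -- the kind of break
# the interview complains about when it isn't even signaled:
assert not compatible(current=6, age=0, needed=4)
```

When maintainers skip even this bookkeeping, the breakage described above surfaces only at run time.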
LJ What areas will FSG look at standardizing next?
Ted Well, there are two groups that have approached the FSG. One is interested in standardizing some kind of high-level printing libraries interface. Another group is interested in standardizing library interfaces for clusters. In general, the FSG doesn't try to find new technologies to standardize; instead it allows people who are interested in forming a workgroup to work on some standard to do so. The FSG Board simply insists that the process is open and, to the extent possible, that all interested parties are at the table while the standard is being developed.
LJ Could you briefly describe your work at IBM?
Ted Well, I'm continuing to work on the kernel, especially the ext2/ext3 filesystem work. I've also been consulting with some of the other teams at the Linux Technology Center, helping them with design issues and helping them make their contributions be more easily accepted into the mainline Linux kernel.
LJ What would you consider to be some of the most significant developments of the 2.5 series kernel?
Ted It's early in the 2.5 development series, so it's really hard to say right now. I'd say that scalability to larger machines, along with associated sub-goals such as reducing or removing the need for the global “big kernel lock”, is certainly going to be one of the more significant efforts in the 2.5 series. The introduction of the O(1) scheduler is also quite significant. Work to continue improving the virtual memory subsystem and the I/O subsystem is also ongoing and ultimately very important. With the exception of a few new features, such as better ACPI support and asynchronous I/O support, I suspect most of the improvements in the 2.5 kernel will be performance-related.
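The constant-time property of the O(1) scheduler Ts'o mentions comes from keeping one run list per priority level plus a bitmap of non-empty levels, so picking the next task is a find-first-bit operation plus a list-head removal, independent of how many tasks are runnable. A simplified model of that idea (illustrative Python, not the kernel's actual data structures):

```python
from collections import deque

NUM_PRIO = 140  # the 2.5-era scheduler used 140 priority levels

class RunQueue:
    """Toy model of the O(1) scheduler's active array: one FIFO per
    priority plus a bitmap recording which priorities are non-empty."""
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIO)]
        self.bitmap = 0  # bit p set => priority p has runnable tasks

    def enqueue(self, task, prio):
        self.queues[prio].append(task)
        self.bitmap |= 1 << prio

    def pick_next(self):
        if not self.bitmap:
            return None  # nothing runnable
        # Isolate the lowest set bit: lower number = higher priority.
        prio = (self.bitmap & -self.bitmap).bit_length() - 1
        task = self.queues[prio].popleft()
        if not self.queues[prio]:
            self.bitmap &= ~(1 << prio)  # level drained; clear its bit
        return task

rq = RunQueue()
rq.enqueue("kswapd", prio=5)
rq.enqueue("editor", prio=20)
assert rq.pick_next() == "kswapd"  # highest-priority task first
assert rq.pick_next() == "editor"
```

The cost of pick_next here never depends on the number of queued tasks, which is the whole point of the design.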
That being said, as Linus has said—and I very much agree—a lot of the exciting new work that is happening in the Linux community isn't necessarily happening in the kernel, but in user land. For example, who would have thought five years ago that Linux would have not one, but two graphical desktop environments under development?
LJ You are the author of /dev/random. How will the Linux kernel hackers approach crypto-enabling technology in the kernel? Cautiously, or are the developers jumping in with both feet now that US export restrictions are looser?
Ted Well, Peter Anvin did some wonderful work laying the legal groundwork (thanks must also go to Transmeta for paying the legal bills) so that cryptographic software could be distributed from the kernel.org FTP distribution network. There are certainly some people who are still a bit cautious. That's understandable since many developers have lived behind the Crypto Iron Curtain for so long that they're still afraid the US government might change its mind and suddenly try to regulate cryptography again. At this point though, my belief is that the crypto genie is so far out of the bottle that this sort of nightmare scenario is very unlikely.
I think that it's only a matter of time before developers start adding more cryptography into the kernel. On the other hand, there's a lot of cryptographic solutions where the right place to put things really is outside of the kernel.
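The /dev/random device mentioned in the question is a good example of that last point: the kernel maintains the entropy pool, while user space simply reads the device file. Python's os.urandom, for instance, draws from /dev/urandom, the non-blocking sibling of /dev/random, on Linux:

```python
import os

# os.urandom reads the kernel's randomness pool (/dev/urandom on
# Linux) -- the userspace side of the interface Ts'o wrote.
key = os.urandom(16)  # 16 bytes suitable for seeding or key material

assert len(key) == 16
assert key != os.urandom(16)  # two reads are overwhelmingly unlikely to match
```

Higher-level cryptography (ciphers, protocols) then lives entirely outside the kernel, which is the division of labor the answer argues for.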
LJ With all the many demanding activities with which you are involved, how do you stay organized and find sufficient time to devote to each activity?
Ted It's hard. One of the disappointing things about taking on more organizational tasks, such as serving on the board of the FSG and working on the LSB, is that it means I have less time to do real kernel-level programming. But, someone has to do it, and I happen to be somewhat good at it, so....
That being said, I am hoping that I'll be adjusting my workload so that I will have more of a chance to do some real programming than I have in the past year or two.
One of the other ways that I try to find time is to pass off projects to other people. For example, I was one of the original instigators of bringing the Pluggable Authentication Modules architecture to Linux. At the time, I was working at MIT, and I visited Sun Microsystems to discuss some issues relating to Kerberos. Near the close of the meeting, the Sun engineers introduced me to this thing called PAM, and I immediately thought that this was a really great idea, and gee, wouldn't it be great if Linux could have it too. So I started suggesting that this would be a good thing to do, and the next thing I knew, Andrew Morgan had stepped forward and run with it. The funny thing about this whole story is that even though the Sun engineers had been working on PAM for at least a year or two before they introduced it to me, the Linux-PAM Project had an initial implementation working and shipping in commercial distributions before Sun was able to ship a version of Solaris that had PAM support. That's what's so great about the open-source model.
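The appeal of PAM is that authentication policy becomes a stack of interchangeable modules rather than logic compiled into each service. A toy model of the "required"/"sufficient" control flags (an illustration of the idea only, not the real Linux-PAM API; the module names in the comments are hypothetical stand-ins):

```python
def authenticate(stack, password):
    """stack: list of (control, check) pairs, where control is
    'required' or 'sufficient' and check(password) returns a bool.
    Mirrors (simplified) PAM stacking: a passing 'sufficient' module
    short-circuits success unless a 'required' module already failed;
    otherwise all 'required' modules must pass."""
    ok = True
    for control, check in stack:
        passed = check(password)
        if control == "sufficient" and passed:
            return ok          # short-circuit, honoring prior required failures
        if control == "required" and not passed:
            ok = False         # remember the failure but finish the stack
    return ok

stack = [
    ("sufficient", lambda pw: pw == "master-key"),   # e.g. a token module
    ("required",   lambda pw: pw == "s3cret-pass"),  # e.g. a password module
]
assert authenticate(stack, "master-key")   # sufficient module short-circuits
assert authenticate(stack, "s3cret-pass")  # required module satisfies the stack
assert not authenticate(stack, "wrong")
```

Swapping policies then means editing the stack, not the services that call it, which is exactly why the idea spread so quickly.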
LJ How is IBM contributing to the development of open-source software?
Ted Well, IBM started the Linux Technology Center with some 250 or so engineers, spread out across some 16 cities and six countries, all working on open-source software. And we have a mandate to try to get our changes accepted into the mainline versions of the kernel or whatever open-source project we might be working on. So we're trying very hard to work as members of the Linux and OSS community. Of course, the sort of OSS enhancements we choose to work on are also those that are important to IBM's customers, but that's true at all Linux companies. The wonderful thing is that in most cases, the interests of the global Linux community and the interests of IBM's and other Linux companies' customers are the same.
LJ What pieces of the kernel are you working on right now?
Ted Right now, I've been mainly focused on the ext2/ext3 filesystem. I'd like to work on reworking the tty layer, but there are only so many hours in a week. Maybe in a month or two, I'll have some time to actually try tackling that.
LJ How long have you been interested in amateur radio and what got you interested?
Ted I've had an amateur radio license since 1997. I got involved because I knew a lot of people at MIT who were using the MIT UHF Repeater to communicate and that sucked me in.
LJ What has been your role in developing Linux POSIX capabilities, and what is your position on the current number of 28? Do you think this should be maintained, or expanded?
Ted Like PAM, Linux POSIX capabilities is one of those things that I tried pushing, but with less success. I still think that something like POSIX capabilities is important, but I'm no longer sure that POSIX capabilities is the right way to go about solving the problem. Most system administrators have trouble dealing with the 12 UNIX permission bits per file. Adding another 3 × 28 = 84 capability bits per executable, each of which must be configured correctly or the program will either stop working or be insecure, is a nightmare.
A simpler system, where programs are still setuid root but then permanently drop all of the capabilities they won't need, is certainly a lot less flexible than the full POSIX-capabilities model. But I think it is so much easier to administer that the ease of administration outweighs the other considerations.
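The "drop everything you don't need" pattern can be pictured as plain set arithmetic on the process's capability sets; in real code a setuid-root daemon would do this with libcap's cap_set_proc() after binding its privileged resources. A pure-Python model of the bookkeeping (the capability names are real Linux capabilities, but the function is illustrative only):

```python
# A few of the ~28 capabilities of that era, for illustration:
ALL_CAPS = frozenset({
    "CAP_NET_BIND_SERVICE", "CAP_SYS_ADMIN", "CAP_CHOWN",
    "CAP_KILL", "CAP_SETUID",
})

def drop_all_except(effective, keep):
    """Model of a setuid-root program permanently shedding privilege:
    retain only the capabilities it will actually need."""
    return effective & frozenset(keep)

# A web server started as root only needs to bind port 80:
caps = drop_all_except(ALL_CAPS, {"CAP_NET_BIND_SERVICE"})
assert caps == {"CAP_NET_BIND_SERVICE"}
assert "CAP_SYS_ADMIN" not in caps  # the dangerous ones are gone for good
```

One irreversible subtraction at startup is far easier to audit than 84 per-file bits that must all be right.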
LJ Have you tried SELinux? If so what do you think?
Ted No, I haven't had time to actively install and play with SELinux. I think it's great that the NSA has been working on it, though.
LJ Do you see a conflict yet between optimizing Linux for throughput on mid-range or large servers and going for small size and latency on embedded-class systems?
Ted Well, I think it's a challenge to come up with algorithms that work well on both mid-range and large servers, yet are also well adapted to typical desktop machines. But, I think it's doable. In some cases, perhaps the end result won't look like what has traditionally been done to support large-scale servers or small embedded-class systems. But that's what makes working on the Linux kernel so neat! We're not always implementing things the traditional way, but trying to find new ways of skinning the proverbial cat.
So no, I don't think there will necessarily be a conflict between optimizing Linux both for large servers and small servers. I do believe that the primary tuning target will continue to be the typical desktop machine, since that's what most developers have and can afford. However, the typical desktop (as well as the typical embedded system) has been gradually becoming more and more powerful, and over time, the range of systems where Linux will have excellent performance will continue to grow with each new major version of Linux.
LJ Besides the Rubini and Corbet book, how would you recommend that people who want to contribute learn about the kernel, both for writing drivers for 2.4 and wild and woolly features for 2.5?
Ted The www.kernelnewbies.org site is definitely one of the best places to start. Beyond that, the best way to learn about the kernel is to jump in and start playing with it! Come on in! The water's fine!
LJ Anything you'd like to add?
Ted Only that I consider myself incredibly lucky. Ten years ago, Linux was just a hobby; something that I did for fun. Now it's become a major force in the computer industry, so I can work full-time on something that I once did just because I loved doing it. That's neat. That's really neat.
Don Marti is technical editor of Linux Journal, and Richard Vernon is editor in chief of Linux Journal.