LJ Interviews Linus Torvalds
Why is 2.0 so wonderful?
If you mean “why do we call it 2.0 instead of 1.4”, then the answer is that there are a lot of major conceptual jumps in it, notably the multi-architecture support and the SMP [symmetric multiprocessing] support. Even with nothing else new, those two features certainly are enough to warrant a major number bump.
Note that this doesn't mean that the multi-architecture and SMP work is finished—the only really supported platforms in 2.0 will be alpha and the x86, and the SMP stuff will need to be worked on too in order to scale better. The important thing about 2.0 is that the support is there, even though we'll obviously continue working on it (the same way 1.0 had networking support, even though we obviously had to work on that too after releasing 1.0).
Now, if the “why is 2.0 so wonderful” question is more “why should I upgrade”, i.e., you have a machine that is already supported by 1.2, then the answer is that no, you don't have to upgrade, but there are lots of things that make an upgrade to 2.0 a good idea.
For example, performance is much better, especially in the networking and NFS client department. If you have your home directory NFS-mounted, you definitely want to upgrade. There are other areas too where 2.0 is noticeably faster, like process handling and filesystem throughput.
Also, 2.0 finally supports read-write shared memory mappings of files, along with file descriptor passing, using Unix domain sockets. Those are the two major things that 1.2 doesn't do and that most other modern Unices support.
What comes next?
The SPARC and the PPC ports are very close to being integrated into the normal kernel—most of the work is already there, with just a few missing pieces that haven't been applied due to the code freeze. And MIPS support will also probably go in early in the 2.1 development kernels. So, during 2.1 we'll get a lot of interesting new architectures supported—essentially, 2.0 lays the groundwork for multi-architecture support, so it becomes a lot easier to do ports now.
Also, SMP needs finer-granularity locking for good performance on more CPUs, and while SMP currently works on the x86 and SPARC, we'll probably work on making the other architectures SMP-aware too.
There are obviously other things too: the normal kernel update stuff, more drivers, more performance improvements, IPv6, DECnet, etc.
Anything about yourself that you want to share? Lots of people ask us to cover more about who the developers are. They can read the code or the documentation but not everyone can meet the developers.
Ehh, umm, aahh... Nothing much has happened. I finally got my BSc, but I'm still working on my master's degree. It's slow, as Linux obviously takes a lot of my time. Hopefully I'll have it done by the end of the year; I mainly just have the thesis to write. It's not as if I didn't have anything to write about, it's just hard to get started with so much else going on.
Tove [Linus' “significant other”] and the cats are doing fine...
When you get your master's degree, do you plan on “getting a real job”, or do you intend to stay the highest-profile unemployed person in the computer field?
Hey, I resent that remark. I'm not unemployed, I'm just selective about what I do...
Actually, I've been employed by the University of Helsinki for the last few years, and that has been paying my bills. While I'm still studying, I also work in a research and teaching position. And the CS department is flexible enough that I can do Linux on work hours, and they encourage this by trying to keep my other work at a minimum.
Obviously a “real job” pays better than most universities will pay, but I've been very happy with this arrangement—I get to do whatever I want, and I have no commercial pressures whatsoever doing this. Getting a master's won't change things radically, although it will obviously make it easier for me to accept other work, as I won't need to worry about wanting to graduate some day.
As I've been able to live happily on university pay, the deciding factor is not so much the money as the interest level of any “real job” (but lest somebody gets the idea that money doesn't matter at all, I'll just mention that yes, it does).
How much of the code in the kernel is still yours?
Umm.... Very little, when it comes to the number of lines. What's still “mine” is mm/*.c, kernel/*.c, fs/*.c (only the VFS code, not the specific filesystem stuff) and parts of the x86- and alpha-specific low-level architecture files.
Even those parts have much of the code contributed by others, but the basic stuff is still pretty closely under my control. It's essentially all of the really basic stuff—things that everything else depends on.
There are lots of things I haven't really even touched: most of the device drivers are totally written by others, and while they sometimes are based on stuff I have written they really aren't mine any more. Same goes for a lot of the filesystem code.
The networking has been completely written by others, although I've touched some of it.
In lines of code, I probably am responsible for about 10% these days. That's just a rough guess, I haven't really taken a look.
How (or maybe why) does project management work? That is, Linux is a huge effort and it continues to progress very well. How is this cooperation possible?
Most of it happens automatically—people who are doing things for fun do things the right way by themselves. That said, I do work 8 hours a day (and that's just about minimum) on Linux, and most of the time goes to administrative things, mostly email. And it's not as if I'm the only “manager”—there are others who manage their own subsystems and then send me already cleaned-up patches (notably when it comes to networking).
Is there any expertise lacking? Is there something or someone that, if available, would make development go better?
I think we're doing pretty well. I need longer days (and nights!), but there isn't anything specific we really need. Lots of areas needing work, and lots of developers that don't have enough time, but we can't really complain.
We asked this before, but the answer may have changed. What, if anything, did you do wrong in Linux development? (When Ken Thompson was asked this about Unix, he said he left the “e” off the “creat” system call.) Is there something that you would do differently?
I'll be arrogant and say “nothing”. I think that's the same answer as last time. I've made lots of mistakes, but that's okay and normal, and the kernel is the better for it—fixing them tends to make the corrected version more robust. And I've obviously conned a lot of people into working for free on this project!
Are there any shows you will be attending in the U.S. later this year or in early 1997?
I hope to attend at least the USELINUX conference (or whatever it's officially called) in January. No firm plans...
I hope to see you there and let you trade some of the virtual beer I promised for the real stuff.
How do you feel about commercialization of Linux?
On the whole, it's a very positive thing for me. I'm not worried about the kernel itself or the basic system—all the commercialization is about the distributions and the applications. As such, it only brings value-added things to Linux, and it doesn't take anything away from the Linux scene.
However, I don't think commercialization is the answer to anything. It's just one more facet of Linux, and not the deciding one by any means. Let me mention Wabi as an example—a commercial Windows emulator (that may actually be out by the time this article hits print; it's in beta testing now).
Wabi is a nice program (I've been using it to make slides with PowerPoint under Linux), and a lot of people will find reason to pay for it. However, it won't revolutionize Linux the way Wine may—a freely available Windows emulation package will make a lot of difference for the whole market, while Wabi makes a lot of difference to only a subset of the market. Obviously, right now Wabi is a lot more advanced than Wine, but we'll see what happens in a year or two.
What challenges do you see the Linux Community facing in the next 10 years?
I'm having trouble planning two weeks ahead, much less 10 years... It's too hard to say what will happen. It all depends on the availability of good applications, and the first steps are being taken, with both commercial and free end-user applications starting to appear. Linux already has a lot of traditional UNIX applications; what I'm really looking forward to is the desktop personal and business stuff...
What advice/pearls o' wisdom would you share with new members of the Linux Community?
Umm.... “Be excellent to each other”? No, wrong movie... Ahh... “Multiply and populate the earth”? Naah, that's been done too...
What keeps you motivated (i.e., why do you keep on doin' what you do)?
It's a very interesting project, and I get to sit there like a spider in its web, looking at the poor new users struggling with it. Mwbhahahhaaahaaa..
No, seriously, what kept me going initially after I had “completed” my first test-versions of Linux back in '91 was the enthusiasm of people, and knowing people find my work interesting and fun, and that there are people out there depending on me. That's still true today.
And it really is technically interesting too—still, after these five years. New challenges, new things people need or find interesting. But the community is really what keeps me going.
Thanks. A case of virtual beer will be on the way.
Virtual, smirtual.. Where's the real stuff?