Linus & the Lunatics, Part II
Part I of this series was a transcript of the main talk Linus gave, guided by a slide presentation, aboard this year's Linux Lunacy cruise. Here in Part II, Linus indulges in his preferred speaking format, the Q&A.
Part III will cover a conversation between about 30 Linux Lunatics on the same Geek Cruise and 50 members of the Victoria Linux Users Group.
A report on the full cruise starts here.
Linus: Now we can go to the real Q&A.
Neil Bauman: In the last year or so, Linux has been embraced by a large number of established companies. [Do] you consider this a good thing, a bad thing? Are you happy? Sad?
Linus: I don't care. I used to be a lot more worried about it. A long time ago I used to be worried about companies having their own [garbled] about doing this stuff. And that hasn't been so in a long time. Companies now have two reasons to come in and embrace Linux. One is the lost penguin.
But the other reason, more common, is just because they see a cheap development model that actually works. There is both a cheap part and an actually works part. That's something a lot of these companies haven't seen before. I find it hilarious how some of these companies, big companies, are afraid to muck with the model too much. You have companies with two and three letters that actually require their employees to [learn] open-source tact--a series of lessons on what to do and what not to do. Which is, I mean, completely strange but kind of encouraging in the sense that other companies coming in do seem to realize that if they want to get something out of it, they need to work with the program and not against it.
Audience member: You'd be surprised how clueless people are. I get called all the time. "I'd like to do this project, and do it open source. What does this mean?" They have no idea.
Linus: Some people have a hard time getting used to it. Many managers do tend to find it hard to let go of managing. And they don't take patches in from the outside, for example. They may be open source in the sense that they release the code, and release all the changes they make, but don't actually talk to anybody outside.
Audience member: [Asks about courses inside companies.]
Linus: Intel is a good example because they used to be so closed. They had rules saying, "When you join the company, everything you ever write is ours, as far as laws allow, and you can never ever use your company email address publicly, for anything, because that would imply that Intel endorses..." Or something like this.
This used to be true five years ago. You had a lot of people who worked at companies like that for a long time. And then they needed to really change how they think. And they go through the courses. But these courses are not geared toward people who know what open source is. Very introductory.
Audience member: I work for a company with three letters. We do have a program where you participate in open source. We need to acknowledge that we understand the difference between the various open-source licenses and the traditional IBM development model. And that's extremely important because of things like the SCO lawsuit. Because without understanding what intellectual property is, and whose intellectual property you're allowed to put into open-source stuff, you wind up...
Linus: The scary point is that all the same problems exist in proprietary source too. It's just that you don't get caught. Right? So it's not about open source versus anything else. It's really about "Oops. Now they actually see us doing this stuff. And so we'd better be careful."
Audience member: Every IBM product has to have associated with it a certificate of originality [garbled] that asserts that we know where the source--
Linus: Right, but the actual developers who are involved with it probably don't have much to do with that.
Doc Searls: I want you to say something about Linux on the desktop and where it's going.
Linus: So I always think that Linux on the desktop is where it is, right? That's the only part I care about. Servers are kind of... they're easy. They only do one thing. The one thing may be one of N things, but at the same time it's very tunable, it's very straightforward; people have been doing it for a long time. Server people usually do stuff that they've done for ten, fifteen years. So I don't think servers are very interesting from a technical standpoint. It's all been done before. Desktop is what I use, and desktop is technically much more challenging anyway.
Clearly most people who use Linux on the desktop tend to be pretty technical right now. The nice thing is that is changing. It's changing mainly inside companies that just decided, "Hey, our secretaries are actually better off using Linux, because we don't want them playing solitaire."
That's how DOS came to be, right? Linux has solitaire too, but you can control better how to install it. Right? People who bought PCs for home, initially, did so because they used them at work. So that's the way things get done, and it does seem to be happening.
Audience: How do you feel about large and small Linux? Is there going to be an SMP 16 version that [garbled]...
Linus: I used to think that it made no sense to try to support huge machines in the same source tree as regular machines. I used to think that big iron issues are so different from regular hardware that it's better to have a fork and have some special code for machines with 256 CPUs or something like that. The thing is, the SMP scalability has helped even the UP kernels, just by cleaning stuff up and having to be a lot more careful about how you do things. And we've been able to keep all the overheads down, so that spinlocks, which are there in the source, just go away because you don't need them. We're scaling so well right now that I don't see any reason to separate out the high-end hardware. A lot of the reason for using Linux in the first place ends up being that you want to ride the wave of having millions of machines out there that actually incorporate new technology faster than most of the big iron things usually do. So the big iron people want to be in the same tree, because having a separate big-iron tree would mean that it wouldn't get the testing, it wouldn't get the features, it wouldn't get all the stuff that Linux has got and that traditional Unix usually doesn't have.
Audience: Do you have any thoughts about the way that device drivers are currently brought into the [garbled]?
Linus: The problem with device drivers is, they usually aren't well documented, the hardware usually isn't that accessible in the sense that, yeah, sure you can go out and buy it, but a lot of people aren't interested in it. So the development base is very limited. The hardware manufacturers, even if they are interested, are usually not software people. So even when they write a driver, the driver sucks. And if it isn't the hardware manufacturer who writes the driver, the driver will usually suck just because there is no documentation. People were guessing. The people involved didn't actually want to do the driver, but they had to because they wanted a specific piece of hardware to work.
If you compare core kernel code with device driver code, device drivers have more bugs, are uglier, are less well maintained, and it's a fundamental problem. The good news is, integration has meant that the number of devices that you need to support has actually gone down a lot. We have something like over 100 SCSI drivers in the kernel, if I remember correctly. Of which about two are actually relevant any more. That was very different five, ten years ago. There really were all these oddball things. We used to have fifteen or twenty different CD-ROM drivers. Every sound card and his dog had a CD-ROM chip on it, right? And they were all different. Nobody uses them any more. They're dead. Because IDE and, not just IDE but one specific implementation of the IDE controller, namely Intel's, just became the standard.
USB helps to some degree. Graphics are moving in that same direction, too. There used to be 10 different graphics vendors, and right now there are, like, three or four. So the good news is, the device driver problem may actually be improving just because small companies go out of business. The bad news is, small companies go out of business and can't make hardware. It's just not economically viable any more.
Audience member: The big problem has always been getting documentation out of vendors, which have always had this very strange belief that if you publish interface specs, competitors will steal their [property]. Do you see this problem getting better or worse?
Linus: It's gotten a lot better. Right now we have no problems at all getting specs for anything that is server-related, so things like SCSI drivers. The SCSI manufacturers basically are falling over themselves writing the drivers for us. Or, like, multiport cards for anything where Linux has a big chunk of the market. The real problems that remain tend to be very specific. At the embedded end, things seem to be getting better too. Linux ends up being very interesting to a lot of embedded vendors. So the embedded vendors tend to actually write the drivers. So the problem area tends to be notebooks, but also desktops. We're not there, but it's been improved. It will always be a problem area. I'm actually thinking it seems to be getting better.
Audience member: I was wondering if you could address some of the issues related to laptops and which vendors tend to support [Linux].
Linus: Quite honestly, none of the laptop vendors support Linux at all, really. To be real. Some of them go through a certain amount of motions. They support Linux in certain configurations if it's not too painful. But the amount of support tends to be okay [to a limited degree]. It may not suspend. It may not actually do half of the things you want a laptop to do. But you can run Linux on it.
I expect that to change as companies start to use Linux more on the desktop. If you have a few big companies that just say, "Hey, Linux has to work on our laptops", suddenly hardware manufacturers will start caring a whole lot more.
Audience member: HP has started QA with Linux on the laptop, shipping BIOS updates. Not in the old market, but in the commercial market. There needs to be pressure in the commercial market. Customers are stepping up and saying, "We're going to buy 10,000 Linux machines..." So the pressure is starting to be put on the larger vendors.
Audience member: Are you afraid of SCO?
Linus: I'm a bit nervous about the US legal system. Not SCO in particular, but just the randomness. It seems that anybody who has a business that's bigger than a lemonade stand really needs to be aware of legal issues, right? But to some degree I'm fairly happy [that] SCO does seem to have no case at all, which means that when it eventually gets resolved, which can take way too long--the IBM people seem to think it is easily dragging into 2005--we will actually have a legal precedent, which is good. But it's bad to have it drag out.
Audience member: Does it affect you in any way, personally?
Linus: Not really. I started doing this project. There is actually a historic Linux kernel tree as a BitKeeper archive that I worked on for a week, which goes from 0.01 up to 0.99, which is like, October '91 through, I think, May '93 or something like that. And I have every single version I can find checked in, together with any comments I can find. Which is just, I was thinking, "Okay, if SCO actually ends up starting to send out invoices, then [it will] start being against my copyright." At which point I have to register my copyright with whatever handles copyrights. The copyright office. And then a few months go past and then you get a paper back saying you can sue people for violations. Just to lay groundwork.
But that was actually funny. It was funny seeing the old names. It was funny seeing the kinds of problems we had. It was funny seeing how few patches went in. Like, there were stretches in '92--I had this memory of '92 being very active and a lot of people--and there were stretches of like, one month, where I could easily go through the patch and see exactly what it did. And I think of what happens now in one month, and how big the patch ends up being. And it's completely different. So it is kind of fun.
The other part [about SCO] that was fun and instructive is how personally you take lawsuits. And how really nasty it gets. Like, people react very personally to these things, right. And I'm pretty good at avoiding that, usually. And I notice that in myself when I do. And ... that was interesting.
But I haven't cared much about [the issue].
Oh! I have to hassle with journalists these days. That's a major pain in the butt. If a journalist used to send me an e-mail and I didn't recognize the journalist, I'd just [think] "another e-mail I won't have to answer". These days, if a journalist sends me an e-mail and mentions "What about SCO?" I feel I have to answer it. And it's really cutting into my productivity. Maybe.
Audience member: There seems to be a drift towards clustering. I noticed there aren't a whole lot of really successful kernel space clustering exercises or projects. What's the right way to do--
Linus: In the clustering space, the main issue, I'm convinced, is a coherent filesystem. And there are no well-behaving, easily installed coherent filesystems that most people are interested in. Once again, with a coherent filesystem, pretty much everything else can just be done with, um... You don't have to do the full kernel-level SSI like a lot of companies have historically done, Compaq being one. In fact, you don't want to do that. But you want to do tools to make it look pretty close to a single system image. You may have some kernel hooks to distribute IP addresses or TCP connections better. Things like this. But, those are the details. The big thing is the cluster filesystem isn't there. And people are working on several. Some of them actually work today. Some of them are cumbersome enough to actually use [garbled]. I'd like to have a cluster filesystem at home. How many people have more than a few computers at home? And find NFS kind of annoying? So I'd like to have something that's useful at home, just because it's nice to have a transparent, coherent filesystem. But none of the offerings out there are usable enough and easy enough to set up that it's worth doing.
Audience member: I'm just wondering if you've formed an impression about the Opteron and can make comparisons.
Linus: In the 64-bit space, everybody else is completely irrelevant except for Opteron and Power. Nothing else matters. That's just the way it is.
I actually find Power to be very interesting now that they've made the 970. And you can actually buy them in reasonable machines. And you can buy a Macintosh G5 and get a real 64-bit CPU. And I think that may actually be enough, too. There is enough of a user base for normal people that I suspect a lot of Linux developers would love to have one of those. And are ready to switch away from X86 entirely. While I don't see that happening on IA64. Because there aren't any nice boxes you'd switch to, if you were to switch away from X86.
Opteron, I think their approach is solid. But AMD seems to have a history of problems with execution. Sometimes they hit every milestone. With the Athlon they held the lead over Intel for half a year or more. But historically they always stumble too. The question is, will they hit the milestones this time around? They've stumbled a few times, but they are getting very good reviews. So, who knows?
IA64 does have a lot of money behind it. That matters. I don't like their architecture, but at least they fixed all the major performance problems with whatever [garbled].
Audience member: [Question about Opteron.]
Linus: The problems with Opteron may well be core technical things in the sense that they can't crank up the megahertz, right? Their fabs have been ... uneven.
Audience member: Do you see the DMCA or other legislation, or the whole PR thing... as an issue, as a threat to...
Linus: I'm a blue-eyed optimist. That's not an area that I get really upset about a lot. Because A, I think that consumers just won't buy devices that don't let them do what they want to do. We saw that with the original DivX, right? And B, because if laws end up being too draconian, they will eventually reach the normal user. The computer geeks have been complaining about the DMCA for what, five years? When was it started?
Audience member: October 1998.
Linus: A loooong time. Have you noticed in the last few months you have normal publications that have complained about the DMCA? Not for any computer geek kind of reasons, but because they're unhappy about the way the RIAA uses it right now. So I'm kind of optimistic. [While] it's not an area I personally get hung up about, I do send a check to the EFF every year. And I encourage everybody else to do the same. Because it's good to have people who do get hung up about it.
Audience member: ...there was this 12-year-old girl sued by the RIAA...and it isn't working...
Linus: That's the kind of backlash you end up getting when you start using the DMCA for things that normal people care about.
Doc Searls: This may be a good time to segue to software patents in Europe.
Linus: There's not much to say about it.
Doc Searls: [Points to the subject on Linus' Q&A slide.]
Linus: [Laughing] Yes. Method patents are just bad. They were bad in the US, and they are bad in Europe too. I haven't followed it too closely. They seem to have at least tried to make their patent law slightly better. But some of the proposals--I'm not sure which one they are now fighting over--had the requirement that it was not a pure method patent. That it was part of an apparatus. Like the original patent requirements. It's still a bad idea. My problem with the whole discussion is kind of similar to the TCPA thing. The subject gets so polarized that people talking to each other aren't really talking to each other. They're really at opposite ends and throwing stones at each other. Right? Instead of even trying to see if there is a middle ground. Which means that the discussion itself is not worthwhile. That's my problem with it. I don't think you should be asked to polarize this issue. Because as long as we just have this gut reaction--Software patents are bad--it's not going to help us discuss the issue with people who have this other gut reaction of, I am greedy.
Audience member: [Asking if Linus would say something usefully negative about software patents.]
Linus: I would be happy to say anything bad about software patents if I could just ... formulate a sentence that makes sense. And I am not in the lobbying industry, so I don't.
Audience member: Work with PR. Let them send you a quote and then you approve it.
Linus: Exactly. That's how it works in PR. They haven't sent me a quote!
Don Marti: I'll be the sacrificial one to ask the question, "When is 2.6 coming out?" [Laughter.] And seriously, how do you work with the last 2.6 test kernel and decide to call it 2.6?
Linus: We've sat down with Andrew several times. See, the problem with 2.6 is, every single time before when I made a stable release, it's been kind of--I draw a line in the water, but I still continue to maintain it. Which hasn't worked very well, historically. And I'm not saying that the new way will work any better. But I am hoping that Andrew, who has been very actively involved with the 2.5 kernels and, obviously, with the 2.6 test kernels--I'm hoping that instead of it just being me drawing a line, it's actually more me and Andrew saying, "Okay, Andrew is actually willing to accept this crap." And that's really what it's all about. When he's saying, "Okay, I think we're at the stage where I'll take over", that is what 2.6 will be. Right? As to when it will be, I don't know.
We don't have a lot of outstanding issues. We have a few. And the problem right now is clean-ups. I said no to a clean-up patch today, which started adding warnings for stuff that you really shouldn't do. But this is not the time to even add warnings about stuff you really shouldn't do. Because that kind of patch will result in people looking at the warnings and trying to clean up code. And yes, it will clean up code; but it will also break stuff by mistake. Which actually happened with this patch already. So at this point we're just into a situation where it's hard to convince people not to do clean-ups.
A lot of people want to polish it for 2.6. And the thing is, we don't want it polished. We want it solid as a rock. And it is okay to be scruffy-looking like a rock too. But it has to be solid.
Both Andrew and I are happy about where we are right now. But when Andrew will actually take over, I don't know.
Audience member: How do you maintain the integrity of the kernel as it grows larger and larger to support all these [SMP points]? You mentioned 8-way machines. You obviously are a big proponent of automated tools--
Linus: I'm actually not a big proponent of automated tools. I'm a big proponent of tools that help you do the stupid stuff. For example, one reason that I love C type checking is, when we make a major change, and we actually change the calling convention or something, the compiler will do all the legwork for us. It will say "Line so-and-so, you are calling this with the wrong argument." And that's wonderful. I think of projects where people don't dare make calling convention changes, because it will break everything under the sun. This is why I am a big fan of typechecking. That doesn't mean I think automation is good for solving hard problems.
Hard problems should be solved by making sure that the design isn't likely to have them in the first place. So take the one you mentioned, locking... I think Sun spent a lot of effort on lock validation--making sure you always take locks in the right order--and had a lot of tools for doing this. But the problem was, they actually had very deep nesting of locks, which ends up being a huge performance problem, too. And it means it's really hard to do certain things, because it makes the code harder to work with. You can't do the obvious thing anymore, because you now violate the locking rules.
What Linux has done for locking, for example, has been to have the rule be Don't have deep nesting of locks. We have very shallow lock nesting. We may have a lot of them, but they're shallow. And that's okay. And the few places where we actually nest something like four deep are very well documented. And it's only a few places, because having them all over the map would be crazy.
So I am actually not a big fanatic about having tools that figure out your problems for you. Because if you need tools for that, you designed something wrong in the first place.
I don't like tools as maintenance help. I like typechecking as a way of showing you where you make mistakes. But it's for stupid errors; it's not for really hard problems.
As to how to solve the complexity problem, so far the real solution has been good taste.
[Chuckles from audience.]
I mean a lot of patches end up getting rejected because they're ugly. And a lot of patches end up getting accepted because they clean up certain things. Like, if you looked at how the architecture handling has changed during 2.5, it is so much cleaner these days.
A PC is a PC, right? That's how the kernel used to think about it. Except now a PC can be a regular PC or a NUMA machine. Or one of the SGI strange wonder machines. Or Voyagers, whatever they're called. Or any of these subarchitectures within PCs. And they got separated out with the common code left in common files and cleaned up a lot. And that was just because the maintainers were having nightmares with ifdefs, and saying "I can't manage this any more, so we need to clean it up." And people did. Good taste.
Ifdefs are bad. Fix them. Not, "Okay, let's have tools that verify that we use them correctly." See? That's the difference.
Audience member: I'm curious how involved ... [garbled] ... the whole process of how less technical people...
Linus: I think the biggest single thing that has happened on the [garbled] has been a lot of good library frameworks. Qt in particular, I think, made a huge difference. And the KDE libraries and toolbuilder things... [garbled] infrastructure. Gnome is getting there too. But for some reason I just noticed that the KDE people consider it more important to have it working and sane, instead of trying to aim for perfection, which the Gnome people are trying to do.
I don't get involved very much. I used to send a lot of bug reports to the KDE people, until I didn't have bugs anymore and I stopped.
But I cared about it.
The issue is, it takes a lot of time to build up that infrastructure....
The killer application for Windows was Visual Basic. It allowed you to make your hokey, self-made applications that did something stupid for your enterprise. But you could make them look good, and you could use a database. And you didn't have to understand it. Or care. Right? And that was a huge leap.
And that leap is happening right now in the sense that it is so much easier to make a good-looking clean application for Linux that has all these magic things. Like the menus you can drag off, right? And all of that is just written for you. And you don't need to care. And you can concentrate on the hokey application and it will look good. And that's changed in the last year. To me, at least. Before that, if you wanted to make some good-looking graphical application, it was going to be buggy and you had to do a lot of work yourself.
The framework is really starting to be there.
OpenOffice is still, in my opinion, a complete disaster. And part of the reason is that it's not using any of these frameworks that were designed for different applications. It built its own framework. I am told people are trying to fix it.
Audience member: [Remark about OpenOffice, garbled]... or to write their own.
Linus: It is manly to write your own.
Audience member: Can you say something about the new version of the kernel [and how you're testing it]?
Linus: Mostly it's the same methodology that we've always had. Throw it out and hope people use it. And it's strange, but psychology is so important. It made a huge difference to call it 2.6.0-test1. Because we started getting a lot of bug reports from people who would never touch 2.5.79 with a ten-foot pole, even though it was the same code. Especially on the desktop that's the only way to test it. Because desktops are just so varied that you literally have to get it tested by the user base.
So, that apart, all the Linux distributions basically have their own internal QA stuff. And all of them are moving over to 2.6 to use internally. Some of them are supposed to have 2.6 as an install time option in the next release. That will get a lot more user testing again, a wider testing base. Because that will get the people who wouldn't compile the kernel themselves. They are willing to update to a new SuSE or a new Red Hat or a new Mandrake. And then there are all the big companies that actually have their own test suites. So OSDL has the Linux test suite, and they're doing a database test. IBM has a lot of testing themselves.
Some companies I know test just what they are interested in, like their own hardware, with the 2.6 kernels.
Audience member: So with the test kernels being tested by people who are less sophisticated, how do you get useful bug reports out...?
Linus: Quite often you don't actually need much of a decent bug report. A lot of the problems end up being that the traditional kernel developers and the people who end up using 2.5, the development kernel, actually tend to be a very homogeneous lot. They have high-end hardware. They have hardware that's usually built for Linux. They selected their hardware with Linux in mind. And they have half a gig of RAM, or more.
What is actually important about getting random people to test is you start seeing these patterns of, "Oh, we've never tested that configuration, ever. And it was obviously broken." Right? And then the only important part is, "This configuration is broken under this load." And that in itself is very interesting.
There are sometimes bugs where you actually need the user to...to get back and forth trying to get the exact same symptoms and things like that. But they are actually much less common than the "Hey, this doesn't work" kind of thing.
Audience member: What do you use for bug reporting?
Linus: Some of the vendors have their own bug report systems. And there is the Bugzilla thing for the kernel. From what I can tell, it is mainly used by developers themselves to remind themselves about issues they have seen. The fact is, when the distributions make a 2.6 distribution, that will make a huge difference. And there is no getting around that. 2.6.0 will have bugs that won't be found until the distributions go out half a year later. And they then have tools for just tracking the support calls. They have a lot of those.
Audience member: Two slightly less serious questions. What's the furthest out from the Earth's surface a Linux system has gotten?
Linus: I know it's been on the Shuttle, but that's just low Earth orbit. And I'm wondering if it's been on anything more interesting.
Audience member: [Something about somebody rendering an image in space using Linux on an IBM laptop.]
Linus: Yeah, it's definitely been in space. But I don't know if it's been, like, on any of the Mars landers. I know it's not been on the Voyager.
Audience member: Second question. Do you ever think about quantum computing?
Linus: I think that's a load of bull. I see all these news reports that say, "Hey, we had a chemistry set that computed pi to seven digits!" That's basically what they're doing. They're not doing computing. They're doing pattern matching with DNA. And that's fine. That's what you want to do if what you are matching is DNA, right? But if you actually want to do computation you obviously don't want to do this biological solution of stuff and just [hope] that the answer will come, right? So I'll believe it when I see it. Until then I'll take transistors. And they'll get smaller. And they'll start getting quantum effects, and that's fine.
Audience member: [inaudible]
Linus: LVM should work. But all the tools need some tweaking. I don't use it myself.
It's not called LVM anymore. It's called DM, for Device Mapper. But it's the same thing. The interfaces are different and a lot smarter. And all the user-level tools are completely different, right? Yeah, it's not binary backwards compatible. You can't just chuck it into a system, which ends up being painful. But, I'm not the right person to do that part.
Audience member: One question you have [on the slide] is "What's with the penguin?"
Linus: I just have that because a lot of people ask me about it. I actually don't have a good answer. It just is, right?
There are a lot of reasons for the penguin. I was bitten by a penguin. And it's a true story. It's funny, because there are a lot of Web sites about the penguin. There's like The History of Tux, and things like that. And some of these Web sites have some of my explanation. And they almost universally say, "It's a great story, but it's not true." That I was bitten by a penguin.
It's true! I was bitten by a penguin! I mean, really! Take it from me! I'm wounded. Okay, so he wasn't six foot tall.
Audience member: Is it true he was radioactive? Is it true you killed it afterwards?
Linus: Okay, some of the rumors aren't true.
I've talked to some people who are in advertising, and they love the penguin. They think it's the greatest logo ever. And it's funny thinking back. Because we made it for, I think, the 2.0 release. Like, in '95 or something? And a lot of people hated it because it wasn't serious enough. But it's great. The advertising people really like the fact that you can do things with it. "That's the stroke of genius! The guy who came up with the penguin is a marketing genius!" [Sarcastically] Yeah.
Audience member: Are you pretty happy with your career path? Are you planning to keep your job for awhile?
Linus: Yeah. I hate changing jobs. Because it's just very mentally draining. I know people, especially in Silicon Valley--it's not as true any more--but people who basically thought that if you don't change jobs twice a year you're not doing things right. And I guess I couldn't do that because, hey, the stress levels would kill me.
The only thing I want is the monthly or bi-monthly check coming in. And I don't want to worry about it. So yeah, I'm very happy with my choices.
Audience member: Do you ever see the fundamental monolithic nature of the kernel changing in any way?
Linus: The kernel model has changed a lot over time. It's completely different now. At the same time a lot of things have stayed the same. I mean, the monolithic-vs.-microkernel thing is not going to change, that I know of. But we've been very flexible in how things are done internally and how things are organized.
Again, one of the fun things about looking back on the very early kernels is really how crap the code was. And how we had bare assembly statements in the middle of something that obviously should have been inline functions or defines.
It's a different animal, and it's evolving in fundamental ways. But at the same time a lot of things are the same.
I didn't recognize a lot of the code any more, I have to say. It all has gone and been rewritten. But a lot of the ideas are the same.
Audience member: Now that we have this nice big user space, is there a chance [garbled]
Linus: Yeah. I mean, suspend and resume really should be able to do all of that. A lot of people want to just do it with the same kernel. Because replacing the kernel is really hard. So, who knows? Let's get the suspend and resume thing working first.
The nastiest part of that is, it actually works on some machines, but it's completely impossible to debug. Because when the machine doesn't come up, you need an ICE [in-circuit emulator] to know what the hell is going on, because you don't have any devices that work. Even getting a serial line out isn't very good, because your southbridge is hosed. And you can't even get to the chip that does serial I/O, right?
That's why it's really nasty. APM was a lot easier in that respect. When you had APM you had the BIOSes, but at least it made sure the serial line worked.
Modern PCs are horrible. ACPI is a complete design disaster in every way. But we're kind of stuck with it. If any Intel people are listening to this and you had anything to do with ACPI, shoot yourself now, before you reproduce.
Audience member: [Something about "more interesting" and "coming up"]
Linus: So we had the kernel summit a few months ago in Ottawa. And it was actually somewhat disappointing in some respects because there weren't a lot of interesting things. A lot of discussion about how things are done, but not a lot--
People are really happy with the level of support they have. I mean, that's a good thing. But it made the conference less exciting than it could have been.
I think the only thing everybody agreed on at the kernel summit was really cluster filesystems. There were a few details in other areas. But they weren't Earth-shattering in any way.
I was talking to somebody about page size extensions. I forget their name now. There are a lot of small projects, and people are thinking about them for 2.7. But at the same time I think we are getting to the point where the kernel actually works for most people. Modular device drivers, right? And just updating for new hardware.
And a lot of the exciting work ends up being all user space crap. I mean, exciting in the sense that I wouldn't care, but if you look at the big picture, that's actually where most of the effort and most of the innovation goes.
Audience member: [Something about pushing stuff down into the kernel.]
Linus: Nobody wants to. There are actually a few things that people are trying to do in user space, and they should be doing more in kernel space. So what happens is, you know, DRI.
Some people are so afraid of kernel space coding that they just put the minimal stubs in the kernel for doing certain things, touching certain hardware registers, and then they don't tell the kernel at all what it's actually doing. But they do all the hard work in user space. And the kernel gets these millions of calls that say "Do this." And it doesn't understand what it's doing. It's just mimicking. It's parroting what the user space told it to do. And that's really dangerous, because the kernel doesn't understand what it's doing, and it's obviously doing it with elevated privileges--that's not a good idea.
So sometimes you're just better off doing more in kernel space, if it means that the stuff in kernel space actually knows what the hell it's doing. But people really don't want their stuff--
Audience member: [Garbled.]
Linus: That's okay. Yeah. I don't know. DRI works fine. Performance things, I understand OpenGL is horrible. But, it is a major pain to debug.
Audience member: [Garbled.]
I can't get it right. Okay, you take my pure kernel and you move it into user space and you use GDB to debug it. What's wrong with you?
Linus: But it is very useful for doing virtualization.
The thing is, once you really start to care about performance... if you don't want to change kernels, you just compartmentalize. I think that OpenBSD did that right. What did they call their stronger chroot thing?
Audience member: FreeBSD has jails.
Linus: I think it came from OpenBSD. And that's, from a performance standpoint, it's much nicer. If you want to do just host virtualization. But it's not as bulletproof as a complete kernel virtualization. So if you have a bug you will bring down everything. But performance-wise it's obviously better.
Doc Searls: Imagine it's a year from now and customer demand in large companies is forcing the large OEMs to start making usable Linux laptops. And they visit issues that we've tiptoed around to some degree. What's the scenario here? Is this something that Intel fixes? Is it something that each of the OEMs fixes? I do know that Microsoft goes out of its way to make sure all of these OEMs make their laptops different. They are all essentially embedded machines. What happens?
Linus: So, the good news is, laptops are moving away from the embedded-machine kind of thing. They are getting so standardized, especially with chipsets like Centrino. If you use the Pentium and you don't use Centrino, you are doing stuff wrong--except for the fact that they don't support 802.11a and g, and right now you can't get Centrino drivers for Linux.
Actually, some people decided to go to Broadcom because they need a and g. But it is a matter of time.
Doc Searls: I've heard that Linux drivers exist at Intel.
Linus: I've also heard that they exist, but other people at Intel say "That's crap. We have it on the roadmap, but we haven't been able to get it going." They have been promising them for 2004, but I am not an Intel spokesman by any stretch. So I don't know what the actual date is. But it is supposed to come.
They've actually screwed up on a and g. They had to delay. They were supposed to have an a/g-capable chipset in Q3, but they pushed that out to next year or something like that.
The thing is, when you built a laptop, you used to have to scrounge around people's backyards to find strange pieces of hardware to just make it all fit. And that is definitely going away.
And that's not just Centrino. Instead of having hundreds of different chipsets that you wire up a million different ways, you're going to have maybe five different chipsets, and you can't wire them up any way other than the way they are wired up. And that's just going to happen.
This is what we had on the desktop ten years ago. Compaq made their own PC desktops that weren't quite standard. Actually HP was worse. And that just went away because of standard chipsets. And it's starting to happen in laptop space now.
So, a year from now, I'd expect--assuming we can fight those ACPI issues--it's much more likely that when you buy a laptop it will just work. [Knocks on wood.]
Neil Bauman: Thank you very much!
Doc Searls is Senior Editor of Linux Journal, covering the business beat. His monthly column in the magazine is Linux For Suits, and his bi-weekly newsletter is SuitWatch.