Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

Apple's quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.

Disagreements exist about whether microkernels are good. It's easy to get the impression they're good, because they were proposed as a refinement of monolithic kernels. Microkernels are mostly discredited now, however, because they have performance problems and because the benefits originally promised turn out to be a fantasy.

The microkernel zealot believes that several cooperating system processes should take over the monolithic kernel's traditional jobs. These system processes are isolated from one another by memory protection, and this isolation is the supposed benefit.

Monolithic kernels circumscribe the kernel's definition and implementation as "the part of the system that would not benefit from memory protection".

When I state the monolithic design's motivation this way, it's obvious who I believe is right. I think microkernel zealots are victims of an overgeneralization: they come to UNIX from legacy systems such as Windows 3.1 and Mac OS 6, which deludes them into the impression that memory protection everywhere is an abstract, unquestionable Good. It's sort of like the common mistake of believing in rituals that supposedly deliver more security, as if security were a one-dimensional concept.

Memory protection is a tool, and it has three common motivations:

  1. to help debug programs under development, at less performance cost than instrumentation. (Instrumentation is what Java or Purify uses.) The memory protection hopefully makes the program crash nearer to the bug than it would otherwise, while instrumentation is supposed to make the program crash right at the bug. A minimal example appears after this list.

  2. to minimize the inconvenience of program crashes.

  3. to keep security promises even when programs crash.

"Because MS-DOS doesn't have it and MS-DOS sucks" is not a motivation for memory protection.
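To make motivation #1 concrete, here is a minimal C sketch (my illustration, not from the original article): the wild store below dies immediately with SIGSEGV on a memory-protected system, so the crash points near the bug, while on an unprotected system such as MS-DOS it silently scribbles over whatever happens to live at that address and the symptom surfaces much later, far from the bug.

    /* Minimal sketch of motivation #1. Under memory protection the wild
     * store faults immediately; without protection it corrupts some
     * unrelated data and the crash, if any, comes much later. */
    #include <stdio.h>

    int main(void)
    {
        int *wild = (int *)0xdeadbeef;   /* bogus pointer, e.g. never initialized */

        printf("about to do a wild store...\n");
        fflush(stdout);

        *wild = 42;                      /* SIGSEGV here on a protected system */

        printf("never reached under memory protection\n");
        return 0;
    }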

Motivation #1 is a somewhat legitimate argument for the additional memory protection in microkernel systems. For example, QNX developers can debug device drivers and regular programs with the same debugger, making QNX drivers easier to write. QNX programmers are neat because drivers are so easy for them to write that they don't seem to share our idea of what a driver is; they think everything that does any abstraction of hardware is a driver. I think the good debugging tools for device drivers maintain QNX as a commercially viable Canadian microkernel. Their claims about stability of the finished product become suspicious to any developer who actually starts working with QNX; the microkernel benefits are all about ease of development and debugging.

Motivation #2 is silly. A real microkernel in the field will not recover itself when the SCSI driver process or the filesystem process crashes. Granted, if there's a developer at the helm who can give it a shove with some special debugging tool, it might, but that advantage really belongs under motivation #1, not #2.

Since microkernel processes cooperate to implement security promises, the promises are not necessarily kept when one of the processes crashes. Therefore motivation #3 is also silly.

Taken together, these three motivations show that memory protection is not very useful inside the kernel, except perhaps to kernel developers. That's why I claim the microkernel's promised benefits are a fantasy.

Before we move on, I should point out that the two microkernel systems under discussion, Mach and QNX, have different ideas about what is micro enough to go into the microkernel. In QNX, only message passing, context switching and a few process-scheduling hooks go into the microkernel. QNX drivers for the disk, the console, the network card and all the other hardware devices are ordinary processes that show up next to the user's programs in sin or ps. They obey kill, so if you want, you can kill them and crash the system.

Mach, which Apple has adopted for Mac OS X, puts anything that accesses hardware into the microkernel. Under Mach's philosophy, XFree86 still shouldn't be a user process. In the single-server abuses of microkernels, like mkLinux, the Linux process made a system call (not message passing) into Mach whenever it needed to access any Apple hardware, so the filesystem's implementation lived inside the Linux process while the disk drivers lived inside the Mach microkernel. This arrangement is a good business argument for Apple funding mkLinux: all the drivers for their proprietary hardware, thus much of the code they funded, stays inside Mach, where it's covered by a more favorable (to them) license.

However, putting device drivers inside the Mach microkernel substantially kills QNX's motivation #1, because Mach device drivers are as hard to debug as a monolithic kernel's device drivers. I'm not sure how Darwin's drivers work, but it's important to acknowledge this dispute about the organization of real microkernel systems.

What about the performance problem? In short, modern CPUs optimize for the monolithic kernel. The monolithic kernel maps itself into every user process's virtual memory space, but these kernel pages are marked so that they're accessible only when the CPU's supervisor bit is set. When a process makes a system call, the CPU implicitly sets the supervisor bit on entry and clears it on return, so the kernel pages are appropriately lit up and walled off by flipping a single bit. Since the virtual memory map doesn't change across the system call, the processor can retain all the map fragments it has cached in its TLB.

With a microkernel, almost everything that used to be a system call now falls under the heading "passing a message to another process". Flipping a supervisor bit is no longer enough to implement the memory protection, because a single user process's system calls involve separate memory maps for 1 user process + 1 microkernel + n system processes, and a single bit has enough states for only two maps. Instead of using the supervisor-bit trick, the microkernel must switch the virtual memory map at least twice for every system-call-equivalent: once from the user process to the system process, and once again from the system process back to the user process. That costs more than flipping a supervisor bit: there's bookkeeping to juggle the maps, and there are also two TLB flushes.
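A crude way to feel this cost difference, a sketch rather than a real microkernel benchmark: time a trivial system call against a message round trip between two UNIX processes connected by pipes. The pipe ping-pong forces at least two address-space switches per iteration, standing in for the microkernel's send/receive path. The iteration count is arbitrary, and some libc versions cache getpid() in userspace, so substitute another cheap system call if the first number looks impossibly small.

    /* Crude stand-in for the comparison above: a trap into a monolithic
     * kernel (supervisor bit flips, memory map stays put) versus a
     * message round trip to another protected process (at least two
     * address-space switches plus the TLB traffic they imply). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define ITERS 100000

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        int to_child[2], to_parent[2];
        char byte = 'x';
        double t0;
        int i;

        /* Cost of a plain system call: no map switch, no TLB flush. */
        t0 = now();
        for (i = 0; i < ITERS; i++)
            getpid();
        printf("system call:        %.3f us each\n", (now() - t0) / ITERS * 1e6);

        pipe(to_child);
        pipe(to_parent);
        if (fork() == 0) {               /* the "system process": echo bytes back */
            while (read(to_child[0], &byte, 1) == 1)
                write(to_parent[1], &byte, 1);
            exit(0);
        }

        /* Cost of a message round trip: two context switches minimum. */
        t0 = now();
        for (i = 0; i < ITERS; i++) {
            write(to_child[1], &byte, 1);
            read(to_parent[0], &byte, 1);
        }
        printf("message round trip: %.3f us each\n", (now() - t0) / ITERS * 1e6);

        close(to_child[1]);              /* lets the child see EOF and exit */
        return 0;
    }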

A practical example might involve even more overhead since two processes is only the minimum involved in a single system-call-equivalent. For example, reading from a file on QNX involves a user process, a filesystem process and a disk driver process.

What is the TLB flush overhead? The TLB stores small pieces of the virtual-to-physical map so that most memory accesses end up consulting the TLB instead of the definitive map stored in physical memory. Since the TLB is inside the CPU, the CPU's designers arrange that TLB consultations shall be free.

All the information in the TLB is derived from the real virtual-to-physical map stored in physical memory. The whole point of memory protection is to give each process a different virtual-to-physical mapping, thus reserving certain blocks of physical memory for each process. The map stored in physical memory can represent this multiplicity of maps, but the map fragment cached in the high-speed hardware TLB can represent only one mapping at a time. That's why switching processes involves TLB flushing.

Once the TLB is flushed, it becomes gradually reloaded from the definitive map in physical memory as the new process executes. The TLB's gradual reloading, amortized over the execution of each newly-awakened process, is overhead. It therefore makes sense to switch between processes as seldom as possible and make maximal use of the supervisor bit trick.
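The reload cost is easy to provoke from userspace. The sketch below (mine, not the article's) touches one byte per 4KB page across a 64MB region, so nearly every access needs a TLB entry the hardware doesn't have, then performs the same number of touches packed into a few pages. The measured gap conflates TLB walks with ordinary cache misses, so treat the numbers as illustrative rather than a pure TLB measurement; page size is assumed to be 4KB.

    /* Provoke TLB reload cost: one touch per 4 KB page across 64 MB
     * (virtually every access needs a page-table walk) versus the same
     * number of touches packed into a few pages (TLB hits throughout). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #define PAGE   4096
    #define NPAGES 16384                   /* 64 MB, far beyond TLB reach */
    #define PASSES 64

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        char *buf = malloc((size_t)NPAGES * PAGE);   /* assume this succeeds */
        volatile char sink = 0;
        double t0;
        int i, pass;

        memset(buf, 1, (size_t)NPAGES * PAGE);       /* fault every page in first */

        t0 = now();
        for (pass = 0; pass < PASSES; pass++)
            for (i = 0; i < NPAGES; i++)
                sink += buf[(size_t)i * PAGE];       /* one touch per page */
        printf("sparse (TLB-hostile):  %.3f s\n", now() - t0);

        t0 = now();
        for (pass = 0; pass < PASSES; pass++)
            for (i = 0; i < NPAGES; i++)
                sink += buf[i];                      /* same count, 4 pages */
        printf("packed (TLB-friendly): %.3f s\n", now() - t0);

        free(buf);
        return 0;
    }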

Microkernels also harm performance by complicating the current trend toward zero copy design. The zero copy aesthetic suggests that systems should copy around blocks of memory as little as possible. Suppose an application wants to read a file into memory. An aesthetically perfect zero copy system might have the application mmap(..) the file rather than using read(..). The disk controller's DMA engine would write the file's contents directly into the same physical memory that is mapped into the application's virtual address space. Obviously it takes some cleverness to arrange this, but memory protection is one of the main obstacles. The kernel is littered conspicuously with comments about how something has to be copied out to userspace. Microkernels make eliminating block copies more difficult because there are more memory protection barriers to copy across and because data has to be copied in and out of the formatted messages that microkernel systems pass around.
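Here is the contrast as a minimal sketch (assuming a non-empty file path in argv[1]; error handling trimmed): the read() path makes the kernel copy every byte from its page cache across the protection boundary into the user's buffer, while the mmap() path maps the page-cache pages straight into the process, so the file's bytes need never be copied at all. Both paths compute the same checksum; the difference is where the bytes live.

    /* One-copy versus zero-copy file read. Pass a file path as argv[1]. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static long checksum(const unsigned char *p, size_t n)
    {
        long sum = 0;
        while (n--)
            sum += *p++;
        return sum;
    }

    int main(int argc, char **argv)
    {
        int fd = open(argv[1], O_RDONLY);
        struct stat st;

        fstat(fd, &st);

        /* read(): the kernel copies page cache -> user buffer. */
        unsigned char *buf = malloc(st.st_size);
        read(fd, buf, st.st_size);
        printf("read(): %ld\n", checksum(buf, st.st_size));
        free(buf);

        /* mmap(): the user pages ARE the page-cache pages; no copy. */
        unsigned char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        printf("mmap(): %ld\n", checksum(map, st.st_size));
        munmap(map, st.st_size);

        close(fd);
        return 0;
    }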

Existing zero copy projects in monolithic kernels are paying off. NetBSD's UVM is Chuck Cranor's rewrite of its virtual memory system under the zero copy aesthetic. UVM introduces page-loanout and page-transfer functions that NetBSD's earlier VM lacked. These functions embody the zero copy aesthetic: they sometimes eliminate the kernel's need to copy out to userspace, though only when the block that would have been copied spans an entire VM page. Some of the speed improvement no doubt comes from cleaner code, but the most compelling part of Cranor's PhD thesis discusses saving processor cycles by doing fewer bulk copies.

VxWorks was among the earliest kernels to boast a zero copy design, in its TCP stack. Wind River was probably motivated by reduced memory footprint, but the zero copy stack should also be faster than a traditional TCP stack. Applications must use the zbuf API, not the usual Berkeley sockets API, to see the benefit. For comparison, VxWorks has no memory protection at all, not even between the kernel and the user's application.

BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.

Zero copy is an aesthetic, not a check-box press-release feature, so it's not as simple as something a system can possess or lack. I suspect the difference between VxWorks's and QNX's TCP stacks is one of zero copy versus excessive copies.

The birth and death of microkernels didn't happen overnight, and it's important to understand that these performance obstacles were probably obvious even when microkernels were first proposed. Discrediting microkernels required actually implementing them, optimizing message-passing primitives, and so on.

It's also important not to laugh too hard at QNX. It's somewhat amazing that one can write QNX drivers at all, much less do it with unusual ease, given that their entire environment is rigidly closed-source.

However, I think we've come to a point where the record speaks for itself, and the microkernel project has failed. Yet this still doesn't cleanly vindicate Linux merely because it has a monolithic kernel. Sure, Linux need no longer envy Darwin's microkernel, but the microkernel experiment serves more generally to illustrate the cost of memory protection and of certain kinds of IPC.

If excessive switching between memory-protected user and system processes is wasteful, then isn't excessive switching between two user processes wasteful as well? In fact, this issue explains why proprietary UNIX systems use two-level thread architectures that schedule many user threads inside each kernel thread. Linux stubbornly retains one-level kernel-scheduled threads, like Windows NT. Linux could perform better by adopting proprietary UNIX's scheduler activations or Masuda and Inohara's unstable threads. This performance issue is intertwined with the dispute between the IBM JDK's native threads and the Blackdown JDK's optional green threads.
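A minimal sketch of why user-level threads are cheap, using the portable ucontext calls as a stand-in for the hand-tuned assembly a real two-level thread library would use: swapcontext() switches threads entirely in user space, with no trap, no supervisor-bit flip, and no TLB flush, where switching between kernel-scheduled entities costs at least a trap and often a map switch.

    /* Two "threads" multiplexed inside one process: swapcontext() never
     * enters the kernel, so there is no trap and no TLB flush on switch. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, green_ctx;

    static void green_thread(void)
    {
        int i;
        for (i = 0; i < 3; i++) {
            printf("  green thread, pass %d\n", i);
            swapcontext(&green_ctx, &main_ctx);   /* yield, purely in userspace */
        }
    }

    int main(void)
    {
        static char stack[64 * 1024];             /* the green thread's stack */
        int i;

        getcontext(&green_ctx);
        green_ctx.uc_stack.ss_sp = stack;
        green_ctx.uc_stack.ss_size = sizeof stack;
        green_ctx.uc_link = &main_ctx;            /* where to go if it returns */
        makecontext(&green_ctx, green_thread, 0);

        for (i = 0; i < 3; i++) {
            printf("main thread, pass %d\n", i);
            swapcontext(&main_ctx, &green_ctx);   /* "schedule" the green thread */
        }
        return 0;
    }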

Given how the microkernel experiment has worked out, I'm surprised by Apple's quaint choice to use a microkernel in a new design. At the very least, it creates an opportunity for Linux to establish and maintain performance leadership on the macppc platform. However, I think the most interesting implications of the failed microkernel experiment are the observations it made about how data flows through a complete system, rather than just answering the obvious question about how big the kernel should be.

Miles Nordin is a grizzled FidoNet veteran and an activist with Boulder 2600 (the 720) currently residing in exile near the infamous Waynesboro Country Club in sprawling Eastern Pennsylvania.

______________________

Comments


Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


I think your understanding of the application of microkernels is not well demonstrated by your article.

Memory protection is to prevent one process from impacting (writing over) another. End of story. Your 3 motivations are a user's view on the whole thing. Every modern OS uses memory protection. So, it's a good thing.

QNX is a fine OS for what it's made for. It's an OS that can be used in a flexible form factor. You can squash a QNX OS into some horribly small amount of memory, and it can also use as much as you want. I think this is part of the goal of a microkernel.

When you have a microkernel, you can add on systems to fill the space you have available. If you've got 100K of memory, you want nothing more than memory management. If you've got a bit more, you might want to add in text screen handling; a bit more again, and you might want to add disk handling, etc. With a monolithic kernel you can't do that.

Hence, microkernels are good for flexibility of delivery on a platform. Monolithic kernels are good when you need to deliver the latest and greatest, and memory/disk is not a problem.

What about windows NT/2000/XP ?


As far as I know, the Microsoft Windows versions based on NT are all microkernels -- and while there are discrepancies in performance, they are nowhere near large enough to warrant macro-over-micro decisions.

In fact, Windows 2000 will in general perform snappier than a Linux box running X11.

Obviously an unfair comparison, because of other differences in design (in X11 and Windows, in the kernel) and driver support.

But I still feel your entire article reads like a rant about "we are better, because this is how we do things!", with very little substance to it.

Windows is NOT based on a Microkernel Architecture


Windows is largely a monolithic kernel. I say largely because device drivers are loaded in the kernel space. Most of the GDI has been moved away from the kernel in Windows 2000... so in a sense it is less monolithic than its predecessors, but it is monolithic nevertheless. There are parts of the operating system that are protected from drivers (particularly in Windows Server 2003), but these are used for debugging and diagnostics purposes only.

Re: What about windows NT/2000/XP ?


NT/Win2k/XP is highly NON-microkernel, and many critics complain that there is too much code in its kernel space. All *.SYS device drivers have full access to the single address space in kernel mode. Even parts of the Win32 windowing subsystem are in kernel space (win32k.sys) for performance reasons. There are also several hundred exported kernel methods, which is very much the opposite of the microkernel goal (to reduce the number of methods).

Re: What about windows NT/2000/XP ?


Also, NT/Win2k/XP do in fact delegate a lot of responsibilities to user-mode "helper" service processes that interact directly with the kernel through messages passed via the LPC (local procedure call) mechanism. However, such interactions are generally for events that have higher latency tolerance, such as event logging, network login authentication, and high-level network protocols (like SMB). The actual low-level network stacks are of course kernel-mode .sys drivers.

Re: What about windows NT/2000/XP ?


Hah-hah, this is the real evidence that pure micro-kernel sucks.

I recollect the time of NT 3.51 - it was so slow!

The main reason NT4 got the GUI "built in" to the kernel was the initial slow micro-kernel design. So the pure-mk zealots at MS were compelled to compromise and take a little step toward the monolith. ;)

Nevertheless monolithic kernels have well-known problems with SMP.

Re: What about windows NT/2000/XP ?


From the days of Red Hat 6.2 to present-day Mandrake 8.2, I have yet to see Windows 2000 on my computer EVER be more snappy than a GNU/Linux box running an X server. Neither have I seen Windows 2000 be more snappy on any other box. I run Windows 2000 and try different/new-release Linux distros more than I have sex. This is a problem. But through experience I still have never ever seen Windows 2000 be more snappy than a Linux box running an X server.

Some recollections...


Forgive me if my facts are not too correct on this one, but as I recall NT, which we now call XP, is not a micro-kernel. At the time NT was developed, MS was chasing the UNIX industry (still is), which had the matured SVR3 kernel and the micro-kernels CMU Mach and Chorus/Mix (now owned by Sun). As Destiny (SVR4) was about to be released, the whole industry (IBM, HP, Digital) rallied around OSF/1 (Mach-based) because SVR4 was developed by AT&T and Sun (laugh!). Sun still ships SVR4-based systems (Solaris), and Digital was the only one to ship OSF/1. Compaq subsequently bought Digital and renamed OSF/1 to Tru64, which we all know has some success in research-based industries.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


How can you put so much FUD in one single sentence?

"Apple's quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware."

Let's examine this intro/conclusion/summary:

False statement #1: OS X is a microkernel!

Wrong assumption #1: microkernels are slower than monolithic and will remain slower in the future!

Wrong assumption #2: Linux will never be microkernel based!

Wrong assumption #3: "Performance of an OS" is equivalent to "performance of the kernel"!

False statement #2: the "OS X kernel" only runs on mac hardware!

pffffffffff!

People don't put $600M into Introspective State Research for Not


I do know something about Mach. Darwin is not Mach, but is a modern compromise, so these comments are about true microkernels.

DARPA and the NSA invested lots of money in microkernels. So did the Open Software Foundation. The reasons were partly hardware abstraction, but the real reason had to do with introspective fine-grained state control.

At present, few people think this is worthwhile, especially given the extra "cost." But sooner or later you may discover the immense security benefits, and the necessity of introspective state in fine-grained agent systems in huge (but realistic) enterprises. You will see this soon, I think.

Meanwhile, Apple is happy to exploit this capability for the relatively mundane advantages it gives for very-high-load distributed streaming servers.

So there are plusses on the MK side. Think also about the compromises implicit in the monokernel: Linux has only one goal: to make something much like (server-flavored) UNIX run on cheap Intel chips. Despite the fact that there are a gazillion of them, it's a pretty narrow design goal.

Misinformed..


Miles,

It would appear that you haven't done your research w/r/t OS X's kernel architecture. The OSX/Darwin kernel uses Mach's code for processor scheduling and memory allocation *only*. All the rest of the kernel services, the IOKit, the networking stack, etc, run in the same address space as the Mach kernel.

You've done a superb job of trolling, by arguing against something that Apple's not doing.

"I'm not sure how Darwin's drivers work..."


...or what kernel arrangement Apple actually uses, or how fact checking is done, or how benchmarking is done, or... Um, maybe you should do some research before opening your maw next time. God, I love the web. Every moron in the world has a voice. I don't particularly care for Apple, but I try to maintain some level of objectivity and give them a little credit for their Mach/BSD mindmeld. They have done a good job with OS X. Our graphics department's Macs are enviably stable, and our IT guy never spends any time in their hall, go figure. But that's a different story altogether.

Hey, man,


You should work for MozillaQuest, man. All your fact-twisting, dodging of the issues, and avoidance of ever revealing your utter lack of knowledge as to how Darwin works. Man, I tell you, Mike Angelo would be hard-pressed to write a better example of journalistic irresponsibility.

Great piece of FUD. Perhaps you should consider working for Billy?

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Miles Nordin is a grizzled FidoNet veteran and an activist with Boulder 2600 (the 720) currently residing in exile near the infamous Waynesboro Country Club in sprawling Eastern Pennsylvania.

Well, it clearly shows that he is in exile. I vote he stays there.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Just one word: Audio.

Apple has gone to great lengths in outlining the latency issues of Audio under OS X.

They are claiming that latency on Mac OS X is going to be a non-issue. Getting 'sound data' into the system, deep inside the kernel and out again in an extremely short period of time (I think 200 microseconds in the case of MIDI) doesn't seem to indicate any kind of 'lag'.

I haven't seen any real-world performance benchmarks for this, but Apple wouldn't be making such pronounced statements about the issue if it didn't think it could carry through with its claims.

They obviously think that the kernel is up to the job.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


MIDI is 38 kb per sec, while pro audio (like multiple channels at 96 kHz, 32 bits wide) is another story, in bandwidth and latency terms.

Anyway, audio support in Mac OS X seems very good. And not only in latency.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


I don't think he was talking about bandwidth there, but about the time it takes to get a MIDI event in and out of the system.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


" True, linux is monolithic, and I agree that microkernels are nicer. " Linus Torvalds, 1992

Torvalds admits that Linux would have a microkernel if one had been available when he started the project.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Yeah, if only there had been a microkernel!

Oh wait! There was! It was called Minix, and it's almost precisely the reason WHY Linus wrote the first Linux kernel. (Because Minix was just so horrendously slow.)

Don't accuse someone of being an ignorant troll if you plan on being one yourself!

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


http://news.zdnet.co.uk/story/0,,s2085525,00.html

"Frankly, I think it's a piece of crap," Torvalds says of Mach, the microkernel on which Apple's new operating system is based. "It contains all the design mistakes you can make, and manages to even make up a few of its own."

http://www.itworld.com/Comp/2384/LWD010410maccomments/

"I used to like the _concept_ of microkernels, I just disliked every implementation I had ever seen (both Mach and Minix included, which was the basic reason for the debate/flamewar in question). These days I've pretty much come to the conclusion that the reason few people like microkernel implementations is that the whole concept is flawed -- even if it sounds good in theory."

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Truly a troll, since he didn't even do any basic research. While Apple does use Mach, it doesn't use it as a micro-kernel. It is used more like NT's HAL, to abstract dealing with hardware, threads, lock primitives, and such for the BSD 4.4 kernel that runs in the SAME kernel memory space as Mach. That's right: the BSD kernel DOES NOT run in user space. The BSD kernel makes function calls (not message-passing RPC/IPC) to the Mach kernel, so there is very little overhead, and certainly not the kind of overhead the author attributes to OS X. In fact, Apple's Mach/BSD kernel is known to trounce Linux and just about any other OS in at least one area (real-time sound processing via Core Audio).
Here is more from Apple on the Mach/BSD kernel.

I guess it's easier to put your foot in your mouth than to do a search on Google or Apple's site.

Where are the benchmarks?


Can you show us how much better the performance of Linux on the PPC platform is vs. OS X? I think this article is full of obscure overgeneralizations.

Re: Where are the benchmarks?

by RJDohnert

Actually, my company and I have done tests with Linux on PPC, Linux on x86, and Mac OS X. The winner was a dual AMD Duron at 1.6 GHz: it rendered Photoshop filters under VMware faster than Mac OS X did with the native version of Photoshop. All three machines had 512 MB of RAM, 60 GB HDs and dual processors, but the Macs were only available at 1 GHz. The AMDs were 35% faster.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

by davechen

This is the stupidest, most ignorant troll I've read in a long time. Why don't you try getting a clue before you post?

http://developer.apple.com/techpubs/macosx/Darwin/General/KernelProgramm...

Mac OS X is based on Mach, but it also includes BSD facilities in the kernel. Darwin is anything but a microkernel.

Here's what's in the kernel. The BSD component provides the following kernel facilities:

  • processes and protection
    • host and process identifiers
    • process creation and termination
    • user and group IDs
    • process groups
  • memory management
    • text, data, stack, and dynamic shared libraries
    • mapping pages
    • page protection control
    • synchronization primitives
  • signals
    • signal types
    • signal handlers
    • sending signals
  • timing and statistics
    • real time
    • interval time
  • descriptors
    • files
    • pipes
    • sockets
    • POSIX shared memory
    • POSIX synchronization primitives
  • resource controls
    • process priorities
    • resource utilization and resource limits
    • quotas
  • system operation support
    • bootstrap operations
    • shut-down operations
    • accounting

Does that look like a microkernel?

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


You are absolutely right!

The article is a pure FUD product from a troll. Since when do trolls have any real knowledge of what they are talking about?

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


I have neither the knowledge nor the interest level to engage in the long-running microkernel vs. monolithic kernel debate. But I think there are a few inaccuracies here. First, you speak as if Mac OS X is an example of an implementation of a pure Mach microkernel, which I don't believe it is. From Apple's own Developer Technical Publications:

...in traditional Mach-based operating systems, the kernel refers to the Mach microkernel, and ignores additional low-level code without which Mach does very little. In Mac OS X, however, the kernel environment contains much more than the Mach kernel itself. The Mac OS X kernel environment includes the Mach kernel, BSD, the I/O Kit, file systems, and networking components. These are often referred to collectively as the kernel.

Second, you make claims about inferior performance in microkernel-based operating systems as compared with monolithic kernel OS's, specifically OS X vs. Linux, without providing hard evidence. You describe the principles behind this performance claim, which I am in no position to dispute; but without real examples and data, how are we to know whether this microkernel performance hit is large enough to even be noticed, let alone degrade the user experience in general? And this is leaving aside the probability that a large factor in Apple's choice of kernel implementation, given their target market, was to obviate any need to ever recompile the kernel.

And finally, it is incongruous, to say the least, for a hyperlink on the "death" of the microkernel to point to Linus's O'Reilly book. Yes, of course Linus doesn't like microkernels; this is not news. But does his distaste for them somehow "kill" them, particularly given the number of OS X installations currently out there, and the numbers projected for the years ahead? No, personal feelings on all fronts aside, I think it's a bit silly to characterize as dead and failed a kernel architecture merely because you and some other people don't favor it.
Again, I don't know nearly enough about this material to talk intelligently at length about it; I know enough, though, to know that this article is stretching the truth a bit.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Ahem...

QNX is targeted towards embedded real-time applications, a department where Linux is fledgling at best; see the following link. EE Times recently had an article describing the difficulty businesses are having completing development of ideas using Linux because of the problems caused by the open-source OS. See: http://www.eetimes.com/sys/news/OEG20020510S0085

You say "Their claims about stability of the finished product become suspicious to any developer who actually starts working with QNX" Forgive me, but my impression is that companies using Linux are saying it about Linux, not the other way around." How do you back up the above assertion? Do these "developers" really understand how to use the OS or are they just Linux developers trying to program like a Linux developer on QNX? Hhmmm?

Secondly, QNX 6.1 outperforms VxWorks in almost every area, even when VxWorks uses a 'zero copy design'! Oh yes, this is more than just hearsay... I do have references; this is a report done by a third party, not by QSSL (QNX Software Systems Ltd). You can see the details here: http://www.dedicated-systems.com/encyc/

If you want to focus on the performance of the TCP/IP stack, you have a very narrow vision indeed. Using a single aspect of the system as a performance indicator is dubious at best. How can your article be considered credible when you do not take into account the full range of things important to a real-time embedded systems developer?

Lastly...who is the Zealot here? I have cold hard data. Where's yours?

I don't have a lot of time to respond to all the points of your article, but it's pretty clear to me that you have more homework to do.

Kevin

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

by carton

Hi, Kevin! I would like to state, for the record, that QNX RULES. While I don't exactly believe that VxWorks sucks, QNX definitely has more impressive ``technology'' than VxWorks.

My comment that QNX's ``claims about stability of the finished product become suspicious'' does need a lot of context, and maybe even the original article does a bad job of providing it. What I meant was: QNX claims their microkernel makes finished QNX projects more stable, but it does not. If a microkernel process crashes, sure, QNX will contain the crash, but the embedded project's user still experiences this as instability. The microkernel benefits only the developer, not the finished product's users.

My article carefully stated that microkernels are an advantage to kernel developers, and sure enough this is where QNX really shines: small production runs where software development costs dominate the project. I imagine production runs would have to get pretty big before soldering 4MB of DRAM into the device instead of 8MB is worth the additional difficulty of host-target debugging with no memory protection. But such projects do exist.

Your point that copyright makes Linux a hard choice for many jobs (no matter how Linus says he interprets his GPL) is one with which I agree. I've made the same argument to inquisitive corporate masters who asked me what I thought of ditching QNX for Linux. BSD is a much easier choice for projects that need to escape QNX's steep license costs and nickel-and-dime ``modularity,'' and Wasabi Systems has a business supporting embedded BSD projects. All other things being equal, I would prefer to work on an embedded project with BSD. On the other hand, it's much easier to find books about Linux internals than BSD internals, which is probably one strong, valid reason why most other developers like to pressure their bosses to embed Linux.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


So why are you completely sidestepping the OS X kernel issue now that 60+ people have pointed out your article had no basis in reality? I think the article should be pulled; it's that far off base.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Thanks for responding!

Three things:

1. The microkernel benefits more than just the developer. In Linux, if a driver crashes, the whole kernel, everything, goes down. In QNX, yes, you can slay Fsys, but this doesn't necessarily mean the whole system goes down. The file system isn't necessary to support the operation of the OS. Yes, you will not have command-line capability, as the shell requires the file system, but you still have QNET or FLEET (QNX4) networking. This allows transparent access to the kernel via the network, allowing a restart of the file system and any other affected components. I've done it.

If your code checks return values from read and write operations, it can wait until Fsys is started again, or it can put the system into a safe mode. With Linux you haven't got a prayer.

If the system is designed around the IPC/microkernel philosophy (not just 'ported', but engineered to take advantage of the QNX OS), much better reliability can be obtained. Maybe someone can fill in the details, but a chemical plant that processes a very volatile acid uses QNX to run their system. When I heard this guy speak (his name was Dan, I think), their system had been up for 3 or more years and never had to be rebooted. This included software updates where critical pieces were replaced 'hot': they wrote the new (and tested) binary, stopped the process it was to replace, and started the new one.

It can be done; I've done it myself on a hybrid electric tank project. Processes/drivers can drop out of memory or be stopped if they have a bug, then be fixed and restarted without having to stop and restart the processes that depend on them.

You said "If a microkernel process crashes, sure, QNX will contain the crash, but the embedded project's user still experiences this as instability." This is not necessarily true. At best, a crashed process can be restarted (automatically) without the user even knowing about it. At worst, there will be a small inconvenience as the user has to do some operation twice.

But the point is the user still has a functioning unit... even if the problem recurs and is annoying... the device is still able to do its job. If a device driver causes a problem in Linux and crashes the kernel, nothing works at all until it is rebooted; this isn't acceptable in many applications.

Designed correctly, applications benefit from QNX's microkernel approach.

2. The article that I indicated covered much more than just copyright issues... companies have spent years and tons of money on projects using embedded Linux, and they can't come up with a product. Linux is not free by any stretch of the imagination. Furthermore, in quantity, the royalty payments to QNX become small, and it really isn't that expensive to start with.

3. I wasn't exactly saying that VxWorks sucked; I was addressing your remark that microkernels can't perform as well as monolithic kernels. This simply is not true. I said QNX still outperforms it in all areas (with the exception of one or two), even with VxWorks' 'no copy' philosophy. QNX not only has more impressive "technology", they've done it with little or no cost in speed and performance.

Thanks for taking the time to respond.

Kevin

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


The main focus of the article is the difficulty companies are having in selling Linux solutions in the embedded market, rather than how much it's being used by embedded developers.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Yes, selling was a major theme, but so were technical soundness and the development of Linux-based systems.

To quote:

"Linux is supposed to be free, but in truth the development costs are very high," noted Jerry Krasner, vice president of Market Intelligence for Embedded Market Forecasters (Framingham, Mass.). "Executives often end up saying, 'We spent 65 percent of our R&D budget and didn't get anything.'"

"showed that the main reasons for not using Linux were concerns over memory footprints, GPL licensing and Linux's real-time limitations."

"Real-time Linux isn't going anywhere, because embedded developers don't trust it for mission-critical applications," said Krasner of Embedded Market Forecasters. "In those areas, they're going to stick with the systems they already know."

End of quotes

There are some positive remarks made in the article as well, time will tell.

Kevin

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Mr Nordin makes his case for the monolithic kernel yet leaves me wanting additional information.

While he derides the microkernel, I believe he should show real-world statistics to clearly delineate the shortfalls of this architecture. Until he can show results, this is nothing more than an opinion piece. I don't mean benchmarks either; I mean using applications in a day-to-day environment.

The argument that Linux outperforms OS X misses the point for a couple of reasons. First, unless the performance differences are orders of magnitude, who cares! Kernel hackers and other programmers might, but most users won't. They want to get work done and don't give a damn about the kernel in any form. Second, given the politics of the Linux community, I don't see them making significant inroads into the Mac/LinuxPPC community. KDE/GNOME are not going to gather legions of Mac users and cause them to switch from OS X. OS X, while not without faults, is already a very strong contender on the user's desktop and could be in the server arena also with Xserve. Time will tell. Also, if this architecture is as bad as Mr Nordin claims, I'm sure the engineers at Apple will deal with it. Otherwise they will be marginalized and Apple will fail.

While this article is informative and a good read, I find that its subject confines itself to an audience that is somewhat small yet presumes to speak to all. I think it's rather quaint.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Maybe you should find out how things are done in Darwin, instead of spouting in ignorance. AFAIK drivers are (or can be) written as dynamic kernel extensions.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


FUD in its purest form.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


>>> I think we've come to a point where the record speaks for itself, and the microkernel project has failed.

I'm also not sure that your criticism holds for multiprocessor systems. Apple currently offers dual-processor systems, and I think that quad and eight-way systems are not too far in the future. Is it possible that at some number of processors OS X could outperform a monolithic kernel? Is it possible that 4 processors with 2 MB of on-chip L2 cache each, on a single piece of silicon (the PowerPC G5), might be able to pass messages fast enough to compensate for the extra overhead? Is it possible that certain classes of programs are not affected (or actually perform better) in a microkernel-based OS?

One of my rules of thumb, developed over the last 20 years working on computers, is: "If it works, you've gotta respect it." OS X does work right now, and it runs my UNIX System V scripts from the 80's as well as loads of other debugged, off-the-shelf programs. If OS X is less efficient than Linux, but more open/efficient/reliable than Win Whatever, then I think OS X is a compelling product.

As a programmer I am more interested in the programming environment than the OS internals. I am especially interested in Cocoa, which came from NeXTSTEP, the programming environment that gave birth to the first web browser. A slow, reliable program is always more valuable than a theoretically faster but not-yet-debugged program.

BUT, how many times faster could programs run on multiprocessor Linux vs. OS X? Nothing says you cannot develop in the mature OS X/Cocoa/NeXTSTEP environment and then port to Linux if it is 2 or more times faster on the same hardware. I think we'll have to run these tests over the next few years before we can say the record speaks for itself.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


JESUS ***** CHRIST

How did this trash get published?

Here's a clue for those who don't want to be as painfully ignorant and egotistical as our dear Mr. Miles Nordin:

Linux runs on PowerPC Macintoshes

MacOS X runs on PowerPC Macintoshes

Lots and lots of quantitative benchmarks will compile and run under both platforms.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

by RJDohnert

How about this:

Mac OS X sucks; it is just a vamped-up version of NeXTSTEP.

Linux rules; it has been in use longer than Mac OS X has, and it doesn't have the proprietary useless crap Apple puts in Mac OS X.

What the hell?


"I'm not sure how Darwin's drivers work..."

Why are you pretending to, then?

I don't think you know anything about OS X. The kernel space in OS X is more than Mach. It is also the I/O Kit (an object-oriented driver framework in embedded C++), the file system (based on an enhanced VFS) and its plug-ins, network kernel extensions (NKEs), and a customised BSD 4.4 (exporting APIs to user space). All this runs in kernel space, i.e. without memory protection between the parts.

You claim poor OS X performance without a single benchmark, in any specific task, area of the OS, or end-user expectation. For example, OS X has a real-time latency as low as 1 millisecond when handling audio (via CoreAudio). This is exceptional. Windows 2000 and Linux do well to get under 100 milliseconds.

You're a Linux troll.

Re: OSX audio latency



"This is exceptional. Windows 2000 and Linux do well to get under 100 milliseconds."

So far no sequencers are even out for OS X and yet you're trying to quote latency... I know many people who get 1 ms latency in Windows now with lots of tracks of audio and softsynths. 100 ms? Huh, wtf are you on?

That's 1 ms now, compared to OS X, which has pretty much zero pro audio software.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Think of kernels as tools -- to each one its own task (wwws.sun.com/software/chorusos/index.html).

Re: Huh?


I'm no expert on kernel technologies... but the author himself says that:

"I'm not sure how Darwin's drivers work"

and then goes on to explain why the implementation of Mach on Darwin (OS X) is a failure.

Should you be writing an article condemning something as obsolete when you admittedly don't understand it?

As I understand it, a big reason for choosing a microkernel was to avoid the need to recompile the kernel when new drivers are added. It also allows for some other things, like the file system being abstracted from the rest of the OS, so you can use very different filesystems (HFS+ and UFS currently) and both work pretty much transparently to the user.

I fully admit that the abstraction that goes on within Mach could very well cause it to trail the Linux kernel in terms of raw performance... but as the author has stated, there are some non-speed-related advantages to a microkernel...

my 2 cents...

Someone didn't do his homework...


It really doesn't hurt to do some research before starting a rant that only makes you look silly and ignorant :) See e.g. here and here.

Jonas

Obsolete Microkernel? Prove It.


Interesting read--if most of us could understand Geekspeak.

Unfortunately, I failed (after several reads) to glean what evidence Miles has on overall system performance with Mac OS X's specific implementation of Mach versus previous versions or previous microkernels elsewhere. Specifically, no real evidence exists in this article, only a reiteration of the debate theme of "Well, it sucked then, so it must suck now."
A good example of this tired argument is Miles' example of using Mach in MkLinux, which he iterates:
"This arrangement is a good business argument for Apple funding mkLinux: all the drivers for their proprietary hardware, thus much of the code they funded, stays inside Mach, where it's covered by a more favorable (to them) license."
Apple stopped MkLinux development years ago and does not use it in any way as leverage for any current Apple product. The "proprietary hardware" argument is old as well: since only Compaq makes Compaq boxes, that makes their computers proprietary too, correct? No one can fault a manufacturer for making a product in the way they see fit--in Apple's case, its hardware is competitive and the bulk of their business. If Apple resorted to using common motherboards and chips (the ONLY truly proprietary components in these systems), then Apple would fade, and quickly, as a leader and as a business. If Apple leverages anything for its OS X development, it's FreeBSD, from which they have recruited many key users and developers such as Jordan Hubbard.

I'm sure there are some valid points here that make Linux a competitive kernel over others. It's educational. I'm sure that PowerPC Linux distros such as SuSE, Mandrake, and Yellow Dog are quite potent. But some real benchmarks are needed. Simply talking theory isn't going to cut it. Try installing the Darwin distro (OS X's open-source core runs on Intel as well) on a PC box and try making some real-world comparisons.

Dude, do research before you write.


The Mac OS X kernel, XNU as it is called, is a ... drumroll, please ... monolithic kernel ... That's right, Apple chose to melt together parts of Mach and parts of BSD into one big image. There is no microkernel design here. Everything lives in kernel land, except for some drivers, like those for USB, which is also not uncommon for competing operating systems.

Too bad the author of this article didn't do his research before writing about the microkernel zealots. Ah well, I guess that's part of a religion: assuming things that don't fit your world, without checking facts first.

Sateh

Re: Dude, do research before you write.


It's actually a hybrid, but good point nevertheless.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


I found your article very interesting.

However, your pooh-poohing of Mac OS X is quite unfounded.

If you were to have gone to Apple's developer web site and actually read this page;

http://developer.apple.com/techpubs/macosx/Darwin/General/KernelProgramm...

you would have discovered that "Apple has modified and extended Mach" 3.0.

The kernel environment links Mach "with other kernel components into a single kernel address space." This appears to be exactly what the Linux (monolithic) kernel does, according to your article. Thus Mac OS X, with its MODIFIED Mach 3.0 kernel, does not suffer from Mach 3.0's performance barrier and should have similar performance to Linux in this respect. Further, Mac OS X's kernel retains a microkernel's modularity, which is not possible with Linux's monolithic kernel.

See this page for a diagram and description of the kernel environment:

http://developer.apple.com/techpubs/macosx/Darwin/General/KernelProgramm...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa


Macs suck and always have.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

by RJDohnert

It will suffer from Mach's performance barriers because of the implementation. Can you spell kernel panic? Anyone who has ever used Mac OS X can...
