Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

Apple's quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.

Disagreements exist about whether microkernels are good. It's easy to get the impression they're good because they were proposed as a refinement after monolithic kernels. Microkernels are mostly discredited now, however, because they have performance problems and because the benefits originally promised are a fantasy.

The microkernel zealot believes that several cooperating system processes should take over the monolithic kernel's traditional jobs. These system processes are isolated from one another by memory protection, and that isolation is the supposed benefit.

Monolithic kernels circumscribe the kernel's definition and implementation as "the part of the system that would not benefit from memory protection".

When I state the monolithic design's motivation this way, it's obvious who I believe is right. I think microkernel zealots are victims of an overgeneralization: they come to UNIX from legacy systems such as Windows 3.1 and Mac OS 6, which deludes them into the impression that memory protection everywhere is an abstract, unquestionable Good. It's sort of like the common mistake of believing in rituals that supposedly deliver more security, as if security were a one-dimensional concept.

Memory protection is a tool, and it has three common motivations:

  1. to help debug programs under development with less performance cost than instrumentation. (Instrumentation is what Java or Purify uses.) The memory protection hopefully makes the program crash nearer to the bug than without it, while instrumentation is supposed to make the program crash right at the bug.

  2. to minimize the inconvenience of program crashes.

  3. to keep security promises even when programs crash.

''Because MS-DOS doesn't have it and MS-DOS sucks'' is not a motivation for memory protection.

Motivation #1 is a somewhat legitimate argument for the additional memory protection in microkernel systems. For example, QNX developers can debug device drivers and regular programs with the same debugger, making QNX drivers easier to write. QNX programmers are neat because drivers are so easy for them to write that they don't seem to share our idea of what a driver is; they think everything that does any abstraction of hardware is a driver. I think the good debugging tools for device drivers are what keep QNX commercially viable as a Canadian microkernel. Their claims about the stability of the finished product become suspicious to any developer who actually starts working with QNX; the microkernel's benefits are all about ease of development and debugging.

Motivation #2 is silly. A real microkernel in the field will not recover itself when the SCSI driver process or the filesystem process crashes. Granted, if there's a developer at the helm who can give it a shove with some special debugging tool, it might, but that advantage is really more like that stated in motivation #1 than #2.

Since microkernel processes cooperate to implement security promises, the promises are not necessarily kept when one of the processes crashes. Therefore motivation #3 is also silly.

These three factors together show that memory protection is not very useful inside the kernel, except perhaps for kernel developers. That's why I claim the microkernel's promised benefits are a fantasy.

Before we move on, I should point out that the two microkernel systems discussed here, Mach and QNX, have different ideas about what is micro enough to go into the microkernel. In QNX, only message passing, context switching and a few process scheduling hooks go into the microkernel. QNX drivers for the disk, the console, the network card and all the other hardware devices are ordinary processes that show up next to the user's programs in sin or ps. They obey kill, so if you want, you can kill them and crash the system.

Mach, which Apple has adopted for Mac OS X, puts anything that accesses hardware into the microkernel. Under Mach's philosophy, XFree86 still shouldn't be a user process. In the single-server abuses of microkernels, like mkLinux, the Linux process makes a system call (not message passing) into Mach whenever it needs to access any Apple hardware, so the filesystem's implementation lives inside the Linux process while the disk drivers live inside the Mach microkernel. This arrangement is a good business argument for Apple funding mkLinux: all the drivers for their proprietary hardware, and thus much of the code they funded, stay inside Mach, where they're covered by a license more favorable to Apple.

However, putting device drivers inside the Mach microkernel substantially kills QNX's motivation #1, because Mach device drivers are then as hard to debug as a monolithic kernel's device drivers. I'm not sure how Darwin's drivers work, but it's important to acknowledge this dispute about the organization of real microkernel systems.

What about the performance problem? In short, modern CPUs optimize for the monolithic kernel. The monolithic kernel maps itself into every user process's virtual memory space, but these kernel pages are marked somehow so that they're only accessible when the CPU's supervisor bit is set. When a process makes a system call, the CPU implicitly sets and unsets the supervisor bit when the call enters and returns, so the kernel pages are appropriately lit up and walled off by flipping a single bit. Since the virtual memory map doesn't change across the system call, the processor can retain all the map fragments that it has cached in its TLB.
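To make "marked somehow" concrete, here is a minimal sketch of the relevant x86 page-table entry bits. The flag positions (Present, Read/Write, User/Supervisor, Global) follow the IA-32 page-table format, but the two helper functions are purely illustrative assumptions of mine, not any real kernel's API.

    /* Sketch: how a monolithic kernel can mark its own pages in an x86
     * page-table entry (PTE).  Bit positions follow the IA-32 format;
     * make_kernel_pte()/make_user_pte() are hypothetical helpers.      */
    #include <stdio.h>
    #include <inttypes.h>

    #define PTE_PRESENT  (1u << 0)  /* mapping is valid                        */
    #define PTE_WRITABLE (1u << 1)  /* writable (vs. read-only)                */
    #define PTE_USER     (1u << 2)  /* accessible even without supervisor bit  */
    #define PTE_GLOBAL   (1u << 8)  /* survives the TLB flush on a map switch  */

    static uint32_t make_kernel_pte(uint32_t phys_frame)
    {
        /* Kernel page: present, writable, NOT user-accessible, and global,
         * so it stays mapped (and TLB-cached) across every user process.  */
        return (phys_frame & 0xFFFFF000u) | PTE_PRESENT | PTE_WRITABLE | PTE_GLOBAL;
    }

    static uint32_t make_user_pte(uint32_t phys_frame)
    {
        /* User page: the PTE_USER bit lets code running with the supervisor
         * bit clear touch it; kernel pages simply omit this flag.          */
        return (phys_frame & 0xFFFFF000u) | PTE_PRESENT | PTE_WRITABLE | PTE_USER;
    }

    int main(void)
    {
        printf("kernel PTE: 0x%08" PRIx32 "\n", make_kernel_pte(0x00100000u));
        printf("user   PTE: 0x%08" PRIx32 "\n", make_user_pte(0x00300000u));
        return 0;
    }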

With a microkernel, almost everything that used to be a system call now falls under the heading "passing a message to another process". In this case, flipping a supervisor bit is no longer enough to implement the memory protection: a single user process's system calls involve separate memory maps for 1 user process + 1 microkernel + n system processes, but a single bit has enough states for only two maps. Instead of using the supervisor bit trick, the microkernel must switch the virtual memory map at least twice for every system-call-equivalent: once from the user process to the system process, and once again from the system process back to the user process. Switching maps costs more than flipping a supervisor bit: there's work to juggle the maps themselves, and there are also two TLB flushes.

A practical example might involve even more overhead since two processes is only the minimum involved in a single system-call-equivalent. For example, reading from a file on QNX involves a user process, a filesystem process and a disk driver process.
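For the flavor of what such a round trip looks like, here is a minimal user-space sketch in plain POSIX C. It is not QNX or Mach code (those systems use their own, much faster message primitives); it just shows a toy "filesystem server" process answering a read-style request from a client over a socketpair, so every request crosses the memory-protection boundary on the way out and again on the way back.

    /* Sketch of a system-call-equivalent done as message passing between
     * two memory-protected processes.  Purely illustrative POSIX IPC.    */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    struct request { char path[64]; size_t len; };      /* "read this file" */
    struct reply   { ssize_t nbytes; char data[128]; }; /* "here it is"     */

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

        if (fork() == 0) {                  /* --- toy filesystem "server" --- */
            close(sv[0]);
            struct request rq;
            struct reply   rp = { 0 };
            read(sv[1], &rq, sizeof rq);    /* switch #1: into the server      */
            rp.nbytes = snprintf(rp.data, sizeof rp.data,
                                 "fake contents of %s", rq.path);
            write(sv[1], &rp, sizeof rp);   /* switch #2: back to the client   */
            _exit(0);
        }

        /* --- client, playing the role of the user process --- */
        close(sv[1]);
        struct request rq = { "/etc/motd", 128 };
        struct reply   rp;
        write(sv[0], &rq, sizeof rq);       /* the "system call" is a message  */
        read(sv[0], &rp, sizeof rp);        /* block until the server answers  */
        printf("got %zd bytes: %s\n", rp.nbytes, rp.data);
        wait(NULL);
        return 0;
    }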

What is the TLB flush overhead? The TLB stores small pieces of the virtual-to-physical map so that most memory access ends up consulting the TLB instead of the definitive map stored in physical memory. Since the TLB is inside the CPU, the CPU's designers arrange that TLB consultations shall be free.

All the information in the TLB is a derivative of the real virtual-to-physical map stored in physical memory. The whole point of memory protection is to give each process a different virtual-to-physical mapping, thus reserving certain blocks of physical memory for each process. The definitive map stored in physical memory can represent this multiplicity of maps, but the map fragment held in the high-speed hardware TLB can represent only one mapping at a time. That's why switching processes involves TLB flushing.

Once the TLB is flushed, it becomes gradually reloaded from the definitive map in physical memory as the new process executes. The TLB's gradual reloading, amortized over the execution of each newly-awakened process, is overhead. It therefore makes sense to switch between processes as seldom as possible and make maximal use of the supervisor bit trick.
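One crude way to put a number on this is the classic ping-pong microbenchmark: two processes bounce a byte back and forth through a pair of pipes, so each round trip pays for two switches between protected address spaces (and, on CPUs without tagged TLBs, the accompanying TLB reloading). This is only a rough sketch with no lmbench-grade rigor, and the numbers it prints are merely indicative.

    /* Rough context-switch microbenchmark: two processes ping-pong one
     * byte over two pipes; each round trip costs two address-space
     * switches.  Illustrative only.                                     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    int main(void)
    {
        int a[2], b[2];
        char c = 'x';
        if (pipe(a) < 0 || pipe(b) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                     /* child: echo everything back */
            for (int i = 0; i < ROUNDS; i++) {
                read(a[0], &c, 1);
                write(b[1], &c, 1);
            }
            _exit(0);
        }

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < ROUNDS; i++) {     /* parent: send, wait for echo */
            write(a[1], &c, 1);
            read(b[0], &c, 1);
        }
        gettimeofday(&t1, NULL);
        wait(NULL);

        double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%d round trips, %.2f us each (two switches per round trip)\n",
               ROUNDS, usec / ROUNDS);
        return 0;
    }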

Microkernels also harm performance by complicating the current trend toward zero copy design. The zero copy aesthetic suggests that systems should copy around blocks of memory as little as possible. Suppose an application wants to read a file into memory. An aesthetically perfect zero copy system might have the application mmap(..) the file rather than using read(..). The disk controller's DMA engine would write the file's contents directly into the same physical memory that is mapped into the application's virtual address space. Obviously it takes some cleverness to arrange this, but memory protection is one of the main obstacles. The kernel is littered conspicuously with comments about how something has to be copied out to userspace. Microkernels make eliminating block copies more difficult because there are more memory protection barriers to copy across and because data has to be copied in and out of the formatted messages that microkernel systems pass around.
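The contrast is visible from ordinary application code. Here is a small sketch comparing read(2), which obliges the kernel to copy the file data out into the caller's buffer, with mmap(2), which lets the application see the cached pages directly; the default file path is just an arbitrary readable file I chose for illustration.

    /* read(2) vs. mmap(2): the first forces a kernel-to-user copy of the
     * file data; the second maps the same physical pages into the
     * process, which is the zero-copy-flavoured path.  Illustrative.    */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/etc/hosts"; /* any readable file */
        int fd = open(path, O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(path); return 1; }

        /* Copying path: the kernel copies st.st_size bytes out to userspace. */
        char *buf = malloc(st.st_size);
        if (!buf) return 1;
        ssize_t n = read(fd, buf, st.st_size);
        printf("read(): copied %zd bytes into a private buffer\n", n);

        /* Mapping path: no bulk copy; the file's pages appear in our map.    */
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }
        printf("mmap(): first byte is '%c', no copy of the file body needed\n",
               map[0]);

        munmap(map, st.st_size);
        free(buf);
        close(fd);
        return 0;
    }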

Existing zero copy projects in monolithic kernels pay off. NetBSD's UVM is Chuck Cranor's rewrite of virtual memory under the zero copy aesthetic. UVM invents page loanout and page transfer functions that NetBSD's earlier VM lacked. These functions embody the zero copy aesthetic because they sometimes eliminate the kernel's need to copy out to userspace, but only when the block that would have been copied is big enough to span an entire VM page. Some of his speed improvement no doubt comes from cleaner code, but the most compelling part of his PhD thesis discusses saving processor cycles by doing fewer bulk copies.

VxWorks is among the kernels that boasted a zero copy design earliest, with its TCP stack. Its designers were probably motivated by reduced memory footprint, but the zero copy stack should also be faster than a traditional TCP stack. Applications must use the zbuf API, rather than the usual Berkeley sockets API, to see the benefit. For comparison, VxWorks has no memory protection at all, not even between the kernel and the user's application.

BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.

Zero copy is an aesthetic, not a check-box press release feature, so it's not as simple as something a system can possess or lack. I suspect the difference between VxWorks's and QNX's TCP stack is one of zero copy vs. excessive copies.

The birth and death of microkernels didn't happen overnight, and it's important to understand that these performance obstacles were probably obvious even when microkernels were first proposed. Discrediting microkernels required actually implementing them, optimizing message-passing primitives, and so on.

It's also important not to laugh too hard at QNX. It's somewhat amazing that one can write QNX drivers at all, much less do it with unusual ease, given that their entire environment is rigidly closed-source.

However, I think we've come to a point where the record speaks for itself, and the microkernel project has failed. Yet this still doesn't cleanly vindicate Linux merely because it has a monolithic kernel. Sure, Linux need no longer envy Darwin's microkernel, but the microkernel experiment serves more generally to illustrate the cost of memory protection and of certain kinds of IPC.

If excessive switching between memory-protected user and system processes is wasteful, then might not also excessive switching between two user processes be wasteful? In fact, this issue explains why proprietary UNIX systems use two-level thread architectures that schedule many user threads inside each kernel thread. Linux stubbornly retains one-level kernel-scheduled threads, like Windows NT. Linux could perform better by adopting proprietary UNIX's scheduler activations or Masuda and Inohara's unstable threads. This performance issue is intertwined with the dispute between the IBM JDK's native threads and the Blackdown JDK's optional green threads.
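The one-level/two-level distinction even surfaces in the POSIX threads API as "contention scope": a two-level library (old Solaris, for instance) will accept PTHREAD_SCOPE_PROCESS and multiplex those threads in user space onto kernel threads, while a strictly one-level library typically rejects it. A small probe follows; the interpretation in the comments (Linux/NPTL rejecting PROCESS scope with ENOTSUP) is my understanding and worth verifying on your own system.

    /* Probe whether the pthreads implementation offers two-level
     * (process-scope) scheduling.  A one-level, kernel-scheduled library
     * is expected to reject PTHREAD_SCOPE_PROCESS.  Link with -lpthread. */
    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>

    int main(void)
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);

        int rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
        if (rc == 0)
            printf("two-level threads available: user threads can be "
                   "multiplexed onto kernel threads\n");
        else
            printf("PTHREAD_SCOPE_PROCESS rejected (%s): "
                   "this looks like a one-level, kernel-scheduled library\n",
                   strerror(rc));

        pthread_attr_destroy(&attr);
        return 0;
    }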

Given how the microkernel experiment has worked out, I'm surprised by Apple's quaint choice to use a microkernel in a new design. At the very least, it creates an opportunity for Linux to establish and maintain performance leadership on the macppc platform. However, I think the most interesting implications of the failed microkernel experiment are the observations it made about how data flows through a complete system, rather than just answering the obvious question about how big the kernel should be.

Miles Nordin is a grizzled FidoNet veteran and an activist with Boulder 2600 (the 720) currently residing in exile near the infamous Waynesboro Country Club in sprawling Eastern Pennsylvania.

______________________

Comments

wow, can i reply to this

Anonymous's picture

wow, can i reply to this 1000% correct article 8 years later?

Mac will always be slower and cost more than real computers BY DESIGN.
Lots has happened in 8 years.
Just not in OS/X, in which each release is just a bug fix.
Still can't mount Windows shares properly.

ALL the software Features Gates was going to put in his machines have been put into Linux including NUMA, a new scheduler, more file systems, a better X.

What was the original reason to buy a Mac?
Oh yes, I remember. Photoshop.
Still is.

Gosh, I don't see OS/X on the TOP500 Supercomputer list
again....
I wonder why.
http://www.top500.org/charts/list/35/os

And the fact that Apple wants to control the entire user experience.
A liberal worldview is so yesterday.

Just like Lucas accused Bush of being a dictator in Revenge of the Sith, with his infantile "you're either with us or against us" drool, while the dictator Obamanation does those things and more.
"So this is how liberty dies, by denying Flash on the iPhone"
Whew, I'm so glad that dastardly Flash code got denied a chance to run. It was so terrible.
Apple should know something about wasting CPU cycles.
They spent 10 years on PPC.

Apple, still the wannabe computer company.
Playing catchup with features still.
Still the yuppie etch-a-sketch.

mem protection motivation #2, minimize program crashes

Anonymous's picture

be careful about #2. you refute it by bringing up microkernels, which are not the only memory protection model. in fact #2 is the best reason for memory protection, which is not exclusive to microkernels. 85% of crashes in xp are caused by drivers, and could be avoided by memory protection. most of the code of the linux kernel is kernel extensions, each of which can crash the kernel. ask a company that runs a time-critical application whether system crashes are "silly" or not. references:

Michael M. Swift, Brian N. Bershad, and Henry M. Levy. Improving the Reliability of Commodity Operating Systems. In Proceedings of the 19th ACM Symposium on Operating Systems Principles, pages 207-222, Bolton Landing, New York, October 2003.

Michael M. Swift, Muthukaruppan Annamalai, Brian N. Bershad, and Henry M. Levy. Recovering Device Drivers. In Proceedings of the 6th USENIX Symposium on Operating Systems Design and Implementation, pages 1-16, San Francisco, CA, December 2004.

Everyone (read: most people)

Anonymous's picture

Everyone (read: most people) posting on here is a complete moron.
a) Why do you hate a company? Did they do something to ruin your life?
b) This windows vs linux vs mac thing is boring. Get over it. Each has its own purpose.

I play games on Windows; the interface is a bit weird IMO, some things don't make sense (probably because I don't know how it works, nor do I care).
I used Linux for about 8 years, did everything on it. There are problems with KDE and GNOME; I was even thinking of making my own DE (pie in the sky, I know), and everyone seems to have their own idea of where a program should go (Gentoo fixed a lot of that though).
I now have an iBook and use OS X. It hasn't crashed after about 2 months of full-time use. The applications that come with it are easy to use and are integrated well.

I'm doing a course at the moment on the linux kernel, and it's pretty confusing given the lack of good source code documentation. Whatever. A computer is a computer. Do what you want with it, your opinion is never going to change someone's mind.

Yes

Anonymous's picture

He's right! And high level languages can never compete against Assembler, performance is much higher in Assembler than C/C++/Ada/Java.

I wouldn't go around saying

Anonymous's picture

I wouldn't go around saying that to everyone. It is theoretically possible to have a high level programming language that compiles to perfectly optimized code. Just that people haven't written a compiler for that yet. :)

TLB miss

marko's picture

I will try to add some technical information, because I've seen that others have already provided careful descriptions of the author's misinformation about microkernels and Darwin in particular.

One of the main reasons why microkernels are slow is, as the author correctly points out, context switches.

On x86 the virtual memory layout is managed by one structured set of page tables, which you need to change completely when you switch from one process to another, flushing all the TLB records.

Recent x86 CPUs implement a "global" flag in the page table entries which prevents the flush of the page when a context switch occurs. This is "designed" for the case when the kernel is mapped in every process at the same place. It seems a really revolutionary and modern approach.

But there are other architectures which are a bit more modern than a 32-bit extension of a 16-bit architecture built on an 8-bit microcontroller (... you know the rest of this story).

For example, SPARC (32 and 64) implements separate "address space contexts" where every page is tagged with a number, and when you switch process every VM mapping will use this context, without trashing the TLB. The kernel need not be mapped on top of user space.

On PowerPC a similar effect is obtained using a 52-bit intermediate large address space (or 80-bit in ppc64) where processes are assigned to subspaces of it.

Itanium has an approach similar to powerpc. etc

On these really modern CPU designs the overhead of the context switch, at least as regards the most costly TLB miss case, is reduced.

The problem is that we must not compare apples with carpets, and should look at microkernel performance on CPUs that make sense. Not on x86, which is not a modern CPU architecture (from the ISA and MMU standpoint, of course).

Is there any way to do the trick on x86?

yes and no.

there is an interesting little trick we can do if we try to remember the long forgotten x86 segmentation...

Using segmentation we can quickly map the address space of a usermode process to anywhere in the virtual address space, thus avoiding flushing the entire TLB on the context switch.

Some processes can coexist in the same virtual memory map with astonishingly fast interprocess communication between them.

The number of concurrent contexts depends on the size of the address spaces. For example, if you are willing to limit the size of one process's address space to 256MB, you could fit 16 processes in the 4GB address space. (The kernel would need a little bit of address space of its own, but this is another topic.)

This limitation seems serious, but it can be done in a way that is transparent to processes, which may be given the illusion of living in a 2GB (or 3GB) user space. Or a mixed approach: user in 2GB, 1.5GB to isolated tasklets, like drivers or executive service providers, like networking stacks etc.

It is not a big overhead, because many "monolithic" OSes actually use a deferred interrupt scheme: when an interrupt arrives it isn't processed immediately, so deferring it to a separate process which doesn't incur TLB misses is not so terribly much slower than using a kernel thread (Solaris) or a tasklet (Linux).

The memory segmentation trick is currently (to my knowledge) implemented only in L4 Pistachio, while the transparent address space resizing stuff is not implemented anywhere, but it is technically feasible.

Rising flames on a desolate field

Anonymous's picture

Congratulations on yet another myth which spreads prejudice against that which threatens your ideology. A monolithic kernel is built for a hardware platform that never changes, and for this it does its job well. Microkernels are built for scalability, security, and extensibility. Let's see Linux run a Unix application simultaneously with a MacOS X app; this is another advantage a well-designed microkernel has: you may launch a MacOS X server and a Linux server, each extending an address space to support an ABI of its choice. In addition, on IA-32 based systems, marking a page bit as supervisor will throw a page fault, allowing a kernel to shadow the pages into physical memory.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

I think most of you guys are missing the point of this article. It's not just "Apple sucks because they picked a microkernel." If you read it, he is merely stating that as long as Macs are using this kernel they will have a disadvantage in high-end, speed-critical situations. 90 percent of everybody couldn't give a crap about something like that. And if he is right about what he is talking about, then Apple obviously chose the Mach kernel based solely on political reasons. And there is nothing wrong with that if it makes it possible to sell many computers. It's just capitalism, folks. Steve Jobs can run his company like Stalin for all I care, because it's his company, not mine. As long as he's fairly honest and not an evil bastard like Microsoft, I'll cheer him for creating another choice.

I like Mac's new OS; it is very cool, and I like the fact that we will have a choice between PPCs and x86s based not upon what software they can run but on what we like about each popular platform. It's a good thing to have programs that will eventually compile both on a Mac and a PC, unlike the freaky divisions in the past between Mac's OSes and Microsoft. And I think my first laptop will probably be an iBook as soon as I am able to afford one!

You also have to have a sense of irony when it comes to the history behind this type of discussion. Back in '92, professors and high-and-mighty people of the computer world burned Linux and poor Linus for choosing an 'obsolete' monolithic kernel when he could've based his new OS on a 'modern' microkernel like Minix's!

Apple has shown to be evil ma

Anonymous's picture

Apple has shown itself to be evil many times. They killed the clone companies, they often refuse to acknowledge problems until threatened with class action lawsuits, etc....

why does everyone rate this as a rant??

Anonymous's picture

I don"t get it - the technical points are _completely valid_ and well understood (you others never got a lesson in Operating Systems at university?) and you don"t have to give numbers.

Everyone knows MACH is slow.

Please go on and read Hurd-Traffic and see their HORRIBLE problems when dealing with real traffic (e.g. the fs-daemon gone mad with 1000s of threads and 85MB RSS!)

Re: why does everyone rate this as a rant??

Anonymous's picture

>and you don"t have to give numbers

right... and we sure don't need technical analysis from the likes of DH Brown, IDC, etc.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Can't we all just get along?

I'm a BIG fan of Linux, and OpenBSD, and I'm typing this comment on my lovely shiny PowerBook G4 running OS X.

Each of these operating systems has its place. OpenBSD is great for most server applications but falls short of Linux as, for example, a file and print server for Windows clients. Now I'm not saying that OpenBSD performs worse than Linux in this situation, but that Linux for this purpose is simply a nicer experience, as it is for running PHP scripts. The same can be said of using OpenBSD for a mail server: it comes with some very nice secured mail server software, and I don't have to ponder whether the system is inherently insecure no matter how I configure it. Mac OS X, on the other hand, is a lovely desktop Unix which offers a very pleasant and polished user experience. I believe its performance is worse than Linux's, but I'm not sobbing myself to sleep at night over that fact, because it is so very nice to use. I will use whatever OS suits the task at hand (allowing for availability) and take everything, good and bad, into account when deciding.

When push comes to shove, I'm a unix geek, and Debian, RedHat, Mandrake, SUSE, OpenBSD, FreeBSD, NetBSD, Mac OS X, NeXTstep, Solaris, UnixWare, AIX, HP/UX etc, etc, etc. are all unix like enough for me, and all have their respective places in the world.

whos not getting along?

Anonymous's picture

People should be able to have an objective discussion re the technical merits of different computer architectures, or anything for that matter, without it being viewed as people "not getting along". Are you saying no one should ever have a discussion? Is every discussion a heated argument? What are you afraid of?

I can vehemently disagree with someone on a particular subject without hating them or wanting them to die a horrible death. I have had many vehement disagreements on many topics with my best friend, but he is still my best friend. It is the interesting discussions that keep the relationship alive.

It is possible to be logical and objective about something and not get emotional and blub about it.

I am happy to read anyone's opinions about microkernels, for or against, anytime, anywhere. That's how we can learn.

Stifling discussions of this nature is the path to ignorance and prejudice.

I agree 100%

Anonymous's picture

I agree 100%

Another great Mach based OS

Anonymous's picture

OS X is soooo slow

Anonymous's picture

He may not have done his research, but he brings up an important issue that a lot of people have been trying to put their finger on: why is OS X so damn slow?

I'm sure this question begs to be answered as your applications bounce 3-5 times before actually opening. It's a beautiful OS, a great combination of Unix and GUI, but the performance is pretty dreadful.

I'll just keep faith in, "the next version that will fix everything."

Re: OS X is soooo slow

Anonymous's picture

Slow relative to what? Back it up with some benchmarks.

Your Linux kernels are belong to us!!!

Anonymous's picture

What doesth thou thinkest powers Jaguar?!!

;-}

Re: Your Linux kernels are belong to us!!!

Anonymous's picture

Your use of verb inflexions is incorrect:

You should have said:

What dost thou think that moveth Jaguar?!!

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

The author has not taken the time to investigate the design features which distinguish Darwin from the microkernel-based systems he uses to prove Darwin must be inferior to his preferred platform. The author fails to notice the real-time capabilities in Darwin, which in fact exist; this is not a joke, this is a currently-existing feature of Darwin exploited by many currently-existing applications. Yet the author insists Darwin must be "slower" than any possible Linux distribution ... neglecting to notice Linux's own latency in handling audio streams (um, 200ms?) in comparison to Darwin's (um,
And the author's claim that Darwin must be inferior because a driver on QNX will respond to the kill command is quite interesting. I am dying to see the missing step in which he proves that QNX drivers have some relationship to Darwin drivers. Very, very modest research -- I should say trivial research, but the truth is any research -- would expose the author's gross incomprehension of the subject matter.
I could explain why Darwin's shared memory between the Mach and BSD sections results in performance like a monolithic kernel's, and the architectural and performance advantages of Mach's respected and robust thread and message handling ... but any fool can find out about Darwin by hunting in http://www.google.com/mac or /bsd. The system has been proven through years of use in enterprise environments by NeXT and its customers, and the improvements to be had through vigorous open-source development are already beginning to be visible.
Has Darwin quite a way to go? Sure. And MacOS X generally. But the proclamations this author has made about the system are ignorant, illogical, and just as bigoted as the wild claims made about MacOS 9 (and its predecessors) by its fans. This author merely joins their drum-banging tradition of extolling the manifest superiority of a favored platform with not only total disregard for the facts, but disdain even for the research that would expose them.
Darwin has a way to go, and is subject to criticism -- just bring real criticism so there's a debate rather than a mindless tirade. If I wanted a parade, I'd go to New Orleans.

Greed

Anonymous's picture

Thank you!

I am definitely not a kernel expert, but from what I have read on kernels and kernel design, a lot of things just don't sound right in this review-rant?

Mainly how he seems to be comparing the Linux kernel to what he knows of as Mach and QNX... but Darwin uses a hybrid kernel (XNU); it's not just pure Mach, it's very basically (and incompletely) Mach hosting a lot of the FreeBSD kernel within it, and a whole "new" device driver framework.

I am casually interested in kernel design, and I am very fond of the Mac Desktop, but I like to explore other OSes and kernels and do not have enough personal insecurities to need to irrationally defend my preferred OS as if I have made a religious decision that I am not prepared to reconsider upon new analysis... in other words, I am prepared to change my opinion on parts of Darwin as I find out more... take a more dynamic approach and don't superglue yourself to a single OS/kernel.

However, I think the author might be missing a lot of the drive behind the microkernel concepts... it's not all about performance and security, it's mainly about stability, and personally, in terms of any kind of design, I don't see the point in having a vastly high-performance design that is potentially unstable and unreliable... if you can only go 100 mph and then crash and burn halfway around the circuit and have to rebuild your car, you might as well drive a more reliable "slow" vehicle that does a consistent 95 mph and gets the actual high-level job done in half the time... you need to step back and take a look at the bigger picture. This is the benefit of modularity. However, some of the arguments you make about this are very sound... for instance, how there is little benefit in terms of operating reliability in only placing vital components such as the filesystem in their own servers... it's a very good point, and if you look at XNU, you will find that they have distributed FreeBSD kernel components between kernel mode and servers based upon this realization... there is no point in keeping the kernel running if you can't do anything useful with it...

However there are some more recent developments in microkernel design since this has been written, by none other than Andrew S. Tanenbaum and company that actually address this, he has taken the basic minix microkernel concept further and attempted to make it more of a practicality rather than just a concept playground... they are now attempting to actually allow drivers to be reloaded on the fly...

Anyway, I hope more of the readers here try to find out more for themselves rather than taking the numerologist approach the aforementioned author once did some 6 years ago by finding what he wants to hear... go and Google XNU if you really want to know about the Darwin kernel, and for more about microkernels Google Minix or "Andrew S. Tanenbaum", and keep an open mind when forming opinions at both ends of the spectrum.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

pritchet1's picture

I'd love to see a rewrite and update/follow-up based on the succinct comments made thus far. Miles, can you please do this for us?

Maybe you can get a Mac to run OS X on first so you can enjoy the experience? Heck, maybe LinuxJournal will donate one ;^)

I dumped LinuxPPC once I received OS X and I'm really looking forward to Jaguar when it is released this summer. My comments and editorial in the May-June issue of MacNut Magazine express my thoughts on Linux in the XServe portion. I have a copy of YellowDog Linux but haven't installed it yet. They admitted that Apple had created a better OS than they could.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

carton's picture

Actually, I have a Mac, and I've used OS X before. Thanks for the suggestion, but for me at least, the experience was not exactly life-changing.
Anyway, I don't think OS X necessarily sucks or anything. It could easily end up becoming the best platform for running proprietary software applications, taking over from Mac OS 9 and Windows NT. But I don't really care about that. I just think, well, the subject of your post: ``Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance'' (emphasis added). not the future of the desktop. not the color of Steve Jobs's underoos.
QNX users wrote defending QNX, old NeXT loyalists wrote about Jobs, and so on.
The more I think about the vast majority of these comments---``yes but the Mac has a better user interface and market share,'' or ``do your research. Never mind the papers from peer-reviewed journals you cited---instead go to developer.apple.com and read the ads that Apple has posted there,'' or ``read this System Design whitepaper on QNX, as it is not only an ad but a great introduction to operating systems in general''---the more I start to wonder if this has less to do with the publication record and more to do with OS loyalty.
so, fess up, you Mac OS X guys. Was my article posted on some Mac weblog? versiontracker? O'Grady's Power Page? MacNut Magazine? Already I've found the Hurd guys have responded, but they didn't bother to email me so I guess they want to keep the response within their little circle of friends. That's cool, although I wish they had read the comments section and noticed that we had already discussed some of their L4Linux claims.
Anyway, sure, I think it's important to use as many operating systems as possible. It's difficult, especially if you refuse to mess around with junky PeeCee hardware as I do. But I would therefore echo your suggestion, ``maybe you can get a Mac to run OS X,'' as follows: maybe you can try a Linux/ppc, or NetBSD, or noncommercial Neutrino, or XFree86 apps on Darwin, or BeOS (heh). Unfortunately, I think all you'll notice is the difference in GUIs, so sleeping around with a bunch of fresh installs is maybe not as helpful in avoiding OS loyalism as one might hope.

Re: Thanks for bringing it up! This is the best article and set

Anonymous's picture

Thanks for bringing it up! You said what you thought. I suspect that the reply set generated has clarified many of the features of Darwin for you and you would like to have approached the article from a more informed position. I have learned a great deal from the replies to the article. I must say that this community, although harshly critical of misinformation, is clearly oriented to the facts. Thank you all!

Thanks for bringing it up!

Anonymous's picture

Good topic to discuss. However, the article should have been researched before publication. Microkernel is just a word; its meaning in this case is not the same as in other developments. Had this been a MICROSOFT product, there would be a lot fewer loose cannons to deal with.

WP

Other micro kernels do exist

Anonymous's picture

Perhaps the most famous (at least in my circle) being L4 and its many derivatives. Look at the microbenchmarks of L4Linux (this is Linux running as a user process on L4) and you will see that L4 already has almost comparable performance. Real L4 servers and applications will obtain much higher performance. L4 has the highest-performing IPC, etc. etc. Mach, QNX and XNU are all essentially previous-generation kernels compared to Hazelnut (an L4-based kernel).

Generalizations based on lack of knowledge... er... well, suck. (Though it is funny how the article has managed to rile up Mac users all over the world!! - It doesn't matter how fast the OS X kernel is. The graphics subsystem sucks even more than X Windows, and hence it will be slow at least until Jaguar comes along.)

link on l4: http://l4ka.org

Nordin needs to do his homework...

Anonymous's picture

Stupid article - no good evidence presented...what was he thinking...the unproven hypothesis advanced is nothing but troll bait from what appears to be a 13 yr old coder wannabe.

Nuff said...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Only problem I've got with macs is that stupid one button mouse

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

LOL.

Amazing how effective M$ FUD (continuing the IBM FUD tradition) is on you and on one of my (way distant) colleagues: he admitted (the troll) that the reason he will never buy a Mac computer is that he could never trust an OS only supporting a one-button mouse, and because he would face loss of interoperability with all his .doc files.

You should have seen his face when he discovered that I have a 5-button, wireless, wheeled mouse (USB, without having to install ANY driver to have it working: just plug and play from day 1), use PowerPoint, open, read and modify Word documents at will and at ease, and I have a native BSD-based Unix OS. The troll was even unaware of the very existence of OS X. He thought the 'ten' was an update of OS 9. He asked which emulator I was using for my X11 applications. UHAUAHAUAHAUA.

Kudos to M$: you really brainwash your customers.

ROFL.

The poor M$-brainwashed slave still can't close his mouth, his jaw fell so far.

I loved it!!!!! LOL

PS

I hope you original poster can save yourself. Go buy a wheeled multi-button mouse. Go buy a new G4 (desktop and/or Ti Powerbook) and start for first time in your life enjoying computing.

"can't stand the one-button mouse". ROFL.

Missing the point?

raffraffraff's picture

LOL!!!!!1! L33t HAXXORS SUXXORS!!!! Whatever...

The comment about the stupid one-button mouse is valid. I didn't hear anybody say that you can't 'go buy a wheeled multi-button mouse' as the reply seems to suggest. And I doubt that the commenter is unaware of OpenOffice for handling .doc files. The point is that Apple sometimes cares more about eye-candy than usability or ergonomics. Sure, you should aim to have both. But don't drop functionality to look good or "think different". The one-button mouse was a stupid regression. If it was such a ground-breaking idea, why has computer mouse evolution gone in completely the opposite direction?

This is a side track, and at this point I'm sorry I got sucked into this troll-type argument. The point of the article is kernel performance, so I will finish by saying this: while performance is more important than eye candy, it's not as important as functionality. Though I'm a Linux user, I think it's unsuitable in a majority of desktop use cases...

  • Linux runs faster than Darwin, but this means nothing to my wife who uses Photoshop and QuarkXpress; she chose a custom-built PC running XP.
  • Mac OS looks beautiful, but this means nothing to my colleagues, who support Windows.
  • Windows Vista came with my laptop, but I support 8000 Linux PCs for a bank. So I use Linux.
  • My mate Dave is into music production and web design. He could have gone for a nice Dell XPS with Windows 7, but he went for a lovely Macbook Air. Because it's beautiful AND because OSX can run the applications he needs to use.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Me too. That's why I use the Microsoft Optical mouse with scrollwheel. Works better out of the box on OS X than OS 9.2.2, and the OS X Event API's still have Left/Right and scrollwheel events.

=td=

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

So get a multi-button mouse. Most work right out of the box.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Mac may lag Linux in performance, but when it comes to usability, Linux is...er...coughshitecough.

Face it, people use Macs not for searing speed, but because they can get their job done quickly and easily. The state of usability for ordinary non-geek people on Linux is appalling. Linux is faster. So what? By how much? A tortoise is faster than a snail and Ferrari's Formula 1 car is faster than an Arrows - but so what? It's the application that matters, not mindless zealotry.

Bottom line is this. Put Linux and Macos side by side on the same hardware and run up both guis - use kde or gnome for linux. Which is faster, more consistent, more responsive, more usable? It ain't the one running on Linux.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Thanks for a great article.

You have brought to my attention one of the reasons why people choose micro-kernels, i.e. memory protection, and some of the pros and cons wrt this.

I also didn't realise OS/X was based on a micro-kernel.

You are SO obvious, Nordin - posting positive comments on your o

Anonymous's picture

Ummm, anyone with an SSLA (Strobridge Syntactic Language Analyzer) can run the above post through it, along with the original article, and find a 92.8% probability that they were written by the same author.

Of course, Miles, if you'd post LONGER paeans of fulsome praise for your idiot horse*****, you'd doubtless be 100% fingered by my version of SSLA. (Written in C, based on Strobridge's original dissertation, U. New Mexico, 1987).

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

And the article is bad exactly because it spread wrong information.

OS/X is not a micro-kernel system. Read more about the subject.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

To me, QNX is the flagship of micro kernel design. There are 2 versions of QNX: Neutrino microkernel and legacy, say QNX 4. Thus any performance data not linked to QNX version is useless. It is very interesting that QNX TCP/IP is slow, but without the QNX version I cannot interpret it.

QNX is very fast at switching processes and has a very fast internal network messaging system. It looks like, if TCP/IP is slow in Neutrino, the reason is not the microkernel design but the TCP/IP process.

Anybody knows better?

What really hurt you was...

Anonymous's picture

What really hurt your credibility was:

1) Your lack of any research. The information on Apple's architecture is not difficult at all to find. Yet you try to defend yourself by blaming Apple's marketing. If you're going to write such a highly critical article, you've got to do more than read the marketing.

2) Your use of sensational terms. Words such as "obsolete," "dooms," and "quaint" did absolutely nothing to enhance your credibility. Rather they exposed the article as a poorly-researched attack on something you didn't understand.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Did someone get paid for writing this ?

If so how much more would it have cost to get someone that actually has any idea what he was writing about ?

Apart from such simple facts as that the NeXT/Apple system is not a true microkernel, and others that others have duly pointed out, I notice that rather than explaining the differences between the two approaches to system design, he drags out old FUD from Linux advocacy lists. The one about the slowness of MK systems is one that I hate in particular: in the early 90's we benchmarked a few operating systems that were available in 68k versions to see which one was the fastest, and by far the fastest one was OSK (OS-9 68k), and OSK is actually a true microkernel system.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

carton's picture

Many readers commented that (1) I didn't do my homework, and (2) until
I do my homework, I should shut my god damned pie hole. I think this
is somewhat fair: I knew the article's title while I was writing it,
so I should have read more about Mac OS X before submitting it. And I
did explicitly mention Apple exactly once, in the concluding
paragraph, so maybe I should have read more about Darwin before
writing that sentence containing the word ``Apple.''

If I was so surprised by Apple's apparent illogical choice of a
microkernel, then maybe I should have given Apple the benefit of the
doubt and somehow assumed that their mention of Mach and microkernels
in their marketing blitz was intended as a deception rather than
reality.

A few readers stepped up and did the homework for me---Jonas found the
truth buried in obscure email posts on developer lists:

to which I would add:

It is not so hard to find, once I know I need to look for evidence of
something as absurd as a Mach-based OS that doesn't use a microkernel.
Who woulda thunk?

This corrects my article as follows:

  1. The performance problem with TLB flushing goes away, since Darwin
    system calls involve one process and the kernel, rather than a
    microkernel system which involves n processes and the kernel.
  2. The problem with implementing zero copy remains largely
    unaffected, since I guess significant parts of Darwin are still
    committed to the formatted-message-passing API.

Yes, I'm partly wrong, and I'm humiliated to have written an article
with two microkernel objections without mentioning that only one of
them applies to Darwin. But much of the community's criticism seems to be a bit
unilateral.

I mentioned details about five different operating systems, obscure
details about Java VMs, quoted three full-length research papers, and
debunked the microkernel hysteria more convincingly than Linus
himself. And I finished constructively by suggesting how the
microkernel experiment's late-coming results might apply elsewhere in
the system. I think at least half the people who told me to shut up
and do my homework learned something useful and interesting from my
article. Maybe telling me that I should shut up is a bit extreme for
the situation. One poster even included the penultimate
condescension: the professorial ``Hhmmm?'' Do I really deserve
THAT? Heh.

so, whatever. Let's forget my pride and deal with some of your
specific questions.

Jonas's links to the developer mailing list are particularly cool because they point out a use of microkernels that I didn't mention at all: making the kernel better organized and easier to understand. I like this because I'm a big fan of NetBSD's primary ``clean code'' motivation.

This works on two levels. First, maybe the traditional BSD-ish reentrant-syscall organization is not the easiest way for humans to understand a kernel. Second, if there is an easier-to-maintain and more transparent kernel architecture, probably the best way to find it is to look for it explicitly, rather than adopting the somewhat arbitrary and ponderous architecture of microkernels, which was designed for its memory protection benefits, not its transparency to humans trying to understand how it works or why it doesn't work.

As for performance comparisons, I agree that they're relevant and
critical. Like I said before, the microkernel debate is pretty much
over now, and microkernels lost. It took a long time, and I'm sure
many papers were published both for and against microkernels, all with
interesting, convincing performance tests.

Here's a sweet and fairly recent paper to which John Jensen pointed me:

which is amazingly relevant. Thanks, John!

A faster version of MkLinux uses a co-located server running in kernel mode and executing inside the

I cannot resist, since this crap is still up

Anonymous's picture

Nordin, all the bootlicking and pettifogging obscure precisely nothing about what a total jackass you are. Viz., "I owe thanks to everyone, even you condescending pricks who told me to shut up, because I think even the harshest comments were basically correct and informative". Jesus.

You write an extensive troll, you slag Apple's OS (not just the last sentence mentioning AAPL; idiot, what did you think "MacOS X" meant?), then you grovel about how it kicks ass to post flame-bait and find out how thoroughly goddam wrong you are. Droll...

Now, to cases, this time in elementary logic: Ph.D. thesis or Web Journal, utterly unfounded speculations are just that: unfounded. You screw Linux Journal by writing this crap - and implicitly want kudos for generalizing and speculating? *****.

You allow that you were "partly wrong", based upon an argument where you "guess significant parts of Darwin are still committed to the formatted-message-passing API". Still guess-based, your sophistries are breathtaking for their brazen stupidities, sirrah.

Let's simplify this all. What you COULD have replied is "I'm a lazy ***** who doesn't do the slightest amount of research before dragging a journal's name through the mud. I'm humiliated but unapologetic because, gosh darn it, I learned something. I'm totally cool because I put pictures of women on my home page and *****-all else of value. I'm utterly discredited, and will be lucky to have anything I ever write published again. Good thing I have a web site..."

"Who's the prick? C'mon, that's a good boy, who's the prick? Yeah, good doggy, you're a good little prick".

Man, are you ever a deluded *****.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

This is the same mistake L. Torvalds made when he stated that "Mach kernel is a piece of crap".

Well. it seems like some Linux Geeks are hit by Apple's Darwin fever.

Fear is a bad thing. It can push a decent person to look stupid.

Anyway, Mac OS X is here to lead Mac users towards the best computing experience.

Lesson: First do your homework before opening your mouth. Otherwise you will look ignorant.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

``Yes, I generalize and speculate, because these two activities also have value.''

Yes, when concocting FUD!!!

(Interesting to see pro-Linux people using M$ tactics)

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

I was going to comment earlier, but it was more interesting to see a lot of readers pan your article (I'd expect about 25%, given the current pro-Linux readership).

"I should have read more about Mac OS X"

Yes, you should have. Your journalistic ability to browse a few web pages at developer.apple.com is woefully underdeveloped. All the details for you to have done a decent article would have been found on at least three major Darwin/MacOS X/Apple sites. Merely spouting nonsense and expecting readers to show you the right way is an extreme failing.

"...much of the community's criticism seems to be a bit, unilateral."

Indeed, and probably justly deserved. I haven't yet really seen one comment that backs up your claims, so either the general Linux population hasn't bothered to read it or is deliberately unable to comment. (Then again, I haven't dug through all of them, only the top level ones).

"Note that L4Linux, which they claim is 5% - 10% slower than plain Linux, gives up on the idea of memory protection just like Darwin..."

Too bad you didn't mention this before; it would have been excellent for showing how bad you think microkernels are. The issue I see is that microkernels are getting bad press for, oh, a trivial 5%-10% speed loss. Recent research into microkernels has been closing this gap over time; as CPUs improve and hardware improves, so does software. Darwin intelligently modularizes so that only key kernel extensions (kexts) get into kernel space for speed reasons, while the non-essential kexts stay in user space, where they belong. A lot better than Linux, where a trivial sound driver can bring the system to its knees (yes, this happens to me!)

Speed is also subjective. If you can prove to me that this 5%-10% is going to seriously influence the average user, then maybe it's an issue. Until then, I don't consider it an issue. I'd rather have a stable OS than an unstable one, and clean, modern software engineering techniques favour the microkernel approach. At least microkernel designers understand the problem and can program it faster, unlike Linux, which I believe everyone would find hard to re-engineer. (Just look at all the effort to get 'low latency' and 'pre-emptive kernel' patches put into the Linux kernel source tree).

"I think my article works as an introduction to the microkernel debate even without the benchmarks."

Debate (from dictionary.com) : To engage in argument by discussing opposing points.

I don't really see how your article does this by being full of nonsense, haphazardly lumps microkernels into one group, goes off on completely independent criticisms and has failed to elaborate on anything of any importance. If you're true to your word, get a -real- OS designer (say, Avie Tevanian) to write a follow up article in conjunction with you to really debate the point.

"But at least give me credit for not making claims that I need benchmarks to back up..."

I'll give you credit for avoiding any attempt to back up your claims. Full marks awarded for propaganda nonsense, however.

"But I'm still damn pleased to have received a link to someone else's benchmarks in my emailbox!" and "... I've got to admit it's sweet to know more about the topic after publishing the article than before."

This merely shows your lack of knowledge. I look forward to seeing any future articles from you to see if you have improved somewhat. Maybe I should do an article for Linux Journal, it seems anybody can get published these days...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

>I was going to comment earlier, but it was more interesting to see a lot of readers pan your article (I'd expect about 25%, given the current pro-Linux readership).

It looks like 100% pro-mac readership to me.

>Yes, you should have. Your journalistic ability to browse a few web pages at developer.apple.com is woefully underdeveloped. All details for you to have done a decent article would have been found in at least three major Darwin/MacOS X/Apple sites. Merely sprouting nonsense and expecting readers to show you the right way is an extreme failing.

What? You don't like the results then you dismiss them as nonsense? Show me your results then.

>Indeed, and probably justly deserved. I haven't yet really seen one comment that backs up your claims, so either the general Linux population hasn't bothered to read it or is deliberately unable to comment. (Then again, I haven't dug through all of them, only the top level ones).

What about technical discussion resulting in readers' mindless advocacy? No thanks.

>Too bad you didn't mention this before, it would have been excellent in for showing how bad you think microkernels are. The issue I see is that microkernels are getting bad press for oh, a trivial 5%-10% speed loss. Recent research into microkernels have been closing this gap over time as CPUs improve, hardware improves, so does software. Darwin intelligently modularizes so that only key kernel extensions (kexts) get into kernel space for speed reasons, while the non-essential kext's stay in user space, where they belong. A lot better than Linux where a trivial sound driver can bring the system to its knees (yes, this happens to me!)

Yes, that's one of the design issues, and it is not bad press. Speeding up the hardware is not going to make the problem go away.

Any driver in kernel can bring the system down, not just Linux.

>Speed is also subjective. If you can prove to me that this 5%-10% is going to seriously influence the average user, then maybe it's an issue. Until then, I don't consider it an issue. I'd rather a stable OS than an unstable one, and it's clean modern software engineering techniques favour the microkernel approach. At least, microkernel designers understand the problem and can program it faster, unlike Linux, which I believe everyone would find hard to re-engineer. (Just look at all the effort to get 'low latency' and 'pre-emptive kernel' patches put into the Linux kernel source tree).

If MS put a Mac OS X web server in a Mindcraft test, you would say different things, I am sure. There is no proof that a microkernel will be more stable, though it has the potential.

What is the issue with low-latency changes in the Linux source?

>I'll give you credit for avoiding any attempt to back up your claims. Full marks awarded for propaganda nonsense, however.

You ask for benchmarks; the author gave public benchmarks. Where are yours?

Propaganda nonsense?

>This merely shows your lack of knowledge. I look forward to seeing any future articles from you to see if you have improved somewhat. Maybe I should do an article for Linux Journal; it seems anybody can get published these days...

This is the very old monolithic vs. microkernel argument, not a Darwin vs. Linux thing. Please keep your advocacy in your trashcan.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

Since I've been away from my usual computer and Internet connection, this reply is late:

> It looks like 100% pro-Mac readership to me.

Yes, that is quite surprising; I would have expected more Linux advocates to get into the fray to provide the alternative point of view.

> What? You don't like the results, so you dismiss them as nonsense? Show me your results, then.

Results? What results? From the original article, there were no results to back up Miles's claims that Darwin was slower than Linux. If he had results, I'd have been a bit more impressed and not bothered to get involved. Using QNX and Mach and then saying 'hey, these are the same thing, so Darwin must be as lousy as its Mach' is wrong. Being specific is a requirement.

> What about technical discussion that just turns into readers' mindless advocacy? No thanks.

If it had been a technical discussion to begin with, rather than a fluff article on microkernels vs. monolithic kernels, I would not have bothered to respond. I take offense at flat-out nonsense knocking things the author should know better about in the first place.

> Yes, that's one of the design issues, and it is not bad press. Speeding up the hardware is not going to make the problem go away.

I agree on that. Relying on faster hardware just dodges the problem in the first place (too bad so many programmers and software shops shunt the problem over to 'we need faster hardware'). My opinion is that microkernels -can- be made as fast as a monolithic kernel once they are engineered to do so and the pros and cons have been weighed. Put simply: make it work correctly first, -then- make it work faster.

> Any driver in the kernel can bring the system down, not just in Linux.

I guess citing sound was a bad example, as that -does- go into the Darwin kernel to get direct interrupt access. However, the majority of other things, such as network protocols, file-system handlers and USB device drivers, don't go into Darwin kernel space. We would generally expect this to result in a more reliable OS than Linux's 'stuff everything into kernel space' approach. Less code in kernel space = less chance to crash, on general principle.
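
To make the isolation argument concrete, here is a tiny POSIX C illustration of my own (not Darwin- or Linux-specific code): a parent forks a "driver" process that dereferences a null pointer. Memory protection confines the damage to that one process; the same bug inside kernel space would take the whole machine with it.

    /* Illustration only: a buggy user-space "driver" crashes, and the fault
     * is contained to that one process. The parent (standing in for the rest
     * of the system) observes the crash and keeps running. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Pretend to be the buggy user-space driver. */
            volatile int *p = NULL;
            *p = 42;             /* SIGSEGV: memory protection stops it here */
            _exit(0);            /* never reached */
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("driver process %d died with signal %d; everything else keeps running\n",
                   (int)pid, WTERMSIG(status));
        return 0;
    }

Nothing clever happens here; the point is only where the blast radius stops.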

> If MS put a Mac OS X web server in a Mindcraft test, you would say different things, I am sure. There is no proof that a microkernel will be more stable, though it has the potential.

Well, those are kind of two separate issues. Are we testing reliability or speed? The last time I checked, the Linux vs. Windows test was solely speed-based. I haven't seen anything (yet) that stresses general reliability over speed. In any case, I was browsing around and found:

http://www.ddjembedded.com/resources/articles/2002/0206e/0206e.htm

which shows QNX to be quite reliable and stress-tests it. That's not to say this maps over to the Darwin kernel; perhaps I should start a test to see how it compares. It would be a good idea, but I fear the lack of time available to me will get in the way of doing it comprehensively.

> What is the issue with low-latency changes in the Linux source?

None, really, except that they aren't in the standard kernel yet. I guess that's a low blow for Linux, but until the patches are officially in the kernel and in the hands of the average user, Linux still has problems with latency and preemption that have already been solved with a microkernel (depending on which one you use). I applaud the fact that they are working on this and improving the kernel, but I believe a good design from the start would have avoided the issues in the first place. We are wiser in hindsight, mind you, so I can't blame Linus for this.
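
For anyone who hasn't followed the low-latency work: the property being argued about is roughly "how late does a sleeping task wake up when the system is busy". Here is a rough, unscientific probe of my own in plain POSIX C (an illustration of the idea, not a rigorous benchmark; run it while the machine is loaded and compare):

    /* Ask to sleep for 1 ms, repeatedly, and record the worst oversleep.
     * Scheduling latency shows up as the gap between what we asked for
     * and when we actually woke up. Illustration only. */
    #include <stdio.h>
    #include <time.h>

    static long long ns_between(struct timespec a, struct timespec b)
    {
        return (long long)(b.tv_sec - a.tv_sec) * 1000000000LL
             + (b.tv_nsec - a.tv_nsec);
    }

    int main(void)
    {
        long long worst = 0;

        for (int i = 0; i < 1000; i++) {
            struct timespec req = { 0, 1000000 };   /* request: 1 ms */
            struct timespec before, after;

            clock_gettime(CLOCK_MONOTONIC, &before);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &after);

            long long late = ns_between(before, after) - 1000000LL;
            if (late > worst)
                worst = late;   /* how much later than asked did we wake? */
        }
        printf("worst observed oversleep: %lld us\n", worst / 1000);
        return 0;
    }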

> You ask for benchmarks; the author gave public benchmarks. Where are yours? Propaganda nonsense?

Out of interest (these are not official benchmarks), try this:

http://www.linuxdevices.com/articles/AT8906594941.html

vs the DDJ URL above, and see what you think.

> This is the very old monolithic vs. microkernel argument, not a Darwin vs. Linux thing. Please keep your advocacy in your trashcan.

No, it's not. All I ask is that if people write something official (i.e., not comments like these) for a wide reader base, they had better be prepared with some real facts and stats to back up what they say.

If Miles had actually done some real research and some decent debating, I would have no issue. Sure, microkernels are bad at some things, but it's about understanding how something works, why you'd use it and when. That's all I ask.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

You should note that John Jensen's benchmarks do nothing but raise more questions.

Do we really believe that Darwin is roughly 400% slower at a null I/O than Linux? Or roughly 500% slower at forking a process?

Perhaps the file I/O could be explained by the use of HFS+, but with the non-file-I/O operations showing such a large difference, it's hard to trust any of the benchmarks.
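
For context, the "null I/O" and fork figures in micro-benchmarks of this kind are typically just tight timing loops around a trivial system call and around fork()/exit(). Here is a stripped-down sketch of my own of what such loops look like (plain POSIX C; real benchmark code is far more careful about warm-up, clock resolution and averaging):

    /* Time a trivial system call and a fork()+exit()+wait() round trip.
     * Illustration of the measurement, not a replacement for a real suite. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static double now_us(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    int main(void)
    {
        const int calls = 100000;
        volatile pid_t sink;

        double t0 = now_us();
        for (int i = 0; i < calls; i++)
            sink = getppid();                /* a "null" system call */
        double t1 = now_us();
        (void)sink;
        printf("null syscall: %.2f us\n", (t1 - t0) / calls);

        const int procs = 1000;
        double t2 = now_us();
        for (int i = 0; i < procs; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);                    /* child exits immediately */
            waitpid(pid, NULL, 0);
        }
        double t3 = now_us();
        printf("fork + exit + wait: %.2f us\n", (t3 - t2) / procs);
        return 0;
    }

Running something this simple on both systems would at least tell us whether the 400%-500% gaps are plausible or an artifact of the setup.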

Another benchmark that John Jensen posted showed Darwin achieving a maximum of 5 TCP connections, when Darwin web servers are known to handle thousands of connections.

In short, I don't think anyone has both done benchmarking and shown that their benchmarks are valid.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

I do not understand geek discourse.

Somebody posts an article that is apparently designed to piss people off with its *purposeful* avoidance of information relevant to its stated purpose.

Said pissed-off people respond to this, instead of ignoring it, though it is clear that article-writer will not recant since there was no intent to communicate in the first place.

Article-writer then claims the whole fractious, hopeless name-calling hate fest was "educational."

Bad faith all around. Nobody wins, or learns, or even feels better.

I think you missed the point entirely

You totally glossed over the biggest advantage of a microkernel. Microkernels offer well-defined interfaces and force programmers to obey and respect the boundaries between separate kernel functions. Forget about your obvious dislike of Apple and Mach and consider the notion of the microkernel in general.

Take, for example, the file system.

If I find the implementation of a particular file system to be lacking, I can rewrite the file system server. Because the interface to the file system server is well defined, I will likely cause very few waves in the rest of the "system level" code. Or suppose that I want to implement some new fancy file system. My code is restricted to playing only with things accessible through the interface. I can't break other "kernel code", etc.

Compare that to changing file-system code in Linux (see the extended attributes debacle). XFS has been stable on Linux for what, a year? But it can't be merged into the kernel proper until Linus et al. decide on a common manner in which to store said extended attributes. Is anyone here old enough to remember the beginnings of the VFS, or a Unix that supported only one file system? Ripping the guts out of a kernel to implement a file-system abstraction so that you can support multiple file systems (or multiple file systems with very different operating semantics) is not fun.
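
To make the "well-defined interface" point concrete, here is a hypothetical C sketch of my own (the names are made up, not taken from Darwin, QNX or Linux): a file-system server, or for that matter a VFS layer, boils down to a small fixed table of operations, and a replacement implementation only has to fill in that table.

    /* Hypothetical sketch of a narrow file-system interface. Callers only
     * ever go through the fs_ops table, so swapping in a new server cannot
     * disturb code outside it. */
    #include <stdio.h>
    #include <string.h>

    struct fs_ops {
        const char *name;
        int (*mount)(const char *device);
        int (*open)(const char *path, int flags);
        int (*read)(int handle, void *buf, unsigned len);
    };

    /* A toy "fancyfs" implementation plugs in behind the table. */
    static int fancy_mount(const char *device) { printf("fancyfs: mount %s\n", device); return 0; }
    static int fancy_open(const char *path, int flags) { (void)flags; printf("fancyfs: open %s\n", path); return 3; }
    static int fancy_read(int handle, void *buf, unsigned len)
    {
        (void)handle;
        strncpy(buf, "hello from fancyfs", len);
        return (int)strlen(buf);
    }

    static const struct fs_ops fancyfs = { "fancyfs", fancy_mount, fancy_open, fancy_read };

    int main(void)
    {
        char buf[64] = "";
        const struct fs_ops *fs = &fancyfs;   /* could just as well be another table */

        fs->mount("/dev/disk0");
        int h = fs->open("/etc/motd", 0);
        fs->read(h, buf, sizeof buf - 1);
        printf("read back: %s\n", buf);
        return 0;
    }

Swap fancyfs for some other table and nothing outside it needs to change; that's the whole argument in one struct.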

And just as a side note, why is it silly to think that if the server responsible for SCSI dies, it can be restarted? One server had to be the super-server and exec() the rest of them, so why can't it monitor them? And wouldn't fault tolerance be worth a little performance?
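
On the restart question, a supervising process needs nothing more exotic than fork(), exec() and waitpid(). A rough sketch in plain POSIX C (the server path is made up, and a real system would also rate-limit restarts and re-establish driver state):

    /* Sketch of a super-server that keeps one driver/server process alive,
     * restarting it whenever it exits or crashes. The path below is
     * hypothetical. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *server = "/sbin/scsi-server";   /* made-up path */

        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                execl(server, server, (char *)NULL);
                perror("execl");         /* only reached if exec fails */
                _exit(127);
            }

            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status))
                fprintf(stderr, "%s crashed with signal %d, restarting\n",
                        server, WTERMSIG(status));
            sleep(1);                    /* crude back-off before restarting */
        }
    }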
