Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performance

Apple's quaint choice of a microkernel for Mac OS X means Linux will lead in performance on Mac hardware.

Disagreements persist about whether microkernels are good. It's easy to get the impression they are, because they were proposed as a refinement of the monolithic kernel. Microkernels are mostly discredited now, however: they have performance problems, and the benefits originally promised turn out to be a fantasy.

The microkernel zealot believes that several cooperating system processes should take over the monolithic kernel's traditional jobs. These system processes are isolated from one another with memory protection, and that isolation is the supposed benefit.

The monolithic design circumscribes the kernel's definition and implementation as "the part of the system that would not benefit from memory protection".

When I state the monolithic design's motivation this way, it's obvious who I believe is right. I think microkernel zealots are victims of an overgeneralization: they come to UNIX from legacy systems such as Windows 3.1 and Mac OS 6, which deludes them into the impression that memory protection everywhere is an abstract, unquestionable Good. It's sort of like the common mistake of believing in rituals that supposedly deliver more security, as if security were a one-dimensional concept.

Memory protection is a tool, and it has three common motivations:

  1. to help debug programs under development with less performance cost than instrumentation. (Instrumentation is what Java or Purify uses.) The memory protection hopefully makes the program crash nearer to the bug than without it, while instrumentation is supposed to make the program crash right at the bug.

  2. to minimize the inconvenience of program crashes.

  3. to keep security promises even when programs crash.

"Because MS-DOS doesn't have it and MS-DOS sucks" is not a motivation for memory protection.

Motivation #1 is a somewhat legitimate argument for the additional memory protection in microkernel systems. For example, QNX developers can debug device drivers and regular programs with the same debugger, making QNX drivers easier to write. QNX programmers are neat because drivers are so easy for them to write that they don't seem to share our idea of what a driver is; they think everything that does any abstraction of hardware is a driver. I think the good debugging tools for device drivers maintain QNX as a commercially viable Canadian microkernel. Their claims about stability of the finished product become suspicious to any developer who actually starts working with QNX; the microkernel benefits are all about ease of development and debugging.

Motivation #2 is silly. A real microkernel in the field will not recover itself when the SCSI driver process or the filesystem process crashes. Granted, if there's a developer at the helm who can give it a shove with some special debugging tool, it might, but that advantage is really more like that stated in motivation #1 than #2.

Since microkernel processes cooperate to implement security promises, the promises are not necessarily kept when one of the processes crashes. Therefore motivation #3 is also silly.

Taken together, these three motivations show that memory protection is not very useful inside the kernel, except perhaps to kernel developers. That's why I claim the microkernel's promised benefits are a fantasy.

Before we move on, I should point out that the two microkernel systems discussed here, Mach and QNX, have different ideas about what is micro enough to go into the microkernel. In QNX, only message passing, context switching and a few process-scheduling hooks go into the microkernel. QNX drivers for the disk, the console, the network card and all the other hardware devices are ordinary processes that show up next to the user's programs in sin or ps. They obey kill, so if you want, you can kill them and crash the system.

Mach, which Apple has adopted for Mac OS X, puts anything that accesses hardware into the microkernel. Under Mach's philosophy, XFree86 still shouldn't be a user process. In the single-server abuses of microkernels, like mkLinux, the Linux process made a system call (not message passing) into Mach whenever it needed to access any Apple hardware, so the filesystem's implementation lived inside the Linux process while the disk drivers lived inside the Mach microkernel. This arrangement is a good business argument for Apple funding mkLinux: all the drivers for their proprietary hardware, and thus much of the code they funded, stay inside Mach, where they're covered by a more favorable (to them) license.

However, putting Mach device drivers inside the microkernel largely defeats motivation #1 as QNX realizes it, because Mach device drivers are now as hard to debug as a monolithic kernel's device drivers. I'm not sure how Darwin's drivers work, but it's important to acknowledge this dispute about the organization of real microkernel systems.

What about the performance problem? In short, modern CPUs optimize for the monolithic kernel. The monolithic kernel maps itself into every user process's virtual memory space, but these kernel pages are marked in the page tables so that they're accessible only when the CPU's supervisor bit is set. When a process makes a system call, the CPU implicitly sets and clears the supervisor bit as the call enters and returns, so the kernel pages are lit up and walled off appropriately by flipping a single bit. Since the virtual memory map doesn't change across the system call, the processor can retain all the map fragments it has cached in its TLB.
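
To make that cost concrete, here is a minimal sketch of my own, not from any kernel source: it times a trivial Linux system call, assuming a Linux/glibc environment, and uses syscall(SYS_getpid) so glibc's cached getpid() doesn't short-circuit the trap. Each call crosses into the kernel and back purely by the supervisor-bit mechanism described above, with no change of memory map.

    /* Hypothetical micro-benchmark: time a trivial system call on a
     * monolithic kernel.  Assumes Linux/glibc.  Build with: cc -O2 trap.c */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const long iterations = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iterations; i++)
            syscall(SYS_getpid);        /* trap in, trap out; same memory map */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("about %.0f ns per system call\n", ns / iterations);
        return 0;
    }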

With a microkernel, almost everything that used to be a system call now falls under the heading "passing a message to another process". In this case, flipping a supervisor bit is no longer enough to implement the memory protection: a single user process's system calls involve separate memory maps for 1 user process + 1 microkernel + n system processes, but a single bit has enough states for only two maps. Instead of using the supervisor-bit trick, the microkernel must switch the virtual memory map at least twice for every system-call-equivalent: once from the user process to the system process, and once again from the system process back to the user process. Juggling the maps costs more than flipping a supervisor bit, and each switch also forces a TLB flush, so there are at least two TLB flushes per system-call-equivalent.

A practical example might involve even more overhead since two processes is only the minimum involved in a single system-call-equivalent. For example, reading from a file on QNX involves a user process, a filesystem process and a disk driver process.
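
For comparison, here is a rough stand-in, again my own sketch assuming Linux rather than an actual QNX measurement: a one-byte round trip over a pair of pipes between two processes forces the scheduler to switch address spaces twice per exchange, which is roughly the pattern of a microkernel system-call-equivalent. On most hardware the per-round-trip figure comes out well above the plain system call timed earlier.

    /* Hypothetical micro-benchmark: a pipe round trip between two processes
     * stands in for one microkernel "system-call-equivalent".  Assumes Linux. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int to_srv[2], to_cli[2];
        char byte = 'x';
        const long rounds = 100000;
        struct timespec start, end;

        if (pipe(to_srv) < 0 || pipe(to_cli) < 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {                    /* the "system process": echo bytes */
            close(to_srv[1]);
            close(to_cli[0]);
            while (read(to_srv[0], &byte, 1) == 1)
                write(to_cli[1], &byte, 1);
            _exit(0);
        }
        close(to_srv[0]);
        close(to_cli[1]);

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < rounds; i++) {   /* the "user process": send, await reply */
            write(to_srv[1], &byte, 1);
            read(to_cli[0], &byte, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        close(to_srv[1]);                     /* EOF lets the child exit */
        wait(NULL);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("about %.0f ns per message round trip\n", ns / rounds);
        return 0;
    }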

What is the TLB flush overhead? The TLB stores small pieces of the virtual-to-physical map so that most memory accesses end up consulting the TLB instead of the definitive map stored in physical memory. Since the TLB is inside the CPU, the CPU's designers arrange for TLB consultations to be essentially free.

All the information in the TLB is a derivative of the real virtual-to-physical map stored in physical memory. The whole point of memory protection is to give each process a different virtual-to-physical mapping, thereby reserving certain blocks of physical memory for each process. The definitive map in physical memory can represent this multiplicity of maps, but the map fragment held in the high-speed hardware TLB can represent only one mapping at a time. That's why switching processes involves flushing the TLB.

Once the TLB is flushed, it becomes gradually reloaded from the definitive map in physical memory as the new process executes. The TLB's gradual reloading, amortized over the execution of each newly-awakened process, is overhead. It therefore makes sense to switch between processes as seldom as possible and make maximal use of the supervisor bit trick.
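
A rough way to feel this overhead from userspace is sketched below. It is my own illustration, it assumes 4 KB pages, and it conflates TLB misses with cache misses, so treat the numbers as indicative only: touching one byte per page across far more pages than the TLB can map forces a page-table walk on nearly every access, which is the same work a freshly flushed TLB does while it refills.

    /* Hypothetical illustration of TLB-refill cost.  Assumes 4 KB pages.
     * The second figure also includes cache misses, so it overstates the
     * pure TLB cost, but the gap makes the point. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define PAGE  4096L
    #define PAGES 65536L                /* 256 MB: far beyond typical TLB reach */

    static double touch(volatile char *buf, long npages, long passes)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long p = 0; p < passes; p++)
            for (long i = 0; i < npages; i++)
                buf[i * PAGE]++;        /* one touch per page */
        clock_gettime(CLOCK_MONOTONIC, &end);
        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        return ns / ((double)npages * passes);
    }

    int main(void)
    {
        char *buf = malloc(PAGES * PAGE);
        if (!buf)
            return 1;
        printf("16 pages (TLB hits):      %.1f ns/access\n", touch(buf, 16, 100000));
        printf("65536 pages (TLB misses): %.1f ns/access\n", touch(buf, PAGES, 16));
        free(buf);
        return 0;
    }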

Microkernels also harm performance by complicating the current trend toward zero copy design. The zero copy aesthetic suggests that systems should copy around blocks of memory as little as possible. Suppose an application wants to read a file into memory. An aesthetically perfect zero copy system might have the application mmap(..) the file rather than using read(..). The disk controller's DMA engine would write the file's contents directly into the same physical memory that is mapped into the application's virtual address space. Obviously it takes some cleverness to arrange this, but memory protection is one of the main obstacles. The kernel is littered conspicuously with comments about how something has to be copied out to userspace. Microkernels make eliminating block copies more difficult because there are more memory protection barriers to copy across and because data has to be copied in and out of the formatted messages that microkernel systems pass around.
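
Here is a minimal sketch of the two paths, mine rather than anything from a particular kernel, assuming an ordinary POSIX system and a non-empty input file: read() obliges the kernel to copy the file's bytes out into the user buffer, while mmap() maps the kernel's own page-cache pages into the process, so no second copy is made to fill them.

    /* Hypothetical comparison of the copying and zero copy paths for reading
     * a file.  Assumes POSIX; pass a non-empty file on the command line. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(argv[1]);
            return 1;
        }

        /* Copying path: the kernel copies from its buffers into buf. */
        char *buf = malloc(st.st_size);
        ssize_t got = read(fd, buf, st.st_size);

        /* Zero copy path: the pages backing "mapped" are the kernel's own
         * page-cache pages for this file; nothing is copied to fill them. */
        char *mapped = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (mapped == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("read() copied %zd bytes; mmap() exposed %lld bytes in place\n",
               got, (long long)st.st_size);

        munmap(mapped, st.st_size);
        free(buf);
        close(fd);
        return 0;
    }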

Existing zero copy projects in monolithic kernels pay off. NetBSD's UVM is Chuck Cranor's rewrite of virtual memory under the zero copy aesthetic. UVM invents page loanout and page transfer functions that NetBSD's earlier VM lacked. These functions embody the zero copy aesthetic because they sometimes eliminate the kernel's need to copy out to userspace, but only when the block that would have been copied is big enough to span an entire VM page. Some of his speed improvement no doubt comes from cleaner code, but the most compelling part of his PhD thesis discusses saving processor cycles by doing fewer bulk copies.

VxWorks is among the kernels that boasted zero copy design earliest, with its TCP stack. The motivation was probably reduced memory footprint, but the zero copy stack should also be faster than a traditional TCP stack. Applications must use the zbuf API rather than the usual Berkeley sockets API to see the benefit. For comparison, VxWorks has no memory protection at all, not even between the kernel and the user's application.

BeOS implements its TCP stack in a user process, microkernel-style, and so does QNX. Both have notoriously slow TCP stacks.

Zero copy is an aesthetic, not a check-box press-release feature, so it's not as simple as something a system can possess or lack. I suspect the difference between the VxWorks and QNX TCP stacks is one of zero copy vs. excessive copies.

The birth and death of microkernels didn't happen overnight, and it's important to understand that these performance obstacles were probably obvious even when microkernels were first proposed. Discrediting microkernels required actually implementing them, optimizing message-passing primitives, and so on.

It's also important not to laugh too hard at QNX. It's somewhat amazing that one can write QNX drivers at all, much less do it with unusual ease, given that their entire environment is rigidly closed-source.

However, I think we've come to a point where the record speaks for itself, and the microkernel project has failed. Yet this still doesn't cleanly vindicate Linux merely because it has a monolithic kernel. Sure, Linux need no longer envy Darwin's microkernel, but the microkernel experiment serves more generally to illustrate the cost of memory protection and of certain kinds of IPC.

If excessive switching between memory-protected user and system processes is wasteful, then might not also excessive switching between two user processes be wasteful? In fact, this issue explains why proprietary UNIX systems use two-level thread architectures that schedule many user threads inside each kernel thread. Linux stubbornly retains one-level kernel-scheduled threads, like Windows NT. Linux could perform better by adopting proprietary UNIX's scheduler activations or Masuda and Inohara's unstable threads. This performance issue is intertwined with the dispute between the IBM JDK's native threads and the Blackdown JDK's optional green threads.

Given how the microkernel experiment has worked out, I'm surprised by Apple's quaint choice to use a microkernel in a new design. At the very least, it creates an opportunity for Linux to establish and maintain performance leadership on the macppc platform. However, I think the most interesting implications of the failed microkernel experiment are the observations it made about how data flows through a complete system, rather than just answering the obvious question about how big the kernel should be.

Miles Nordin is a grizzled FidoNet veteran and an activist with Boulder 2600 (the 720) currently residing in exile near the infamous Waynesboro Country Club in sprawling Eastern Pennsylvania.

______________________

Comments


Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

one and a half years... no kernel panics and 16 reboots in OS X

*yawn* clearly this is a troll people... nothing to see here, move along.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

What version are you running? Jaguar has known, easy-to-reproduce kernel panics for such simple things as moving a file to a directory you just deleted. It's ridiculous. I got on Mac OS X.2 and, within 1 hour, had incurred 12 kernel panics, each for entirely different functions...

I would love to view your logs of system uptime etc, but you either don't ever use XDarwin, or you don't actually perform rigorous tasks.

Either way, I guarantee I could crash your Mac within 5 minutes without the use of any software that is built to crash - and thus test and develop - a computer. Just by performing normal functions that I use daily on Linux....

Boola sheah

Anonymous's picture

Yes, very good, we're all familiar with the bug.

$ mkdir test
$ cd test
$ mkdir test
$ mv test ..

And the results, blah blah...

panic(cpu 0): lockmgr: locking against myself
Latest stack backtrace for cpu 0:
Backtrace:
0x00084E9C 0x000852CC 0x00027F8C 0x001DD410 0x000BDB98 0x001C5A9C 0x000B93BC 0x0020D8CC 0x00091E90 0x00000000
Proceeding back via exception chain:
Exception state (sv=0x150C7780)
PC=0x90019A2C; MSR=0x0000F030; DAR=0xA0008958; DSISR=0x42000000; LR=0x00002054; R1=0xBFFFF6A0; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kernel Version 6.2: Tue Nov 5 22:00:03 PST 2002; root:xnu/xnu-344.12.2.obj~1/RELEASE_PPC

I had the pleasure of seeing that when I finally broke down after a year of running OS X and had to see a kernel panic.

I have yet to have a panic that I didn't intentionally cause with that retarded bug (which Apple should have fixed, no doubt) but which has so far, for me, been an anomaly..

As it is, the only apps that ever crash on my box are Microsoft Word, Internet Explorer, and KDE (because it's an unstable, unfinished port)

If you can find me any convincing evidence that OS X is actually unstable aside from one well-publicized bug then please do.... this thread is staying bookmarked so i can hear your pathetic response at some later point

and if you managed to get 12 kernel panics, assuming you weren't just repeating the same ***** bug over again, you must've been using a seriously fucked up system, and that's the long and short of where i'm coming from

teh

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

Bull*****, show me proof

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Well, that is quite a stupid comment. Anyone who has ever used any flavor of Unix can "spell kernel panic". Sun, HP, FreeBSD - and I seem to remember that before 2.4.10, Linux would panic on a daily basis for those stressing the VM system.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

That's funny, I have never had FreeBSD or Linux panic as much as my G4 does with Mac OS X, although my situation did improve when I put on Yellowdog Linux. Since that day, no kernel panics. Go back and play with your Aqua and await public beta 3 of Mac OS X

Do some research, please

Anonymous's picture

Mr. Nordin's problem is that he is criticising an OS that he hasn't bothered to study. Darwin does not use a pure microkernel approach but rather a hybrid between microkernel and monolithic. Here's a quote from http://www.cs.nmsu.edu/~lking/mach.html:

"Mach 3.0 was originally conceived as a simple, extensible, communications microkernel. It is capable of running standalone, with other traditional operating system services such as I/O, file systems, and networking stacks running as user-mode servers. "

"However, in Mac OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do RPC between separate tasks. This modular structure results in a more robust and extensible system than a large, purely monolithic kernel would allow, without the performance penalty of a pure microkernel. "

I suggest that interested parties (especially Mr. Nordin) noodle about on Apple's website and read about the Darwin core. They would find that the Darwin architecture is a rather elegant mixture of design philosophies that combines the best of both microkernel and monolithic approaches.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

"I'm not sure how Darwin's drivers work" says it all. When you figure out how it works you may have some credibility.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

What a load. You both suck a great deal.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

"Given how the microkernel experiment has worked out, I'm surprised by Apple's quaint choice to use a microkernel in a new design. At the very least, it creates an opportunity for Linux to establish and maintain performance leadership on the macppc platform."

I think the author misses the whole point of why MacOS X will be the dominant OS for the Mac platform, and in fact also why it will ultimately prove to be a more commercially successful desktop Unix than Linux ever will. These are its large user-base and the ready availability of the applications that the wide marketplace wants to use.

Let's face it - Apple for a long time had a significant technical lead over MS in the play for desktop customers, but this was not enough to erode MS' lead to any significant degree. The same will apply here.

Furthermore, do the technical shortcomings of the microkernel approach really add up to much in the real world? No, they don't. The VAST majority of people won't know, understand or care about them, even if they are significant in technical terms.

The other aspect in which I personally think this article is misguided is a strategic one - if the different UNIX platforms start sniping at each other (and there are certainly snipes against Linux from MacOS X which will be equally valid) then I think the real battle will get sidelined - and that's the battle between UNIX and Windows.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

Large User Base ?? Try 4 percent. Man you gotta love Apple propaganda

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

4% of the WHOLE COMPUTER MARKET is a lot more than linux's abysmally low usership on the desktop.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

It beats Apple's usership 24% to 4%; do your math and find statistics before you post

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Where the hell do you get that Linux makes up 24% of all of the computers sold? Last I read, Windows made up some mid-90% and the Mac made up 4%. So unless we are talking about some new definition of a whole that adds up to 120%+, Linux is nowhere near 24%

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

No, I did not say computers sold, I said computers in USE. No company out there preloads Linux on any desktop, so Windows will dominate until these Linux vendors start cracking deals. But the point is that the Mac OS will never dominate, because A) no one is going to buy proprietary Mac hardware, since it's no faster than PC hardware, and B) Apple won't port it to Intel. If Apple does not port it, it won't be any better than what it was in the 80s: still a proprietary company with no hope of survival in the future

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

"If apple does not port it wont be any better than what it was in the 80s still a proprietary company with no hope of survival in the future"

Hehe! 80's->00's that's 20 years of no future so far.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

No one buys proprietary hardware? Tell that to all the SGI and Sun users out there!

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Pretty interesting, and maybe one reason why OSX is slow.

The paradigm with Linux, though, is that while there are good coders writing for it, there are few GUI or user-experience people writing or directing it. Hence an interface that is unusable to all but expert computer users. It kind of has to be that way, because how could a user advocate direct all the disparate people writing for Linux?

The reverse is true at Apple where the interface is bloated with eye candy and slows the system even more. Apple is betting that increases in processor speeds, RAM and Mobos will take care of this.

I guess the Linux Geeks just want raw power to run their servers.

Re: Poor Linux GUI.

Anonymous's picture

Hmm, based on what I can do with KDE/Gnome/at least a dozen different window managers, I find the 'troll' about Linux having a poor UI to be moose poop. Nothing I like better than getting home and getting off the stupid Winders GUI to get to something that I've got heavily customized to suit me. And the Apple GUI is just over-blown in my view. I don't need shrinking boxes to show me where the _ell the window went to. That kind of GUI is best left for the kiddies.

Re: Poor Linux GUI.

RJDohnert's picture

Finally, someone has posted something that I agree with 100%. And I have heard it said at my job, from the guys that use the Macs, that they much prefer the KDE desktop to Aqua any day. The new Mac OS UI is more eye candy and annoyance than anything else.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Man, go read the XNU Mac OS X kernel docs and source at Apple's site (this is for the drooling guy who wrote this article). As far as I can recall, XNU has the mk *philosophy*, not the *implementation*, so what would be IPCs are actually function calls.

Man, go read the specs and GUI-less operation of XServe at http://www.apple.com/xserve (this is for the guy I've just replied to, and the drooling one)

Man, my Logitech two-button mouse complete with scroll wheel sits at my side, happily scrolling gFTP on my freshly-compiled X11 server, after I have typed a presentation on MS Powerpoint. Meanwhile, Tomcat happily runs on the background, ready to test my web apps (another one for the drooling one).

Man, I've just tried some Applescript GUI shell scripting using an expat-plugin I've found somewhere... Yeah, to do some video processing and stuff. BTW, I've called the Applescript from a vanilla bash script without any problems...

Who in hell allowed this flamebait to be posted...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Perhaps you should actually research how Darwin and Mac OS X work before you flame Apple's 'quaint choice'.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

Perhaps you should before you swallow more of what Apple shovels

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

Perhaps you should. Then maybe you won't swallow so much of what Apple shovels

BUT

Anonymous's picture

What a lot of people fail to realize about the various features and flaws of OS X is that much of it wasn't written by Apple (at least not in the first place).

Steve's return brought with it the whole NeXT platform to draw upon, and it wasn't so much that the Mac OS X team took useful little nuggets out of NeXT and put them in Mac OS X as that they improved NeXT and adapted some old Mac OS APIs to work with it (hence the subservience of Classic and Carbon).

Basically, Apple *had* a working OS that did what they wanted. Without the resources to build something that refined from scratch in a timeframe that would keep it relevant, they had no choice but to use it.

Re: BUT

Anonymous's picture

Indeed correct. Apple tried and failed several times to rewrite the MacOS (pink, taligent, copland, rhapsody, etc). The NeXT folks took BSD, wrote a new GUI for it and sold it quite successfully. So much so that they returned to Apple, adapted the Mac GUI for it and now it's called Mac OS X.

It's interesting to see the number of Mac fanatics railing against the discussion here. Their anti-FUD is just as much FUD.

The question of Compaq as a proprietary vendor is facetious in that Compaq's hardware is based on a shared hardware standard. Apple's hardware is not based on a shared hardware standard. It may well use many standard concepts, but the core hardware development and interfacing is solely dictated by Apple. No outside evolution of the hardware is possible because of Apple's monopoly stranglehold. The same cannot be said of x86-based environments. As a result, the x86 platforms have quite a lot more development going on, in both software and hardware. There are certainly arguments to be made for both sides, but that's fodder for another thread.

The real question remains unanswered: which methodology is better? I'd argue it's unanswerable because of the complex variety of situations. No one concept needs to win out; the resulting stagnation would be worse for all of them.

The point is to use what works for now and be prepared to adapt as things improve.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Great thing about the internet, anyone with an opinion can post it....

This does smell like a troll to me as well, so I'm not going to go any further.

Baseless Fanboy Harangue Dooms Linux to Lag Mac OS X in Shipment

Anonymous's picture

Please educate yourself about Mach under Mac OS X and try again. (Better yet, go read the source code.) This article shows no evidence of any knowledge of the Darwin kernel's Mach implementation (hint: it's not a pure microkernel). It may turn out that you have valid criticism after all, but no one can take you seriously if you argue from ignorance.

Re: Baseless Fanboy Harangue Dooms Linux to Lag Mac OS X in Ship

Anonymous's picture

> hint: it's not a pure microkernel

Elementary: a pure microkernel is useless !!!

Re: Baseless Fanboy Harangue Dooms Linux to Lag Mac OS X in Ship

RJDohnert's picture

Do your research. And of course any OS is going to lag Apple in shipments, because Apple ships Mac OS X with every Mac that's sold. Unlike us, you guys don't have a choice of what they will ship with. Tell you what: ask Apple to preinstall and ship your Mac with Yellowdog Linux, and see what they tell you.

Re: Baseless Fanboy Harangue Dooms Linux to Lag Mac OS X in Ship

Anonymous's picture

No. Show us the money and we will even install Windows for you, but remember we are charging by the minute...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Or how about this title instead: 'Obsolete Windowing System Dooms Linux to Trail Mac OS X in Usability'

Bad Linux, no Cocoa!

- Binky

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

How about this one: "Proprietary windowing system will only run on the PowerPC because Apple refuses to support Intel; that's why Apple will only lag in computer sales for the rest of its professional life." I hate morons. Binky, Cocoa is Objective C, no great developer app. In the Linux world, for Objective C we have GNUStep, which is superior to Apple's Cocoa. Also, more Apple propaganda: "Developers moving from Linux to Mac OS X". I don't think so; I know more developers who have abandoned it than stayed with it. Apple will always lag, and Intel will always sell more, until Apple realizes it's not a hardware company any more.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

"Binky Cocoa is Objective C no great developer app. In the linux world for Objective C we have GNUStep which is superior to Apple Cocoa."

Uh ... no. Cocoa is not Objective C. It never was, and never will be. Cocoa is the direct descendant of NeXTStep. It is a set of APIs. You may use Objective C to develop for Cocoa, but you are not required to. Apple is actually pushing Java as the preferred language for Cocoa development, and has since OS X was first released.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

Dammit RJ! You've forgotten to take your lithium today too.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

RJDohnert's picture

Looks like you forgot to get a life

waaaaa!!!!....waaaaaaa!!!!!

Anonymous's picture

no one knows nothin'!

How does a debate on OS's come to stupid comments like

"...looks like you forgot to get a life."

:[

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

You make some valid general statements but your understanding of the Mac OS X kernel is non-existent.

There are lots of docs available:

http://developer.apple.com/techpubs/macosx/Darwin/General/KernelProgramm...

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

His understanding of the microkernel's functions is right on target. I don't think you know the functions of a microkernel yourself, aside from Apple's propaganda that they put on their site. They have stated that Mac OS X is the best, blah blah blah. I'm still waiting to see Mac OS X walk on water. Until I see proof that the Mach microkernel has actually improved from what it was, I will not put much stock in what Apple themselves say; nothing at that URL has changed from what NeXT and MkLinux have published about Mach. So until such time, I'm just going to write you people off as Apple loyalists who cannot see the reality in this world.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

this coming from a linux loyalist!

Who the hell are you?

Anonymous's picture

I still think a lot of thought went into the kernel decision by people who have spent years full-time thinking and implementing such things.

Re: Obsolete Microkernel Dooms Mac OS X to Lag Linux in Performa

Anonymous's picture

This rant would be great if there was some quantitative evidence. Alright, a microkernel architecture is slower. So? How much?

So why DID apple go with Mach?

Anonymous's picture

Assuming that the engineers at Apple are not idiots...

What is the advantage to Apple of using Mach? Is it easier to add new devices or drivers? Easier to control the source?

If, for example, it was easier to update and add devices with Mach, then I can see Apple deciding to take a performance hit now with the benefit of fast turnaround times later (where increasing hardware performance means the PERCEIVED disparity between Mach and other solutions becomes less and less).

Anyone care to cast some light on this?

Re: So why DID apple go with Mach?

Anonymous's picture

The ease of adding and removing drivers from a RUNNING kernel is listed as one of the advantages of the microkernel on the GNU/Hurd site. I noticed that the Hurd project has never mentioned speed as an advantage or a disadvantage. On today's CPUs the lag is not that noticeable. Besides, if you have used XP in the default setup, OS X just kills it in the performance department

Re: So why DID apple go with Mach?

Anonymous's picture

I add drivers to and remove them from my running Linux kernel whenever I want to. Read up on insmod, modprobe, and rmmod.

That's a spicey meatball!

Anonymous's picture

I *HATE* Apple and I'll still be happy to call this troll a troll. :)

OSX audio latency

Anonymous's picture

"This is exceptional. Windows 2000 and Linux do well to get under 100 milliseconds."

So far no sequencers are even out for OS X and yet you're trying to quote latency... I know many people who get 1 ms latency in Windows now with lots of tracks of audio and softsynths. 100 ms? Huh, wtf are you on?

That's 1 ms now, compared to OS X, which has pretty much zero pro audio software.

aPplE@#$

Anonymous's picture

"osx is better": B********! all u Mac fans, there just just isn't enough app software going around for you people. shame! when apple goes bust,u can hold on dearly to ur apple boxes and stay loyal and say that apple macs were the best computers that existed. u c, all u Mac fans r delusional. apple doesn't have a future. i did the clever thing & stopped wasting money on a lost cause. The grass is way greener on this side. i'm spoilt 4 choice with the software. so much variety. so many tools. i'm liking it...

Re: That's a spicey meatball!

Anonymous's picture

Just more "my Unix is better than yours" crap that's dated 15 years ago. Let this clown, who's getting too old (lots of those FidoNet dudes are dead), go argue with RMS or someone about microkernels. My main working box is a Mac with OS X and it's great. My server runs SuSE and it's great. And this is all the crap and time-wasting, brain-teasing junk it's always been. Heh folx, this is an art, it ain't no science.

AMAZING THIS ARTICLE IS STILL ON-LINE

Anonymous's picture

Nordin's ill-researched, childish and just-plain stupid rant has been utterly discredited by many posters here.

I'm surprised Linux Journal still has this crap on-line. I've removed MY bookmark, it's obvious that LJ has descended to the level of knee-jerk FUD that is normally attributed to MicroShaft.
