> Did the experience from writing your own OS solidify that belief?
Absolutely. Micro-kernels have many advantages, with only a slight overhead due to message passing as a downside (and a large chunk of that overhead can be overcome by using the paging mechanism in a clever way). They're easier to secure, much easier to stabilize, and support luxuries such as graceful on-the-fly upgrades without powering down. They let you develop drivers in userland, which greatly simplifies debugging and testing, and they make hard real-time (and by extension soft real-time) much easier to achieve than you could ever manage with a macro kernel.
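For context, QNX's native IPC is a synchronous send/receive/reply rendezvous (MsgSend/MsgReceive/MsgReply in QNX Neutrino). Here's a toy Python sketch of that rendezvous idea, with a "disk driver" as an ordinary userland thread — the `Channel` class and all names are illustrative stand-ins, not the real API:

```python
import queue
import threading

class Channel:
    """Toy stand-in for a microkernel IPC channel: a client blocks in
    send() until the server replies, mirroring QNX's send/receive/reply
    rendezvous. Purely illustrative -- not the real QNX API."""
    def __init__(self):
        self._requests = queue.Queue()

    def send(self, msg):
        # Client side: block until the server has replied.
        reply_box = queue.Queue(maxsize=1)
        self._requests.put((msg, reply_box))
        return reply_box.get()  # synchronous: waits for reply()

    def receive(self):
        # Server side: block waiting for the next request.
        return self._requests.get()

    def reply(self, reply_box, result):
        reply_box.put(result)

chan = Channel()

def driver():
    # A "disk driver" running as an ordinary userland thread: if it
    # crashes, only this thread dies, not the whole kernel.
    while True:
        msg, reply_box = chan.receive()
        if msg == "shutdown":
            chan.reply(reply_box, "ok")
            break
        chan.reply(reply_box, "read sector %d" % msg)

t = threading.Thread(target=driver)
t.start()
r1 = chan.send(42)           # blocks until the driver replies
r2 = chan.send("shutdown")
t.join()
```

The point of the synchronous rendezvous is that the client and server meet at a well-defined point, which is also what makes the model amenable to network transparency: the channel can just as well cross machine boundaries.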
I've built some pretty large systems using QNX in the '80s and '90s that I would have a real problem re-implementing even today, on today's hardware, without the benefits brought by a micro-kernel with network-transparent message passing. If you haven't used a setup like that it is probably hard to see the advantages; it goes way beyond some theoretical debate.
In practice, with two systems side by side, one running QNX and one running Linux, the QNX system comes out way ahead in responsiveness for interactive tasks and in things like latency and real-world throughput.
We'll never know what the world would have looked like if Linus hadn't been as pig-headed during that whole debate. Likely we wouldn't be stuck with a re-write of a 30-year-old kernel.
The bit where Linus got it right and Tanenbaum got it wrong was that GPL'ing an OS was a much better move than doing a deal with Prentice Hall (who published the Minix source). And Minix wasn't the most elegant micro-kernel either, which may have skewed Linus' perception of what Tanenbaum was getting at.
My guess is that if he had used QNX instead of Minix he would have readily agreed with Tanenbaum, but we'll never know, and Linux is here to stay for a long time.
If you haven't used QNX, give it a shot and see how it works for you; you might be pleasantly surprised.
Thanks for the pointers. I have heard of QNX through a book about message-based programming with SIMPL. I think SIMPL borrows the QNX APIs for building modular, networked systems.
I prefer open source, so I've been taking a look at Minix 3. It seems really cool. And it's only six or so years old -- at the time of the argument Minix wasn't meant to be a production system, but now it is.
I feel like it must be easier to trace application performance with Minix since you have natural points to insert hooks. With monolithic kernels it's hard to understand what is really going on.
I see a lot of potential advantages to a microkernel in distributed systems. For example, Amazon EC2 has well-known I/O sharing issues. With a microkernel, you could fairly easily reimplement the file server with your own custom disk-scheduling logic based on identities (not Unix users) and priorities.
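To make the idea concrete, here's a minimal Python sketch of what the scheduling core of such a userland file server might look like: pending I/O requests are ordered by a per-identity priority instead of FIFO. The identities, priority numbers, and class name are all made up for illustration, not any real file-server API:

```python
import heapq
import itertools

class PriorityDiskQueue:
    """Toy sketch: order pending I/O requests by a per-identity
    priority (lower number = served sooner) rather than FIFO."""
    def __init__(self, priorities):
        self._priorities = priorities  # identity -> priority
        self._heap = []
        self._seq = itertools.count()  # tie-breaker: FIFO within a priority

    def submit(self, identity, request):
        prio = self._priorities.get(identity, 10)  # unknown identities: low priority
        heapq.heappush(self._heap, (prio, next(self._seq), identity, request))

    def next_request(self):
        # Pop the highest-priority (lowest number) pending request.
        _, _, identity, request = heapq.heappop(self._heap)
        return identity, request

# Hypothetical identities: an interactive service outranks a batch job.
q = PriorityDiskQueue({"billing-service": 0, "batch-backup": 5})
q.submit("batch-backup", "read block 7")
q.submit("billing-service", "write block 3")
q.submit("batch-backup", "read block 8")
first = q.next_request()  # the billing service jumps the queue
```

Because the file server is just a userland process in a microkernel, swapping in a policy like this means restarting one server, not patching and rebooting a kernel.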
In Linux I know there is some work on containers, but I don't think it is as customizable as you would like.
I think Linus' argument was basically that microkernels require distributed algorithms, and distributed algorithms are more complex.
But maybe in a multicore world that argument is weakened. I like this paper: "Your computer is a distributed system already, why isn't your OS?"
http://scholar.google.com/scholar?cluster=945420933600220038...