While I agree that the everything-is-a-file approach Plan 9 takes is pretty great, and it would be great to see it used more widely in Linux or the BSDs (beyond /proc etc.), one has to take those comments about the size and quality of the code with a grain of salt.
Once again, the last 20 percent of performance takes 80 percent of the effort, and quite often about as many lines of code. And Linux/BSD had to be that highly optimized, or they would have failed in the server sector and probably wouldn't support as many architectures and odd pieces of hardware.
Having said that, I guess a lot of people would be happy with the level of hardware support and speed Plan 9 offers, as hardware tends to converge (RIP MIPS, SPARC, 68k, PA-RISC, S3, Tseng, Token Ring…), and Moore's Law takes care of the rest.
What you're saying makes sense to me, but do you have a citation for Plan9 being notably slower than Linux on equivalent hardware? Seems like it'd be trivially true unless there is a fantastic architectural advantage to plan9 somewhere, but I'd love to see some numbers.
It's been a while since I lurked on 9fans (after all the late adopting minimalist tiling window manager fanboys started coming in in droves), so I can't give you any links to recent comparisons.
First of all, any comparison would be pretty unscientific given all the different variables. Say you want to test common server performance: you'd probably run something like ApacheBench against a web server. Now comes the problem: apart from the kernel issues (memory management, Ethernet driver, scheduling), you'll be comparing different web servers, compiled with different C compilers.
The GCC port for Plan 9 is ancient, and I'm not aware that e.g. Apache is running on it.
Yes, you can simply test page loads per second with whatever technology is available on each platform, but if you accept that every layer of the comparison differs (so not just Plan 9 vs. Linux, but also gcc vs. 8c, Apache vs. Pegasus, etc.), well…
Plan 9 is actually a great little OS; it is where linux could be if it wanted to badly enough.
Instead we're stuck with early-'70s technology, which is good enough, but it could be so much better if not for a bunch of NIH and ego.
No disrespect to the kernel devs, they're doing a great job. But longer term I wish there was some more real innovation instead of just a slightly better (and free) mousetrap.
Perhaps a group could make a Linux distro with the goal of forking, and Plan9ifying, the source of every package they ship. No one would have to use the resulting distro (other than to demo the neat way that the Plan9ified packages integrate so easily with one another) but the original authors/maintainers of the upstream packages could adopt the Plan9ification patches a la carte, gradually increasing Plan9ification across the board. The patches of one package would always be guaranteed to work with the patches from any other, as the resulting combination has to hang together in the demo distro—and, since this would only be an eventual goal of the group, they would also have to make sure that all their patches didn't expect any Plan9ification on any other package's part.
That's not a trivial exercise. The philosophy behind plan 9 is radically different from some of the ways things are done in Linux, and it would take a great effort to get this even close to production grade.
Plan 9 is structurally very different from Unix under the hood; it is in many ways a better Unix, but backwards compatibility was not what they had in mind when they designed it.
Unix is said to have 'everything is a file' as its mantra; Plan 9 shows what 'everything is a file' really means.
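To make that concrete, here is a rough sketch (from memory, so treat the details as approximate) of dialing a TCP connection on Plan 9 using nothing but open/read/write on the /net filesystem. Real programs would just call dial(2), and the address below is a made-up placeholder:

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int ctl, data, n;
        char dir[64], path[128], buf[1024];

        /* reading the clone file allocates a new connection and
           returns its directory name (a small number) */
        ctl = open("/net/tcp/clone", ORDWR);
        if(ctl < 0)
            sysfatal("open clone: %r");
        n = read(ctl, dir, sizeof dir - 1);
        if(n <= 0)
            sysfatal("read clone: %r");
        dir[n] = 0;

        /* the clone fd doubles as the ctl file of /net/tcp/<dir>;
           writing a plain-text command establishes the connection */
        if(fprint(ctl, "connect 192.168.1.10!80") < 0)
            sysfatal("connect: %r");

        /* from here on the connection is just an ordinary file */
        snprint(path, sizeof path, "/net/tcp/%s/data", dir);
        data = open(path, ORDWR);
        if(data < 0)
            sysfatal("open data: %r");
        fprint(data, "GET / HTTP/1.0\r\n\r\n");
        while((n = read(data, buf, sizeof buf)) > 0)
            write(1, buf, n);
        exits(nil);
    }

And because it is all files, the same trick composes across machines: import another host's /net over 9P and the unmodified program dials from there instead.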
> The philosophy behind plan 9 is radically different from some of the ways things are done in Linux, and it would take a great effort to get this even close to production grade.
So it's not just "NIH and ego", it's the fact that people are loath to abandon working software for benefits that probably seem somewhat abstract?
The "crossing the chasm" approach is to find a niche where your product can win and then go from there. Taking on Linux head-on is just not going to be a winning proposition.
It was NIH and ego that stopped linux from adopting some of the more advanced knowledge available at the time the relevant portions of it were developed.
There was no 'working software' to abandon at the time; there is now.
Plan 9 came much later than Linux, if I'm not mistaken. Things like microkernels were around then, sure, but is Plan 9 interesting because it's a microkernel or because of how you interact with it and how it's architected (everything is really a file)? Edit: it appears that Plan 9 existed internally at Bell Labs at about the same time Linux was being developed, but was only released to the public, under a commercial license, in 1995. It was finally open-sourced only a few years back.
Also, there certainly was plenty of existing Unix software out there when Linux (and BSD) came out. Not nearly as much as now, but nothing to laugh at either.
Don't forget minix and the Tanenbaum / Linus exchanges.
That's a pretty well-documented era, and guess what: Tanenbaum was right.
But Linus was riding high on the momentum he'd generated, and Tanenbaum lost the popularity contest. That didn't make him wrong, though, and over time the few disadvantages that were trotted out as reasons why he was wrong have all been put to rest.
That's partly hindsight, but Tanenbaum had a huge amount of experience and was already ahead of 'conventional' unix. Some people think he wasn't ahead enough, but his approach was certainly a step forward from where linux is, even today.
Citation needed. Where do you get that? Performance-wise, the biggest problem of OS X is still the part that is based on the "many servers are the OS" idea.
Also, NT tried to move in that direction, but the critical parts are not "many servers."
Also, there's a reason L4 was developed (note: post-Tanenbaum): the "real" microkernels were very problematic performance-wise, and they are actually harder to build (example: GNU Hurd).
We have some experiments with very stripped-down domains (i.e. virtual machines) that make full use of paravirtualization. They are quite close to being processes in an exokernel OS, and they boot up really fast, like processes should.
Thanks, my wording was indeed clumsy, but it doesn't change the fact: the hypervisor is simply not doing the stuff that the classical OS kernel does. It isn't replacing anything; it's just a layer with some specific new functionality. When you change the meaning of the terms being discussed, of course it can appear that you're winning the argument, but you're just performing tricks. No matter from which direction you look at these systems, the overall functionality possible with monolithic kernels still hasn't been substituted with something better.
> the hypervisor is simply not doing the stuff that the classical OS kernel does
And that makes the comparison with exokernels apt. Exokernels are not supposed to do what normal kernels do.
(Though if you run a normal kernel on top of a hypervisor or exokernel, in a sense you haven't reached the true potential of the system and your critique is more than valid.)
Microkernels haven't exactly swept the world before them. Apparently, Mac OS X and Windows NT-based systems have some microkernel in their DNA, but it is my understanding that you couldn't really call them microkernels at this point.
Tanenbaum was right that linux was a giant step back into the '70s.
Microkernels have swept the world before them in a way that you can't even imagine: in the embedded-systems world, where 'failure is not an option', microkernels rule supreme without any threat from larger stuff.
When you want deterministic hard real time with very tight upper bounds on latency, a microkernel will help tremendously.
Everything else is subordinate to the scheduler, even things that in 'monoliths' are part of the kernel, such as I/O drivers, which run as user processes.
> There are some very interesting concepts in Plan 9 that linux could have taken advantage of though.
Given what I posted above, it's not clear that Plan 9 was really on Linus' radar when he started or in the formative years of Linux. When Plan 9 came out, radically changing Linux might have been a significant departure from a working system with a growing amount of software available for it.
Disclaimer: I'm the author of the Glendix project.
As someone already pointed out, Glendix tries to bring Plan 9 binaries over to make Linux feel more Plan 9-ish. Unfortunately, I no longer have the luxury of being a grad student, so my time these days is very limited and I've had to move on to other things. But if there's someone motivated enough to push Glendix further, I'd be more than happy to help!
As for the question of 'Why Plan 9?', to put it simply: reading the source code makes me feel like a hacker again. A long time ago, if you didn't understand how something worked, you could just peek at the source and everything would be clear. Plan 9 maintains that: the source code /is/ the documentation. Alas, I wish I could say the same of 'modern' free/open source software (Linux/BSD/GNU/what have you).
Porting one of the fine Plan 9-based filesystems to Linux seems like an excellent operating systems project. This would directly benefit Glendix, as many applications require these synthetic filesystems provided by the Plan 9 kernel.
/net is a good example of how sockets are done away with, and /dev/draw provides a useful graphics API. You could argue in your project thesis that filesystems sometimes provide a better abstraction than traditional programming-language-based APIs, and prove it by porting one such filesystem over to Linux.
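For contrast, and purely to make that 'better abstraction' argument concrete, here is roughly what the same connect-and-fetch looks like through the sockets API that /net does away with; the hostname and request below are placeholders:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int
    main(void)
    {
        struct addrinfo hints, *res;
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        char buf[1024];
        ssize_t n;
        int fd;

        /* name resolution and connection setup each go through their own
           special-purpose calls and structures before any bytes can move */
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if(getaddrinfo("example.com", "80", &hints, &res) != 0){
            fprintf(stderr, "getaddrinfo failed\n");
            return 1;
        }
        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if(fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0){
            perror("socket/connect");
            return 1;
        }
        freeaddrinfo(res);

        /* only now does the connection behave like a file */
        write(fd, req, strlen(req));
        while((n = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, n);
        close(fd);
        return 0;
    }

None of this is wrong, exactly, but every step is API surface that a shell script or a non-C language has to wrap, whereas the /net style is reachable from anything that can open a file.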
Unfortunately, I've already done my senior project. Though I'd be happy to hack on Glendix stuff in my free time.
> /net
I've not done any network or kernel programming, and I'm only slightly familiar with /net. If I wanted to hack on this, where would you recommend I start?
It seems like a Glendix /net could be implemented in user space. Yes?
A less ambitious goal would be to patch some of the widely-used higher-level applications so that they would run on Plan9. E.g., if I could get a virtual Plan9 machine with Python/Django, PostgreSQL, and lighttpd, I would have a starting point where I could do something practical, and then I could explore how to use the specific features of Plan9 to make those tools more productive.
http://www.glendix.org/ is going in that kind of direction, albeit without explicitly doing the bit about porting existing packages. Instead it looks like they're building enough support in Linux to run Plan 9 binaries, and then bringing over the Plan 9 equivalents.
The other existing option would be Plan 9 from User Space (http://swtch.com/plan9port/) which includes Rob Pike's innovative editors, sam and acme.
Of course, the biggest problem these days is sourcing a true three-button mouse to really get the feel of the user interface. Mouse chords in acme don't really feel right with a mouse wheel.
On the topic of running old OSes in virtualisation: I purchased "VirtualAcorn", took delivery yesterday, and installed it. It's RISC OS in a box. I'm running it on top of OS X.
My motivation is that I have a friend who composes, and is still using a twenty-year-old Acorn running the original release of Sibelius on floppy disks. He bought it directly from the Finn's apartment door back in the day. We thought it might be a good idea to get his life's work off this collection of ancient hardware and floppies :)
It has a command-line, but I haven't worked it out yet.
I remember programming on Acorn A3000 or similar, nearly 20 years back. I used to program on display models in computer shops, as my family couldn't afford to buy one. I seem to recall hitting a function key to get a command prompt at the bottom of the screen, and further dialog with the command line caused the rest of the GUI to scroll off screen with each new line printed.
I programmed in its dialect of BASIC, which was, as I recall, like a Commodore 64 on steroids. I was impressed in a way that I wasn't by QBasic on PCs, because I had yet to learn the benefits of structured programming and missed the line numbers on the PC.
Thinking back, I believe it auto-numbered lines as you hit Return, like this:
10 PRINT "Hello"<press Return here>
20 |
I remember that in particular being a major improvement over the C64; I had seen so little, and had very few resources to learn from.
I ran into Acorn software installed on some digital cable set-top boxes that I was reverse engineering a few years ago. Very impressive that your friend is still composing using such an ancient machine.
Some of the ideas in Plan 9 are utterly fantastic.
/proc comes from Plan 9, and I remember being awed by the Plan 9 filesystem, which used memory and hard disks to cache a giant WORM jukebox (in the Bell Labs version, anyhow). I forget what the path was but you could just transparently go and have a look at the whole file system on any given day and wander around a read-only version of the fs for that day.
I'm sure there's a lot out there that looks like this now, but this was in 1992/1993.
The compiler was also cool as hell but lacked any of the aggressive optimizations that even gcc has now. It was very much oriented towards incredibly fast compile times rather than squeezing out the last x% of performance through optimization. I believe that the go compiler is a descendant of this one.
On the negative side, it was a very insular culture, with an attitude that 'things we don't need to do aren't worth doing'. Avoiding shared libraries is fine, I guess, as is having a minimalist GUI approach. But telling people that shared libraries suck and shouldn't be used, and offering as proof the fact that _you've_ got a tiny static library that does all the GUI stuff _you_ need, seems a bit rich when the GUI you produced doesn't have any of the conventional features of a GUI library. When so many things are being changed at once, it's pretty easy to conflate two unrelated things into a 'win' (e.g. 'haha, we don't have a lame shared library system on our machine, and by the way, Motif sucks anyhow').
It was fine up to a point, but it started to ring very hollow when you had to go to another machine to run a web browser that looked even faintly like an actual web browser.
This may all have changed; it's been a long time since I had anything to do with P9. But I think the '90s were the point where Plan 9 decisively 'missed the boat', and the insularity and general attitude were part of the problem. If 'ape' (the ANSI/POSIX environment) had been better maintained and taken seriously, Plan 9 might have been considerably more popular, and its alternate (frequently superior) ways of doing things might have taken off.
Note that even the pros have to use some tricks to run it on modern hardware, so it appears it's still mostly an experimental research platform?
"I am typing this in acme in 9vx running
on FreeBSD, using the rio port in P9P for my window manager.
Because I'm away from my home network, I'm running 9vx
with the the root on my local machine. When I'm at home, I
use 9vx booting with its root taken from a real Plan 9 file
server. I also run it on qemu fairly often."