There's a certain kind of magic to m68k. It was the first real 32-bit processor family for the masses, at least by the criterion of being able to program without worrying about addressing limits, segments, or banks.
The m68020 in 1984 arguably became the first widely available modern CPU, even if one had to add the MMU separately. '020 systems with enough memory can run modern software in 2022, and there are many thousands of binary packages available.
It's an elegant architecture with an orthogonal instruction set, easy-to-understand instructions, wonderfully documented hardware, very few errata and no artificial limitations.
It's not only interesting for preserving the history of Unix on m68k; it's also interesting to run as a modern machine with NetBSD today.
> It's an elegant architecture with an orthogonal instruction set, easy-to-understand instructions, wonderfully documented hardware, very few errata and no artificial limitations.
The 68K instruction set was so, so much nicer than anything from Intel. It's a shame that Intel won that round. Imagine if IBM had chosen the 68K for the PC.
To me, the extraordinary thing about m68k is that it's such an ancient processor family, and in some cases such ancient actual hardware, yet modern operating systems still work on it: not just NetBSD, but Linux still maintains support (although distro support seems to be extremely spotty).
That's fun; it makes m68k the longest-supported CPU for Linux. The m68k was the second CPU Linux ever supported, after the i386, and i386 support is long gone.
Which really is a testament to its instruction set and architecture, I'd bet. I doubt much, if any, special effort has gone into keeping it there; it's probably mostly that it hasn't gotten in the way, so there's been no reason to remove it.
I think you're conflating "can handle a general fault" and "does address translation". Some PDP-11s ran Unix just fine with MMUs that didn't generate page faults (they just did address translation and bounds checking). You can even do fault handling on the 68000 if you're willing to limit it to instructions that are known to work or that you can throw away (e.g., XOR, which is what Sun used for its stack probes).
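For illustration, here's a rough C sketch of the stack-probe idea (PAGE_SIZE, probe and probe_new_frame are my own names for the sketch, not anything Sun actually shipped; the real thing lives in assembly in the function prologue, outside what strict C allows):

    #include <stddef.h>

    #define PAGE_SIZE 4096u   /* assumed page size, for the sketch only */

    /* XOR with zero: leaves the byte unchanged, but forces a
     * read-modify-write that will fault if the page is unmapped.
     * Restricting faults to this one known instruction is what makes
     * them recoverable (or safely discardable) on a plain 68000. */
    static void probe(volatile unsigned char *addr) {
        *addr ^= 0;
    }

    /* Before moving the stack pointer down by 'bytes', touch one byte
     * per page the new frame will cover, so any fault happens here
     * rather than mid-way through some arbitrary instruction. */
    static void probe_new_frame(unsigned char *sp, size_t bytes) {
        if (bytes == 0)
            return;
        for (size_t off = 1; off < bytes; off += PAGE_SIZE)
            probe(sp - off);
        probe(sp - bytes);   /* lowest byte of the new frame */
    }

The kernel's fault handler then only ever has to deal with a fault at the probe, which it can answer by mapping in a new stack page and resuming (or re-running) that harmless instruction.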
I designed an MMU for the 68000-based Atari ST (it did translation and bounds checking in an interesting way), and we implemented it in the silicon. A Unix for it never happened, unfortunately. https://dadhacker-125488.ingress-alpha.easywp.com/how-the-at...
Thanks for the correction. Your project is very interesting.
I think my mistake came from a manual I read back in the day that claimed the m68010 was the first one able to run proper Unix OSes because it correctly implemented privilege levels, and I somehow conflated that with having an MMU.
FWIW, the 68000 transistor count is just marketing. I don't remember the exact number, but a full netlist has been produced from tracing the 68000 die, and IIRC the actual transistor count is at least 20K lower than that. Still a lot more than the ARM1, of course. I would guess that 68000 machine code is a fair bit denser than 32-bit ARM, though, which was important in the 80s when memory was still very expensive.