Atari System V Unix – Unofficial Website (atariunix.com)
144 points by rbanffy on Jan 28, 2022 | 71 comments


There's a certain kind of magic to m68k. It was the first real 32-bit processor family for the masses, at least by the criterion of being able to program without worrying about addressing limits, segments, or banks.

The m68020 in 1984 arguably became the first widely available modern CPU, even if one had to add the MMU separately. '020 systems with enough memory can run modern software in 2022, and there are many thousands of binary packages available.

It's an elegant architecture with an orthogonal instruction set, easy to understand instructions, wonderfully documented hardware, very little errata and no artificial limitations.

It's not only interesting to preserve the history of Unix on m68k; it's also interesting to run it with NetBSD as a modern machine now.


> It's an elegant architecture with an orthogonal instruction set, easy to understand instructions, wonderfully documented hardware, very little errata and no artificial limitations.

The 68K instruction set was so, so much nicer than anything from Intel. It's a shame that Intel won that round. Imagine if IBM had chosen the 68K for the PC.


> Imagine if IBM had chosen the 68K for the PC.

I guess they thought it was too riscy.


To me, the extraordinary thing about m68k is that it's such an ancient processor family, and in some cases such ancient actual hardware, but modern operating systems still work on it; not just NetBSD, but Linux still maintains support (although distro support seems to be extremely spotty).


That's fun; it makes m68k the longest-supported CPU for Linux. The m68k was the second CPU Linux ever supported, after the i386, and i386 support is long gone.


Which really is a testament to its instruction set and architecture, I'd bet. I doubt much if any special effort has gone into keeping it there, it's probably mostly just that it hasn't gotten in the way, so no reason to remove it.


For the historical record, the first m68k that could use an MMU was the m68010.

I've never seen the m68008, m68010, or m68012 in action, though. It seems Sun used them.


The vanilla 68000 can definitely use an MMU.

I think you're conflating "can handle a general fault" and "does address translation". Some PDP-11s ran Unix just fine with MMUs that didn't generate page faults (they just did address translation and bounds checking). You can even do fault handling on the 68000 if you're willing to limit it to instructions that are known to work or that you can throw away (e.g., XOR, which is what Sun used for its stack probes).
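The throw-away probe trick described above can be sketched in C. This is a hedged model, not Sun's actual code: `probe_stack`, the page size, and the downward walk are illustrative assumptions; the point is that an XOR-with-zero touch is a no-op if it succeeds, so the instruction can simply be discarded if it faults.

```c
#include <stddef.h>

/* Sketch of an XOR stack probe (hypothetical; not Sun's actual code).
 * A plain 68000 cannot restart a faulted instruction, so the probe uses
 * an operation whose effect is harmless: XOR with 0 leaves each byte
 * unchanged. If a probe faults, the OS grows the stack and the discarded
 * instruction loses nothing; if it succeeds, nothing was modified. */
static void probe_stack(volatile unsigned char *top, size_t needed,
                        size_t pagesize)
{
    /* Touch one byte per page, walking downward from the current top. */
    for (size_t off = 0; off < needed; off += pagesize)
        top[-(ptrdiff_t)off] ^= 0; /* read-modify-write; value unchanged */
}
```

On a real system the fault handler, not the probe, would map in the new pages; here the function only demonstrates that the probed memory is left bit-for-bit intact.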

I designed an MMU for the 68000-based Atari ST (it did translation and bounds checking in an interesting way), and we implemented it in the silicon. A Unix for it never happened, unfortunately. https://dadhacker-125488.ingress-alpha.easywp.com/how-the-at...


Thanks for the correction. Your project is very interesting.

I think my mistake came from a manual I read in the old days that claimed the m68010 was the first one able to run proper Unix OSes, because it correctly implemented privilege levels. I somehow conflated that with the MMU.


That's a really interesting bit of history -- thanks for writing it all down and sharing!


The Sinclair QL used the 68008.


Actually, ARM1 was a much more efficient design for the masses.

The Motorola 68000 oddly had 68,000 transistors, while ARM1 had 25,000. Both had a 24-bit address bus.

It was introduced much later (1985, versus 1979 for 68000) despite using fewer transistors.


FWIW, the 68000 transistors number is just marketing. I don't remember the exact number, but a full netlist has been produced from tracing the 68000 die and IIRC the actual transistor count is at least 20K less than that. Still a lot more than the ARM1 of course. I would guess that 68000 machine code is a fair bit denser than 32-bit ARM though which was important in the 80s when memory was still very expensive.


Interesting, Google had the 68,000 count on several sites. Were they trying to ramp up the transistor count?

ARM Thumb and Super-H were supposed to address the code density problem. I see the smallest ARM binary at busybox.net is for Thumb.


Brings back memories of playing around in the shell on an Atari TT at CeBIT '91 or so. Having had an apprenticeship in a company producing their own machines (Norsk Data) and porting System V, as well as having used Atari GEM graphical shells, made me want to avoid DOS or even CP/M for personal use and especially development at the time ;) I then used AIX and Interactive Systems professionally until Linux and the BSDs came about. There was also a short period in 1992 or so when I had the option to use A/UX (Apple's System V port at the time) as a file server, though NetWare 2 and 3 were cheaper and better suited for DOS/Windows networking.


> until Linux and the BSDs came about.

I assume you mean BSD/386 and successors? BSD itself was first released about eight years before AIX. AIX even has bits of 4.3BSD in it.


Sure; that was just the order I encountered these. I was also surprised that Linux's LVM was basically a clone of AIX's (whereas FreeBSD's vinum was a clone of Veritas).


IIRC, Linux LVM was based on HP-UX's, and EVMS was based on AIX's LVM.


Ooh Norsk Data. How was it?


> How was it?

Good times! Ergonomic terminals, hammocks, happy hour on Fridays, smoking at the desk, SINTRAN ...

To clarify, this wasn't in Oslo but in Kiel, North Germany, where they had a co-op with Christian-Albrecht-Uni for compilers (other than ND's own PLANC language), and also developed a system for public libraries.


LI-FI,,,

It's sad how the ND-100/500 (and 5000+) families have almost completely disappeared, including online material about them.

The IT department at my university was involved in NDIX development (BSD for ND-5000), I believe. This was a few years before my time so I didn't get first-hand exposure to that.

I do regret not holding on to one of the Compact ND-100/110s that we had around in the late 90s, or to any of the Tandberg terminals that we had huge numbers of.


Sounds great thanks for the insight :)


Not sure about the software, but the hardware was gorgeous. I have an eBay alert for the "Norsk Data" and "Nordata" strings.


Some are stored throughout Norway. If you're willing to maintain it well, you might find a donor in Norway. http://www.sintran.com


NTNU's computer museum has quite a number of Norsk Data machines, but they are unfortunately not accessible to the public.


There are a few collectors in Sweden too.


Here is a motherboard for sale https://www.finn.no/246230727


In the '80s and '90s, engineers knew how to write developer documentation - a lost skill: http://www.atariunix.com/docs/developers_guide.pdf


Documentation like that is written by specialist technical writers in cooperation with engineering.

Corporations found out that they can skimp on that and still get paid.


Yep. I haven't seen a technical writer position since the early 2000s, at an enterprise SaaS company (we called them "ASPs" back then).


It was the golden age of computing. Today’s best in class documentation (Stripe!?) doesn’t come close to average documentation in those days.


They still know. They're just not told/incentivized to do so.


This must have been quite a sight in 1991 on the 19" 1280x960 mono monitor.

(The ordinary Atari mono monitor for the ST/STe/Falcon was really nice. Some slightly unusual phosphor, I think, which meant a very nice slightly muted contrast ratio, and no discernible flicker despite being 72 Hz. Decent 640x400 resolution as well. But... it was absolutely tiny.)


I would have loved to have had this. As it was, I used MiNT and it gave me everything I needed (a preemptive multitasking OS bootable from hard drive with a POSIX-ish userspace). I think MiNT was possibly the most impressive single-developer project I have ever encountered.


I didn't realize Atari made a 68030 machine. It's too bad they didn't pivot to high end engineering/academic workstations to compete with Sun/SGI. They definitely had the engineering talent.


They tried at the end (with the TT/030) but it was too late. They folded two years later.

I remember a snippet about Unix on the TT030 in UnixWorld from around then: "Up from toyland." They weren't going to be taken seriously.

Atari's last two years of engineering were excellent, between the TT030, the Falcon030, and the last versions of TOS after they hired Eric R. Smith to fold his open-source MiNT project into the official OS.

But at that point in time nothing could compete with x86 and the 68k architecture was end of life. Even Apple had a rough time of it (after switching to PowerPC) and barely held on for the next 10 years.

EDIT: I should also mention the Atari Transputer Workstation project around this time, which was a multiprocessor Transputer plus some pieces from the Mega ST attached as a controlling terminal. Another attempt to get into the higher-end research and workstation market. It didn't really sell, though.


Apple, Atari, and Commodore had all suffered under bad management for years by then. I don't know if they could have stayed relevant, but management not being the best harmed them. Apple had just enough with the Mac to not die.


NeXT also ran on 68k at the time.


And was also trying to get off of it before they stopped making hardware altogether. Motorola fumbled the ball by declaring 68k pretty much over, pushing their doomed 88k arch, then killing that and moving to PowerPC just a couple of years later.


IIRC, Motorola was never able to put an 88K CPU, FPU and MMU in a single package. They were also unwilling/unable to make it inexpensive enough.

A sensible Motorola would have made the price target of the low-end 88K the same unit price of a 68030.


You might like to search a USENET archive for old posts in comp.arch. I'm fairly sure that the 88120 did combine everything in a single package.


That was the 88110. The 88120 never saw the light of day.


The 88120 wasn't sold but apparently it did work fine before the project was cancelled.

The architect of it regularly posts to comp.arch.


My modern NeXT lament is that we had an extraordinary machine that ran Unix with a PostScript-based windowing environment, and some rather remarkable applications, written in a "slow" C language, on a 25 MHz '040 with 400 MB of disk and 20 MB of RAM.

Meanwhile, getting Linux to run on a R Pi is a major endeavor.

I don't know how light you can get a Unix with, I guess, X running on it today.


> Meanwhile, getting Linux to run on a R Pi is a major endeavor.

I don't really think downloading a disk image and copying it to a microSD qualifies as a major endeavor.


Considering that a Pi CPU still runs about 50 times as fast and has 100 times more memory than a NeXT had, whatever Unix you get running on it doesn't have to be "light" by OP's measure.

With a 25 MHz CPU and 16 MB of RAM, every cycle and byte has to be accounted for.


OK, but Linux is the default OS for the RPis. They spin their own distro based on Debian.


My first Linux box was a 386SX with 3 megs of RAM. This was in the 0.99.x kernel days. I later upgraded to a 486/100 with 16 megs. Linux (Slackware, kernel 1.0!) ran like lightning on that thing, including X and an early browser like Netscape. I would often have over a dozen users logged in remotely (telnet...) Things are incredibly bloated today.


Love your story! The R Pi seems OK these days, though. Just loaded Armbian on an SD card and away it went. Decent performance, even.


Were those not mostly lightweight MOTIF apps? RPi Linux distros running heavyweight GUI desktops are the problem.


I did run NetBSD on my MIPS-based IBM z50 w/ 16MB of RAM, complete with ethernet and X and twm.

But then, a 16 MB RISC Unix workstation wasn't really low-end.

Sadly, it wasn't possible to make it boot directly to BSD - it always needed a pass through Windows CE


I have run NetBSD on my Mac Quadra 950 with X, only ran a few xterms but it was fine.


They definitely had the engineering talent.

Atari, Commodore, Digital, and Digital Research were all lacking the management talent.

They had the nicest gear, but the suits botched it primetime. That's why I drink.

Now, more to the point of the parent: Sun and SGI had the management talent. They had a good run. But that run ended anyway. At some point they ran out of ideas to compete with "industry standard" hardware.

So, to come to the point of the parent: would a very successful Atari UNIX station based on the 68030 have made a difference?

NeXT used 68k processors as well. And Apple. Even with the Mac's success, the PowerPC eventually ran out of steam against Intel. Would an additional load of many, many Ataris have made a difference here?

What did ARM do differently than all of those mentioned above?


> Atari, Commodore, Digital, Digital Research, were lacking the management talent.

There isn't much they could have done. The good-enough x86 PC steamroller would have crushed them anyway. When the cheap average PC you could buy had VGA and a Sound Blaster, these platforms quickly ran out of gas in the gaming space.

If, and that's a big if, both Commodore and Atari had managed to get cheap Unix (or Coherent) workstations out, at prices similar to PCs (which were generally more expensive), they could perhaps have carved themselves a niche as cheap Unix workstations.


ARM and its licensees stayed focused on embedded after walking away from the Acorn machines. Power usage became their focus. And so they were right there as pretty much the only good option when the portable and embedded market blew up.

So now, with that under its belt, it can return to the desktop.


IIRC, ARM was low-power from the start. I remember the story of a board with one of the first ARM CPUs, that powered up even though Vcc was not connected. It was working from leaked current from the other signals being fed into the processor.


In an alternative universe where IBM had succeeded in preventing the reverse engineering done by Compaq, they could very well have survived.

It was the PC clone market that killed them, more than management errors.


I had several STs, starting with the initial developer offer, but I never saw any of the 030 machines advertised for sale in Europe. Atari could have done a better job of marketing them.


Apart from Atari's proprietary Unix, the TT also runs Linux (https://imgur.com/a/gpvi3du) and NetBSD (https://twitter.com/nbtt030).


That's not as much fun. It's like installing Linux on a SPARCstation, an SGI, or an IBM RS/6000. It's possible, but just not as much fun as exploring the uniqueness of those machines.


Yeah, NetBSD is the same everywhere. Partition, configure, set up the network, pkgin, pkgsrc, repeat. CTWM today will run fast enough on almost any X-supported arch. It's really useful, and because of that, it's boring :).

It's the same with OpenBSD. Except for some WebKit browser, a DE with CWM and XTerm plus ksh will not differ at all from a PC with the same config.


What fun is to manage your AIX desktop if you can't run smitty?


This is very cool. Is it possible to find a remake of an m68k architecture machine to run at home?


If you mean to build yourself, yes, there are several.

A few examples...

https://rosco-m68k.com/

https://github.com/74hc595/68k-nano

https://shop.mcjohn.it/en/diy-kit/46-68k-mbc.html

https://www.kswichit.com/68k/68k.html

This one is a one-off but for me it is one of the most impressive:

https://www.ist-schlau.de/

It runs EmuTOS and 68K Enhanced BASIC.


If your question was for a machine to run Atari System V Unix on, though, I'm afraid the answer is: an Atari TT. None of the m68k machines (or FPGA emulations thereof) mentioned in the other comments will run it.

Perhaps (but I didn't test) Atari System V Unix would run under the Hatari emulator.


The Firebee (http://firebee.org/) is ColdFire-based, and ColdFire is pretty much m68k cleaned up for the new millennium and largely backwards compatible (some opcodes are different, but they can be translated in software, or code can be re-assembled without huge modifications).

I'm not sure of the state of the Linux port for it, but it runs Atari TOS and EmuTOS (a GPL rewrite) and the FreeMiNT extensions which turn TOS into a multitasking POSIX compliant system that runs most GNU-type utilities.


The MiSTer project does the 68000 on a DE10-Nano FPGA board.


Can it do a 68030? Probably yes, but I don't have one.


This one is ready made: http://www.apollo-core.com/v4.html


But not suitable for anything Unix or Unix related, because it has no MMU (and likely won't ever have an MMU).


Tangential, but there's also a port/recreation of Linux/Unix on the C64 that's fun to play with in an emulator called LUnix (it's not just a hacker tool :) ).

https://en.wikipedia.org/wiki/LUnix



