>The Aztec C compiler would have originally been distributed on floppy disks, and is very small by modern standards.
If I remember correctly, Aztec C was from Mark Williams. It was also the basis for the C compiler that came with Coherent OS.
But yes, things were far easier in the 80s, even on minis, which I worked on back then. These days development is just a series of Meetings, Agile Points, Scrums with maybe 2 hours of real work per week. Many people now tend to do their real work off-hours, a sad situation.
But I am looking for 1 more piece of hardware, then I can set up a DOS Machine to play with myself :)
>The Aztec compiler pre-dates ANSI C, and follows the archaic Kernighan & Ritchie syntax
I still do not like ANSI C standards after all these years.
While we're ranting don't forget developers in the 80's didn't sit in a noisy open space!
This was totally me ~15 years ago in a Scrum place with an open floor plan, doing most of my work after everyone left in the evening or on holidays because it was quiet and I could finally get some stuff done. I wrote big pieces of the product by myself.
My first C compiler was on a VAX. I did have some C compiler for my ZX Spectrum at some much later point but I don't remember doing much with it. Then a series of compilers for PCs. One random memory is some sort of REPL C, maybe Interactive-C or something? But pretty quickly it was Microsoft and Borland.
EDIT: On a more serious note re: meetings and such. Part of the difference is that working in much larger teams and projects becomes less efficient and requires more communication. Mature projects also require less of the builder thing and more of the maintainer thing. Software lasts a long time and inevitably maintenance becomes the work most people end up doing.
That sounds familiar so I looked it up[0]. I used Mark Williams C compiler on the Atari ST--eventually settling on Megamax C as it ran better on my small floppy-based machine.
Computing was a smaller world back then, the company was founded by Robert Swartz (father of Aaron Swartz) and named after his father William Mark Swartz.
> These days development is just a series of Meetings, Agile Points, Scrums with maybe 2 hours of real work per week.
Think about early video game development at large companies: One person (maybe two), six months. The company gave them room to practice their art, and the result sold a million copies.
These days everyone wants to cosplay Big Tech and worship abstraction layers, so you can't get all of the "stakeholders" in the same meeting in six months.
> Many people now tend to do their real work off-hours, a sad situation
It's the only practical approach when offices are designed to be distracting and productivity-killing, meetings are incessant, and groupware delivers a constant stream of interrupts which one is expected to monitor and respond to quickly - especially while working remotely.
Yup. Let's C was the cut-down version of MWC86, with no large-model support. This limited you to 64K code and 64K data. I got a copy of it one Christmas, but never used it much because of this limitation.
Back in the early 90's, before Linux took off, I ran Coherent. It came with incredible documentation, and I still remember the huge book with the shell on it.
And you're absolutely right about all the agile bull...
Personally I see the use of a cross-compiler and other dev tools on a bigger machine as even more retro than running them on an 8-bit micro, because it is what many software vendors did at the dawn of the microcomputer age.
Which is crazy fast, not to mention the only 8-bit architecture that got extended to 24-bit addressing in a sane way, with index registers. (Sorry, the 65816 sucks.)
> because it is what many software vendors did at the dawn of the microcomputer age.
They really didn't have any choice if they wanted to actually accomplish something.
The 8-bit machines of the day: CP/M running on 1-2 MHz 8080s or 2-4 MHz Z80s, with no memory to speak of, and glacial disk drives (with not a lot of capacity).
Go ahead and fire up a CP/M simulator that lets you change the clock rate, and dial it down to heritage levels (and even then it's not quite the same, the I/O is still too fast). Watch the clock tick by as you load the editor, load the file, make your changes, quit the editor, load the compiler, load the linker, test the program, then go back to the editor. There is friction here; the process just drrraaagggsss.
Turbo Pascal was usable for small programs. In-memory editor, compiling to memory, running from memory. Ziinng! Start writing things to disk, and you were back to square one. The best thing Turbo did was eliminate the linking step (at the cost of having to INCLUDE and recompile things every time).
It was a different time.
As someone who lived through that, we simply didn't know any better. Each generation got incrementally faster. There were few leaps in orders of magnitude.
But going back, whoo boy. Amazing anything got accomplished.
I got paid to develop some software for a teacher at my school and wrote it in some kind of basic (GWBASIC?) for my IBM PC AT clone, then found out she had a CP/M machine.
I had just read in Byte magazine that there was a good CP/M emulator that ran several times faster than any real CP/M system already in 1988 or so. So I used that software to run a CP/M environment and port the code to some BASIC variant there.
One of the first CP/M C's was BDS-C. Its claim to fame was that it compiled the source in-memory, so at least that part was nice and fast.
Certainly compared to Whitesmiths C for CP/M, and not just for the $700 price vs $150 for BDS-C. Whitesmiths was real, official C, direct from P. J. Plauger and V6 Unix. But each compile went through many, many, many passes on the poor floppy (including pseudo-assembly "A-Natural" for the 8080 that then translated to real assembly). Everybody complained that while very professional, it just took too long to go through the cycle.
Contemporary BYTE recommendation was to develop & iterate on BDS-C, then at the end re-compile on Whitesmiths to squeeze the best performance.
Which if you come to think of it, fits quite naturally 40 years later, in having combined JIT / AOT toolchains for most compiled languages.
Pity that only Visual C++ seems somehow close to Energize C++ and Visual Age for C++ v4, for that kind of incremental development experience. Live++ and ROOT aren't that widespread.
Also D has a similar approach, use dmd for development, ldc or gdc for release.
> They really didn't have any choice if they wanted to actually accomplish something.
microsoft in particular did their initial development of altair basic on harvard's pdp-10, using pdp-10 assemblers and debuggers; not only did they not have an altair, nobody had an altair
but an awful lot of pc software was written on pcs. woz wrote integer basic on the apple. bds c was assembled on cp/m. probably the majority of pc developers didn't have access to a bigger computer
those who did had an advantage, but sometimes it backfired. i remember the ucsd p-system as being unbearably slow, and i suspect that part of the reason was that its authors didn't empathize enough with their users' frustration to commit the efficiency hacks used by systems like eumel and basic-80
I gave up on Android side projects; however, during the time I still cared, I always had the impression the Android team must use gaming rigs maxed out for their development, because our desktops never seemed to be good enough. The irony is that I can comfortably do plain Java development on my old 2009 dual-core laptop with 16 GB.
Heh. My favorite was at my first job (late 1980s). A small Xenix system was used for development, with PCs as terminals. Since the Xenix box didn't have that much horsepower, our compiles involved shooting intermediate code down the serial line, having the compiler second pass run on the PC, and shoot the .OBJ back up the line for linking - yes, it was faster to do that. We had our own compilers, which were dreadfully obsolete by that time; for instance, the C compiler used UNIX V6-style arithmetic, with library functions to deal with calculations for types other than int.
That was the easy part; coming up with algorithms, data structures (what is that?), and the actual hardware and OS mappings was already beyond that book.
Until I got hold of Input magazines collection, I was pretty much stuck with my Timex 2068 manual, and its set of demos.
It is like learning a foreign language from a dictionary without anything else, until someone else shows the structure of the sentences.
So true. My first parser was for a little text adventure I wrote. I didn't know parsing was a thing people might write about, so I just muddled through some text manipulation in Atari BASIC until it worked. Even sorting was something you just had to figure out - I managed to come up with bubble sort on my own, presumably like many others did in those days, unaware that sorting was well-trodden territory.
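For the record, the thing so many of us reinvented is only a few lines. A minimal sketch in C (the original was Atari BASIC, of course, so this is just the idea, not the code from back then):

#include <stdio.h>

/* Classic bubble sort: repeatedly sweep the array, swapping adjacent
   out-of-order elements, so the largest remaining value "bubbles" to
   the end on each pass. */
static void bubble_sort(int *a, int n)
{
    int i, j, tmp;

    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}

int main(void)
{
    int v[] = { 5, 1, 4, 2, 3 };
    int i;

    bubble_sort(v, 5);
    for (i = 0; i < 5; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}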
> There's no obvious way to create a lower-case filename on CP/M
That's because the file system used by CP/M didn't allow lower case letters, at all. In this case "no obvious way" == "impossible".
The stack problems mentioned were real. The stack size was set at compile time, and there was no way to extend it. Plus the stack was not just used by your software, but also hardware interrupts and their functions.
> That's because the file system used by CP/M didn't allow lower case letters, at all.
Sure it did. Just start Microsoft BASIC on CP/M, type a program and save it as "hello". It will appear in the directory as "hello.BAS". Of course the CCP, the console command processor, will convert all file names to upper case, so you can neither type nor copy nor erase the file, but still it exists. You can even load it from MBASIC using LOAD.
You can have any characters you like in your CP/M file names. Sometimes I ended up with file names consisting of all blanks. I usually used a disk editor to deal with those, but there were lots of more convenient tools for the job.
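For anyone curious how you would do the same from C rather than MBASIC: the trick is just to fill in the FCB yourself and call BDOS directly, so the CCP never gets a chance to upper-case the name. A rough, untested sketch, assuming the library provides a bdos(function, parameter) helper the way Aztec C and BDS C did (check your compiler's manual for the exact name and return convention):

char fcb[36];                       /* drive byte, 8+3 name, then extent/record fields */

int make_lowercase_file()
{
    int i;

    for (i = 0; i < 36; i++)        /* clear extent, record count, etc. */
        fcb[i] = 0;
    for (i = 1; i <= 11; i++)       /* name and extension are space-padded */
        fcb[i] = ' ';

    fcb[0] = 0;                     /* 0 = default drive */
    fcb[1] = 'h'; fcb[2] = 'e'; fcb[3] = 'l';
    fcb[4] = 'l'; fcb[5] = 'o';                    /* "hello"            */
    fcb[9] = 't'; fcb[10] = 'x'; fcb[11] = 't';    /* ".txt", lower case */

    if (bdos(22, fcb) == 255)       /* BDOS function 22: Make File */
        return -1;                  /* directory full */
    bdos(16, fcb);                  /* BDOS function 16: Close File */
    return 0;
}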
Many of the complaints by the author are in the context of differences between C today and C back then, but back when CP/M was in common use, C compilers typically did not do much optimization, and K&R C was all there was.
I did not use Aztec C until a few years after I switched from CP/M to DOS, but I really liked it, and used it for several 68k bare-metal projects. I did poke around with BDS C on CP/M, but was immediately turned off by the lack of standard floating point support. (It did offer an odd BCD float library.)
> Many programmers, including myself, have gotten out of the habit of doing this on modern systems like Linux, because a malloc() call always succeeds, regardless how much memory is available.
Whatever I develop, I always make sure I test on NetBSD and OpenBSD. That keeps me honest, and those systems will find issues that Linux does not care about. I have found many issues by testing on those systems.
Also, ignoring malloc() returns is dangerous if you want to port your application to a UNIX like AIX.
Ignoring failures is a bad idea, but in many applications quitting on malloc() returning NULL is the most sensible thing to do. Many, but not all kinds of applications.
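For completeness, the check being discussed is only a couple of lines; a minimal sketch of the "quit on NULL" policy (sensible for many command-line tools, not for everything, as noted above):

#include <stdio.h>
#include <stdlib.h>

/* Allocate or die: fine for a batch tool, not for a long-running server. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);

    if (p == NULL) {
        fprintf(stderr, "out of memory (%lu bytes requested)\n",
                (unsigned long)n);
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    char *buf = xmalloc(4096);

    /* ... use buf ... */
    free(buf);
    return 0;
}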
This brought back some memories. Back in the day I couldn't afford the Aztec compiler (or it wouldn't fit onto my dual floppy 48K Heathkit H89, can't remember which). I ended up buying Leor Zolman's BDS C compiler. Just looked him up and it looks like he's still around!
Funny how most of the article reads (to me) "back in the days things were done in the obvious way, while now everything is weird". In other words I still program like in the 1980's. :)
CP/M programming is a lot of fun, even these days! I have a growing collection of retro machines running CP/M, my latest compiler has a CP/M backend, and I have even written a book about the design of a CP/M compiler: http://t3x.org/t3x/0/book.html
There has been an incredible amount of principles and practices added to our profession. Most of which are silly. Like Clean Code, which is just outright terrible in terms of causing CPU cache misses, as well as pushing you into vtable lookups for your class hierarchies. Most modern developers wouldn't know what an L1 cache is, though, so they don't think too much about the cost. What is worse is that people like Uncle Bob haven't actually worked in programming for several decades. Yet these are the people who teach modern programmers how they are supposed to write code.
I get it though: if what you're selling is "best practices", you're obviously going to overcomplicate things. You're likely also going to be very successful in marketing it to a profession where things are just... bad. I mean, in how many other branches of engineering is it considered natural that things just flat out fail as often as they do in IT? So it's easy to sell "best practices". Of course, after three decades of peddling various principles and strategies and so on, our business is in an even worse state than it was before.
In my country we've spent a literal metric fuck ton of money trying to replace some of the COBOL systems powering a lot of our most critical financial systems, from the core of our tax agency to banking. So far no one has been capable of doing it, despite various major contractors applying all sorts of "modern" strategies and tools.
The issue here is all these caches. Back in my day we didn't have caches and memory access time was deterministic - and expensive! We kept things in our 4-8 registers and we were happy with it. Programs larger than that weren't meant to be fast!
In reality those caches are going to be relatively meaningless except for short bursts of speed, because the 100,000 API calls and user/kernel switches that Windows does because of absurd abstractions, during the time slices your program isn't running, will destroy any cache coloring you attempt to code for.
When you're hitting L1 and L2 cache misses and going all the way out to DDR5 RAM, you're easily looking at running your code 100 times slower. Add to this that even a relatively simple loop over 1000 entities where you alter 4 properties on each will run 20 times slower with a class hierarchy compared to a flat structure, and you're quickly adding a large performance loss to what you're already paying for your 100,000 API calls.
Which might be worth it if something like Clean Code by the book was actually easier to read, maintain or do any form of team work on. I’d wager it’s not just your hardware which is going to have memory issues when you split that code out over 9 files and 3 projects though. It’s very likely that your own memory load struggles to cope, especially if it’s not code you’ve worked on recently.
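To make the "flat structure vs. class hierarchy" contrast concrete in plain C (function pointers chased through per-object allocations are the nearest C analogue of virtual dispatch; the 20x/100x figures above are the parent's, this just shows the two shapes of code):

#include <stdlib.h>

#define N 1000

/* Flat layout: one contiguous array, four fields touched per element,
   walked sequentially -- the cache-friendly version. */
struct entity { float x, y, vx, vy; };

static void update_flat(struct entity *e, int n, float dt)
{
    int i;
    for (i = 0; i < n; i++) {
        e[i].x += e[i].vx * dt;
        e[i].y += e[i].vy * dt;
    }
}

/* "Hierarchy" layout: each object is a separate heap allocation reached
   through a pointer table and updated through an indirect call, so every
   element costs a pointer chase plus a call that can't be inlined. */
struct object {
    void (*update)(struct object *self, float dt);
    float x, y, vx, vy;
};

static void object_update(struct object *self, float dt)
{
    self->x += self->vx * dt;
    self->y += self->vy * dt;
}

static void update_indirect(struct object **objs, int n, float dt)
{
    int i;
    for (i = 0; i < n; i++)
        objs[i]->update(objs[i], dt);
}

int main(void)
{
    struct entity *flat = calloc(N, sizeof *flat);
    struct object **objs = malloc(N * sizeof *objs);
    int i;

    for (i = 0; i < N; i++) {
        objs[i] = calloc(1, sizeof **objs);
        objs[i]->update = object_update;
    }

    update_flat(flat, N, 0.016f);
    update_indirect(objs, N, 0.016f);

    for (i = 0; i < N; i++)
        free(objs[i]);
    free(objs);
    free(flat);
    return 0;
}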
Yes and "software written by some competent dev" is a thing that stops scaling after an org reaches 100s or 1000s of devs.
Management then moves to a model of minimizing outlier behavior to reduce the risk of any one dev doing stupid things. However, this process tends to squeeze out the "some competent dev" types, as they are outliers on the positive side of the scale.
True, but maybe we should utilise principles which don’t suck. Things like onion architecture, SOLID, DRY and similar don’t appear to scale well considering software is still a mess. Because not only can’t your hardware find your functions and data, your developers can’t either.
It's a balancing act of course, but I think a major part of the issue with "best practices" is that there are no best practices for everything. Clean Code will work well for some things. If you're iterating through a list of a thousand objects it's one and a half times slower than a flat structure. If you were changing 4 properties in every element it might be 20 times less performant, though. So obviously this wouldn't be a good place to split your code out into four different files in 3 different projects. On the flip side, something like the single responsibility principle is completely solid for the most part.
Maybe if people like Uncle Bob didn't respond with "they misunderstood the principle" when faced with criticism, we might have some useful ways to work with code in large teams. I'd like to see someone do research which actually proves that the "modern" ways work as intended. As far as I'm aware, nobody has been able to prove that something like Clean Code actually works. You can really say the same thing for something like by-the-book SCRUM or any form of strategy. It's all a load of pseudoscience until we have evidence that it actually makes the difference it claims to make.
That being said. I don’t think it’s unreasonable to expect that developers know how a computer works.
> In my country we've spent a literal metric fuck ton of money trying to replace some of the COBOL systems powering a lot of our most critical financial systems, from the core of our tax agency to banking. So far no one has been capable of doing it, despite various major contractors applying all sorts of "modern" strategies and tools.
To be fair, it's possible that the current systems are just poorly documented. All the best strategies in the world are hopeless against poor documentation/spec work.
This is probably the case for some, but not for all of them. There have also been attempts at completely replacing something like our digital voter registration system for elections. Basically every adult is issued a voter card when we have elections; in the "olden days" you'd register at tables with people with big voter books, where they'd need time to find you, which could generate some rather large queues at rush hour. With the digital system every voting card has a barcode as well as the other info, and they can just scan your barcode, which is much faster.
Anyway, the old thing was built when the public owned their own IT. It's been something like 25 years since that was privatised, and nobody has been capable of replacing that old system, meaning that the old organisation, which is now a private company, has a monopoly. Which is against our law.
There is really not a lot to it technically. But apparently it’s proving impossible to replace the mainframe way of dealing with it through the CISC input terminal thing. I have no idea why, we’ve had some of our biggest IT suppliers taking turns at cracking it and nobody has been able to so far. I think the ultimate “must not break” deadline was 10 years ago.
This was the "easiest" example. A lot of the others have decades of stuff built on top of them. There is the COBOL core, and what has been hard for people to replace here is the bi-temporal data. But on top of the mainframe there is a myriad of different Java services (only Java if you're lucky) which turn the data into something which can be worked with and consumed by, well, HTTP.
Not even on 16-bit home machines, which is why any serious game, or winning demoscene entry, was written in assembly, until we reached the days of the 486 and DOS extenders.
As a read through the Amiga and PC literature of the time will show.
The problem here was much more the Unix wars, and a lack of confidence in BSD while it was under legal fire, than a lack of ability. The principal concern of Unix vendors in the early PC era was to maintain their market share in the mini and mainframe sectors rather than to grow into the consumer market. This spurred a rewrite of the BSD fragments tied to the legacy Unix codebase, which fully portabilized C, and the GCC downstream projects ended up benefiting the weird hobby OS Linux disproportionately; had it not had to be written from scratch, we might have ended up with a wonderful 286-BSD rather than a 486-BSD, which at the time was still not fully clean-room FOSS and unburdened.

This was a time when large customers of OS products were trying to squeeze all the performance juice out of their existing systems instead of looking at new paradigms. We have things like the full SAST and warn-free release of SunOS around this time, where Sun was focused on getting a rock-stable platform to then optimize around, rather than on producing products for the emerging micro market.

We can see the concept of a portable Unix system and C library as early as Xenix on the Apple Lisa in 1984. That's only three short years after the IBM collaboration for PC-DOS, showing that even a rookie, uncoordinated, and low-technical-skill team such as Microsoft (paraphrasing Dave Cutler, chief NT kernel lead -- Zachary, G. Pascal (2014). Showstopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft. Open Road Media. ISBN 978-1-4804-9484-8) could ship a portable Unix on micro hardware.
Xenix was my introduction to UNIX. I wouldn't claim it would win any performance prize, especially when considering graphics programming.
Also my first C book was "A Book on C", which had a type-in listing of a RatC dialect, like many others in those early 1980s, which were nothing more than a plain macro assembler without opcodes, for all practical purposes.
Compiler optimizations in those 8 and 16 bit compilers were what someone nowadays would do in an introduction to compilers, as the bare minimum, like constant propagation and peephole optimizations.
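To give an idea of just how "bare minimum" those passes were, here is a toy peephole pass in C over some hypothetical Z80-ish output: scan a couple of emitted instructions at a time and apply local, table-style rewrites. (The instruction strings and rules are illustrative, not taken from any particular compiler.)

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* pretend this is what the code generator just emitted */
    static const char *in[] = {
        "LD A,0", "PUSH HL", "POP HL", "ADD A,B", "LD A,0", "RET"
    };
    const char *out[6];
    int i, n = 0, count = 6;

    for (i = 0; i < count; i++) {
        /* rule 1: PUSH r / POP r of the same register cancels out */
        if (i + 1 < count &&
            strcmp(in[i], "PUSH HL") == 0 &&
            strcmp(in[i + 1], "POP HL") == 0) {
            i++;                    /* skip both instructions */
            continue;
        }
        /* rule 2: LD A,0 (2 bytes) -> XOR A (1 byte); only valid when the
           flags are dead, which a real pass would have to check */
        if (strcmp(in[i], "LD A,0") == 0) {
            out[n++] = "XOR A";
            continue;
        }
        out[n++] = in[i];
    }

    for (i = 0; i < n; i++)
        printf("%s\n", out[i]);
    return 0;
}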
Just like on MS-DOS side, I did plenty of stuff on Turbo BASIC, Turbo Pascal, Turbo C (quickly replaced by Turbo C++), and Clipper, until Windows 3.x and OS/2 came to be.
Small utilities, or business applications, without big resources demands.
By then we were already in Windows 95 and Windows NT territory, with OS/2 still waving on the side, and game developers being shown the door to WinG, DirectX's precursor.
As I experienced that era, C wasn't really a practical language choice on 8-bit systems. OK, yes, you could get a C compiler, but it would typically need overlays and hence be very slow. Assembler was pretty much where it was at on that generation of systems, or special-purpose languages such as BASIC and PL/M.
C worked ok on a pdp-11/45, but that had 256K of memory and 10s of MB of fixed disk. That level of hardware didn't appear for micro systems until the 68k generation, or I suppose IBM PC, but I don't remember the PC being too important in C coding circles until the 386, much later.
Yeah, that was indeed the case, while I did some C and C++ even on MS-DOS, it was Assembly, Turbo BASIC, Turbo Pascal and Clipper where I spent most of my time.
Even during my early days coding for Windows 3.x, I was doing Turbo Pascal for Windows, before eventually changing into Turbo C++ for Windows, as writing binding units for Win16 APIs, beyond what Borland provided, was getting tiresome, and both had OWL anyway.
As other commenters have said, C didn't actually generate fast programs for 8-bit processors, or even 16-bit processors for a long time. C is a poor fit for most of them, so assembly language was the only way to go.
A contemporary source is the opinionated "DTACK Grounded" newsletter from 1981-1985. http://www.easy68k.com/paulrsm/dg/ Hal Hardenbergh raved about the fast 68000 chip and its wonderfully easy assembly, but lamented that everyone switched to "portable" Pascal and C to write 16-bit programs, so they seemed even slower than 8-bit ones. His favorite example was a direct comparison: Lotus 1-2-3, written in 8088 assembly, vs Context MBA with the same features but written in Pascal for portability. 1-2-3 was MUCH faster than Context on the PC, and no one remembers Context today. Or the $16,000 Unix-based AT&T workstation whose floating-point benchmarks are beaten by a $69 VIC-20. (Obviously due to the C-written runtime, which even followed the C standard of promoting all single precision calculations to double, so single was no faster!)
His opinion of C was "slightly-disguised PDP/11 assembly". Not too bad for the 68000, but a terrible fit for the 8088 or Z80.
I did a shedload of programming in CP/M back in the 80s, and frankly I'd rather do it in Z80 assembler (assuming we were targeting Z80-based systems) than the rather poor compilers (not just C compilers) that were available. Using a compiler/linker on a floppy-based CP/M machine was quite a pain, as the compiler took up a lot more space than an assembler, and was typically much slower.
I would be more interested to see how modern techniques could improve the then-state of the art.
A lot of the modern stack is layers of abstraction, which probably wouldn't be appropriate for such limited machines, but maybe superoptimizers and so on, and just more modern algorithms, etc, could help show what's really possible on these old machines. Sort of retro demoscene, but for useful apps.
I remember using Aztec after using Software Toolworks C for a few years. It was incredibly advanced in terms of standard C at the time. It was the first time I could just type code from "The C Programming Language" in and it would work unchanged.
It's been a lot of years, but as I recall, the raw I/O devices had one set of names, and the logical devices had another. So things like STAT PUN:=PTP: (if I remembered the syntax correctly) would set the logical "punch" device to be the physical paper tape punch, which was the default. I may also be confusing CP/M I/O redirection syntax (which only worked if your BIOS supported it), with DEC RT-11 syntax. It has been over 40 years since I have used either one.
Although idiomatic, that is perhaps slightly confusing because K&R 2nd ed uses the modern way of specifying parameters. I would prefer to say "pre-ANSI C" or something of that kind.
We would be rewriting history if we changed that now. It has been referred to as K&R-style C since the ANSI standard. The second edition of The C Programming Language was ANSI. My copy of the second edition has "based on the draft-proposed ANSI C" on the cover, but later ones just have "ANSI C". I think mine is almost identical to the ANSI version.
Every copy of K&R that uses ANSI has ANSI written somewhere on the cover. I've seen the first edition, and the content is pretty similar if not identical, save for the ANSI changes. But it is all in the K&R style.
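For readers who never saw the old style, the difference being discussed is purely in how parameters are declared; here is the same function both ways (just the syntax, nothing Aztec-specific):

/* K&R (pre-ANSI) style: parameter types are declared between the
   parameter list and the body, and with no prototypes the compiler
   cannot check or convert arguments at call sites (float arguments,
   for instance, are silently promoted to double). */
double scale(x, factor)
double x;
int factor;
{
    return x * factor;
}

/* ANSI style: the prototype carries the types, so calls are checked
   and arguments converted as needed. */
double scale_ansi(double x, int factor)
{
    return x * factor;
}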
minor 1000× error: 'CP/M systems rarely had more than 64Mb of RAM' should read 'CP/M systems rarely had more than 64 kibibytes of RAM' (because memory addresses were 16 bits and there wasn't much demand for bank-switching in cp/m's heyday, though later 8-bit machines like the nes and the msx did use bank-switching extensively)
(disclaimer, i never programmed in c on cp/m, and although i used to use cp/m daily, i haven't used it for about 35 years)
he's using aztec c, but anyone who's considering this needs to know that aztec c isn't under a free-software license. bds c is a properly open-source alternative which seemed to be more popular at the time (though it wasn't open source then)
> This compiler is both the MS-DOS cross-compiler and the native mode CP/M 80 Aztec CZ80 Version 1.06d (C) Copyright Manx Software Systems, Inc. and also includes the earlier Aztec CZ80 Version 1.05 for native mode CP/M 80.
I cannot provide you with a legally licenced copy.
> I herewith grant you a non-exclusive conditional licence to use any and all of my work included with this compiler for whatever use you deem fit, provided you do not take credit for my work, and that you leave my copyright notices intact in all of it.
> I believe everything I have written to be correct. Regardless, I, Bill Buckels...
but https://en.wikipedia.org/wiki/Aztec_C explains that manx software 'was started by Harry Suckow, with partners Thomas Fenwick, and James Goodnow II, the two principal developers (...) Suckow is still the copyright holder for Aztec C.'
so it's not just that the source code has been lost; the licensing situation is basically 'don't ask, don't tell'
bds c comes with some integration with an open-source (?) cp/m text editor whose name i forget, so you can quickly jump to compiler errors even though you don't have enough ram to have both the compiler and the editor in memory at once. other ides for cp/m such as turbo pascal and the f83 forth system do manage this. f83 also has multithreading, virtual memory, and 'go to definition' but it's even more untyped than k&r c
bds c is not quite a subset of k&r c, and i doubt boone's claim that aztec c is a strict subset of k&r c as implemented by gcc
a thing that might not be apparent if you're using a modernized system is how constraining floppy disks are. the data transfer rate was about 2 kilobytes per second, the drive was obtrusively loud, and the total disk capacity was typically 90 kilobytes (up to over a megabyte for some 8-inchers). this means that if a person needed data from the disk, such as wordstar's printing overlay, you had to request it and then wait for the disk to find it. so it wasn't a good idea to do this for no user-apparent reason
with respect to
int elems[5][300];
...
int i, j;
for (i = 0; i < m; i++)
{
    for (j = 0; j < n; j++)
    {
        int elem = elems[i][j];
        ... process the value ...
    }
}
if i wanted efficiency on a compiler that didn't do the strength-reduction for me, i would write it as
int elems[5][300];
...
int i, *p, *end, elem;
for (i = 0; i < m; i++) {
    end = elems[i] + n;
    for (p = elems[i]; p != end; p++) {
        elem = *p;
        ... process the value ...
    }
}
this avoids any multiplications in the inner loop while obscuring the structure of the program less than boone's version
cp/m machines are interesting to me as being a good approximation of the weakest computers on which self-hosted development is tolerable. as boone points out, you don't have valgrind, you don't have type-checking for subroutine arguments (in k&r c; you do in pascal), the cpu is slow, the fcb interface is bletcherous, and, as i said, floppy disks are very limited; but the machine is big enough and fast enough to support high-level languages, a filesystem, and full-screen tuis like wordstar, supercalc, turbo pascal, the ucsd p-system, etc.
(second disclaimer: i say 'tolerable' but i also wrote a c program in ed on my cellphone last night; your liver may vary)
on the other hand, if you want to develop on a (logically) small computer, there are many interesting logically small computers available today, including the popular and easy-to-use atmega328p; the astounding rp2350; the popular and astonishing arm stm32f103c8t6 (and its improved chinese clones such as the gd32f103); the ultra-low-power ambiq apollo3; the 1.5¢ cy8c4045fni-ds400t, a 48-megahertz arm with 32 kibibytes of flash and 4 kibibytes of sram; and the tiny and simple 1.8¢ pic12f-like ny8a051h. the avr and arm instruction sets are much nicer than the z80 (though the ny8a051h isn't), and the hardware is vastly cheaper, lower power, physically smaller, and faster. and flash memory is also vastly cheaper, lower power, physically smaller, and faster than a floppy disk
F83 is an astonishing system. If you want to see the remnants of just a boatload of work, dig deep into the F83 system.
F83 is a top drawer Forth system. It implements the Forth 83 standard, and it’s in the public domain.
It runs on CP/M and MS-DOS.
It has the editor, a single step debugger, virtual memory, multi tasking, an assembler, full source code, and it is self hosting (on top of its host OS) via its meta-compiler. But you could readily port it to a system with no real OS, needing little more than console I/O and some kind of block device.
It's a hybrid block-file based system. Historically you would just hand Forth entire block volumes, which it would then manage directly. Here, you create pre-allocated files that are then used in the customary Forth block style. It did not use text files for source code. It also has shadow screens for documentation. Shadow screens are where you divide the block volume in two: the first half is source code, and the second is documentation. So if you have a 200-block volume, then screen 4 is code and screen 104 is documentation.
If you want to see the source code of a word, just type VIEW <word> and it will show you the screen that defines the word. The compiler embeds the source screen in each definition. A simple command will bring up the shadow screen to show the documentation.
It also has a word decompiler, and a threaded dictionary for better search performance. It has hundreds of words.
The real trick is imagining the work that went into this system. The bootstrapping process that converted this from a likely hand keyed, assembly based FIG Forth to what became F83. Starting with a pure, block screen based system to one that works with a file system, shifting code around on disk with little more than offsets and a copy routine that’s little more than a block move.
On a slow, 2 floppy 8080, where resetting the system was the routine way to get back to a known state. Using a powerful, yet very dangerous language, rife with traps. It’s like doing self surgery with a mirror.
The holes they must have blasted in the floor trying to avoid shooting themselves in the foot. Repeatedly. I’ve spent hundreds of hours studying this system, it’s really an amazing effort.
Forth! I was programming in those days myself (though only a little bit on CP/M), and was getting surprised that many people here seemed to think the only alternatives were C or assembly. I learned to program in Basic, then learned some assembly, and finally discovered Forth circa 1984. It was perfectly suited to developing on 8-bit machines -- about as powerful as C, trivial to compile, easy to drop into assembly if you needed to optimize a low-level function.
If I remember correctly, I didn't learn C until 1989, when I was working on 16-bit machines...
i agree that forth is a pretty bug-prone language. it's only about as low-level as c (less type-checking, but more metaprogramming) but far more bug-prone. more so than assembly i think. but i haven't written a lot of forth; i wrote this game in forth last weekend: https://asciinema.org/a/672405
(i might actually try porting it to f83)
incidentally f83 also apparently ran on the 68000, but generally with a 32k limit
tom zimmer's f-pc was an ms-dos forth with all of f83's features but able to use the full 640 kibibytes of ram, which f83 couldn't
one thing that 8080 floppy systems excelled at was resetting. you could reboot cp/m in a second or two
PL/M is a less transferable skill and a 'dead' language. I think Gary Kildall promoted PL/M, but honestly C is the best for portability and popularity, followed by Forth and Pascal.
Yeah C does not work well on these odd 8-bit ISAs. Pascal, basic, and PL/M (and fortran?) seem to have been way more common and Pascal environments on these were really on the edge of what the contemporary hardware could handle.
I don't ever recall seeing PL/M compilers advertised by anyone back in the day. I have a feeling that the few that existed were offered at "meet our sales guy at the golf course" pricing.
PL/M was developed by Gary Kildall for Intel, and the only compilers I've used ran under Intel's ISIS family of operating systems, originally on Intel Intellec MDS hardware, and only much later under first-party ISIS emulation on MS-DOS.
AFAIK, Intel never marketed its first-party development systems to hobbyists, so you're probably in the ballpark with "golf course" pricing.
My take is that it was the other way around. In its own strange way C was portable to machines with unusual word sizes, like the DEC PDP-10 with 36-bit words. I used C on the Z-80 under CP/M and on the 6809 with Microware's OS-9.
In the 1980s there were books on FORTRAN, COBOL and PASCAL. I know compilers for the first two existed for micros but I never saw them; these were mainly on minicomputers and mainframes, and I didn't touch them until I was using 32-bit machines in college.
There were academics who saw the popularity of BASIC as a crisis and unsuccessfully tried to push alternatives like PASCAL and LOGO, the first of which was an unmitigated disaster because ISO Pascal gave you only what you needed to work leetcode problems. Even BASIC was better for "systems programming", because at least you had PEEK and POKE, though neither language would let you hook interrupts.
Early PASCALs for micros were also based on the atrociously slow UCSD Pascal. Towards the end of the 1980s there was the excellent Turbo Pascal for the 8086, which did what NiklausWirthDont, and I thought it was better than C, but I switched to C because it was portable to 32-bit machines.
I'd also contrast chips like the Z-80 and 6809, which had enough registers and addressing modes to compile code for, with others like the 6502, where you are likely to resort to virtual machine techniques right away, see
I saw plenty of spammy books on microcomputers in the late 1970s and early 1980s that seemed to copy press releases from vendors, and many of these said a lot about PL/M being a big deal, although I never saw a compiler or source code, or knew anybody who coded in it.
My own experience with Turbo Pascal started with (I think) version 4, when I got an 80286 machine in 1987. In that time frame Borland was coming out with a new version every year that radically improved the language: it got OO functionality in 5.5, inline assembly in 6, etc. I remember replacing many of the stdlib functions such as move and copy with ones that were twice as fast because they used 16-bit instructions that were faster on the 80286. With the IDE and interactive debugger it was one of my favorite programming environments ever.
Having been a Borland compilers fanboy since Turbo BASIC, through all the C++ and TP versions for MS-DOS, a few on Windows 3.x, Borland C++ 4.5, and the first editions of Delphi and C++ Builder, until their management messed up, it kind of saddens me the zig-zag turns we have had since 2000 with VMs and interpreted languages, until we finally got the renaissance of AOT toolchains.
If only producing a binary with Native AOT were as easy as with a plain Turbo Pascal 6 project using Turbo Vision.
Microsoft had FORTRAN and COBOL compilers for CP/M. I have used them on both Intel 8080 and Zilog Z80.
The MS FORTRAN compiler was decent enough. It could be used to make programs that were much faster than those using the Microsoft BASIC interpreter.
Even if you preferred to write some program in assembly, if that program needed to do some numeric computations it was convenient to use the MS Fortran run-time library, which contained most of the Fortran implementation work, because the Fortran compiler generated machine code which consisted mostly of invocations of the functions from the run-time library.
However, for that you had to reverse-engineer the library first, because it was not documented by Microsoft. Nevertheless, reverse-engineering CP/M applications was very easy, because an experienced programmer could read a hexadecimal dump almost as easily as assembly language source code. Microsoft used a few code obfuscation tricks, but those could not be very effective in such small programs.
> There were academics who saw the popularity of BASIC as a crisis...
In particular, a pair of academics at the CWI had their own alternative, ABC, and got help in implementing it from a young Guido van Rossum; this experience presumably came in handy when he created his own scripting language...
https://github.com/skx/cpmulator/
Alongside that there is a collection of CP/M binaries, including the Aztec C compiler:
https://github.com/skx/cpm-dist/
So you can easily have a stab at compiling code. I added a simple file-manager, in C, along with other sources, to give a useful demo. (Of course I spend more time writing code in Z80 assembler or Turbo Pascal than in C.)
The author has a followup post here for those interested:
* Getting back into C programming for CP/M -- part 2 * https://kevinboone.me/cpm-c2.html