As the article notes, DEC's OSes largely dodged the bullet by having good workarounds and in-kernel implementations for the missing instructions.
Open-source VAX OSes weren't so lucky. BSD's libm used EMOD pretty heavily (for modf() and the like), and this caused problems in unexpected places if you happened to be running on a newer machine that didn't have these instructions (stuff like: awk would crash!). So the OSes had to follow suit as well, at least for the instructions that libraries / compilers would emit (which fortunately excluded most uses of the G and H floating types). The documentation available at the time was okay but... imprecise.
The article also mentions EDIV, which led to the most hilarious bug explanation I have ever heard.
I worked on a VMS device driver in 1990. We had to resolve every single crash, because DEC support tracked the outcome of each crash dump file.
Our driver was crashing a VAX 9000 at Abbott Labs, a nightmare because it was a mainframe-class machine. In every crash dump I could show that the registers contained impossible values for the given code sequence, and it always traced back to code involving an EDIV instruction (used to get a remainder).
DEC decided the problem was "alpha particles penetrating the encapsulant". At first I thought they were joking, but they were serious. They replaced the water cooling module around the CPU. That didn't fix the problem, so they pushed it back to us.
After much back and forth, they realized it was a microcode bug in the EDIV instruction, and their microcode patch fixed it.
I wonder what VAX would be capable of today if it had the same level of resources as e.g. Intel. From what I've seen there are some super-powerful instructions, but the opcode map[1] is extremely irregular - even x86 has an octal structure to it - so superscalar decoding would be far more difficult. It's also not as compact an encoding as x86, so the code density would be lower; x86, despite being CISC, tends to make the most frequently used instructions short, e.g. ~1/4 of the 1-byte opcode map is register-register/register-memory ALU operations.
Does anyone know the inside story behind the Titan [1], and what ever became of it? Brian Reid showed me a room full of them when I visited him at DECWRL in Palo Alto by the greasy transformer vault [2], and they seemed vastly majestic and toasty warm. I bet they really kept that rancid grease hot!
But reading http://web.eece.maine.edu/~vweaver/papers/iccd09/iccd09_dens..., Alpha didn't have high code density. The caveat here is that they may have been better at hand-optimizing for CPUs they were familiar with, or where information is easier to Google. That would favor x86, for instance. Note, however, that they get far smaller code for PDP-11 and VAX than for Alpha.
Due to the way instruction fetch and dispatch worked on the first few generations of the Alpha, you had to put in lots of NOPs to get decent performance.
ctrl+f "FUNDAMENTAL PROBLEM" to find his writeup of the actual difficulties of fast VAX implementations, including ways that x86 is easier than VAX. The big whammy is string ops where the length operand can be indirectly addressed.
x86 string ops, which are a lot simpler, were slow until pretty recently.
The best thing I remember was PALcode: a nice cross between microcode and assembler. So many uses for that: accelerators, atomic operations, security primitives, and so on. I really wish x86 had that feature instead of SMM and the rest of that garbage.
Two years after I logged a bug in Apple's MPW C compiler, I received a developer CD with a release note that explained how to reproduce my bug but then said "don't do that".
Mac C compilers added "pascal" as a reserved word, which enabled one to link against Toolbox APIs that used the Pascal calling convention.
I wrote a C++ test tool for MacTCP in which a member function returned a pointer to a pascal C function, for use as a callback from the network driver.
My first attempt produced faulty machine code. While regressing the bug I found that increasing the lengths of the names of the member function or of its parameters would crash the compiler. Not the resulting binary - the compiler itself.
On the old Mac OS, that took down the entire machine.
He didn't really change careers. He built a 32-bit processor for Computervision fairly soon after leaving Data General and has done plenty of stuff since [1].
source: I wrote the EMOD implementation for OpenBSD/vax a long time ago; POLY had already been done by NetBSD. It's still there! http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/...