You're missing the point. Such "tricks" were essential to fit the required code into the necessary space. The same thing still happens (although rarely) when fitting required functionality into limited devices such as FPGAs and PICs.
You're not aiming for readable, maintainable code. You're trying to get the cheapest device, and then squeezing the essential into what little space you get. Such "tricks" as jumping into the middle of instructions are unavoidable.
Yeah, tricks like that are still done for maximum speed, for things like packing as much data as possible into structs/classes.
Like assuming (or enforcing) that items are allocated on 4-byte memory boundaries, so the low two bits of every pointer are always zero and you can mask them off and use them for storing flags.
Similarly, you can pre-allocate tree nodes (left and right children) in a contiguous array and store only a single pointer to the left node; the right node is then at the address of the left node + 1, so you save 8 bytes (one pointer) in the struct/class.
This lets more items fit into a processor cache line.
> They had enough spare bytes to not have to pull
> that trick. It'd have been better to clean up some
> algorithms somewhere or reuse some code.
I assume from your clear and unequivocal statement that you have first-hand knowledge of that. I'll bow to your better information.
> Note: I've worked with very memory constrained
> systems in assembly before
As have I.
> (actually hand assembled code on paper as well).
That's how I started, although largely I ended up writing directly in machine code since it was quicker, and after I found my third bug in the assembler I had occasional access to, I gave up on writing mnemonics at all. It was only much later, when I had other people to communicate with, that I went back to writing assembly.
And I remember writing code that really, really needed to do things like jumping into the middle of instructions.
Thank you - useful. Perhaps they really didn't need to use that specific trick on that specific occasion. It might be interesting for someone to comb through and find out whether they used it lots of times or just a few. Perhaps they simply got into the mindset and used it because they could, in anticipation of needing it.
Certainly that sort of thing is easier to do first time round, rather than having to go round again to find bytes when you discover later that you need them. It becomes a habit, much like these days it's a habit to lay out code clearly, name variables carefully, and comment tricky code.
A possibility and a fair one! I might write something to scan through looking for jumps that jump inside opcodes (added to list of projects which I will do one day).
"They had enough spare bytes to not have to pull that trick"
Hogwash. When you're writing an interpreter for a low-specced system you have no truly spare bytes, because every one you take is one that programs running on your interpreter cannot have.
That doesn't make sense to me at all. Sure, pick a larger part, pay more money, and don't squeeze your code. It's a decision to be made, but don't label the decision for a smaller part as "wrong" until you know the trade-offs being made.
When you talk about huge volumes, that cent or two for the part can make a significant difference. Really, it can. I've been there. I've been part of a team that commissioned silicon and had to contribute to the decision about how much programming space to have. Compromises like this can make a very large difference to the economics.
I think most people these days tend to just play suppliers off against each other until the price is right, if the volume is enough. Farnell bend 10% instantly if you mention RS, for example. *choke* *innocent whistle*