This sounds like architecture astronautics: lots of things that sound good in theory, but no analysis of how useful they would actually be in practice, what the tradeoffs are against current designs, etc. Lots of wishlist features.
Fixed-size instructions make wide-issue machines possible; they don't have much other use. Variable-size instructions make wide issue nearly impossible, regardless of whether the size is easy to calculate or not. There's no compromise to be had here. (Whether you really want wide instructions is another can of worms.)
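To make the decode point concrete, here's a minimal C sketch (the toy 2-or-4-byte length rule is an assumption, not any real encoding): with fixed-size instructions every decode slot knows its start address up front, while with variable-size instructions each start depends on measuring all the previous ones.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy length rule (an assumption, not any real encoding): if the top bit
   of the first byte is set the instruction is 4 bytes, otherwise 2. */
static size_t insn_length(const uint8_t *p)
{
    return (p[0] & 0x80) ? 4u : 2u;
}

/* Fixed 4-byte instructions: instruction i starts at offset 4*i, so all n
   decode slots can be filled independently (parallel in hardware). */
static void decode_fixed(const uint8_t *block, uint32_t out[], int n)
{
    for (int i = 0; i < n; i++)
        memcpy(&out[i], block + 4 * (size_t)i, 4);
}

/* Variable-size instructions: the start of instruction i depends on the
   lengths of instructions 0..i-1, so locating n instructions is a serial
   chain even when each individual length is trivial to compute. */
static size_t find_starts(const uint8_t *block, const uint8_t *starts[], int n)
{
    size_t off = 0;
    for (int i = 0; i < n; i++) {
        starts[i] = block + off;
        off += insn_length(block + off);   /* depends on the previous iteration */
    }
    return off;
}
```

The second loop is exactly the dependency chain a wide front end has to break with predecode bits or speculative length guesses, which is why "the size is easy to compute" doesn't buy much by itself.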
32 registers (31+sp) is so old school that I'm wondering if the number isn't just random. Also, zeroing the unused part of a register has its share of problems.
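For what it's worth, here is one way a zeroing rule can bite, sketched in C under the assumption of an x86-64-style semantic where a 32-bit operation clears the upper half of the destination: if you were keeping two fields packed in one register, every narrow update now needs an explicit merge.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Two 32-bit fields packed into one 64-bit register-sized value. */
    uint64_t r = 0x1111111122222222ULL;

    /* Narrow add under a "zero the unused part" rule: the high field
       is wiped along with the update of the low field ... */
    uint64_t zeroed = (uint64_t)((uint32_t)r + 1u);

    /* ... so preserving it costs an extra full-register merge on every
       narrow operation. */
    uint64_t merged = (r & 0xFFFFFFFF00000000ULL) | ((uint32_t)r + 1u);

    printf("zeroing: %016llx  merging: %016llx\n",
           (unsigned long long)zeroed, (unsigned long long)merged);
    return 0;
}
```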
Taking ip out of the general-purpose register set has its share of problems, and it's not hard to know whether you are branching. It's a fix for a non-problem.
I think the 32-register requirement came from the original RISC studies, where they analyzed how many function arguments and temporaries were needed by typical programs of the time.
> "The instruction set should represent a suitable compromise between the RISC principle that enables fast decoding, and the CISC principle that makes more efficient use of code cache resources."
I doubt these have been the real issues behind instruction sets for at least 15 years.
I'm a fan of the Mill processor design; I'd love to see that architecture take off. It's a total rethink of the relationship between instruction sets and silicon.
I just love it when a designer takes an understood problem space and alters the rulebook to give himself advantages that his competition would never allow themselves to have. The Mill's design seems like that. It's totally bananas: each chip family produced will have randomly assigned opcodes for a tailored set of operations.
Beyond getting working chips, the Mill's biggest hurdle will be getting compiler toolchains to target its ISA. Such a huge gamble.
> "The ABI, object file format, etc. should be standardized as far as possible in order to allow the same code to be compatible with different operating systems and platforms. This would make it possible, for example, to use the same function libraries in different operating systems."
So what are we talking about here? Sounds like by standardising on a higher level of abstraction, you're free to make more changes at a lower level. What does that mean in practice? Do compilers target this standardised abstraction layer rather than machine code? How is this better than the JVM/CLR approach?
I was trying to answer a technical question; perhaps I misinterpreted it? Anyway, if you can get a new CPU architecture through at all, you can probably get new calling conventions at the same time. Make sure your reference C compiler generates compatible code on all platforms, or something.