Proposal for an ideal extensible instruction set (agner.org)
29 points by nkurz on Dec 28, 2015 | 13 comments


This sounds like architecture astronautics. Lots of things that sound good in theory, but there's no analysis of how useful they would actually be in practice, what actual tradeoffs there are with current designs, etc. Lots of wishlist features.


Most of the requirements described there are met by RISC-V: http://riscv.org


Agner already points out what he likes/dislikes about RISC-V: http://www.agner.org/optimize/blog/read.php?i=421#428


Hmm...

Fixed-size instructions make wide instruction machines possible; they don't have much use beyond that. Variable-size instructions make them nearly impossible, whether or not it's easy to calculate the size. There's no compromise to be had here. (Whether you really want wide instruction machines is another can of worms.)
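
To make the decode point concrete, a rough sketch in C of finding instruction boundaries in a fetch block (insn_length and the toy 2/4-byte encoding are just stand-ins, not any real ISA):

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for a real length decoder: pretend the low bit of the
     * first byte selects a 2- or 4-byte instruction. */
    static size_t insn_length(const uint8_t *p) {
        return (p[0] & 1) ? 4 : 2;
    }

    /* Fixed 4-byte encoding: the k-th instruction starts at 4*k,
     * so all slots can be located in parallel. */
    void starts_fixed(size_t starts[], size_t n) {
        for (size_t k = 0; k < n; k++)
            starts[k] = 4 * k;                 /* independent of other slots */
    }

    /* Variable-length encoding: each start depends on the previous
     * instruction's length, so the scan is inherently serial. */
    void starts_variable(const uint8_t *code, size_t starts[], size_t n) {
        size_t off = 0;
        for (size_t k = 0; k < n; k++) {       /* each step waits on the last */
            starts[k] = off;
            off += insn_length(code + off);
        }
    }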

32 registers (31+sp) is so old school that I'm wondering if the number isn't just random. Also, zeroing the unused part of a register has its share of problems.

Taking the IP out of the general register set has its own share of problems, and it's not hard to tell whether you are branching. It fixes a non-problem.

The article is not well thought out.


I think the 32 register requirement came from the original RISC studies where they analyzed how many function arguments and temporaries were needed by typical programs of the time.


The IP should not be visible. The IP is really an implementation detail, and making it architecturally visible can hinder future performance or extensibility.

For example, when you are implementing an OoO architecture, you have to deal with the possibility of writes to the IP. That saves neither power nor silicon real estate.
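
A rough sketch of the check this forces on the back end (register 15 as the PC is just the ARM32 convention, used here for illustration):

    /* If the program counter is an ordinary destination register, the
     * issue/retire logic must treat any instruction that writes it as a
     * potential branch, so every write-back carries this check. */
    enum { REG_PC = 15 };              /* ARM32-style numbering, for illustration */

    struct uop { int dest_reg; /* ... */ };

    static int writes_pc(const struct uop *u) {
        return u->dest_reg == REG_PC;  /* must redirect fetch if true */
    }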


> "The instruction set should represent a suitable compromise between the RISC principle that enables fast decoding, and the CISC principle that makes more efficient use of code cache resources"

I doubt these have been the real issues behind instruction sets for at least 15 years.


I'm a fan of the Mill processor design, I'd love to see that architecture take off - a total rethink of the relationship of instruction sets to silicon.

I just love it when a designer takes an understood problem space and alters the rulebook to give himself advantages that his competition would never allow themselves to have. The Mill's design seems like that. It's totally bananas: each chip family produced will have randomly assigned opcodes for a tailored set of operations.

Beyond getting working chips, the Mill's biggest hurdle will be getting compiler toolchains to target its ISA. Such a huge gamble.


> "The ABI, object file format, etc. should be standardized as far as possible in order to allow the same code to be compatible with different operating systems and platforms. This would make it possible, for example, to use the same function libraries in different operating systems."

So what are we talking about here? Sounds like by standardising on a higher level of abstraction, you're free to make more changes at a lower level. What does that mean in practice? Do compilers target this standardised abstraction layer rather than machine code? How is this better than the JVM/CLR approach?


He's talking about making the actual machine code standardized/compatible across OS's.


That is not a technical problem, though, but a social or economic one. How would you get Microsoft and Apple and Linux to use the same calling convention?
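
For concreteness, the same C function already gets its arguments passed differently on today's x86-64 systems (the function f below is just an illustration):

    /* System V AMD64 ABI (Linux, macOS):
     *   a -> rdi, b -> rsi, c -> rdx, d -> rcx, e -> r8
     * Microsoft x64 ABI (Windows):
     *   a -> rcx, b -> rdx, c -> r8, d -> r9, e -> stack,
     *   and the caller also reserves 32 bytes of "shadow space".
     * Same source, two incompatible machine-level contracts. */
    long f(long a, long b, long c, long d, long e) {
        return a + b + c + d + e;
    }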


I was trying to answer a technical question; perhaps I misinterpreted it? Anyway, if you can get a new CPU architecture through at all, you can probably get new calling conventions at the same time. Make sure your reference C compiler generates compatible code on all platforms, or something.


> making the actual machine code standardized/compatible across OS's

I can't think of any advantage to this. Compilers already abstract this away.



