
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

Basically, x86 uses op caches and micro-ops, which reduce how often the instruction decoders run; the decoder itself doesn't draw significant power anyway; and ARM designs also use op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are, and both ISAs use them extensively.
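
To make the op-cache idea concrete, here's a toy sketch in C++ (not modeled on any real microarchitecture; the address and the micro-op strings are made up for illustration): decoded micro-ops are cached by instruction address, so a hot loop pays the decode cost only on its first iteration.

    // Toy micro-op cache: decode each instruction address once, then
    // serve hot loops straight from the cache so the decoder stays idle.
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <vector>

    using MicroOps = std::vector<std::string>;

    std::unordered_map<uint64_t, MicroOps> uop_cache;

    // The expensive step the cache avoids. A CISC instruction like
    // x86 "add [rax], rbx" cracks into load + add + store micro-ops.
    MicroOps decode(uint64_t /*addr*/) {
        return {"load tmp, [rax]", "add tmp, rbx", "store [rax], tmp"};
    }

    MicroOps fetch(uint64_t addr) {
        auto it = uop_cache.find(addr);
        if (it != uop_cache.end()) return it->second; // hit: no decode
        MicroOps uops = decode(addr);                 // miss: decode once
        uop_cache.emplace(addr, uops);
        return uops;
    }

    int main() {
        for (int i = 0; i < 3; ++i)  // hot loop re-executes one address
            fetch(0x401000);         // decoded once, cached thereafter
        std::cout << "cache entries: " << uop_cache.size() << "\n"; // 1
    }

Real op caches are fixed-size hardware structures indexed alongside fetch, but the win is the same: repeated instructions skip the decoders.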

If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, the likely conclusion is that teams using the winning ISA are designing their chips differently.

For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).

That same M1 also smoked an Intel i9.



ARM doesn't use micro-ops in the same way x86 does at all. And that's not the only difference; e.g., x86 mandates TSO (total store ordering), a stronger memory-ordering model than ARM requires.
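
To see where the memory model leaks into code: this standard C++ acquire/release message-passing sketch is portable, but compilers lower it differently per ISA. Because x86 is TSO, the release store and acquire load compile to plain MOVs; on ARMv8 they become STLR/LDAR (or ordinary accesses plus barriers), since ARM's model permits more reordering.

    // Message passing with acquire/release ordering.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                 // plain data, published via the flag
    std::atomic<bool> ready{false};

    void producer() {
        payload = 42;                                  // ordinary store
        ready.store(true, std::memory_order_release);  // x86: mov; ARMv8: stlr
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) // x86: mov; ARMv8: ldar
            ;                                          // spin until published
        assert(payload == 42);  // guaranteed: acquire pairs with release
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

Same source, different hardware cost: the stronger the ISA's default ordering, the less the compiler has to add.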

I'm not saying the skill of the design team makes zero difference, but it's ludicrous to say that the ISA makes no difference at all.

The claims about the M1 Ultra appear to be marketing nonsense:

https://www.reddit.com/r/MachineLearning/comments/tbj4lf/d_a...


> Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W

That's not true.



