
It looks like Intel was cutting corners to be faster than AMD, and now all those things are coming out. How much slower will all those processors be after multiple errata? 10%? 30%? 50%?

In a duopoly market there seems to be no real competition. And yes, I know that some (not all) of these bugs also happen to AMD.



> And yes, I know that some (not all) of these bugs also happen to AMD.

Some of these novel side-channel attacks even apply to completely unrelated architectures such as ARM [1] or RISC-V [2].

I think the problem is not (just) a lack of competition (although you're right that the x86 duopoly in desktops, laptops, and non-cloud servers brings its own serious issues, about which I've written and ranted more often than I can count [3]). Rather, modern CPUs and SoCs have simply become so utterly complex and so loaded with decades of backwards-compatibility baggage that no single human, or even a small team of the best experts you can bring together, can fully grasp every tiny bit of them.

[1] https://www.zdnet.com/article/arm-cpus-impacted-by-rare-side...

[2] https://www.sciencedirect.com/science/article/pii/S004579062...

[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


> Some of these novel side-channel attacks even apply to completely unrelated architectures such as ARM [1] or RISC-V [2].

Possible? Yes. But far less likely.

Complexity carries over and breeds bugs. RISC-V is an order of magnitude simpler than ARM64, which in turn is an order of magnitude simpler than x86.

And it achieves this without any real disadvantage [0], positioning itself as the better ISA.

0. https://news.ycombinator.com/item?id=38272318


So no saving grace from the ISA… humans have simply lost ground on CPU design, and I suspect the situation will worsen once AI enters the picture.


> and I suspect the situation will worsen once AI enters the picture.

For now, AI lacks the contextual depth. But if an AI can actually design a CPU from scratch (and not just rehash prior-art VHDL it has ... learned? somehow), we'll be at a Cambrian Explosion-style event anyway, and all we can do is stand on the sidelines, munch popcorn, and remember this tiny quote from Star Wars [1].

[1] https://www.youtube.com/watch?v=Xr9s6-tuppI


Once AI can create itself, we will most likely be redundant.


Not sure what other errata you're referring to, but this looks like an off-by-one in the microcode. I would expect the fix to have zero or minimal penalty.


It's not clear to me this fix will have any performance impact. I strongly suspect it will be negligible or zero.

This seems like a "simple" bug of the kind people write every day, not a deep architectural problem like Spectre, which also affected AMD (in roughly equal measure, if I recall correctly).
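
For contrast, the canonical Spectre v1 (bounds-check bypass) gadget looks roughly like the sketch below; the array names and sizes are illustrative, not taken from any particular codebase. The point is that the bug isn't in the C logic at all, it's in what the hardware does speculatively:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];   /* probe array: one cache line per possible byte value */

    uint8_t victim(size_t x) {
        /* The bounds check is logically correct, but the branch predictor can
         * be trained so the CPU speculatively executes the body with an
         * out-of-bounds x. The secret byte array1[x] then leaves a footprint
         * in the cache via the dependent load into array2, which an attacker
         * can recover later with a timing probe. */
        if (x < array1_size)
            return array2[array1[x] * 512];
        return 0;
    }

That's the sense in which it's architectural: the check is correct by the rules of the language, and compilers emit code like this everywhere.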


Parent commenter might be thinking of Meltdown, a related architectural bug that only bit Intel and IBM PPC. Everything with speculative execution has Spectre[0], but you only have Meltdown if you speculate across security boundaries.

The reason Meltdown has a more dramatic name than Spectre, despite being the same class of vulnerability, is that hardware privilege boundaries are the only defensible boundary against timing attacks. We already expect context switches to be expensive, so we're allowed to make them a little more expensive. It would be prohibitively expensive to avoid leaking timing from, say, one executable library to a block of JIT-compiled JavaScript code within the same browser content process.

[0] https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-d...
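
To make "leaking timing" concrete: the measurement side of these attacks is just a load whose latency tells you whether a cache line was already present. A minimal Flush+Reload-style sketch (x86 with GCC/Clang intrinsics; the function name and the hit/miss numbers are illustrative, and real attacks need far more care with serialization and noise):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    /* Time a single load of *addr in TSC cycles. On typical parts a cache hit
     * takes tens of cycles, a miss served from DRAM a few hundred. */
    static inline uint64_t probe(volatile uint8_t *addr) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                           /* the timed access */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void) {
        static uint8_t target;
        _mm_clflush(&target);                  /* start with the line evicted */
        uint64_t miss = probe(&target);        /* first access: cache miss */
        uint64_t hit  = probe(&target);        /* second access: cache hit */
        printf("miss: %llu cycles, hit: %llu cycles\n",
               (unsigned long long)miss, (unsigned long long)hit);
        return 0;
    }

Any code that can run that kind of measurement against state it shares with a victim can turn microarchitectural effects back into data, which is why the privilege boundary is the only place it's affordable to cut this off.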



