
My point is that a sufficiently large user can probably saturate the 128 cores alone, at which point it's more worthwhile to do exactly that and turn these mitigations off: it removes a whole class of threats, and the mitigations tend to have a non-negligible performance impact, especially when freshly discovered, on chips that weren't designed to protect against them.
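For a Linux host, the inspection and opt-out described above look roughly like this. This is a sketch, not a recommendation: the sysfs path and the `mitigations=off` kernel parameter are documented kernel interfaces, but the GRUB file locations vary by distro, and turning mitigations off is only sane on a genuinely single-tenant machine.

```shell
# See which CPU-vulnerability mitigations the running kernel has applied:
grep . /sys/devices/system/cpu/vulnerabilities/*

# To disable them on a dedicated, single-tenant host, add the documented
# kernel parameter to the boot config (file location varies by distro):
#   GRUB_CMDLINE_LINUX="... mitigations=off"   # in /etc/default/grub
# then regenerate the boot config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```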


I very much agree with that. The reality is that cloud providers can replace entire machines with only a small latency blip in your application (or at least GCP can), so if you're buying 2-core VMs 64 times over to avoid losing more than a couple percent of capacity when a machine dies, you probably don't actually need to do that. You could get a dedicated 128-core machine, share it with no one, and your availability in that region/AZ probably wouldn't change much.
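The capacity tradeoff above is simple arithmetic; a back-of-the-envelope sketch with the numbers from the comment (128 cores, hypothetical host counts):

```python
def capacity_lost_if_one_host_dies(total_cores: int, machines: int) -> float:
    """Fraction of total capacity lost when a single host fails,
    assuming cores are spread evenly across the machines."""
    cores_per_machine = total_cores / machines
    return cores_per_machine / total_cores  # simplifies to 1 / machines

# 64 x 2-core VMs: one host failure costs about 1.6% of capacity.
print(capacity_lost_if_one_host_dies(128, 64))  # 0.015625
# 1 x 128-core machine: one host failure costs all of it until replacement.
print(capacity_lost_if_one_host_dies(128, 1))   # 1.0
```

The spread-out fleet bounds the loss from any single failure, which is exactly what the dedicated-machine approach gives up; the comment's point is that fast machine replacement makes that loss window short enough not to matter much.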

That said, machines are really monstrously huge these days, and it can be hard to put them to good use. You also miss out on cost savings such as burstable instances, which rely on someone else using the capacity during the 16 hours a day when you don't need it. It's a balance, but "just buy a computer" would be my starting point for most application deployments.
