I have to believe there are hardware engineers out there who know that locking people out of their devices is fundamentally bad, and so they leave those tweezer-based attacks in on purpose.
Then again, designing against physical attacks is genuinely difficult, so I guess there's no need to imagine a good-hearted conspiracy of conscientious hardware folks.
The fundamental operation in hardware is the digital signal: pulling a pin to a one or a zero, which is all the tweezer attack does. It's comparable to writing a byte of memory. Imagine how hard software security would be if your adversaries could write arbitrary data into your process: there's no ASLR, or even an MMU, to randomize trace layouts on a physical circuit board.
Well yes, but there's a difference between a signal being accessible on a PCB trace I can see with my eyes, versus being accessible only inside a 7nm silicon die.
There's a reason a lot of systems integrate the security processor onto the same piece of silicon whose state it's meant to protect.
The reason discrete TPMs exist is supposedly compliance with crypto standards and physical protection against key extraction, but they sort of miss the forest for the trees. What matters to users is the protection of their data, not the TPM's secrets, and discrete TPMs aren't very good at the former.
There’s some value in being able to lock a device against somebody who has physical control of it. It is nice, for instance, that stolen iPhones have reduced value.
But there’s a pretty big social harm in locking people out of their devices too, like the generation of tech-illiterate kids growing up who haven’t been allowed to break their computers thoroughly enough to learn anything about them.