This touches on what I think is one of the more interesting technological problems we have to worry about right now. Full disclosure: much of the reason I think so is that I'm a PhD student doing research on hardware security.
Anyway, there has been a lot of interesting discussion on the topic. DARPA has run at least two programs aimed at this problem, IRIS and TRUST [1]. Both seemed more interested in tampering by third parties, perhaps because it's not in their best interest to accuse the people designing their ICs of attacking them.
In the long run, verifying the functionality and intentions of software and hardware is probably roughly the same problem, with no clear solution to either in the foreseeable future.
> In the long run, verifying the functionality and intentions of software and hardware is probably roughly the same problem
Both require trusting the source code (a programming-languages problem), as well as trusting the translator. In the case of software, the translator is an end-user-accessible compiler/interpreter, which is itself more software and thus recursively auditable.
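To make "recursively auditable" concrete: one published answer to the compiler-trust problem is David A. Wheeler's diverse double-compiling, where you regenerate the suspect compiler from its audited source through an independent compiler and compare the outputs. Here is a minimal sketch, assuming deterministic builds and cc-style command lines; the names (suspect-cc, trusted-cc, cc.c) are placeholders, not real tools:

    import hashlib
    import subprocess

    def digest(path):
        """Hash a binary so two builds can be compared bit for bit."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def build(compiler, source, output):
        """Compile `source` with `compiler` (assumes a cc-style interface)."""
        subprocess.run([compiler, source, "-o", output], check=True)

    # The suspect compiler regenerates itself from its own audited source.
    build("./suspect-cc", "cc.c", "self_build")

    # An independent, trusted compiler builds the same source (stage 1),
    # and that stage-1 binary rebuilds it again (stage 2). If builds are
    # deterministic and the suspect binary faithfully implements cc.c,
    # stage 2 must match the self-build bit for bit.
    build("./trusted-cc", "cc.c", "stage1")
    build("./stage1", "cc.c", "stage2")

    if digest("self_build") == digest("stage2"):
        print("OK: suspect binary is consistent with its source")
    else:
        print("MISMATCH: possible self-propagating backdoor, or a nondeterministic build")

The nice property is that trust bottoms out in the diversity of the second compiler rather than in any single binary.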
In the case of hardware, the translator is an entire institution, which can only be trusted if you have recourse against said institution. Since an individual end user (above all) can then never fully trust their hardware, it makes sense to draw a line in the sand and proceed from that assumption.
(And suuure, put a picture of a PIC16F84, the chip that started the DIY microcontroller revolution, at the top of an article on dodgy hardware...)
> This touches on what I think is one of the more interesting technological problems we have to worry about right now. Full disclosure: much of the reason I think so is that I'm a PhD student doing research on hardware security.
If you are a PhD student and aren't working on something you consider 'one of the more interesting ... problems', then you are doing it wrong. In my opinion. You seem to be doing it right.
> perhaps because it's not in their best interest to accuse the people designing their ICs of attacking them.
Would you agree that there is some value in having the capability to detect these attacks? Whether you trust the vendors or not, you need them to be aware that you can check their work, at least to encourage them to follow good security practices internally.
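To give a flavor of what "checking their work" could look like after fabrication: below is a toy sketch that diffs a netlist reverse-engineered from a delayered sample against the signed-off golden netlist, flagging inserted cells as trojan candidates. It assumes recovered cells can be matched to golden ones by name, which real flows only get after graph matching, and the one-cell-per-line format is invented for illustration:

    def load_netlist(path):
        """Parse a toy netlist, one cell per line: `<name> <type> <connections>`.
        Real designs ship as Verilog/EDIF netlists; this format is made up."""
        cells = {}
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                name, celltype, conns = line.split(maxsplit=2)
                cells[name] = (celltype, conns.strip())
        return cells

    golden = load_netlist("signed_off.netlist")   # what was handed to the fab
    imaged = load_netlist("recovered.netlist")    # recovered from the silicon

    added    = set(imaged) - set(golden)  # cells the fab inserted: trojan candidates
    removed  = set(golden) - set(imaged)  # cells dropped, or imaging errors
    modified = {n for n in set(golden) & set(imaged) if golden[n] != imaged[n]}

    for label, names in (("added", added), ("removed", removed), ("modified", modified)):
        for name in sorted(names):
            print(f"{label}: {name}")

Even this crude diff requires destructively imaging a sample per check, which is part of why the detection side of the problem is so hard.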
[1] http://www.wired.com/dangerroom/2011/08/problem-from-hell/