There's something weird here that reminds me of A.I. and the whole idea of a Turing test. There's an assumption that you can gauge how complicated a behavior is and, from that, make a judgment about what produced it.
I act differently if I see you watching me, so I must have a theory of mind about the watcher. But then again, it could just be instinct and learned response (or a very simple algorithm) causing the behavior.
My skepticism about general-purpose AI is rooted in this somehow, in a way I can't quite describe. It's as if AI would have to be a magic trick so good that even the magician performing it couldn't understand it, because if they could, the trick wouldn't work.