When someone says "AIs aren't really thinking" because AIs don't think like people do, what I hear is "Airplanes aren't really flying" because airplanes don't fly like birds do.
This really shows how imprecise a term 'thinking' is here. In this sense, any predictive probabilistic black-box model could be termed 'thinking', particularly when juxtaposed against something as concrete as flight, which we have modelled extremely accurately.
That depends: if you explain the rules of the game you're playing and give the dice a goal of winning, do they adjust the numbers they reveal according to those rules?
That's the fallacy of denying the antecedent. You are inferring from the fact that airplanes really fly that AIs really think, but that's not a logically valid inference.
Observing a common (potential) failure mode is not equivalent to asserting a logical inference. It is only a fallacy if you argue "P, therefore C", which GP is not (at least to my eye) doing.
Whenever someone paraphrases a folksy aphorism about airplanes and birds, or fish and submarines, I suppose I'm meant to rebut with a folksy aphorism like:
"A.I. and humans are as different as chalk and cheese."
As if aphorisms were a good way to think about this topic?