> You appear to assume that current "AI" is able to "understand" and "think". What makes you so sure?
"To understand" and "to think" are two very different things. Understand means more or less to encode and compress effectively from a perspective of such a system - there's quite a bit of evidence that they do that.
As for "thinking" that is impossible for LLMs as thinking is an action - and LLMs aren't agents that can plan and take action.
Actually, AlphaGo and AlphaZero were agents capable of thinking - just in an extremely simplistic world, namely the games of Go, Shogi, and Chess. But they had a world model (fully known, since these are simple games) and a way to plan their actions by evaluating what impact each action would have on the world and how beneficial it would be for them.
It's just that extending such a system/agent to the real world is very hard.
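To make the "world model plus planning" idea concrete, here is a minimal sketch (all names hypothetical, and far simpler than the actual tree search AlphaZero uses): the agent simulates each available action with its world model, evaluates the resulting state, and picks the action whose predicted outcome it values most.

```python
# Minimal model-based planning sketch (hypothetical names, not AlphaZero's
# actual MCTS): simulate each action with a world model, score the
# predicted next state with a value function, pick the best action.

def plan(state, actions, world_model, value_fn):
    """Return the action whose predicted next state scores highest."""
    best_action, best_value = None, float("-inf")
    for action in actions:
        next_state = world_model(state, action)  # simulate: what would happen?
        value = value_fn(next_state)             # evaluate: how good is that for me?
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Toy example: the state is a number, actions shift it, the agent wants 10.
model = lambda s, a: s + a
value = lambda s: -abs(10 - s)
print(plan(0, [-1, 1, 5], model, value))  # -> 5
```

In a board game the world model is exact (the rules themselves), which is what makes this loop tractable; in the real world both the model and the value function are uncertain, which is exactly why extending the approach is so hard.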
"To understand" and "to think" are two very different things. Understand means more or less to encode and compress effectively from a perspective of such a system - there's quite a bit of evidence that they do that.
As for "thinking" that is impossible for LLMs as thinking is an action - and LLMs aren't agents that can plan and take action.
Actually AlphaGO and AlphaZero were agents capable of thinking - just in a extremly simplistic world which is the game of Go, Shogi or Chess. But they had a world model (which was fully known as it was for simple games) and a way to plan the action they will take by evaluating what impact will they have upon the world and how beneficial it will be for them.
Just that extending that system/agent to the real world is very hard.