Exactly! This is why I removed these fundamental questions from my post: at the moment they have no clear answer, and they would only make an already complex landscape even more complicated. I believe that right now, whatever is happening inside LLMs, we need to focus on investigating the practical extent of their "reasoning" abilities. They are very different objects from human brains, yet they can perform certain limited tasks that, before LLMs, we thought were entirely in the domain of humans.
We know that LLMs are just very complex functions interpolating their inputs, but these functions are so convoluted that, in practice, they can solve problems that were completely out of reach for automated systems before LLMs. Whatever is happening inside these systems matters little for the ways they can or can't reshape our society.