A lot of it is being able to make reasonable decisions under novel and incomplete information, and being able to reflect on the outcomes and refine from there.
LLMs' huge knowledge base covers for their inability to reason under incomplete information, but when you find a gap in their knowledge, they are terrible at recovering from it.
Well, this is what the whole debate is about, isn't it? Can LRMs do "general problem solving"? Can humans? What exactly does that mean?