Hacker News

> On the other hand general problem solving is, and so far any attempt to replicate it using computer algorithms has more or less failed.

Well, this is what the whole debate is about, isn't it? Can LRMs do "general problem solving"? Can humans? What exactly does it mean?



A lot of it is being able to make reasonable decisions under novel and incomplete information, and being able to reflect on and refine the outcome.

LLMs' huge knowledge base covers for their inability to reason under incomplete information, but when you find a gap in their knowledge, they are terrible at recovering from it.



