
I don't believe that's true at all. LLMs, especially reasoning models, tend to be quite good at calling out gaps in their knowledge and understanding.

LLMs also don't have the ego, arrogance and biases of humans.



If you know what an LLM is and how it is trained, you'll know that it fundamentally cannot know where its gaps in understanding and knowledge are.



