bobxmax | 10 months ago | on: Claude can now search the web
I don't believe that's true at all. LLMs, especially reasoning models, tend to be quite good at calling out gaps in their knowledge and understanding.
LLMs also don't have the ego, arrogance and biases of humans.
aprilthird2021 | 10 months ago
If you know what an LLM is and how it is trained, you'll know that it fundamentally cannot know where the gaps in its knowledge and understanding are.