A way of thinking about what's inside any of the top LLMs right now: even if they never learn another single fact, even if they get ridiculously out of date as a result, even if they are even more riddled with errors and prone to biases than we know them to be, even if they are as prone to hallucinations as we know they are and never develop the capacity to cure themselves of this, they are still, despite their capacity for error, more knowledgeable and capable of a more reasoned response to more questions than any single human being who has ever lived.


We shouldn't choose LLMs for how many facts they hold, but for their ability to process human language. There is some overlap between the two, but an LLM that simply doesn't know something can always be augmented with RAG.
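
To make that concrete, here's a minimal sketch of what that kind of augmentation can look like, assuming a toy word-overlap retriever; a real system would use embeddings and then hand the assembled prompt to whatever model API you're actually using. All names here are invented for illustration:

    # Toy RAG sketch: rank snippets by word overlap with the question,
    # then splice the best match into the prompt the model will see.
    def retrieve(question, documents):
        q_words = set(question.lower().split())
        return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

    def build_prompt(question, documents):
        context = retrieve(question, documents)
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    docs = [
        "The Eiffel Tower was completed in 1889.",
        "Python 3.12 was released in October 2023.",
    ]
    print(build_prompt("When was the Eiffel Tower finished?", docs))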


Picturing "LLM Jeopardy". You know, a game show.


If you ignore my capacity for error, I bet I'd put up a good score too. Hell, maybe Markov chains are smarter than LLMs by this definition.
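
Half-joking, but the point about the definition stands: judged purely on breadth of recall rather than correctness, even a toy word-level Markov chain (sketch below, corpus and names invented for illustration) will parrot back whatever it was fed, errors included.

    import random
    from collections import defaultdict

    # Word-level Markov chain: remembers only which word tends to follow which.
    def train(text):
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=10):
        word, out = start, [start]
        for _ in range(length):
            if word not in chain:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    corpus = "the tower was completed in 1889 and the tower is in paris"
    print(generate(train(corpus), "the"))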



