Hacker News
samus on April 1, 2024 | on: LLaMA now goes faster on CPUs
We shouldn't choose LLMs for how many facts they store, but for their capability to process human language. There is some overlap between the two, but an LLM that simply doesn't know something can always be augmented with RAG.
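The augmentation the comment describes can be sketched in a few lines: retrieve passages relevant to the query from an external corpus and prepend them to the prompt, so the knowledge comes from the retrieval step rather than the model's weights. The corpus, the word-overlap scoring, and the prompt template below are all illustrative placeholders, not any particular RAG library's API.

```python
# Minimal RAG sketch: augment a prompt with retrieved context
# instead of relying on facts memorized in model weights.
# Corpus, scoring, and template are illustrative assumptions.

def tokenize(text):
    """Lowercase bag-of-words tokenization (naive on purpose)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query; return top k."""
    scored = sorted(
        corpus,
        key=lambda p: len(tokenize(p) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved passages as context for the LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Use the context to answer the question.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "llamafile speeds up LLaMA inference on CPUs.",
    "RAG supplies external knowledge to a model at query time.",
    "The Transformer architecture is built on self-attention.",
]

print(build_prompt("How does RAG add knowledge to an LLM?", corpus))
```

A production system would replace the word-overlap scoring with embedding similarity over a vector index, but the shape of the pipeline — retrieve, assemble context, generate — is the same.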