
Intelligent people do not "hallucinate" in the same sense that an LLM does. Counterarguments you don't like aren't "shortcuts". There are certainly obnoxious anti-LLM people, but you can't use them to dismiss everyone else.

An LLM does nothing more than predict the next token in a sequence. It is functionally auto-complete. It hallucinates because it has no concept of a fact; it has no "concept", period, and it cannot reason. It is a statistical model. The "reasoning" you observe in models like o1 is chain-of-thought generation: the model emits intermediate tokens that then become additional context for its own subsequent predictions.
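
To make that concrete, here is a minimal sketch of the next-token loop, using GPT-2 via Hugging Face transformers (the model, prompt, and ten-token horizon are my choices for illustration):

    # Minimal sketch of the "predict the next token" loop, using GPT-2
    # via Hugging Face transformers. Model choice and length are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Encode a prompt, then repeatedly append whichever token gets sampled next.
    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits                    # (1, seq_len, vocab_size)
            probs = torch.softmax(logits[0, -1], dim=-1)        # distribution over next token
            next_id = torch.multinomial(probs, num_samples=1)   # sample one token
            input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Nothing in that loop consults a knowledge base or checks the output against reality; when a false continuation happens to carry the most probability mass, you get a hallucination.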

I use LLMs on a daily basis, at work and at home, and I feel they have greatly enhanced my life. At the end of the day, they are just another tool. The term "AI" is entirely marketing, preying on those who can't be bothered to learn how the technology works.


