
> But this comes along with lots of behaviour that is fundamentally opposite to thinking ("hallucination" being the major example).

I find this an utterly bizarre claim given how prone humans are to making things up and then firmly insisting they did not.



Is this really common behaviour? I do not recognise it. Do people lie? Certainly yes. Do people misremember, or get details incorrect? Yes. But when was the last time you saw someone, say, fabricate an entire citation in a paper? People make transcription errors, they misremember dates, and they deliberately lie. But I don't think people accidentally invent entire facts.


To me, your entire claim here comes across as "hallucination". That is, I simply don't believe you have never experienced people accidentally inventing entire facts, and so I don't believe you are genuinely unaware of it happening.

To be clear, I'm not arguing you've made this claim in bad faith at all.

However, going back and examining my own writing, I have more than once found claims that I'm sure I believed at the time of making them, but that, in retrospect, I realise I had no actual backing for, and which were therefore effectively pure fabrication.

An enduring memory of my school days is convincing the teacher that she was wrong about a basic fact of geography. I was convinced. I had also totally made up what I told her, and provided elaborate arguments in favour of my position.

To me this is innate human behaviour that I see on a regular basis. People accidentally invent entire "facts" all the time.


What little of Fox News I've seen excerpted elsewhere doesn't support your claim.


Fox News just lies. They aren't "hallucinating".


What do you imagine the difference is?


Indeed. The mere fact that we ended up with the anthropomorphic term "hallucination", rather than something purely mechanistic like "glitch", indicates that there's something about this AI pattern that feels familiar.

I'm obviously not claiming that "hallucination" is an appropriate term ("delusion" or "confabulation" are probably more apt), but there is something here that is clearly not just a bug, but rather a result of thinking being applied properly to ungrounded premises. To my eyes, reading an AI's "hallucination" is not unlike reading the writings of a human on drugs, or with a mental condition like schizophrenia, or of an analytic philosopher taking their made-up axioms all the way to an alternate universe.



