LLMs don’t go into a different mode when they are hallucinating. That’s just how they work.
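To make that concrete, here's a toy sketch of the sampling loop (plain Python with an invented vocabulary and invented scores, not any real model's API): the wrong continuation comes out of exactly the same code path as the right one, just with a different probability.

    import math, random

    # Toy next-token scores; the words and numbers are made up for illustration.
    logits = {"Paris": 4.0, "Lyon": 1.5, "Atlantis": 0.5}

    def softmax(scores):
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # There is no "truth flag" anywhere in this loop: "Atlantis" is emitted by
    # the same code path as "Paris", just less often, and the model has no
    # record of which kind of answer it just gave.
    print(token, probs)

Everything a real model adds on top of this (attention, billions of parameters, fine-tuning) changes the scores, not the nature of the loop.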
Using the word "hallucinate" is misleading because it's nothing like what people do when they hallucinate (perceiving sensory input that isn't there).
It's much closer to confabulation, which in humans is rare and usually the result of brain damage.
This is why a big chunk of people (myself included) think current LLMs are fundamentally flawed. Something that uses a massive statistical model of text to confabulate output that happens to be correct maybe 95% of the time, with no clue when the rest is completely made up, is not anything like intelligence.
Compressing all of the content of the internet into an LLM is useful and impressive. But these things aren’t going to start doing any meaningful science or even engineering on their own.