Before one can begin to understand something, one must first be able to estimate one's level of certainty. Our robot friends, while helpful and polite, seem to be lacking in that department. They treat the things we've written on the internet, in books, academic papers, court documents, newspapers, etc. as if they were all true. Where we humans aren't omniscient, they fill in the blanks with nonsense.
> Where we humans aren't omniscient, they fill in the blanks with nonsense
As do most humans. People lie. People make things up to look smart. People fervently believe things that are easily disproved. Some people are willfully ignorant, anti-science, anti-education, etc.
The problem isn't the transformer architecture... it is the humans who advertise capabilities that are not there yet.
There are human beings who believe absolutely insane and easily disprovable things. Even in the face of facts, they remain willfully ignorant.
Humans can convince themselves of almost anything. So I don’t understand your point.
Yes, because working on Copilot makes one well educated in the philosophy of mind.
The standard, meaningless HN appeal to authority. "I worked at Google, therefore I am an expert on both stringing a baroque lute and the finer points of Lao cooking."
Gemini 3 gives a nice explanation if asked, "Can you explain how you don't really understand anything?"
I’ve worked with many neuroscience researchers during my career. At a minimum, I’m extremely well-read on the subject of cognition.
I am not going to lie or hide my experience. The world is a fucked up place because we no longer respect “authority”. I helped build one of these systems; my opinion is as valid as yours.
Yours is the “standard meaningless” response that adds zero technical insight. Let’s talk about supportive tracing or optimization of KV values during pre-training and how those factors impact the apparent “understanding” of the resulting model.
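Since the thread stops short of actually talking about it, here is a minimal, illustrative sketch of single-head scaled dot-product attention, showing what "KV" most plausibly refers to: the key/value projection matrices, which are among the weights optimized during pre-training. All names, shapes, and the random weights below are illustrative assumptions, not anyone's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product attention.

    x: (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_head) projection matrices -- in a real
    transformer these are among the parameters learned during pre-training.
    """
    Q = x @ W_q                                 # queries
    K = x @ W_k                                 # keys
    V = x @ W_v                                 # values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # similarity of each query to each key
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ V                          # weighted mixture of value vectors

# Toy usage with random, untrained weights (illustrative only)
rng = np.random.default_rng(0)
d_model, d_head, seq_len = 16, 8, 4
x = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = attention(x, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

The only point of the sketch is that the "values" are learned linear projections mixed by attention weights; whether that process amounts to "understanding" is exactly what the rest of the thread is arguing about.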
> At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
Humans are remarkably consistent in their behavior in environments they have been trained for. That's why we trust humans to perform dangerous, precise, and high-stakes tasks. Humans have the metacognitive ability to recognize when their skills are insufficient or when they need to reinforce their own understanding in order to become more resilient.
If you genuinely believe humans hallucinate more often, then I don't think you actually understand how Copilot works.
There is a qualitative difference between humans and LLMs 'hallucinating' (if we can even apply this terminology to humans, which I contend is inappropriate).
I'd add a simple thought experiment. Take a poor student sitting a multiple-choice exam and scoring, say, 30%, and a ten-year-old child attempting the same paper and scoring, say, 50%. Looked at purely quantitatively, you'd be tempted to attribute more understanding to the child, who has essentially chanced their way to 50% on a multiple-choice paper, than to the student who has actually studied the subject.
Qualitatively, however, and we know this intuitively, it is certainly NOT the case that the ten-year-old comprehends or understands more than the poor student.
Your response is crazy to me. Humans are known to be remarkably inconsistent in behavior. Please read 'Thinking, Fast and Slow' or at least go back to your HS psych 101 notes.
> Humans have the metacognitive ability to recognize when their skills are insufficient or when they need to reinforce their own understanding
> as someone who is a sociopath completely devoid of ethics
Ah yes... the hundred thousand researchers and engineers who work at MS are all evil. Many people who've made truly significant contributions to AI have worked either directly (through MS Research) or indirectly (OpenAI, Anthropic, etc.) at MS. ResNet and concepts like differential privacy were invented there.
What about the researchers at Stanford, Carnegie Mellon, and MIT who receive funding from companies like MS? Are they all evil sociopaths, too? Geoff Hinton's early research was funded by Microsoft, btw.
I originally joined MS in the early 90s (then retired) and came back to help build Copilot. The tech was fantastic to work with, we had an amazing team, and I am proud of what we accomplished.
You seem to be confusing the people who invent technology with the assholes who use it for evil. There is nothing evil about the transformer. Humans are the problem.
T2 would have been a different movie if Miles Dyson just said to Sarah Connor: "The tech was fantastic to work with, we had an amazing team, and I am proud of what we accomplished."
You don’t know how your own mind “understands” something. No one on the planet can even describe how human understanding works.
Yes, LLMs are vast statistical engines, but that doesn't mean something interesting isn't going on.
At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
I expect to see responses like yours on Reddit, not HN.