
I did try asking the Google LLM thing, Bard I think it's called, about the result of a football match that marked the sporting history of my country (Romania).

According to Bard we did manage to defeat the Swedes by two goals to one back at the 1994 Euro Championships, which, to put it bluntly, is pretty damn far from the truth (in reality the Swedes went through to the World Cup semifinals after winning the penalty shoot-out; the score was 2-2 after 120 minutes).

I didn't make any further inquiries; suffice it to say that there's no "intelligence" in LLMs to speak of as long as they can't even correctly answer a question that non-smart tech has been able to answer correctly for years.



Fact recollection is not most people’s definition of intelligence. In fact, it’s something that the only known intelligent systems are infamously bad at.


So you're saying I used it wrong? How does that help the pro-LLM case? What should I have asked it? Some philosophical question that didn't involve "fact recollection"?

At least this latest tech bluff is not bankrupting regular people the way the crypto bluff did.


I almost never use people for fact checking either; they are horrifically bad at it. But if you're fact checking, you tend to already have a well-formed idea that can be looked up in factual databases.

If you have a more abstract idea, like "I'm using X programming language and I want to accomplish Y, but I have Z limitation; how would I do that, can you explain it and show me in code?", you can get actionable information much as you would from another person with some knowledge of the problem. I don't get perfect answers from programmers either, but I get to a solution much faster than if I'm spinning the wheel of Google and getting spam sites, or sites telling me how to do something I don't really want to do.


You used the model for fact checking. These models are not good at being used as a knowledge base.


I would never use an LLM for fact checking; you'd then have to check the answer again using something else anyway.


For questions about specific details, people usually use RAG (Retrieval-Augmented Generation) to ground the model and provide enough context for the LLM to return correct answers. This means additional engineering plumbing and a very specific corpus to query information from; the rough shape of the pattern is sketched below.
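A minimal, self-contained Python sketch of that pattern, just to make the plumbing concrete. Everything here is illustrative: the toy corpus only restates facts from this thread, the keyword-overlap retriever is a stand-in for embeddings plus a vector store, and call_llm is a hypothetical placeholder for whatever model client you actually use.

    # Toy document store; a real system would index a much larger corpus.
    documents = [
        "1994 World Cup quarter-final: Romania and Sweden drew 2-2 after 120 minutes; "
        "Sweden won the penalty shoot-out and went through to the semifinals.",
        "LLMs without retrieval often hallucinate specific facts such as match scores.",
    ]

    def retrieve(query, docs, k=1):
        """Rank documents by naive keyword overlap with the query.
        (Stand-in for embedding similarity search against a vector store.)"""
        q_words = set(query.lower().split())
        scored = sorted(docs,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def call_llm(prompt):
        """Hypothetical placeholder for your actual chat/completion API call."""
        raise NotImplementedError("plug in your model client here")

    def answer(question):
        # Ground the model: retrieved snippets go into the prompt as context,
        # and the model is instructed to answer only from that context.
        context = "\n".join(retrieve(question, documents))
        prompt = ("Answer the question using only the context below.\n"
                  "Context:\n" + context + "\n\n"
                  "Question: " + question + "\n")
        return call_llm(prompt)

The point is that the answer is constrained to the retrieved context rather than pulled from the model's parametric memory, which is what "grounding" buys you for fact-style questions like the football score above.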



