
Frankly, I don't think this is true at all. If anything, I notice that I make better and more informed decisions in many aspects of life. I think this criticism comes from someone who has invested a lot of time in something AI can now do quite well.


For me, the main question in this context is whether the decisions are actually better informed or just feel better informed. LLMs regularly lie to me in my areas of expertise, but there I have the benefit of usually being able to sniff out the lie. On topics I'm less familiar with, I can't tell whether the LLM is confidently correct or confidently incorrect.


Well, AI does make errors, and it never says "I don't know." The same is true of Wikipedia, though. I've seen much improvement in accuracy from 3.5 to 4.5, and hallucinations can often be ironed out through dialogue.


Wikipedia has multiple ways of telling you it doesn't know, or doesn't know for certain. Tags such as clarify, explain, and confusing (all of which expand into phrases like "clarification needed") are abundant, and if an article doesn't meet the quality bar, that's either clearly annotated at the top of the article or the article is removed altogether.




