Hacker News

In short: LLMs are plausibility engines


also known as bullshit generators


The point is that it’s plausible bullshit.

The more subtle point is that this cannot be corrected via what appears to humans as "conversation" with the LLM: it is more plausible that a confident liar keeps telling tall tales than that the same liar suddenly becomes a brilliant and honest genius.


A human on the internet loves to argue and take a stand to prove a point, simply because they can. Guess what the AIs were trained on? People talking on the internet.
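The "plausibility engine" point can be sketched with a toy next-token model (a crude stand-in for a real LLM, with a made-up five-sentence corpus): continuations are ranked purely by how often they appear after a context in the training text, so a common misconception can outrank the truth.

```python
# Toy illustration, not a real LLM: rank continuations by training-set
# frequency, with no notion of truth. Corpus is synthetic.
from collections import Counter

corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of australia is sydney . "    # common misconception
    "the capital of australia is canberra . "  # the actual capital
    "the capital of australia is sydney . "
).split()

# Count which token follows each 4-token context.
CONTEXT_LEN = 4
counts = {}
for i in range(len(corpus) - CONTEXT_LEN):
    ctx = tuple(corpus[i:i + CONTEXT_LEN])
    counts.setdefault(ctx, Counter())[corpus[i + CONTEXT_LEN]] += 1

def most_plausible(ctx):
    """Greedy decoding: return the most frequent continuation."""
    return counts[tuple(ctx.split())].most_common(1)[0][0]

print(most_plausible("capital of france is"))     # paris (true)
print(most_plausible("capital of australia is"))  # sydney (plausible, false)
```

The model answers "sydney" not because it believes anything, but because that string is the most plausible continuation given its training data.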



