Hacker News

The comment you're replying to is pretty delusional, to say the least, but I disagree that these tools aren't empowering now. ChatGPT is an extremely useful source of education that bypasses the mess that is Google, and it's much more than just code-completion tricks. GPT-4 can literally write long, complex programs that generally work the first time you run them.


Ah, good, let's encourage people to "learn" from a text generator that can't even be forced not to lie and misinform. I've seen plenty of cases where that "long and complex program" includes things like imports of libraries that don't exist.
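A concrete illustration of the "libraries that don't exist" failure mode: the generated code is syntactically valid and looks plausible, but the import fails the moment you run it. The package name below is deliberately invented for illustration.

```python
# Sketch of the failure mode: a plausible-looking, model-suggested import
# for a package that does not actually exist. The name is deliberately
# made up; nothing installs it, so the import fails at runtime.
try:
    import totally_nonexistent_parser_lib  # hypothetical, hallucinated package
except ModuleNotFoundError as err:
    print(f"hallucinated dependency: {err.name}")
```

The program parses fine and only blows up when executed, which is exactly why this class of error slips past a quick skim of generated code.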


I fail to see how this is any different from a human author.


Authors have a notion of right and wrong, true and false. For everything they say, they have some internal sense of how "sure" they are that they're repeating the truth, and they know when they are purposely misinforming or lying. Most people think misleading others is bad and try to avoid it. And if they don't avoid it, they can be punished, ignored, discredited, etc.

It is not possible to teach anything like ChatGPT to say only things that are true, because the model has no concept of truth. Even if you tell ChatGPT to act like someone who only tells the truth, it can still generate falsehoods. "Hallucination" is a very apt word for the phenomenon because, to the model, lies, falsehoods, and misleading statements carry the same validity as absolute fact: they are all equally valid sentences. Language itself, as a medium of information exchange, carries no signal about the validity of the information; that is out of band.

When ChatGPT misleads someone, you cannot convince it to do so less often, even if it """wants""" to, no matter how much you punish, encourage, or require it.



