
> Nor do I have any clue what to do about it if it is an issue that needs addressing.

What happened with cigarettes? The same must happen with chat bots. There must be a prominent & visible warning stating that chat bots are nothing more than Markov chains: they are not sentient, they are not conscious, & they are not capable of providing psychological guidance & advice to anyone, let alone to those who might be susceptible to paranoid delusions & suggestions. Once that's done, the companies can be held liable for promising what they can't deliver, & their representatives can be fined for doing the same thing across various media platforms & in their marketing.
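(For readers unfamiliar with the analogy: a Markov chain generates text purely by sampling what statistically tends to follow the current state, with no understanding involved. A toy first-order sketch, purely illustrative — real LLMs condition on far longer contexts with learned weights, and all names and the corpus here are made up:)

```python
import random
from collections import defaultdict

def build_chain(words):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    # Walk the chain: repeatedly sample a successor of the last word.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no observed successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot predicts the next word given the last word".split()
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The point of the analogy is that the generator has no model of truth or of the reader, only of word co-occurrence.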



> What happened with cigarettes?

We assembled a comprehensive body of data establishing correlation with a huge number of illnesses, including lung cancer, to the point that nearly all qualified medical professionals agreed the relationship was causal.

> There must be a prominent & visible warning

I have no problem with that. I’m a little surprised that ChatGPT et al don’t put some notice at the start of every new chat, purely as a CYA.

I’m not sure exactly what that warning should say, and I don’t think I’d put what you proposed, but I would be on board with warnings.


That's just the thing, though. OpenAI and the LLM industry generally are pushing so hard against any kind of regulation that the likelihood of this happening is surely lower than the percentage of ChatGPT users in psychosis.




