This is what scares me the most about LLMs in my usage.
Not that I'll go crazy and kill others or myself, but that I will be deluded by the LLM telling me what I want to hear, even though I know the risks.
I'm going through a small-claims-court-level disagreement with a business right now, and on its face ChatGPT has been incredibly helpful for finding information about the applicable laws and whether I have a case. On the other hand, I don't feel at all confident that it would tell me if I didn't have one.
The problem is that if you ask it to "take the other side" it will gleefully do so… and you can't be sure it isn't still just telling you what you want to hear, which in this case is "be the 'other side'"… in short, it's blowing the same smoke up your ass as when it was being agreeable.