Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I’ve written my own chatbot interfaces on top of GPT-4 and it’s always amusing when I look at the logs and people have tried to jailbreak it to get the prompts. Usually people can get it to return something that seems legit to the user, but they’re never actually anywhere close to what the real prompt is. So take all of these with a big grain of salt.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: