Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

these AI services also won't really distinguish between "user input" and "malicious input that the user is asking about".

Obviously the input here was only designed to be run in a terminal, but if it was some sort of prompt injection attack instead, the AI might not simply decode the base64, it might do something else.



It could even conceivably be both




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: