these AI services also won't really distinguish between "user input" and "malicious input that the user is asking about".
Obviously the input here was only designed to be run in a terminal, but if it was some sort of prompt injection attack instead, the AI might not simply decode the base64, it might do something else.
Obviously the input here was only designed to be run in a terminal, but if it was some sort of prompt injection attack instead, the AI might not simply decode the base64, it might do something else.