Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I'm not sure that a prompt injection secure LLM is even possible anymore than a human that isn't susceptible to social engineering can exist. The issues right now are that LLMs are much more trusting than humans, and that one strategy works on a whole host of instances of the model




Indeed. When up against a real intelligent attacker, LLM faux intelligence fares far worse than dumb.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: