
It's been pretty well documented that LLMs can be socially engineered as easily as a toddler. Equating the risk to that of a human employee seems wrong. I'm sure the safeguards will improve, but for now the risk is still there.


