Hacker News

> You are falling into the trap that everyone does. In anthropomorphising it. It doesn't understand anything you say.

And an intern does?

Anthropomorphising LLMs isn't entirely incorrect: they're trained to complete text as a human would, in a completely general setting, so by anthropomorphising them you're aligning your expectations with the models' training objective.
