Hacker News

LLMs just can't learn or understand from the context. The context is there to statistically steer token production, but there is no real understanding. You can give an LLM a full specification of a problem, including every element needed to solve it, for instance all the specific functions of a programming library that is not on the Internet. A competent programmer could read this and implement the solution straightforwardly. With LLMs this does not work: they still confidently keep producing wrong solutions.
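To make the scenario concrete, here is a sketch (all names are hypothetical, invented for illustration): a tiny spec for a library that is not on the Internet, with stand-in implementations, and the kind of straightforward solution a competent programmer would derive purely from reading that spec.

```python
# Hypothetical spec for an internal library "gridlib" (not on the Internet):
#   cell(x, y)     -> a cell handle for grid coordinates (x, y)
#   neighbors(c)   -> the 4 orthogonally adjacent cells of c
# Task stated in the spec: count the cells reachable from a start
# cell in at most n steps.

# Stand-in implementations of the specified library functions:
def cell(x, y):
    return (x, y)

def neighbors(c):
    x, y = c
    return [cell(x + 1, y), cell(x - 1, y), cell(x, y + 1), cell(x, y - 1)]

# The straightforward solution a programmer would write from the spec:
# breadth-first expansion for n rounds, tracking visited cells.
def reachable_within(start, n):
    seen = {start}
    frontier = [start]
    for _ in range(n):
        nxt = []
        for c in frontier:
            for nb in neighbors(c):
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(seen)
```

Nothing here requires memorized training data: the spec alone pins down the solution, which is the point of the example.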

