
Your point is good and taken, but I would amend it slightly: I don't think "absolute truth" is itself the goal, but rather "how aware is it that it doesn't know something". This negative space is frustratingly hard to capture in the LLM architecture, though there are almost certainly signs -- if you had direct access to the logits array, for example.
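To make the logits remark concrete: one visible sign of "not knowing" is a flat next-token distribution, which you can measure as the entropy of the softmax over the logits. A minimal sketch in plain NumPy (the function name is hypothetical, and low entropy only indicates confidence, not correctness):

```python
import numpy as np

def token_entropy(logits):
    """Shannon entropy (nats) of the next-token distribution.

    A flat distribution (high entropy) is one sign the model is
    unsure; a peaked one (low entropy) suggests confidence --
    though a model can still be confidently wrong.
    """
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return float(-(probs * np.log(probs + 1e-12)).sum())

# Peaked logits -> near-zero entropy; uniform logits -> maximum
# entropy, ln(vocab_size), e.g. ln(4) ~= 1.386 for four tokens.
print(token_entropy([10.0, 0.0, 0.0, 0.0]))
print(token_entropy([1.0, 1.0, 1.0, 1.0]))
```

In practice you'd read real logits from the model's final layer; hosted APIs usually expose only a truncated top-k of log-probabilities, which is part of why this negative space is hard to get at from the outside.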

