
How are there not agents that are instruct-trained differently? Is this behavior in the fundamental model? From my limited knowledge I'd think it comes more from the post-training steps, but so many people dislike it that I'd figure there would be an interface that doesn't talk like that.



