
I think models will need some kind of internal training to teach them that they are agents that can come back and work on things.

Working on complex problems tends to explode into a web of things that need to be done. You need to be able to separate these into subtasks and work on them semi-independently. In addition, when a subtask gets stuck in a loop, you need to work on another task or line of thought, and then come back and 're-run' your thinking to see if anything has changed.
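As a rough sketch of what that could look like mechanically (nothing real here; run_agent, solve_step, and demo_step are made-up names, and the "model call" is just a stub), the control flow is basically a work queue that defers stuck subtasks and revisits them once other work has run:

    from collections import deque

    def run_agent(root_task, solve_step, max_passes=10):
        """Work a web of subtasks, deferring any that stall.

        solve_step(task) is a stand-in for a model call; it returns
        ("done", result), ("stuck", None), or ("split", [subtasks]).
        """
        queue = deque([root_task])
        results = {}
        for _ in range(max_passes):
            deferred = []
            while queue:
                task = queue.popleft()
                status, payload = solve_step(task)
                if status == "done":
                    results[task] = payload
                elif status == "split":
                    queue.extend(payload)      # new subtasks uncovered
                else:
                    deferred.append(task)      # stuck: set aside for now
            if not deferred:
                break
            # Come back and 're-run' thinking on stuck tasks after
            # other work may have changed the picture.
            queue.extend(deferred)
        return results

    seen = set()

    def demo_step(task):
        if task == "build":
            return ("split", ["test", "design"])   # test depends on design
        if task == "test" and "design" not in seen:
            return ("stuck", None)
        seen.add(task)
        return ("done", task + " ok")

    print(run_agent("build", demo_step))

The deferred list is doing the "come back later" part; in a real agent each re-run would be a fresh model call with updated context rather than a lookup in a set.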



The idea behind reinforcement learning is that for some things it is hard to give an explicit plan for how to do them. Many games are like this, for example. Recently, DeepSeek showed that it works for certain reasoning problems too, like LeetCode problems.

Instead, RL just rewards the model when it accomplishes some measurable goal (like winning the game). This works for certain types of problems, but it's pretty sample-inefficient: the model wastes a lot of time doing stuff that doesn't work.
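To make the outcome-only reward idea concrete (a toy, not DeepSeek's actual setup; candidates, passes_tests, and the update rule are all invented for illustration), the only training signal is whether a sampled attempt succeeds:

    import random

    def passes_tests(candidate):
        # Stand-in for a measurable outcome, e.g. running unit tests.
        return candidate == "correct"

    candidates = ["wrong_a", "wrong_b", "wrong_c", "correct"]
    weights = {c: 1.0 for c in candidates}     # start uniform

    for step in range(2000):
        pick = random.choices(candidates,
                              [weights[c] for c in candidates])[0]
        reward = 1.0 if passes_tests(pick) else 0.0
        # Reinforce only on success; failed attempts get no signal
        # at all, which is where the wasted effort goes.
        weights[pick] += 0.1 * reward

    print(max(weights, key=weights.get))       # "correct" wins eventually

Most of those 2000 samples contribute nothing to learning; only the occasional rewarded one moves the weights, which is the inefficiency in action.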



