Hacker News

If you plan for such a model beforehand, you might already be lowering overall efficiency. You know you can't land the whole thing you want as one piece, so you plan "one third now, another third in a couple of months, and the last third a couple of months after that." Overall you might end up with twice the effort, since you had to account for that split, but in the context of the kernel it still makes sense because of all the complexity and the many people working on it simultaneously. And of course it also depends on how splittable the thing you're working on is.


It's actually more efficient to do it this way. When you develop in a fork, you end up having to keep rebasing on mainline, and then at submission time you might find that large parts of the code take the wrong approach or don't meet upstream standards, and need to be rewritten.

By planning for incremental merges, you ensure that your foundation is solid and acceptable and avoid wasted work.
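A minimal sketch of that incremental workflow (the repo layout, branch names, and file names here are all hypothetical): land a small foundation first, keep the next slice rebased on a moving mainline while it is still small, and send it as its own patch series with git format-patch.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git config user.email dev@example.com
git config user.name dev

# Slice 1: the foundation, kept small so it can be reviewed and merged early.
echo core > core.c
git add core.c
git commit -q -m "subsys: add core infrastructure"

# Slice 2 lives on a short-lived branch on top of the merged foundation.
git checkout -q -b feature-part2
echo ext > ext.c
git add ext.c
git commit -q -m "subsys: build on merged core"

# Mainline keeps moving while slice 2 is in flight...
git checkout -q master
echo other > other.c
git add other.c
git commit -q -m "unrelated mainline change"

# ...so the pending slice is rebased often, while the delta is still small,
# and then turned into a mailable patch series against current mainline.
git checkout -q feature-part2
git rebase -q master
git format-patch -q master -o patches/
ls patches/
```

Rebasing one small in-flight slice is cheap; rebasing a months-old fork carrying all three thirds at once is where the conflicts and rewrites pile up.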



