Programming is hard to improve because it's not easy to know which programming ideas are good and which are bad. The feedback loop between decision and consequence is so long and opaque that when something works, or doesn't, it's hard to point out why. To make things even more complicated, most good programming ideas are only good in certain situations and terrible in others. On top of that, we're not even very good at knowing what our software is supposed to do. Building a race car is a lot more complicated than writing a simple node web app, but it's way easier to tell if you have a good race car than to tell if you have a good node web app.

If we could agree on what improved programming looked like, it might be a lot easier to improve. But everybody has their own opinion about which ideas are good, and in what contexts. We can't even agree on what's important to optimize. That leads to situations where highly experienced experts passionately disagree with each other about absolutely any given topic, while experts and charlatans are in complete agreement, with no easy way to tell them apart. How would we even be able to tell if programming has improved?


I also think that the biggest problem we are facing is that we still haven't developed any useful metrics or measures of quality in our field. There are some easy things, like performance or resource consumption. But others, like security, robustness, maintainability, or plasticity, are equally important, yet different approaches cannot be compared numerically, only by argument.

