
You can spike the algorithm that you think will be slow. Usually the slow algorithm is simple to implement and test.
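For example, the spike can be as small as the naive version plus a benchmark. A rough Go sketch (containsDuplicate and the input size are made-up stand-ins, not anything from a real project):

```go
// dedupe_test.go: spike the simple quadratic version and measure it before
// assuming it is too slow. Run with: go test -bench=. -benchmem
package dedupe

import "testing"

// containsDuplicate is the "simple but O(n^2)" spike.
func containsDuplicate(xs []int) bool {
	for i := range xs {
		for j := i + 1; j < len(xs); j++ {
			if xs[i] == xs[j] {
				return true
			}
		}
	}
	return false
}

func BenchmarkContainsDuplicate(b *testing.B) {
	xs := make([]int, 10_000) // worst case: no duplicates at all
	for i := range xs {
		xs[i] = i
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		containsDuplicate(xs)
	}
}
```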

If your prediction about the algorithm's speed turns out to be wrong, you've got needlessly complicated code for the rest of the life of the project.

People are wrong about the speed of algorithms all the time. Often O(n) and O(n^2) take the same amount of real time because the computer spends 99% of it fetching those n items from the database server.
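A toy illustration of that effect (fetchRow is a hypothetical stand-in that just sleeps to simulate one database round trip):

```go
package main

import (
	"fmt"
	"time"
)

// fetchRow pretends to be one round trip to a database server.
func fetchRow(i int) int {
	time.Sleep(2 * time.Millisecond)
	return i % 100
}

func main() {
	const n = 500

	// Fetching the n rows costs ~n * 2ms: about a second of wall-clock time.
	start := time.Now()
	rows := make([]int, n)
	for i := range rows {
		rows[i] = fetchRow(i)
	}
	fetchTime := time.Since(start)

	// The "scary" O(n^2) pass over the same rows finishes in microseconds.
	start = time.Now()
	dupes := 0
	for i := 0; i < n; i++ {
		for j := i + 1; j < n; j++ {
			if rows[i] == rows[j] {
				dupes++
			}
		}
	}
	scanTime := time.Since(start)

	fmt.Printf("fetch: %v  scan: %v  dupes: %d\n", fetchTime, scanTime, dupes)
}
```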

Your algorithm written in C is often slower than the equivalent Python because the bytecode compiler did something clever.

I’ve spent a lot of time speeding up legacy code. It’s usually much easier than you think, and it’s slow for reasons that wouldn’t have been obvious to the original authors.

In fact it’s usually slow because the codebase got so complicated that the original author of the slow code couldn’t reason about it any more. But it’s easy for me to fix because I have a concrete example that is “too slow”, so I can debug it by running the code and observing the slow points.



I love it when I find an actual performance problem with code that doesn’t hit the disk or network. Optimizing is super fun, but it’s so rarely needed.

My last time was last year. I noticed that our metrics service, which had been running for a couple of years, was now often using 100% CPU. That can cause issues for other services on that node, so I looked into it. Turns out the Go code I wrote to parse Prometheus metrics was a little clunkier than you’d like for the sheer volume of metrics coming from some monitor (I think I’d just upgraded cadvisor or something). I tried getting it to return fewer metrics, but couldn’t figure it out.

So I spent an afternoon and rewrote my parser. My hypothesis was that I could make it faster by allocating fewer strings, which turned out to be correct. End result was nearly 10x faster. 10/10, super fun and satisfying.
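For anyone curious, here's a rough sketch of that kind of change (not the actual parser, just the allocation idea): slice each Prometheus exposition line in place instead of splitting it into a fresh slice of substrings per line.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseLine pulls the metric name and value out of one exposition line,
// e.g. `http_requests_total{code="200"} 1027`, by slicing the string instead
// of calling strings.Split. It ignores timestamps and assumes label values
// contain no spaces; a real parser has to handle both.
func parseLine(line string) (name string, value float64, ok bool) {
	if line == "" || line[0] == '#' { // skip blanks, HELP and TYPE comments
		return "", 0, false
	}
	end := strings.IndexAny(line, "{ ") // name ends at labels or first space
	if end < 0 {
		return "", 0, false
	}
	name = line[:end]
	sp := strings.LastIndexByte(line, ' ') // value is the last field
	if sp < 0 {
		return "", 0, false
	}
	v, err := strconv.ParseFloat(line[sp+1:], 64)
	if err != nil {
		return "", 0, false
	}
	return name, v, true
}

func main() {
	input := "# HELP http_requests_total Total requests.\n" +
		"http_requests_total{code=\"200\"} 1027\n" +
		"process_cpu_seconds_total 12.5\n"
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		if name, v, ok := parseLine(sc.Text()); ok {
			fmt.Println(name, v)
		}
	}
}
```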

I’ve got about a half dozen other things I’d love to optimize with our current system, but for now I can’t justify the cost.


That’s a good opportunity to run a profiler; it will tell you pretty quickly which things are taking the most time/CPU/memory.
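In Go that's cheap to do with the standard runtime/pprof package; a minimal sketch (the workload is just a placeholder loop):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"runtime/pprof"
)

// workload stands in for whatever code path you suspect is hot.
func workload() int {
	sum := 0
	for i := 0; i < 100_000_000; i++ {
		sum += i
	}
	return sum
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	fmt.Println(workload())
	// Inspect the result with: go tool pprof cpu.prof
}
```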



