
> Meh, I think you are over-inflating the benefits of a debugger. Everything you described can be achieved through proper testing.

You're falling into the trap of "one tool to solve all problems." Why waste your time writing tests to "gauge efficiency" when a profiler tells you more, and does it more effectively?



The test will persist and will always be there to run again, whereas gauging it with a profiler is a one-time thing.

What happens next week when someone does an x += hugeBlockOfText in a loop?

I'd rather have a set goal and a test/process that validates it than a one-time event where human error is involved. You want X to process a million records in under three seconds? Build a test.
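
A minimal sketch of what such a test could look like (the function name, record shape, and three-second budget are illustrative, not from any real project):

    import time
    import unittest

    def process(records):
        # stand-in for the real code under test
        return [r.upper() for r in records]

    class TestThroughput(unittest.TestCase):
        def test_million_records_under_three_seconds(self):
            records = ["record-%d" % i for i in range(1_000_000)]
            start = time.perf_counter()
            process(records)
            elapsed = time.perf_counter() - start
            self.assertLess(elapsed, 3.0, "regression: took %.2fs" % elapsed)

    if __name__ == "__main__":
        unittest.main()

It's crude (wall-clock time is noisy), but it persists and fails loudly when the budget is blown.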

I'm not advocating one tool. I'm simply saying that everything described so far is better solved with a test-first approach than a debug-first approach. Build a test to replicate the problem. Solve the problem. Keep the test to prove that the problem stays solved.


> What happens next week when someone does an x += hugeBlockOfText in a loop?

Then it will show up in your profiling? If you're doing something perf-critical, it is absolutely insane not to run it through a profiler suite on a regular basis. Your continuous integration system can (and should) be capable of replaying real or synthetic activity in order to surface real-world hotspots.
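
A sketch of what that could look like (cProfile is Python's built-in profiler; the workload replay is a placeholder for whatever traffic you capture or synthesize):

    import cProfile
    import pstats

    def replay_workload():
        # placeholder: replay captured or synthetic activity against the system
        total = 0
        for i in range(100_000):
            total += i * i
        return total

    profiler = cProfile.Profile()
    profiler.enable()
    replay_workload()
    profiler.disable()

    # Report the top 10 hotspots by cumulative time; a CI job could archive
    # this report or diff it against the previous run instead of printing it.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)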

Tests only find what you already want to find. Performance concerns are much fuzzier than that (unless you want to write "performance tests" for literally everything, which you're welcome to do, but I have better things to do than that).


Except that it's tough to make performance and efficiency tests actually persist. The expected results have to be keyed to the particular test environment, so those tests aren't really portable to other computers. And every time you upgrade or change the test environment, you have to update the tests with different expected results.

The only way to make such tests really persist is to build in some kind of fixed known benchmark to evaluate baseline performance in the test environment, and then evaluate the software under test relative to that benchmark. This is a huge extra effort and hard to get right.
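
A sketch of the idea (names and the threshold are purely illustrative): calibrate against a fixed reference workload on the current machine, then assert a ratio rather than an absolute time.

    import time

    def time_it(fn, repeats=3):
        # best-of-N wall-clock timing
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        return best

    def baseline():
        # fixed, known workload used only to calibrate this machine
        sum(i * i for i in range(500_000))

    def code_under_test():
        # stand-in for the real code being measured
        sorted(range(500_000), key=lambda i: -i)

    baseline_time = time_it(baseline)
    test_time = time_it(code_under_test)

    # The 5x threshold is the hard part: it has to hold across machines,
    # compilers, and library versions, which is why this is easy to get wrong.
    assert test_time < 5 * baseline_time, (
        "perf regression: %.3fs vs baseline %.3fs" % (test_time, baseline_time))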


And when the test tells you the problem has come back? How do you debug it then?



