
While writing small, easily testable code is a good practice, it isn't sufficient. Running your code in a debugger and actually watching things happen exposes a lot of bugs and inefficiencies that tests don't tend to catch. Getting that dynamic view of control flow and state is a huge advantage for improving quality.
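Concretely, with CPython's stdlib pdb you can pause anywhere and poke at live state. A tiny made-up example:

    def running_total(xs):
        total = 0
        for x in xs:
            breakpoint()  # drops into pdb on each iteration (Python 3.7+)
            total += x
        return total

    running_total([1, 2, 3])

At the (Pdb) prompt, n runs the next line, s steps into calls, and p total, x prints live values; watching those change on every pass is exactly the view that static reading and tests don't give you.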


Meh, I think you're overstating the benefits of a debugger. Everything you described can be achieved through proper testing.

Want to know if something is efficient? Then build a test to gauge it. That way, if it changes and becomes less efficient in the future, your test will break. That's a much more solid approach than simply walking through the code a couple of times and noticing something out of place.
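Something like this, say (pytest sketch; transform() is a made-up stand-in for the code under test, and the 0.5 s budget is arbitrary):

    import time

    def transform(records):
        # Stand-in for the real code whose efficiency matters.
        return [r.upper() for r in records]

    def test_transform_meets_time_budget():
        records = ["record"] * 100_000
        start = time.perf_counter()
        transform(records)
        elapsed = time.perf_counter() - start
        # Fails loudly if a future change pushes this path over budget.
        assert elapsed < 0.5, f"transform took {elapsed:.3f}s, budget is 0.5s"

In practice the budget has to absorb machine-to-machine variance, which is exactly the portability problem raised further down the thread.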


> Meh, I think you're overstating the benefits of a debugger. Everything you described can be achieved through proper testing.

You're falling into the trap of "one tool to solve all problems." Why spend your time writing tests to "gauge efficiency" when a profiler tells you more, with less effort?
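CPython's stdlib cProfile, for instance, gives you a per-function cost breakdown in one run instead of a single pass/fail number (sketch; transform() is a stand-in):

    import cProfile
    import pstats

    def transform(records):
        return [r.upper() for r in records]

    profiler = cProfile.Profile()
    profiler.enable()
    transform(["record"] * 100_000)
    profiler.disable()

    # Top 5 functions by cumulative time: shows where it's slow,
    # not just that it is.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)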


The test will persist and always be there to run again, whereas a gauge is a one-time thing.

What happens next week when someone does an x += hugeBlockOfText in a loop?

I'd rather have a set goal and a test/process that validates it than a one-time event where human error is involved. You want X to process a million records in under three seconds? Build a test.

I'm not advocating one tool. I'm simply saying that everything described so far is better solved with a test-first approach than a debugger-first approach. Build a test to replicate the problem. Solve the problem. Keep the test to prove that the problem stays solved.


> What happens next week when someone does an x += hugeBlockOfText in a loop?

Then it will show up in your profiling. If you're doing something perf-critical, it's absolutely insane not to run it through a profiler on a regular basis. Your continuous integration system can (and should) be capable of replaying real or synthetic activity in order to expose real-world hotspots.
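Sketch of how that catches the x += hugeBlockOfText case with nobody writing a test for it, assuming a CI job that profiles a replayed workload with stdlib cProfile (all names here are invented):

    import cProfile
    import pstats

    HUGE_BLOCK = "x" * 10_000

    def render_report(n):
        # Last week's innocent-looking change: building a huge string
        # by repeated concatenation in a loop.
        out = ""
        for _ in range(n):
            out += HUGE_BLOCK
        return out

    def replay_synthetic_activity():
        # Stand-in for replaying recorded real-world traffic.
        render_report(2_000)

    cProfile.run("replay_synthetic_activity()", "replay.prof")
    # render_report climbs to the top of the hotspot list, no test needed.
    pstats.Stats("replay.prof").sort_stats("tottime").print_stats(5)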

Tests only find what you already want to find. Performance concerns are much fuzzier than that (unless you want to write "performance tests" for literally everything, which you're welcome to do, but I have better things to do than that).


Except that it's tough to make performance and efficiency tests actually persist. The expected results have to be keyed to the particular test environment, so those tests can't really be portable to other machines. And every time you upgrade or change the test environment, you have to update the tests with different expected results.

The only way to make such tests really persist is to build in some kind of fixed known benchmark to evaluate baseline performance in the test environment, and then evaluate the software under test relative to that benchmark. This is a huge extra effort and hard to get right.
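One possible shape for it, as a rough sketch (baseline() and the 2x threshold are arbitrary, not a recommendation):

    import time

    def timed(fn, *args):
        start = time.perf_counter()
        fn(*args)
        return time.perf_counter() - start

    def baseline():
        # Fixed, known workload used to calibrate this machine's speed.
        sorted(range(1_000_000), reverse=True)

    def transform(records):
        # Stand-in for the real code under test.
        return [r.upper() for r in records]

    def test_transform_relative_to_baseline():
        reference = timed(baseline)
        elapsed = timed(transform, ["record"] * 100_000)
        # The ratio travels across machines better than an absolute
        # number, but picking the threshold is the hard part.
        assert elapsed < 2.0 * reference, (
            f"{elapsed:.3f}s vs {reference:.3f}s baseline"
        )

Even this wobbles with caches, CPU frequency scaling, and noisy CI neighbors, so the ratio threshold needs slack too.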


And when the test tells you the problem has come back? How do you debug it then?


You're missing the point. Sure, you can build a test to gauge whether your code meets some arbitrary level of efficiency. However, the process of dynamically stepping through your code in a debugger engages a different part of your brain than inspecting static code or writing tests.

I can't offer hard evidence for this, but personally I've found that keeping the discipline of always stepping through my code in a debugger, even when it seems to be working correctly, leads to improved results. Try it for a few weeks with a good interactive debugger and I think you'll see what I mean.


I don't reach for a debugger often, instead relying on sanity checks and traces. But when I do reach for a debugger, it's because I am so completely confounded by what is going on, and I have started questioning even the most basic of assumptions, that I just want to watch the damn thing execute.
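What that looks like for me, roughly (stdlib logging; dedupe() is made up):

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def dedupe(items):
        # Sanity check: fail fast on the assumption that "obviously" holds.
        assert items is not None, "dedupe() called with None"
        seen, out = set(), []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        # Trace: a breadcrumb to read after the fact instead of stepping live.
        log.debug("dedupe: %d in, %d out", len(items), len(out))
        return out

    print(dedupe(["a", "b", "a"]))  # trace shows 3 in, 2 out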



