I grudgingly agree. It is frustrating, but working inside an IDE does tend to create specific ways of thinking.
For example, most IDE developers tend to love their debuggers... Want to learn something? Just run the debugger! Want to debug something? Hey, run that debugger and watch 'dem breakpoints!
When I code without a debugger I tend to write smaller, more easily testable code because I lack a real way to step through a convoluted process. I feel this encourages a simpler application (but I have no study to prove that).
Another example: most Java/C# work tends to be in the center of the unknown universe. You are editing something twelve layers deep in a mess of public/private/protected objects, and you aren't sure whether the class you should call is the MarginalFlowStateController or the ControllingMarginalFlowState... with an IDE you would just open up both of them, their unit tests, and any documentation until you discover which one you should be calling.
It bothers me because I really enjoy my vi/tmux development environment, where every time I :w a file and run a test I am running in virtually the same environment as dev/qa/pre/prod.
While writing small, easily testable code is a good practice, it isn't sufficient. Running your code in a debugger and actually watching things happen exposes a lot of bugs and inefficiencies that tests don't tend to catch. Getting that dynamic view of control flow and state is a huge advantage for improving quality.
Meh, I think you are over-inflating the benefits of a debugger. Everything you described can be achieved through proper testing.
Want to know if something is efficient? Then build a test to gauge it. That way, if it changes and becomes less efficient in the future, your test will break. That's a much more solid approach than simply walking through the code a couple of times and noticing something out of place.
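Something like this, say. A minimal sketch assuming JUnit 5; formatAll is just a stand-in for whatever code you actually want to gauge, and the record count and budget are arbitrary:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;
    import org.junit.jupiter.api.Test;

    class EfficiencyGaugeTest {

        // Stand-in for the real code under test.
        private static String formatAll(List<Integer> records) {
            return records.stream()
                    .map(r -> "record=" + r)
                    .collect(Collectors.joining("\n"));
        }

        @Test
        void processesAMillionRecordsWithinBudget() {
            List<Integer> records = IntStream.range(0, 1_000_000)
                    .boxed()
                    .collect(Collectors.toList());

            long start = System.nanoTime();
            formatAll(records);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // The budget is the contract: if a later change makes this
            // slower, the test breaks and the regression gets caught.
            assertTrue(elapsedMs < 3_000,
                    "took " + elapsedMs + " ms, budget is 3000 ms");
        }
    }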
> Meh, I think you are over-inflating the benefits of a debugger. Everything you described can be achieved through proper testing.
You're falling into the trap of "one tool to solve all problems." Why waste your time writing tests to "gauge efficiency" when a profiler tells you more, more effectively?
The test will persist and always be there to run again, whereas a profiler run is a one-time thing.
What happens next week when someone does an x += hugeBlockOfText in a loop?
I'd rather have a set goal and a test/process that validates it than a one-time event where human error is involved. You want X to process a million records in under three seconds? Build a test.
I'm not advocating one tool. I'm simply saying that everything described so far is better solved with a test-first approach than with a debug-first approach. Build a test to replicate the problem. Solve the problem. Keep the test to prove that the problem is solved.
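For the hugeBlockOfText case above, that workflow might look roughly like this. The names and the one-second budget are invented for illustration, and I'm assuming JUnit 5:

    import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

    import java.time.Duration;
    import org.junit.jupiter.api.Test;

    class ReportBuilderRegressionTest {

        // Step 1: this test replicated the problem -- the original
        // x += hugeBlockOfText loop was quadratic and timed out here.
        // Step 2: the StringBuilder fix below made it pass.
        // Step 3: the test stays in the suite to prove it stays solved.
        @Test
        void buildsLargeReportQuickly() {
            assertTimeoutPreemptively(Duration.ofSeconds(1),
                    () -> buildReport(100_000, "hugeBlockOfText "));
        }

        // The fixed implementation; the buggy version did
        // String x = ""; for (...) { x += hugeBlockOfText; }
        private static String buildReport(int n, String hugeBlockOfText) {
            StringBuilder x = new StringBuilder();
            for (int i = 0; i < n; i++) {
                x.append(hugeBlockOfText);
            }
            return x.toString();
        }
    }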
> What happens next week when someone does an x += hugeBlockOfText in a loop?
Then it will show up in your profiling? If you're doing something perf-critical it is absolutely insane not to be running it through a profiler suite on a regular basis. Your continuous integration system can (or should) be completely capable of replaying real or synthetic activity in order to demonstrate real-world hotspots.
Tests only find what you already want to find. Performance concerns are much fuzzier than that (unless you want to be writing "performance tests" for literally-literally everything, which you're welcome to do, but I have better things to do than that).
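To make the CI point above concrete, here's a minimal JMH sketch of the sort of job a CI box can run on a schedule. The "traffic" is a toy stand-in for whatever real or synthetic activity you replay, and the names are invented:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public class SyntheticLoadBenchmark {

        private String[] requests;

        // Synthetic activity standing in for replayed production traffic.
        @Setup
        public void setUp() {
            requests = new String[10_000];
            for (int i = 0; i < requests.length; i++) {
                requests[i] = "GET /orders/" + i;
            }
        }

        // CI runs this regularly; trending the reported average time per
        // operation between builds is what surfaces the hotspots.
        @Benchmark
        public int handleAll() {
            int handled = 0;
            for (String request : requests) {
                if (request.startsWith("GET")) {
                    handled++;
                }
            }
            return handled; // returned so JMH can't dead-code-eliminate it
        }
    }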
Except that it's tough to make performance and efficiency tests actually persist. The expected results have to be keyed to the particular test environment, so those tests aren't really portable to other computers. And then every time you upgrade or change the test environment, you have to modify the tests with different expected results.
The only way to make such tests really persist is to build in some kind of fixed known benchmark to evaluate baseline performance in the test environment, and then evaluate the software under test relative to that benchmark. This is a huge extra effort and hard to get right.
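A rough sketch of that baseline idea, just to show the shape of it. The workloads and the 10x threshold here are invented, and picking and tuning that threshold is exactly the part that's hard to get right:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.Arrays;
    import java.util.Random;
    import org.junit.jupiter.api.Test;

    class RelativePerformanceTest {

        // Fixed, well-understood workload used to calibrate the machine.
        private static long baselineMs() {
            int[] data = new Random(42).ints(2_000_000).toArray();
            long start = System.nanoTime();
            Arrays.sort(data);
            return Math.max(1, (System.nanoTime() - start) / 1_000_000);
        }

        // Stand-in for the code actually under test.
        private static long codeUnderTestMs() {
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) {
                sum += i;
            }
            if (sum < 0) throw new IllegalStateException(); // keep the loop live
            return Math.max(1, (System.nanoTime() - start) / 1_000_000);
        }

        @Test
        void staysWithinMultipleOfBaseline() {
            long baseline = baselineMs();
            long actual = codeUnderTestMs();

            // A relative threshold ports across machines, at the cost
            // of choosing and maintaining the multiplier.
            assertTrue(actual < baseline * 10,
                    actual + " ms vs baseline " + baseline + " ms");
        }
    }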
You're missing the point. Sure, you can build a test to gauge whether your code meets some arbitrary level of efficiency. However, the process of dynamically stepping through your code in a debugger engages a different part of your brain than inspecting static code or writing tests.
I can't offer any hard evidence to prove this. But personally I've found that keeping the discipline of always stepping through my code in a debugger, even when it seems to be working correctly, leads to improved results. Try it for a few weeks with a good interactive debugger and I think you'll see what I mean.
I don't reach for a debugger often, relying instead on sanity checks and traces. But when I do reach for one, it's because I am so completely confounded by what is going on, and have started questioning even the most basic of assumptions, that I just want to watch the damn thing execute.
> When I code without a debugger I tend to write smaller, more easily testable code because I lack a real way to step through a convoluted process. I feel this encourages a simpler application (but I have no study to prove that).
I was talking to a programmer who used to have to hand in punch cards and wait a day to get results. He said people tended to get it right the first time in those days.