I've read the article. Obviously I haven't seen the actual problem, but as an experienced developer, I feel that:
NaN bug doesn't sound like it needed logging
Mutability bug doesn't sound like it needed logging
Dependencies and versions bug doesn't sound like it needed logging
Documentation and versions bug doesn't sound like it needed logging
Obviously we're not seeing the whole picture of any of these bugs, but that's 4/5. Most of those sound like they needed a simple step-through.
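As a purely hypothetical sketch (the article's actual NaN bug isn't shown here), this is the shape of NaN bug a step-through catches quickly: one look at the locals shows the exact point where a missing field poisons the arithmetic, no log statements required.

```javascript
// Hypothetical sketch, not the article's code: a record missing a
// `price` field silently poisons the sum, because 3 + undefined is NaN.
function totalPrice(items) {
  let total = 0;
  for (const item of items) {
    total += item.price; // stepping through, the locals pane shows
  }                      // `total` become NaN on the bad record
  return total;
}

console.log(totalPrice([{ price: 3 }, { name: "no price tag" }])); // NaN
```

With logging you'd see NaN in the output; with a step-through you see the exact record that caused it.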
For example, in the documentation and versions async.js bug, he added "extremely verbose" logging where it appears a simple step-through would have immediately revealed that the passed arguments were wrong and the method's signature didn't match the documentation, which would have made him realize he had the wrong version.
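To make that concrete with a stand-in (I'm guessing at the shape of the bug; `eachResult` is a made-up function, not the real async.js API): suppose the installed version calls back Node-style with (err, results), while the docs being read described an older results-only callback. One step-through at the callback shows the first argument is null and the real data is sitting in the second.

```javascript
// Made-up stand-in for a version mismatch, not real async.js code.
// Installed version: final callback is Node-style (err, results).
function eachResult(items, iteratee, done) {
  done(null, items.map(iteratee));
}

// Code written against the older docs assumes a (results)-only
// callback, so it actually binds `results` to the error slot (null).
eachResult([1, 2, 3], x => x * 2, results => {
  console.log(results); // prints null, not [2, 4, 6]
});
```

Pausing inside that callback immediately shows the mismatch between the arguments received and the documented signature.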
The only one where I would expect to have to use logging is the tests bug.
Interactive debugging is a useful tool, but it can encourage lazy reasoning: people just write something and step through until they notice a change they don't expect. That's fine as far as it goes, but since we're talking about "crutches" and tooling, it sounds like you may be a little biased toward big-IDE-style development that makes the debugger the automatic answer, versus reasoning over the code and adding logging at potential trouble spots. There's nothing wrong per se with logging output versus stepping through in an interactive debugger.
Logging is lazy reasoning. Stepping through is simply seeing what is happening with a bug you can replicate. Stepping through is the step in debugging after reading the code.
Even after you've read the code and think you've spotted the problem, you should step through to 100% confirm you've understood the problem.
It's simply confirming the bug and the solution; it's good science and good practice. It's empirical.
I disagree with that assessment. Users reported the order was wrong, he simply had to look at those users and their live data to replicate it.
Even in those scenarios, I rarely have to look at live data; reading the code, or playing around a little, is often enough to identify the problem. What matters is the patience to replicate a bug before trying to fix it.
This is the essence that sets apart a good debugger from a bad one. A bad one resorts to littering the code with print/log statements because either they can't replicate the problem or they simply can't run the code in their head.
I'm being unfair; it's not even that, they simply need more experience. When I was younger I remember complaining that exception stacks were utter gobbledygook, utterly useless. Now I read one and often know exactly what the bug is. I was completely wrong, impatient, and unwilling to admit my ignorance; at the time I thought I was wise and knowing. Stack traces are very useful, I simply didn't understand them.
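A toy illustration of the point about exception stacks (made-up code, obviously): the top frame of the trace names the exact function where the undefined access happened, which is often all you need.

```javascript
// Made-up example: a failed lookup returns undefined, and the
// resulting TypeError's stack points straight at `greet`.
function loadUser(id) {
  return undefined; // stand-in for a lookup that found nothing
}

function greet(id) {
  const user = loadUser(id);
  return "Hello, " + user.name; // TypeError thrown here
}

try {
  greet(42);
} catch (e) {
  // The first stack frame names `greet` and its line number.
  console.log(e.stack.split("\n").slice(0, 2).join("\n"));
}
```

Once you can read that, "Cannot read properties of undefined" plus the frame name usually tells you which lookup failed before you've opened the editor.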
My "simply" from above belies the years of experience I have.
I have had times when I used logging and then realized I should use a debugger instead.
I have had times when I used a debugger and then realized I was never going to find the problem that way; I needed a larger amount of information that only logging could provide.
I've only been programming about 30 years so I probably haven't hit my peak yet, but my guess is that even with more experience I'll still want to use both tools.