You can also run arbitrary code at any breakpoint (at least for Java with Eclipse).
Want to check the content of a buffered image? Just run ImageIO.write(...) while the application is stopped. Want to check the current working directory? Evaluate Paths.get(".").toAbsolutePath().
Heck, you can even add code as a breakpoint condition (which doesn't have to actually stop at the breakpoint), e.g. for some on-the-fly print debugging. Changing values doesn't have to be a manual process either; assignments can also be injected as a breakpoint condition.
It is also possible to do most of that with Python in PyCharm (except maybe jumping backward?).
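For the curious, the "run arbitrary code while stopped" trick works even in Python's stdlib pdb, no IDE required. A minimal sketch; the session is scripted via stdin so the snippet is self-contained, but interactively you would just type the commands at the prompt (the handler function is made up for illustration):

```python
import io
import pdb

# Scripted pdb session: `p <expr>` evaluates arbitrary code in the paused
# frame, the Python analogue of checking the cwd from the Java debugger.
session = io.StringIO("p __import__('os').getcwd()\ncontinue\n")
out = io.StringIO()
debugger = pdb.Pdb(stdin=session, stdout=out, nosigint=True, readrc=False)
debugger.use_rawinput = False  # required when feeding commands via stdin

def handler():
    data = [1, 2, 3]
    debugger.set_trace()  # stand-in for an IDE breakpoint on the next line
    return sum(data)

result = handler()
print(out.getvalue())  # the session output includes the printed cwd
```

Interactively you'd skip all the StringIO plumbing and just call pdb.set_trace() (or breakpoint()) and type away.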
This is why going from Python to Go can feel like going backward. IMO, even compiled languages should have an "interpreted mode" for development. Once the development is complete, the code can still be compiled to get the benefits of runtime performance.
It's mostly a matter of engineering: Visual C++ has been able to change function bodies and continue running from the point of a crash (as long as data wasn't badly corrupted) since at least Visual Studio 6 (yes, back in 1998!).
The power of debugging in interpreted languages is so far unbeatable. I have an IDE shortcut for "import pudb; pudb.set_trace()".
(pudb is pdb with a full-screen console UI)
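For what it's worth, since Python 3.7 (PEP 553) the import shortcut can move out of the code entirely: the built-in breakpoint() dispatches to whatever the PYTHONBREAKPOINT environment variable names. A sketch, assuming pudb is installed (the buggy function is illustrative):

```python
import os

# PEP 553: breakpoint() consults PYTHONBREAKPOINT on every call, so this
# can also be set outside the process (e.g. in your shell profile).
os.environ["PYTHONBREAKPOINT"] = "pudb.set_trace"

def buggy(items):
    breakpoint()  # would drop into pudb's full-screen TUI right here
    return sorted(items)
```

If the named hook can't be imported, breakpoint() emits a RuntimeWarning and execution just continues, so leftover calls are harmless.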
I used to use OzCode in C# around 2014 or so, and from memory it was better than anything available today for an interpreted language.
Just looked it up again since it's been so long. Looks like DataDog acquired them and is ending the product in favor of an in-house replacement; quite a shame.
Yeah, I've found the Elixir debugger to be pretty useless when debugging multiple processes. I make heavy use of logging in such cases, and the language abstractions are usually easy enough to understand that I don't really miss a debugger. A huge benefit is that because I'm usually working w/ data structures that don't obfuscate the state within objects/processes, I can look at the data and just figure out what's going on. The primary reason I want a debugger in Ruby is that some mysterious object from who-knows-where is hiding all the state, and I'm just trying to figure out where the heck it came from, let alone what's actually in there.
I just remembered the tracepoint feature, though (it's been a while since I worked in C++ with Visual Studio). Even when you can't pause execution to debug, it's still better to add your tracing in the debugger than to make code changes.
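The same no-code-change tracing can be roughly approximated in stock Python with sys.settrace: log a line's locals every time it executes, without touching the function. A sketch; the work function and the line arithmetic are purely illustrative:

```python
import sys

def work():
    total = 0
    for i in range(3):
        total += i  # imagine a tracepoint set on this line
    return total

TARGET_LINE = work.__code__.co_firstlineno + 3  # the `total += i` line
logged = []

def local_tracer(frame, event, arg):
    # fires on every line event inside work(); filter to the target line
    if event == "line" and frame.f_lineno == TARGET_LINE:
        logged.append((frame.f_locals["i"], frame.f_locals["total"]))
    return local_tracer

def global_tracer(frame, event, arg):
    # only enable line tracing for work()'s frames
    return local_tracer if frame.f_code is work.__code__ else None

sys.settrace(global_tracer)
result = work()
sys.settrace(None)
print(logged)  # locals observed just before each execution of the line
```

An IDE tracepoint does this far more conveniently (and much faster, since settrace slows everything down), but the principle is the same: the observation lives in the debugger, not the source.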
I was always doing stuff on Windows/macOS/iOS/Android client libraries, so I was always jealous of Linux systems programmers who get to leverage Mozilla's rr tool. That's the coolest debugging tool I've ever seen.
Sadly rr cannot work on software that makes any kind of call to GPUs, so its usefulness is pretty limited on a lot of larger software. But when it's usable, imo it's amazing.
If the UI is not GPU accelerated while you are recording, rr will work fine. You can even have GPU calls in the code while recording, they just can't be on code paths that are taken. So a simple command line flag that makes the code choose to use CPU rendering instead of GPU rendering should be fine.
That said, GPU rendering is one of the things that deterministic replay could be most useful for, so it's unfortunate that this doesn't work in rr for now.
In programming environments with very powerful debuggers, like .NET, this is relatively common, since it lets you do a lot of stuff on the fly.
Change values, evaluate expressions, change a function's code, jump ahead and behind, etc.
Once you try this you'll never want to go back to print debugging (except for specific cases).
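Stdlib pdb covers a slice of that list too: you can evaluate expressions, reassign variables with `!statement`, and move the instruction pointer with the jump command. A scripted sketch of value injection (the compute function is made up; interactively you'd just type `!x = 99` at the prompt):

```python
import io
import pdb

# Scripted pdb session: overwrite a local mid-run, then resume.
session = io.StringIO("!x = 99\ncontinue\n")
debugger = pdb.Pdb(stdin=session, stdout=io.StringIO(),
                   nosigint=True, readrc=False)
debugger.use_rawinput = False  # required when feeding commands via stdin

def compute():
    x = 1
    debugger.set_trace()  # pause here; the scripted commands rewrite x
    return x

result = compute()
print(result)  # 99: the value was changed from the debugger, not the code
```

Changing a function's body mid-run, .NET-style edit and continue, is the part pdb can't do.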