Why do people complain about exceptions being "non-deterministic"? Malloc is not deterministic either, so in theory it causes bloat and unexplained slowdowns everywhere.
There are two ways that exceptions create problems not present with other error-handling approaches.
One is that the effective behavior of a function can be changed with no change to the function itself because of an exception thrown "over its head" from something it calls to something that called it. This makes both human and machine analysis significantly more difficult.
The other, somewhat related, has to do with performance. It's easy to do a function-by-function analysis of runtime in error and non-error cases, and those analyses are readily composable. With exceptions, it's much harder to predict how much time will be spent unwinding through M functions, calling M*N destructors, etc. This is exacerbated by the fact that exception-oriented languages also tend to encourage idioms that involve many more possibly-expensive object creations/deletions (including the exception object itself). "Zero cost abstraction" is true much less often than people seem to think.
If you're concerned with ensuring that a given piece of code will never exceed a certain execution-time bound, your job becomes at least an order of magnitude harder with exceptions than with plain old return codes, even if the two seem equivalent at some higher conceptual level.
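To make that concrete, here is a minimal sketch (parse and parse_or_throw are hypothetical stand-ins):

    #include <cstdio>
    #include <stdexcept>

    enum class Err { Ok, BadInput };

    // Return-code version: the worst-case cost of the error path is visible
    // right here: one compare, one return. It composes: the caller's bound is
    // its own cost plus this one.
    Err parse(int x) {
        if (x < 0) return Err::BadInput;
        return Err::Ok;
    }

    // Throwing version: the error path constructs an exception object and
    // unwinds however many frames sit between here and the catch, running
    // every destructor along the way. That cost isn't visible in this
    // function at all; it depends on who called you.
    void parse_or_throw(int x) {
        if (x < 0) throw std::runtime_error("bad input");
    }

    int main() {
        if (parse(-1) != Err::Ok) std::puts("cost analyzable locally");
        try { parse_or_throw(-1); }
        catch (const std::exception&) { std::puts("cost depends on the unwind path"); }
    }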
>This makes both human and machine analysis significantly more difficult.
Given that the C++ standard library throws exceptions in many places, and is only adding more in future versions, all of this needs to be checked anyway. So this argument is a red herring. Proper exception handling is not optional for robust C++ code, since anything beneath you can start throwing when someone makes feature changes or uses new features that throw.
>your job becomes at least an order of magnitude harder with exceptions than with plain old return codes
No it doesn't. I routinely write high-performance C++ code, often using hand-crafted assembly where needed. You learn what can throw, what cannot, and don't put exceptions in the hot path.
How often are you really writing crazy-optimized code where anything exceptional can happen? I'd say never. No allocations, no files being pulled out from under you, no using resources you haven't properly set up before you enter the hot path.
>calling M*N destructors
What does this mean? Are your objects in an object soup of interdependencies? With proper system decomposition, statements like this would never make sense.
> harder with exceptions than with plain old return codes
Suppose you have functions calling each other 10-20 deep, across a library or two, and the bottom function in LibA returns an error code in the style of LibA. Somewhere up the stack a function using LibA also touches LibB and your own code, all of which can error.
Now at the top, and along the entire path, you need to marshal error codes or translate them, adding lots more code in the hot path, all checking and passing, and hoping nobody makes a change somewhere, because then you get to revisit every one of those paths to ensure the new codes get transcribed....
The code is slower, has more branching, and is harder to maintain (thus adding bugs) than simply throwing when something fails at the bottom and handling it where appropriate.
So return codes in fact make code messier, more error prone, and certainly larger, overflowing caches more often than simply using modern exceptions.
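To make the marshalling problem concrete, a minimal sketch; LibA, LibB, and every code below are hypothetical stand-ins:

    #include <cstdio>

    enum class LibAErr { Ok, Timeout, Corrupt };    // hypothetical LibA codes
    enum class LibBErr { Ok, Refused };             // hypothetical LibB codes
    enum class AppErr  { Ok, IoFailed, NetFailed }; // our own top-level codes

    LibAErr liba_read() { return LibAErr::Timeout; } // stand-in for real work
    LibBErr libb_send() { return LibBErr::Ok; }

    AppErr middle_layer() {
        // Every call site in the hot path grows a check-and-translate branch.
        if (liba_read() != LibAErr::Ok) return AppErr::IoFailed;   // LibA -> App
        if (libb_send() != LibBErr::Ok) return AppErr::NetFailed;  // LibB -> App
        return AppErr::Ok;
    }

    int main() {
        // If LibA grows a new code tomorrow, every layer like middle_layer()
        // has to be revisited so the new code gets transcribed.
        if (middle_layer() != AppErr::Ok) std::puts("something failed below");
    }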
> You learn what can throw, what cannot, and don't put exceptions in the hot path.
That puts you light-years ahead of most C++ programmers I've encountered.
> What does this mean? Are your objects in an object soup of interdependencies?
Not my code; other people's. We're talking about the typical programmer here, not the outliers.
> at the top, and along the entire path, you need to marshall error codes or change them....
Yeah, it's a pain, but that's totally not the point. Ease of verification and ease of later modification are orthogonal.
> The code is slower, has more branching, is harder to maintain (this adding bugs), than
"Simply handling..." How cute. What about proving that it's handled, not just sometimes but every time? What about the legions of programmers who "forget" to handle some exceptions? What about the dozens of aborted processes that I see every day while running one of the world's biggest storage services at a company famous for its coding interviews, that result from such sloppiness? Never saw so much garbage in the many projects I've worked on that didn't use exceptions.
Look, I'm not saying that return codes are all sunshine and roses. Neither are exceptions. I'm making a very specific point that exceptions make code harder to verify for either correctness or bounded performance, because somebody asked. Maybe you prefer exceptions and that's fine, but that's an advantage they don't have. These unfounded claims about non-exception code being so much slower or less maintainable (despite your own claim to write such code for high performance) add nothing to the conversation.
>We're talking about the typical programmer here, not the outliers.
You're relying on the typical programmer to handle all error codes, pass them all through every level of an application, and not miss any, yet you don't want to use exception handling and train them to use that? It's vastly less error prone.
>What about proving that it's handled, not just sometimes but every time?
Plenty of analysis tools do this, such as Coverity. It's been done for literally decades.
Which is easier? Handling error codes each and every time, or ensuring that there are some top-level exception catchers in important places, or simply one at the top, logging failures?
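The one-catcher-at-the-top version is roughly this much code (a sketch; run_application and the thrown error are hypothetical):

    #include <cstdio>
    #include <cstdlib>
    #include <stdexcept>

    void run_application() {
        // ... arbitrarily deep call tree; anything below may throw ...
        throw std::runtime_error("disk full");  // hypothetical failure
    }

    int main() {
        try {
            run_application();
            return EXIT_SUCCESS;
        } catch (const std::exception& e) {
            std::fprintf(stderr, "fatal: %s\n", e.what());  // log it...
        } catch (...) {
            std::fprintf(stderr, "fatal: unknown exception\n");
        }
        return EXIT_FAILURE;  // ...and exit
    }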
And in the previous section you were arguing about writing highly optimized code - this is not the domain of "typical programmers". Pick your goalposts.
>Yeah, it's a pain, but that's totally not the point.
If it's painful, it's less likely to get done. This is true for typical and advanced programmers. If it's painful, it's also more error prone.
>What about the legions of programmers who "forget" to handle some exceptions?
Simple to handle at a higher level: log and exit. Automated testing can easily catch them all for you, since you can have tools locate every throwable place in C++ and inject exceptions there.
Good luck automating tests for all possible error codes, which are not easily programmatically discoverable.
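One long-standing injection technique, for example, is to replace the global allocator and fail the Nth allocation, sweeping N across runs so every bad_alloc site in the code under test fires at least once. A minimal sketch, not a real test harness:

    #include <cstdio>
    #include <cstdlib>
    #include <new>
    #include <string>

    // Fail the Nth allocation; rerun with N = 0, 1, 2, ... to hit every
    // bad_alloc throw site in the code under test.
    static int g_fail_after = -1;  // -1 disables injection

    void* operator new(std::size_t n) {
        if (g_fail_after >= 0 && g_fail_after-- == 0) throw std::bad_alloc();
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept { std::free(p); }

    int main() {
        for (int n = 0; n < 3; ++n) {
            g_fail_after = n;
            try {
                std::string s(100, 'x');  // the "code under test"
                std::printf("run %d: ok\n", n);
            } catch (const std::bad_alloc&) {
                std::printf("run %d: injected failure handled\n", n);
            }
            g_fail_after = -1;
        }
    }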
You also didn't address that C++ is already full of places that throw exceptions, that more are added in each standard, and that all foreseeable future language proposals expect them to be available and used.
So - if you have to learn how to use them simply to use C++, why not teach yourself and those around you best practices for using them: RAII, nothrow swaps, the three guarantees, etc.? It will make for better, much more readable, more extensible and maintainable code.
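For reference, a minimal sketch of those idioms together - RAII ownership, a nothrow swap, and copy-and-swap for the strong guarantee - using a made-up Buffer class:

    #include <algorithm>
    #include <cstddef>
    #include <utility>

    class Buffer {
        std::size_t size_ = 0;
        int* data_ = nullptr;
    public:
        explicit Buffer(std::size_t n) : size_(n), data_(new int[n]{}) {}
        ~Buffer() { delete[] data_; }  // RAII: runs during unwinding too

        Buffer(const Buffer& other) : Buffer(other.size_) {
            std::copy(other.data_, other.data_ + size_, data_);
        }

        // nothrow swap: pointer/size exchanges only, can never fail
        friend void swap(Buffer& a, Buffer& b) noexcept {
            std::swap(a.size_, b.size_);
            std::swap(a.data_, b.data_);
        }

        // strong guarantee via copy-and-swap: if the copy throws,
        // *this is left untouched
        Buffer& operator=(Buffer other) {  // by-value copy may throw...
            swap(*this, other);            // ...but this swap cannot
            return *this;
        }
    };

    int main() {
        Buffer a(8), b(16);
        a = b;  // either fully succeeds or leaves `a` unchanged
    }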
>These unfounded claims about non-exception code being so much slower
So you claim that checking at every level of code whether something failed, only to pass it up higher, does not add code size (i.e., pushing hot code out of cache) or branches (i.e., pressure on the branch predictor) in a hot path? This is pretty easy to see. There are plenty of places where a rare, low-level failure needs to be passed through several layers of code to a section that can do something about it, and an exception for that case is demonstrably, provably the faster hot path.
If you really need me to code one up and show you, I can. It's trivial to check, and this is not a rare case. There's a reason modern compilers moved to zero-overhead (table-based) exception structures for exceptions that aren't thrown, and this is a prime example.
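Here's the shape of it (a sketch; do_work and the layers are hypothetical stand-ins). The non-throwing path through the layers carries no per-level checks, where the return-code version of the same chain would branch at every layer:

    #include <cstdio>
    #include <stdexcept>

    [[noreturn]] void fail() { throw std::runtime_error("rare failure"); }

    int do_work(int x) {                 // bottom of the stack
        if (x < 0) fail();               // rare; costs nothing when not taken
        return x * 2;
    }
    int layer2(int x) { return do_work(x) + 1; }  // no error plumbing
    int layer1(int x) { return layer2(x) + 1; }   // straight-line code

    int main() {
        try {
            std::printf("%d\n", layer1(21));      // hot path
        } catch (const std::exception& e) {
            std::printf("handled once, at the top: %s\n", e.what());
        }
    }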
Well, there are several kinds of allocation strategies, so while general-purpose malloc is only best-effort, with exceptions I currently have no real choice other than to turn them off.
I do a lot of stuff in embedded, and it would be nice to have something like this. I'm also experimenting with using C++ as a compiled scripting language for a game engine, and for that I would like the binaries to be really small.
Because of Docker, compiling freestanding binaries for any architecture is not such a barrier anymore.
Apps aren't the entire computing universe. There are probably dozens of processors in the room with you right now, in everything from your smartwatch to your monitor to your thermostat to multiple components within your laptop/phone, each running firmware that somebody had to write in a more constrained way than apps can get away with. It's the "dark matter" of computing, but it's not narrow at all.
My smartwatch runs a fat Java stack. My thermostat runs a full-fat Linux stack with a ton of services. My lightbulb runs a fucking HTTP server for the REST API that controls it.
None of this operates in any way remotely close to "malloc up front and never touch it again." Nor does any of it operate under significant constraints compared to the typical app.
There are definitely things that do that. But even in the world of embedded it's a subset of things that do that.
Heck, the most software-constrained thing I have is probably the controller in my monitor. But then again, that also runs an Altera Arria V GX FPGA with 768MB of DDR3 RAM, which also definitely classifies as a niche area with highly specialized demands.
I do some work in embedded myself but vehemently disagree with that. Only very small parts of embedded systems need zero-alloc policies. Hell, I know people successfully using new/delete on Arduinos and the like.
I taught a bit at uni, and you can't imagine the number of students who read stuff like the parent comment's "it's common to not use malloc" on the internet and then try to apply it as a mantra to their codebase, when the assignment is to make a freakin' snake game with SDL and write actually legible code. The immense majority of them will never encounter any realtime problem and don't need to think that this is a common thing. It is not!
This does a disservice to the whole profession. If you actually need to program real-time systems, you will have processes in place to ensure that things don't end up in a syscall, or malloc simply won't be available at all, in which case the problem does not exist.