Hacker News

There are two ways that exceptions create problems not present with other error-handling approaches.

One is that the effective behavior of a function can be changed with no change to the function itself because of an exception thrown "over its head" from something it calls to something that called it. This makes both human and machine analysis significantly more difficult.

The other, somewhat related, has to do with performance. It's easy to do a function-by-function analysis of runtime in error and non-error cases, and those analyses are readily composable. With exceptions, it's much harder to predict how much time will be spent unwinding through M functions, calling M*N destructors, etc. This is exacerbated by the fact that exception-oriented languages also tend to encourage idioms that involve many more possibly-expensive object creations/deletions (including the exception object itself). "Zero cost abstraction" is true much less often than people seem to think.

If you're concerned with ensuring that a given piece of code will never exceed a certain execution-time bound, your job becomes at least an order of magnitude harder with exceptions than with plain old return codes, even if the two seem equivalent at some higher conceptual level.
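To make the composability point concrete, here is a minimal sketch (all names invented for illustration) of why return-code paths lend themselves to per-function timing analysis: every error path is a visible branch whose cost is bounded locally, so the analyses of `step` and `pipeline` compose without knowing anything about unwinding machinery.

```cpp
#include <cassert>

// Illustrative only: with return codes, each function's worst-case
// error path is a compare-and-return visible at the call site.
enum class Err { Ok, Fail };

Err step(int x, int* out) {
    if (x < 0) return Err::Fail;  // error path: one branch, no unwinding
    *out = x * 2;
    return Err::Ok;
}

Err pipeline(int x, int* out) {
    int tmp = 0;
    if (step(x, &tmp) != Err::Ok) return Err::Fail;  // locally bounded cost
    return step(tmp, out);
}
```

The per-call overhead here is a compare and a branch, and the total worst case is just the sum over the call chain; there is no hidden table-driven unwind cost to account for.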



>This makes both human and machine analysis significantly more difficult.

Given that the C++ standard library throws exceptions in many places, and is only adding more in future versions, all of this needs to be checked anyway. So this argument is a red herring. Proper exception handling is not optional for robust C++ code, since anything below you can start throwing when someone makes feature changes or adopts new features that throw.

>your job becomes at least an order of magnitude harder with exceptions than with plain old return codes

No it doesn't. I routinely write high-performance C++ code, often using hand-crafted assembly where needed. You learn what can throw and what cannot, and you don't put exceptions in the hot path.

How often are you really writing heavily optimized code where anything exceptional can happen? I'd say never: no allocations, no files being pulled out from under you, no using resources you haven't properly set up before entering the hot path.
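A minimal sketch of that discipline, with invented names: everything that can throw (here, the allocation) runs during setup, and the hot loop itself is `noexcept` and allocation-free, so exceptions simply cannot occur inside it.

```cpp
#include <cstddef>
#include <vector>

// Setup phase: may throw (e.g. std::bad_alloc), so it runs before
// the hot path, never inside it.
std::vector<double> make_buffer(std::size_t n) {
    return std::vector<double>(n, 1.0);
}

// Hot path: no allocation, no I/O, nothing can throw, and noexcept
// documents (and enforces, via std::terminate) that guarantee.
double accumulate_hot(const std::vector<double>& buf) noexcept {
    double sum = 0.0;
    for (double v : buf) sum += v;
    return sum;
}
```

With this split, the hot loop's timing analysis is as simple as in a return-code design, because the set of possible control-flow paths through it is closed.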

>calling M*N destructors

What does this mean? Are your objects in an object soup of interdependencies? With proper system decomposition, statements like this would never make sense.

> harder with exceptions than with plain old return codes

Suppose you have functions calling each other 10-20 deep, across a library or two, and the bottom function in LibA returns an error code in the style of LibA. Somewhere up the stack, a function using LibA also touches LibB and your own code, all of which can error.

Now at the top, and along the entire path, you need to marshal error codes or translate them, adding lots more code in the hot path, all checking and passing, and hoping nobody makes a change somewhere, because then you get to revisit every one of those paths to ensure the new codes get transcribed.

The code is slower, has more branching, and is harder to maintain (thus adding bugs) than simply throwing when something fails at the bottom and handling it where appropriate.

So return codes in fact make code messier, more error-prone, and certainly larger, overflowing caches more often than simply using modern exceptions.
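A hedged side-by-side sketch of the two styles ("LibA", the status enum, and all function names are invented): in the return-code version, every intermediate layer carries check-and-translate code, while in the exception version the intermediate layer carries no error handling at all and the failure is dealt with once, at the top.

```cpp
#include <stdexcept>

// --- Return-code style: each layer must check and translate. ---
enum class liba_status { ok, io_error };

int mid_layer_rc(bool fail, liba_status* st) {
    *st = fail ? liba_status::io_error : liba_status::ok;
    if (*st != liba_status::ok) return -1;  // translate to this layer's convention
    return 0;
}

// --- Exception style: intermediate layers are pass-through. ---
int bottom_throwing(bool fail) {
    if (fail) throw std::runtime_error("LibA: io error");
    return 42;
}

int mid_layer_ex(bool fail) { return bottom_throwing(fail); }  // no extra code

int top_ex(bool fail) {
    try { return mid_layer_ex(fail); }
    catch (const std::exception&) { return -1; }  // handled once, at the top
}
```

In the exception version, adding a new failure mode at the bottom requires no edits to `mid_layer_ex`; in the return-code version, every layer's translation table is a maintenance point.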


> You learn what can throw, what cannot, and don't put exceptions in the hot path.

That puts you light-years ahead of most C++ programmers I've encountered.

> What does this mean? Are your objects in an object soup of interdependencies?

Not my code; other people's. We're talking about the typical programmer here, not the outliers.

> at the top, and along the entire path, you need to marshall error codes or change them....

Yeah, it's a pain, but that's totally not the point. Ease of verification and ease of later modification are orthogonal.

> The code is slower, has more branching, is harder to maintain (thus adding bugs), than

"Simply handling..." How cute. What about proving that it's handled, not just sometimes but every time? What about the legions of programmers who "forget" to handle some exceptions? What about the dozens of aborted processes that I see every day while running one of the world's biggest storage services at a company famous for its coding interviews, all resulting from such sloppiness? I never saw so much garbage in the many projects I've worked on that didn't use exceptions.

Look, I'm not saying that return codes are all sunshine and roses. Neither are exceptions. I'm making a very specific point that exceptions make code harder to verify for either correctness or bounded performance, because somebody asked. Maybe you prefer exceptions and that's fine, but ease of verification is an advantage they don't have. These unfounded claims about non-exception code being so much slower or less maintainable (despite your own claim to write such code for high performance) add nothing to the conversation.


>We're talking about the typical programmer here, not the outliers.

You're relying on the typical programmer to handle all error codes, pass them up through every level of an application, and not miss any, yet you don't want to use exception handling and train them in that instead? It's vastly less error-prone.

>What about proving that it's handled, not just sometimes but every time?

Plenty of analysis tools, such as Coverity, do this. It's been done for literally decades.

Which is easier? Handling error codes each and every time, or ensuring that there are some top level exception catchers in important places, or simply one at the top, logging failures?
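A minimal sketch of the "one catcher at the top" pattern, assuming a hypothetical `run_app()` entry point (here hard-wired to fail, purely for illustration): any exception that escapes the application is logged once and the process exits cleanly, instead of aborting with no diagnostics.

```cpp
#include <cstdio>
#include <cstdlib>
#include <exception>
#include <stdexcept>

// Stand-in for the whole application; any layer below may throw.
int run_app() {
    throw std::runtime_error("disk full");
}

int guarded_main() {
    try {
        return run_app();
    } catch (const std::exception& e) {
        std::fprintf(stderr, "fatal: %s\n", e.what());  // log the failure
        return EXIT_FAILURE;
    } catch (...) {
        std::fprintf(stderr, "fatal: unknown exception\n");
        return EXIT_FAILURE;
    }
}
```

The `catch (...)` fallback guarantees that even exception types not derived from `std::exception` are logged rather than terminating the process via the default handler.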

And in the previous section you were arguing about writing highly optimized code - this is not the domain of "typical programmers". Pick your goalposts.

>Yeah, it's a pain, but that's totally not the point.

If it's painful, it's less likely to get done. This is true for typical and advanced programmers. If it's painful, it's also more error prone.

>What about the legions of programmers who "forget" to handle some exceptions?

Simple to handle at a higher level: log and exit. Automated testing can easily catch them all for you, since tools can locate every throwing site in C++ and inject exceptions there.

Good luck automating tests for all possible error codes, which are not easily programmatically discoverable.

You also didn't address that C++ is already full of places that throw exceptions, that more are added with each standard, and that all foreseeable language proposals expect them to be available and used.

So, if you have to learn them simply to use C++, why not teach yourself and those around you best practices for using them: RAII, nothrow swaps, the three exception-safety guarantees, and so on? It will make for more readable, extensible, and maintainable code.
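For illustration, a small sketch of copy-and-swap with a nothrow swap, one of the practices mentioned (the `Config` class is invented): the copy may throw, but the `noexcept` swap that commits the change cannot, so assignment gets the strong guarantee and on failure the target object is left untouched.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

class Config {
    std::vector<int> values_;
public:
    explicit Config(std::vector<int> v) : values_(std::move(v)) {}

    // The commit step: swapping vectors never throws.
    void swap(Config& other) noexcept { values_.swap(other.values_); }

    // Pass by value: the (possibly throwing) copy happens before we
    // touch *this, so a failed copy leaves *this unchanged.
    Config& operator=(Config other) {
        swap(other);
        return *this;
    }

    std::size_t size() const { return values_.size(); }
};
```

The key property is the ordering: all throwing work finishes while constructing the parameter, and only nothrow operations modify the object being assigned to.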

>These unfounded claims about non-exception code being so much slower

So you claim that checking at every level whether something failed, only to pass it up higher, does not add code size (i.e., pushing hot code out of cache) or branches (i.e., taxing branch prediction) in a hot path? This is pretty easy to see. There are plenty of places where a rare low-level failure needs to be passed through several layers of code to a section that can do something about it, and an exception in that case gives a demonstrably, provably faster hot path.

If you really need me to code one up and show you, I can. It's trivial to check, and this is not a rare case. There's a reason modern compilers moved to zero-overhead exception structures for the not-taken path, and this is a prime example.



