1. Every function now receives an additional parameter that is basically a pointer to a std::optional<exception_t> (from now on, the "exception holder"; exception_t can hold any value whatsoever).
2. "try" is translated into allocating a (new) exception holder on the stack and passing a pointer to it in all function calls inside the try block. "catch" is translated into checking this exception holder for an (appropriate) exception: if it holds one, handle it; if not, go on.
3. Throwing an exception is done by putting the constructed exception into the exception holder passed to you, and returning as usual.
4. Each function call is now followed by a check of whether the exception holder holds an exception: if it does, return immediately; if not, keep executing.
Example:
void C(std::optional<exception_t>* exc_holder) {
    Bar bar;
    exc_holder->emplace(0);  // "throw 0": construct the exception in the caller-provided
                             // holder, then return normally; bar's destructor runs as usual
}

void B(std::optional<exception_t>* exc_holder) {
    Foo foo;
    C(exc_holder);
    if (*exc_holder) { return; }  // exception pending: propagate by returning early
}

void A(std::optional<exception_t>* exc_holder) {
    std::optional<exception_t> new_exc_holder;  // the "try" block's own holder
    B(&new_exc_holder);
    if (new_exc_holder) {
        if (new_exc_holder.value().is<int&>()) {  // "catch (int& p)"
            int& p = new_exc_holder.value().as<int&>();
            // catch body...
        } else {
            // no matching handler: re-throw to our own caller
            *exc_holder = std::move(new_exc_holder);
            return;
        }
    }
}
So, it's basically Go's "if err != nil { return <default_value>, err }", but with slightly less copying of error values. Somehow it ends up with the same performance despite all of this constant branching to check for errors, and with better performance when re-throwing exceptions.
Is that an ABI break due to the extra parameter? I skimmed the paper, but it only seemed to talk about the standard table-based unwind in passing, with no mention of compatibility. Did I miss something?
If it is in fact an ABI break, how much of an obstacle would that be to getting this through the committee? I was also under the impression that up-to-date compilers/runtimes are not something one can take for granted in the embedded world, so recompiling everything is a shaky proposition.
> If it is in fact an ABI break, how much of an obstacle would that be to getting this through the committee?
In my understanding:
Historically, the committee basically said "if your paper breaks the ABI, it won't be considered, don't bother." In Prague last month, the committee voted that C++23 would not break ABI, though that decision is not final. However, they did say that authors should bring papers that would break the ABI, and they will be considered in a general sense.
That's fair. I suppose the next question would be what it would take to get the committee to stop kicking the can down the road and actually vote to break ABI, since "considered" is a fairly weak promise. Perhaps they're waiting to see what people come up with?
I was not in attendance, but from what I read on /r/cpp, at least some people thought that the discussion was brought up in sort of a suboptimal way, and that there was actually a lot of support in the room for breaking the ABI. Regardless, I think that what was decided is clearly in this direction.
I'm not familiar enough with the darker corners of C++ implementations to say, but I'd guess not. From what the paper describes it seems table-based exception handling is deeply ingrained into the Itanium ABI so any significant change would be an ABI break, which would make said change more difficult to spread.
I am not sure if all implementers made the same choices, but for example I believe Microsoft concluded that table-based EH is always better and made not only their IA64 ABI but also their amd64 ABI use it. They keep 32-bit x86 at an older implementation for compatibility reasons, but any ABI introduced later gets the new thing.
Hmm, I missed the C++ism for this term. Googling around:
> Although it was initially developed for the Itanium architecture, it is not platform-specific and can be layered portably on top of an arbitrary C ABI.
Changing the handling of existing exceptions, no. But adding a new kind of exception is fine. That's what the herbceptions proposal did. However, I think they are in fact proposing to change the ABI here.
It is ABI-breaking, but there is a way out. Instead of replacing the existing exceptions, this mechanism could be used for checked exceptions only, which would necessarily require annotating functions, so ABI breaks there are to be expected anyway.
> Somehow, it ends up with the same performance despite all of this constant branching error-checking and better performance when re-throwing exceptions.
SJLJ exceptions were also deterministic and slow.
The authors recorded a 2-4% slowdown in the case where exceptions weren't thrown (relative to zero-cost DWARF EH), and a 1-2% slowdown in the case where exceptions were thrown (again, relative to DWARF EH).
a) apparently branch prediction is good enough that all this constant checking for exceptional returns slows execution down by only a couple of percent;
b) apparently exception propagation and re-throwing is way, way faster than in the standard implementation;
c) dropping the unwind tables makes the code 3 times smaller on ARM and 5 times smaller on x86-64. That's pretty huge.
So, all in all, I'd say that if those figures actually hold, this approach is pretty solid. It's just surprising, given how much complaining there is about how Go's idiom of "if err != nil { return nil, err }" or Rust's "let v = some_fun()?" kills performance through branch mis-prediction and hot-path code bloat. So maybe those figures don't actually hold?
> given how much complaining there is about how Go's idiom of "if err != nil { return nil, err }", or Rust's "let v = some_fun()?" kills performance because of branch mis-prediction and hot path code bloat
Was that the main complaint? I thought Go's thing was more about being noisy boilerplate and a bug magnet rather than a performance issue. Not sure about Rust, either, since IIRC the ? operator is sugar for the try! macro, which itself was just sugar for bubbling up the Err variant of a Result.
For "standard" applications, sure, but the paper mentions that for embedded uses code size and determinism can be more important than pure execution speed, and perhaps a few percent slowdown would be considered acceptable for embedded development if it allows the use of exceptions.
These seem to be better exceptions than what is already there, and fully compliant as far as I can tell. Someone just had to go out there and do it. From doing my own benchmarks on exceptions, I already know you should basically never use them: the bloat and unexplained slowdowns in real programs are just a headache you don't want.
In the end, I still use exceptions, so this gives me hope. I like C++ exceptions as a feature, but I'm not happy about how it's implemented right now.
In your opinion, what are situations/examples that justify the use of exceptions?
I haven't used a language with exceptions in three years (i.e., since I switched to Go), and I haven't yet faced a situation where I felt a need for them.
Also, I started to learn C++ as a hobby a few months ago, and I have to say that I still don't understand when I should use exceptions or not, things seem fine so far by just returning some error values.
Exceptions are ideal for happy path programming. You keep state in the stack and build your software so things get thrown away. For some types of programming this is not that important but for OLTP it is very handy.
Basically, the approach is: handle errors if you know what to do with them; otherwise, leave the error for something else to deal with.
In erlang, where you have one process per transaction, and nothing handles the error, the process crashes.
In C++, you could just have an exception handler around a transaction, and then when an exception reaches that point you throw away the transaction, but log that it failed, you could have exception handlers below this to try and do rollbacks etc.
This is overly simplified but it gives a basic idea.
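A minimal sketch of that shape, using hypothetical Transaction and process_order names (not from this thread or any particular library): the happy path simply throws on failure, RAII rolls the transaction back, and a single catch at the transaction boundary logs the failure and moves on.

#include <exception>
#include <iostream>

// Hypothetical transaction guard: rolls back in its destructor unless committed.
struct Transaction {
    bool committed = false;
    void commit() { committed = true; }
    ~Transaction() { if (!committed) { /* roll back */ } }
};

void process_order() {
    Transaction tx;
    // Happy-path business logic; any step below simply throws on failure:
    // validate(order); reserve_stock(order); charge_card(order);
    tx.commit();
}

void handle_request() {
    try {
        process_order();
    } catch (const std::exception& e) {
        // ~Transaction has already rolled back whatever was in flight;
        // throw the work away, log the failure, and move on to the next request.
        std::cerr << "transaction failed: " << e.what() << '\n';
    }
}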
In my experience, the amount of interesting logic that "something else" does is vanishingly small. An arbitrary real-world example probably just logs or returns a string that is opaque and mostly useless to any potential error recovery mechanism.
In theory one could throw a FileExistsException with semantic and accessible data like filePath, but as things get this specific, they practically become part of the API. And a leaky part at that. The top-level entities that need to handle the exception need to know, somehow, that (1) encapsulated implementation details may throw something specific and (2) what to do with the thrown events.
So, practically, programs tend to catch std::exception, and pass the opaque error string on to some form of external I/O system (logs) with some minimally informative error status code (ERROR, 5XX, -1, false, null).
It is interesting to perform transaction failure operations, but there is no std::transaction_aborted, std::connection_dropped, or std::resource_busy. More realistically, std::exception is caught, which could just as easily be std::bad_alloc or std::domain_error. What do you do with a domain_error? I don't know either.
At any rate, the most interesting feature for transactional logic is RAII semantics, and you don't need to use exceptions to leverage those.
OLTP in this context is indeed online transaction processing.
This general approach of «let it "crash"[1]» or happy-path programming is what I used when I wrote OLTP systems for mobile networks which were performing financial transactions. It would allow you to write fairly complex business logic while relying on exception handling to recover to a stable state.
[1]: Crashing in Erlang terms is not at all what people generally mean with crashing. It is a very unfortunate word because in part many of the problems in node.js comes from joyent not understanding what "let it crash" meant - it did not mean let the unix process crash. Erlang does one process per transaction and they are incredibly lightweight. Some details from joyent can be found here: https://www.joyent.com/node-js/production/design/errors
Well, in the case of C++, since the standard library uses exceptions everywhere not using them is a pretty big "cut". Basically either you don't use the standard library or you write all your code in a completely different paradigm.
For example, since constructors can't return error values reasonably, you would have to make them all private and use static factory functions, or similar workarounds. It's certainly possible, but much less pleasant to work with.
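For illustration, one common shape of that workaround looks roughly like this (Config, make, and the empty-path check are hypothetical stand-ins, not anything from the thread):

#include <optional>
#include <string>
#include <utility>

// Construction can fail, so the constructor is private and a static factory
// reports failure through the return value instead of throwing.
class Config {
public:
    static std::optional<Config> make(const std::string& path) {
        if (path.empty())            // stand-in for a real "could not open file" check
            return std::nullopt;     // failure is reported as an empty optional
        return Config(path);
    }
private:
    explicit Config(std::string text) : text_(std::move(text)) {}
    std::string text_;
};

// Usage:
//   if (auto cfg = Config::make("app.conf")) { /* use *cfg */ }
//   else                                     { /* handle the error */ }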
The constructor example is an interesting one. A Go constructor is just a function such as `func NewMyObject(someParam string) (*MyObject, error)`, it is possible and not that uncommon to return an error in addition to the instantiated object, for example if there is some check on the parameter (an example I have in mind is http.NewRequest https://golang.org/pkg/net/http/#NewRequest).
How do people deal with constructor errors in the C++ world? I have never seen C++ code where the constructor call is wrapped in an exception handler. I would guess that's done by creating an additional function "make_my_object"? But then what is its signature? Or is an exception handler used in that case?
If I create several objects and then one of them fails, the ones that succeeded get destructed in reverse order, and then my callers get the exception that killed me. I don't need a handler unless I know what to do about it or I'm doing something weird that doesn't work with RAII. And I can't even try to use an object that didn't properly initialize.
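A small illustrative example of that behaviour (Session, FileHandle, and Buffer are made-up names):

#include <iostream>
#include <stdexcept>

struct FileHandle { ~FileHandle() { /* closed automatically */ } };
struct Buffer     { ~Buffer()     { /* freed automatically  */ } };

struct Session {
    FileHandle file;   // constructed first, destroyed last
    Buffer     buf;    // constructed second, destroyed first
    explicit Session(bool fail) {
        if (fail)      // stand-in for a real initialization failure
            throw std::runtime_error("session init failed");
    }
};

int main() {
    try {
        Session s(true);
        // never reached: the fully constructed members (buf, then file) are
        // destroyed automatically before the exception leaves the constructor
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';   // the caller sees the exception that killed it
    }
}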
> I have never seen C++ code where the constructor call is wrapped in an exception handler.
The point of exceptions is not to wrap every call that can throw but instead to let them bubble up to have them be handled where it makes most sense - a large amount of software only needs to show an error dialog to the user and asks the user to save / exit, or log a message somewhere (so a single `catch` at the top of your event loop).
Only when you know for a fact that a subsystem can actually handle a specific case of error more intelligently, then you add exception handling there.
>How do people deal with constructor errors in the C++ world? I have never seen C++ code where the constructor call is wrapped in an exception handler.
There are about 200 different ways to be honest, stemming from the unfortunate situation that the most common advice, "use exceptions in exceptional situations", is actively harmful.
Let's take a "Connection" class for example. You'd initialize it with an IpEndpoint, or something to that effect. Some people think "well I'm writing a download script so if I can't connect the script can't continue, so that is exceptional" - so they throw an exception in the constructor and catch it somewhere in main. Other people are writing something more complicated and they think "being unable to establish a connection is normal, I'll just make a factory function for this and wrap the Connection constructor". Which leads to completely different paradigms in the same situation but with different environments - and those two decisions are both good, there are tons of decisions that aren't!
This is compounded by the fact that the C++ standard library only uses 2 kinds of resources: Memory and Files. And they use different kinds of error handling. Well, memory allocation always throws (at least it's consistent here), but the iostream error handling is, uh, interesting? Streams don't throw by default, however you can enable them to throw at runtime, but they also store their error state inside. It's a mess really.
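For example, the two iostream modes mentioned above look roughly like this ("data.txt" is just a placeholder file name):

#include <fstream>
#include <iostream>

int main() {
    // Default mode: the stream does not throw, it just records its error state.
    std::ifstream in("data.txt");
    if (!in) {
        std::cerr << "could not open data.txt\n";
    }

    // Opt-in mode: ask the stream to throw on failure instead.
    std::ifstream in2;
    in2.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        in2.open("data.txt");   // now a failed open throws std::ios_base::failure
    } catch (const std::ios_base::failure& e) {
        std::cerr << "open failed: " << e.what() << '\n';
    }
}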
Oh, and also std::logic_error exists and gets thrown by standard library functions, but a lot of advice around exceptions says that you're not supposed to use exceptions for programming errors; that should be handled by assertions.
And then there are of course people who just disable exceptions. Some disable exceptions but still use the standard library and just kind of hope that everything will still work?
So yeah, it's a mess. That's partly why proposals like this would help - if you can make the actual throwing of exceptions cheap, the advice of "use exceptions for exceptional situations" goes away and instead becomes "use exceptions for every kind of error handling". Which would be much more consistent. And people wouldn't have an excuse to disable exceptions anymore, at least in the long run.
> And then there are of course people who just disable exceptions. Some disable exceptions but still use the standard library and just kind of hope that everything will still work?
No, there are features in most of the standard library that allow you to confirm that an object is in a state where it won't throw an exception, and you use those. For example, with std::map you can check the return of find() against end() to determine whether you can safely access the second element of the pair. So you would do that before you access it.
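For instance, a lookup along those lines might look like this (the function name and the -1 "not found" sentinel are just illustrative choices):

#include <map>
#include <string>

int lookup(const std::map<std::string, int>& m, const std::string& key) {
    auto it = m.find(key);   // query first instead of using at(), which throws
    if (it == m.end())
        return -1;           // caller-defined "not found" value
    return it->second;       // safe: we know the element exists
}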
And you would generally write your own classes to either crash the process quickly or fail gracefully on construction. Basically you have to manage the shutdown of your process when you encounter exceptional situations in the constructor.
You can also count on any "exception" that is thrown with no-exceptions to result in an immediate crash.
> Some disable exceptions but still use the standard library and just kind of hope that everything will still work?
It's not like the errors are just ignored then. The program just aborts instead of unwinding stacks. And all the APIs you care to use have defensive ways to use them. For instance, you can check if a vector is empty before accessing element zero.
In my (unrequested) opinion, exceptions are useful when there are errors you don’t really want to handle during normal execution. Things like malloc failing, or open files suddenly disappearing out from under you (e.g. someone yanked a hard drive). They are errors but trying to handle every little bit of how a computer can go wrong is not really a feasible task, although you might be inclined to handle some.
I agree with your two examples (malloc failure and hard drive issue).
> They are errors but trying to handle every little bit of how a computer can go wrong is not really a feasible task, although you might be inclined to handle some.
Though, using error values, it's not that different.
1. check if the error is something that you care about
2. if not, check if it is something that you should pass back to the caller (in general that's an assert "if not null/empty"), if it is then return it
3. otherwise continue what you're doing in the current function
You get the same result without needing exceptions: you only handle what you care about in the current scope, or let the chain of callers decide what they want to do with the error (a rough C++ sketch of this follows below).
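A rough C++ rendering of those three steps, with a hypothetical Error alias and stub functions (none of this is from the thread):

#include <optional>
#include <string>

using Error = std::string;                                 // hypothetical error type
std::optional<Error> do_step() { return std::nullopt; }    // stub: always succeeds
void use_default() {}                                      // stub fallback

std::optional<Error> do_work() {
    if (auto err = do_step()) {
        if (*err == "not_found") {
            use_default();   // 1. an error this function cares about: handle it here
        } else {
            return err;      // 2. not ours to handle: pass it back to the caller
        }
    }
    // 3. otherwise, continue with the rest of this function
    return std::nullopt;     // success
}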
It's too easy to forget to check or propagate an error value (in most languages) though. Exceptions are nice because you can't forget - you either handle it somewhere in the call stack or the process is killed.
A particular task may not be recoverable, but the process state can be. Photoshop shouldn't abort just because you pulled the thumb drive you were reading/writing from. A concurrent network server with 9999 connected clients shouldn't abort because malloc failed initializing connection state for the 10000th.[1] This isn't just a QoI issue, it's also a security issue.
[1] Many types of network servers, for example a multimedia streaming server or a SOCKS proxy, only need to allocate dynamic memory during the early phase of the connection. After that the client can be served indefinitely. If you're writing a library, best practice is to assume your caller can handle malloc failure; if not they can choose to abort.
It depends on the context. Just because you can't continue writing to a file that was open, doesn't mean the program should crash.
For example, in a user-facing desktop application, maybe you just catch the exception in a high-level place, log the error, report it to the user, and explain that the action they just attempted failed. They can then try again, without having to restart the entire application.
Because only the caller of a function gets to decide if an error is recoverable or not. Also note that this reasoning applies to all kinds of error handling, not just exceptions.
As a general rule, use them when for truly exceptional situations: Running out of memory, accessing something out of bounds - things that you are checking for anyway, but you don't or don't want to use a return-value-based error handling system for.
In modern C++ this is already the case. For example, every container that implements an at() function will throw an exception if the (unsigned) index is >= size.
If you look at the assembly of a simple function only accessing a std:array using the [] operator, you will see that it's quite small. If you access it using at() you might see that the core function remains the same, except it now has a bounds-checking test which jumps to a huge postamble. The postamble will throw the exception. Since the CPU likes to execute instructions in sequence, we say that the happy path (which is indexing correctly) is now the fast path.
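A minimal pair of functions showing the difference described above (the exact code generated of course depends on the compiler and flags):

#include <array>
#include <cstddef>

int unchecked(const std::array<int, 4>& a, std::size_t i) {
    return a[i];     // no bounds check: small, straight-line code
}

int checked(const std::array<int, 4>& a, std::size_t i) {
    return a.at(i);  // bounds check: an out-of-range index throws std::out_of_range,
                     // typically via a jump to out-of-line "cold" code
}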
Why use exceptions for unrecoverable errors? The only reason would be to do proper stack unwinding, and if the proper response to an unrecoverable error is to terminate the process, I don't see the point of cleanup.
you most certainly never had to wrestle with operating systems keeping sockets open for a few minutes after a process was killed then. pleaaaase do unwind properly and release as much resources as you can.
As a Rust fan boy where the Error monad is used instead of exceptions, there are times where exceptions are nicer. One example is when using `map` and other functions on iterators. With exceptions, error reporting is out-of-bound. Without them, things get messier, see https://doc.rust-lang.org/stable/rust-by-example/error/iter_...
I've got some ideas for siphoning off the errors to make them out of bound. Will need to experiment to see how ergonomic it is.
With the more common modern compilers (Microsoft, GCC, LLVM, ICC, many others) there is zero runtime overhead to programming with exceptions unless they are thrown. This is possible because the unwind information is stored externally to the function code. There is still an additional overhead in terms of space, since the .eh_frame (or equivalent) sections need to loaded and possibly relocated by the linkloader at program startup, although that's minimal work if addresses are in self-relative tables like in the ARM EABI.
The alternative to having the exception-handling code out of line is, of course, handling all exceptional conditions in line, which not only makes the code harder to read but makes the generated object code bigger and, if it fails to fit in a cache line or causes unwholesomely large numbers of branch evaluations (see spectre and meltdown) results in slower or more insecure code. The real answer to C++ exceptions is the classic C-style programming where all errors are just ignored.
Of course, no amount of reasoning or hard data survives in the face of religious belief, and 80% of programmers out there are part of one cargo cult or another, so carry on with what you were doing. It's probably good enough.
I've had major regressions in performance when a thrown exception was added.
We narrowed it down to the fact that the compiler wasn't free to A) inline the function, B) reorder internal operations in the function which came before vs. after the exception being thrown.
Yes, exceptions affect optimizations. The crux is that anything that might throw is a barrier to any other statement with observable side effects before or after it. Neither can be moved to the other side of the potential thrower. However, explicit error handling adds a branch and explicit return instead of a potentially throwing statement, so if the compiler is rightfully pessimistic, no performance is lost. On the other hand, you can make the compiler ignore the potential for exceptions by declaring functions and methods nothrow. This will make the compiler ignore all considerations for potential exceptions in callers and lead to better optimizations in some cases. But sticking nothrow onto things is hard because when a nothrow function does happen to throw, you're in UB land.
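For reference, the C++ spelling of that annotation is noexcept; a trivial example (sum is just an illustrative function, not anything from the thread):

#include <vector>

// noexcept promises callers (and the optimizer) that no exception escapes this
// function, so the compiler can drop the exceptional paths around calls to it.
int sum(const std::vector<int>& v) noexcept {
    int total = 0;
    for (int x : v)
        total += x;   // nothing here can throw
    return total;
}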
In the current version of the standard on the GitHub repo [0], Section 14.5 Paragraph 5 says:
> Whenever an exception is thrown and the search for a handler (14.4) encounters the outermost block of a function with a non-throwing exception specification, the function std::terminate is called (14.6.1).
I don't know when precisely noexcept was changed to call std::terminate, but I did find N3103 (included in the 2010-08 post-Rapperswil mailing [1]), which argued that the standard should require calling std::terminate to avoid security issues.
False, exceptions can affect codegen even if they are never thrown (including cases where they cannot possibly be thrown, e.g. no throw statement is even present).
> classic C-style programming where all errors are just ignored
That's a mischaracterization of most C code written by professionals, because (with appropriate compiler flags) return codes have to be explicitly ignored. Contrast with languages where most functions don't even have a return code, and the compiler/interpreter doesn't complain about failing to catch possible exceptions. I see this a lot in Python code (which I generally like and have liked since 1.5 BTW), a bit less so in C++, etc. Even "elite" programmers in those languages tend to be sloppy that way, which I rarely see in C programmers with more than a couple of years' experience.
> That's a mischaracterization of most C code written by professionals
I've been a professional programmer for 40 years, much of that in C. I've seen a lot of code, some of it in production in critical systems for many years. It is not a mischaracterization, it is a description of the state of the art.
I'm sorry you've worked on so much crappy code. I have 30+ years' mostly-C experience myself, I've seen a lot of code that doesn't check or handle errors as well as it should, but code that routinely ignores errors (like e.g. most Python code) is a distinct minority. I wouldn't work on code that was "in production in critical systems" with that misfeature for very long before seeking a job where I could work with actual professionals.
Python's default (stop and raise the exception in the caller) is almost always what you want. A stack of ten function calls should not have ten try blocks; that's a waste of effort and very hard to read.
> Python's default (stop and raise the exception in the caller)
No, Python's default behavior is an uncaught exception causing program termination with a stack trace, which is not generally what anyone wants. You have to add code to get non-default behavior.
> A stack of ten function calls should not have ten try blocks
While I agree with that, it's kind of missing the point. Exceptions can be used well. So can return codes. The problem is that when letting an error/exception be "someone else's problem" is the default, that tends to be what everyone does. It becomes nobody's problem, except the user who's left staring at an inscrutable program-termination message. It's the Volunteers' Dilemma in code.
A good error-handling paradigm would require that errors be explicitly handled, passed on, or suppressed. Exceptions let them be invisible. C-style error returns are a bit cumbersome, but still better for correctness. My favorite approach right now is Zig's, based on error returns but with extra features (e.g. defer and try) to make common idioms less cumbersome. I've heard Rust is similar, but haven't really looked into it.
I haven't measured the exact effect of this, but - even though the promise is that if nothing throws, nothing slows - having an exceptional path through the code will prevent some compiler optimizations. I have seen this first hand, but I also remember Walter Bright mentioning it either here or elsewhere.
If this is definitely the case, maybe someone could chime in on why exactly (I've never fiddled with a backend in the right places to deal with exceptions).
By default, every function call--including external function calls--can cause an exception to be thrown. Any unresolved function call now causes an extra edge in the control-flow graph that goes to what is effectively a return in the function. This extra edge is also harder to manipulate for normal CFG optimizations (e.g., splitting critical edges or duplicating nodes).
Yes, I also was pretty oblivious to this until Walter Bright mentioned it in the D forums a few months back. I went back and did some simple tests with Compiler Explorer and, sure enough, I got some extra overhead in the generated code with exceptions enabled as soon as the compiler could not prove the absence of exceptions.
SJLJ have overhead (but no one really uses that anymore).
The vast majority of modern envs (which boil down to DWARF and MSVC) only have overhead when thrown.
Theoretically true, but colleagues of mine have done benchmarks showing MSVC has a slight overhead with exceptions enabled, even if none are thrown. Worth benchmarking yourself if you need to be sure.
Is that recorded somewhere? The benchmarks I've done have been pretty all over the place and seem to be around jiggling the I$ alignment. Sometimes helps, sometimes hurts.
I agree completely, it really does run everything from start to finish. Sometimes that's what you want, and sometimes you don't. I could break on main as an option, and then start the benchmarking after main. It would include teardown, unless you call _exit() at the end of main.
I'm afraid not sorry. Though it probably wasn't a great benchmark. They were only investigating whether enabling exceptions has a cost if not thrown on MSVC. So a single example where there was a penalty was enough to cast doubt on exception usage. Enough that nobody felt like sticking their neck out to fight the cause, anyway ;)
Exceptions create another execution path through your code base. This means additional complexity, and one that is not often analyzed. I also don't like how exceptions are so often just passed around. They allow you to say "let's make this somebody else's problem".
Having such huge mechanisms as optional programming language features is also problematic for collaboration on big project and for using third-party code that made different decision. I prefer when language designer is very opinionated on such decisions, and then the combination of all decisions either proves to be clumsy or gains wide following.
Why do people complain about exceptions being "non-deterministic"? Malloc is not deterministic either, so in theory it causes bloat and unexplained slowdowns everywhere.
There are two ways that exceptions create problems not present with other error-handling approaches.
One is that the effective behavior of a function can be changed with no change to the function itself because of an exception thrown "over its head" from something it calls to something that called it. This makes both human and machine analysis significantly more difficult.
The other, somewhat related, has to do with performance. It's easy to do a function-by-function analysis of runtime in error and non-error cases, and those analyses are readily composable. With exceptions, it's much harder to predict how much time will be spent unwinding through M functions, calling M*N destructors, etc. This is exacerbated by the fact that exception-oriented languages also tend to encourage idioms that involve many more possibly-expensive object creations/deletions (including the exception object itself). "Zero cost abstraction" is true much less often than people seem to think.
If you're concerned with ensuring that a given piece of code will never exceed a certain execution-time bound, your job becomes at least an order of magnitude harder with exceptions than with plain old return codes, even if the two seem equivalent at some higher conceptual level.
>This makes both human and machine analysis significantly more difficult.
Given that the C++ standard library throws exceptions in many places, and is only adding more in future versions, all of this needs to be checked anyway. So this argument is a red herring. Proper exception handling is not optional for robust C++ code, since anything beneath you can start throwing when someone makes feature changes or uses new features that throw in the future.
>your job becomes at least an order of magnitude harder with exceptions than with plain old return codes
No it doesn't. I routinely write high performance C++ code, often using hand crafted assembly where needed. You learn what can throw, what cannot, and don't put exceptions in the hot path.
How often are you really writing crazy optimized code where anything exceptional can happen? I think never. No allocations, no files being pulled out from under you, no using resources you don't have properly set up before you enter the hot path.
>calling M*N destructors
What does this mean? Are your objects in an object soup of interdependencies? Use proper system decomposition and statements like this would never make sense.
> harder with exceptions than with plain old return codes
Suppose you have functions calling each other 10-20 deep, across a library or two, and the bottom function in LibA returns an error code in the style of LibA. Somewhere up the stack a function using LibA also touches LibB and your own stuff, all of which can error.
Now at the top, and along the entire path, you need to marshall error codes or change them, adding lots more code in the hot path, all checking and passing, and hoping someone doesn't make a change somewhere, because now you get to revisit every one of those paths to ensure new codes get transcribed....
The code is slower, has more branching, and is harder to maintain (thus adding bugs), than simply throwing when something fails at the bottom and handling it where appropriate.
So return codes in fact make code messier, more error-prone, and certainly larger, overflowing caches more often than simply using modern exceptions.
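As an illustration of that marshalling (LibAError, AppError, and the functions are all hypothetical), every layer ends up repeating a mapping plus a check-and-forward:

enum class LibAError { Ok, Timeout, BadHandle };
enum class AppError  { Ok, NetworkDown, Internal };

LibAError liba_fetch() { return LibAError::Timeout; }   // stub for LibA's call

AppError fetch_record() {                 // middle layer: translate LibA's codes
    switch (liba_fetch()) {
        case LibAError::Ok:      return AppError::Ok;
        case LibAError::Timeout: return AppError::NetworkDown;
        default:                 return AppError::Internal;
    }
}

AppError serve_request() {                // upper layer: check and forward
    AppError e = fetch_record();
    if (e != AppError::Ok)
        return e;
    // happy path continues here
    return AppError::Ok;
}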
> You learn what can throw, what cannot, and don't put exceptions in the hot path.
That puts you light-years ahead of most C++ programmers I've encountered.
> What does this mean? Are your objects in an object soup of interdependencies?
Not my code; other people's. We're talking about the typical programmer here, not the outliers.
> at the top, and along the entire path, you need to marshall error codes or change them....
Yeah, it's a pain, but that's totally not the point. Ease of verification and ease of later modification are orthogonal.
> The code is slower, has more branching, is harder to maintain (thus adding bugs), than
"Simply handling..." How cute. What about proving that it's handled, not just sometimes but every time? What about the legions of programmers who "forget" to handle some exceptions? What about the dozens of aborted processes that I see every day while running one of the world's biggest storage services at a company famous for its coding interviews, that result from such sloppiness? Never saw so much garbage in the many projects I've worked on that didn't use exceptions.
Look, I'm not saying that return codes are all sunshine and roses. Neither are exceptions. I'm making a very specific point that exceptions make code harder to verify for either correctness or bounded performance, because somebody asked. Maybe you prefer exceptions and that's fine, but that's an advantage they don't have. These unfounded claims about non-exception code being so much slower or less maintainable (despite your own claim to write such code for high performance) add nothing to the conversation.
>We're talking about the typical programmer here, not the outliers.
You're relying on the typical programmer to handle all error codes, pass them all through all the levels of an application, and not miss any, yet you don't want to use exception handling and train them to use that? It's vastly less error-prone.
>What about proving that it's handled, not just sometimes but every time?
Plenty of analysis tools do this such as Coverity. It's been done for literally decades.
Which is easier? Handling error codes each and every time, or ensuring that there are some top level exception catchers in important places, or simply one at the top, logging failures?
And in the previous section you were arguing about writing highly optimized code - this is not the domain of "typical programmers". Pick your goalposts.
>Yeah, it's a pain, but that's totally not the point.
If it's painful, it's less likely to get done. This is true for typical and advanced programmers. If it's painful, it's also more error prone.
>What about the legions of programmers who "forget" to handle some exceptions?
Simple to handle at a higher level: log and exit. Automated testing can easily catch them all for you, since you can have tools locate every place in C++ that can throw and inject exceptions there.
Good luck automated testing all possible error codes, which are not easily programmatically discoverable.
You also didn't address that C++ already is full of places that throw exceptions, and more are added in each standard, and all foreseeable future language proposals expect them to be available and used.
So - if you have to learn how to use them simply to use C++, why not teach yourself and those around you best practices on using them? RAII, nothrow swaps, the three guarantees, etc.? It will make better, much more readable, and better extensible and maintainable code.
>These unfounded claims about non-exception code being so much slower
So you claim that checking at every level of code whether something failed, only to pass it up higher, does not add code size (i.e., pushing code out of cache) or branches (i.e., having to deal with branch prediction) in a hot path? This is pretty easy to see. There are plenty of places where a low-level, rare failure needs to be passed through several layers of code to a section that can do something about it, and an exception for that case is demonstrably, provably a faster hot path.
If you really need me to code one up and show you, I can. It's trivial to check, and this is not a rare case. There's a reason modern compilers moved to zero-overhead exception structures for the not-thrown case, and this is a prime example.
Well, there are several kinds of allocation strategies, so while the general malloc does a best-effort, I have no real choice with exceptions at the moment, other than to turn them off.
I do a lot of stuff in embedded, and it would be nice to have something like this. I'm also experimenting with using C++ as a compiled scripting language for a game engine, and for that I would like the binaries to be really small.
Because of docker, compiling freestanding binaries for any architecture is not such a barrier anymore.
Apps aren't the entire computing universe. There are probably dozens of processors in the room with you right now, in everything from your smartwatch to your monitor to your thermostat to multiple components within your laptop/phone, each running firmware that somebody had to write in a more constrained way than apps can get away with. It's the "dark matter" of computing, but it's not narrow at all.
My smartwatch runs a fat Java stack. My thermostat runs full fat Linux stack with a ton of services. My lightbulb runs fucking HTTP server for its REST API that controls it.
None of this operates in any way remotely close to "malloc up front and never touch again." Nor do they operate with any sort of significant constraints vs. the typical app.
There are definitely things that do that. But even in the world of embedded it's a subset of things that do that.
Heck, the most software-constrained thing I have is probably the controller in my monitor. But then again, that also runs an Altera Arria V GX FPGA with 768MB of DDR3 RAM, which also definitely classifies as a niche area with highly specialized demands.
I do some work in embedded myself but vehemently disagree with that. Only very small parts of embedded systems need zero-alloc policies. Hell, I know people successfully using new / delete on Arduinos and the like.
I taught a bit at uni, and you can't imagine the number of students who read stuff like the parent comment's "it's common to not use malloc" on the internet and then try to apply it as a mantra to their codebase when the assignment is to make a freakin' snake game with SDL and write actually legible code - the immense majority of them will never ever encounter any real-time problem and don't need to think that this is an actually common thing - it is not!
This is doing a disservice to the whole profession. If you actually need to program real-time systems, you will have processes in place to ensure that things don't end up in a syscall - or you just won't have malloc available at all, in which case the problem does not exist.
This is still kinda bad. What you really want is multiple return addresses. This is like "double-barrelled continuations", but the two continuation closures share the same activation record (and inductively, the same stack).
This paper is truly interesting. People have been fighting over the performance cost of exceptions versus returning error codes for years, so the paper just designs an instruction-level mapping that generates code as if exceptions were returned error codes.
The approach is fairly simple and really not that clever, yet the outcome looks impressive enough. This surprises me, as it's just a mechanization of what people have been arguing about for years.
They propose a major language change: all functions would be noexcept by default, and only those explicitly marked with "throws" would be allowed to throw exceptions.
Interesting paper, but it's not gonna fly in practice.
This proposal is sort of based on herbceptions [1]:
> Sutter [19] has only very recently proposed re-using the existing function return mechanism in place of the traditional stack unwinding approaches, requiring no additional data or instructions to be stored, and little to no overhead in unwinding the stack. Furthermore, by removing stack unwinding’s runtime reliance on tables encoded in the program binary itself, the issue of time and spatial determinism is solved. As it is possible to determine the worst-case execution times for programs not using exceptions, it follows that exception implementations making use of the same return mechanism must also be deterministic in stack unwinding.
> Given these clear advantages, we have based our implementation on this design, with a core difference being the replacement of their use of registers with function parameters, allowing for much easier interoperability with C code, which can simply provide the parameter as necessary.
> A limitation with [19] is that they require all exceptions be of the same type, leaving much of the standard exception-handling functionality up to the user. Our novel approach includes a method of throwing and catching exceptions of arbitrary types (as with existing exception handling), without imposing any meaningful execution-time penalties when exceptions do not occur.
At least, it uses a comparable implementation strategy. Herbceptions used a different surface syntax, adding a new kind of exception and a new way to declare them.
I rather like that herbceptions are more restricted in type. Restricting the type means that a programmer catching an exception has more things they can do with it. It's currently impossible to sensibly handle exceptions of unknown type in C++. The change is an improvement, not a limitation.
The point about interoperation with C is interesting. How would that work? If you wanted to call a C++ function from C, would you have to kludge the prototype to add the exception pointer?
The herbceptions paper proposed trying to get the C ABI changed to support the same exception mechanism:
> 4.6.11 Wouldn’t it be good to coordinate this ABI extension with C (WG14)?
> Yes, and that is in progress. This paper proposes extending the C++ ABI with essentially an extended calling convention. Review feedback has pointed out that we have an even broader opportunity here to do something that helps C callers of C++ code, and helps C/C++ compatibility, if C were to pursue a compatible extension. One result would be the ability to throw exceptions from C++→C→C++ while being type-accurate (the exception’s type is preserved) and correct (C code can respond correctly because it understands it is an error, even if it may not understand the error’s type).