
Well, we are earnestly analyzing a silly little example program, but okay :). The equivalent C code wouldn't produce stack-allocated mutexes unless that's what the programmer wanted. E.g. the POSIX pthread functions don't care where your mutexes are allocated, since they are always passed by reference.


It's not the mutexes that'd be stack-allocated, but the list of things to call back at the end of the function. The locks list could be modified or freed by then, but something still has to hold the list of deferred unlocks.

The pthread_cleanup_push/pthread_cleanup_pop machinery presumably keeps its own heap-allocated vector, backed by malloc or something. But the C language itself can't heap-allocate willy-nilly, so a built-in defer's list would live on the stack. And the stack is tiny compared to how many iterations a loop can run. Hence stack overflow.


C libraries -- including the runtime implementations of features like this -- can heap-allocate just fine. They just need to return pointers to the heap-allocated values on the stack, either directly or indirectly (e.g. buried in a returned struct). As always in C, the burden of freeing the memory is on the caller: no problem.

In one possible implementation, this loopy mutex example could have a tiny stack footprint: a single pointer to the shared cleanup() function, and a single pointer to the head of a (heap-allocated) linked list of pointers to mutexes (i.e., the function arguments). And the function pointer wouldn't require any allocation at all, since we can point statically at the function definition here. So we're down to a single word of stack allocation.


Who's going to construct the linked list, and where does it live? That's what the parent comment is pointing out.

In the general case I see no alternative to either the compiler generating one alloca() per defer or heap-allocating the defer callbacks. Both are terrible solutions for C: alloca can overflow, while a heap allocation can fail, and defer has no way to report that failure. Besides, C programmers just won't use the feature if it requires allocation out of performance concerns. Block-scoped defer is the only reasonable semantics.


Same question in return: who's going to alloca() or heap-allocate the defer callbacks? How is that substantively different from maintaining a linked list? As soon as compiler support is on the table -- i.e. we're not limited to using some too-clever cpp macrology and a support library -- then virtually any implementation is possible. There's obviously more than one way to do it.

> C programmers just won't use the feature if it requires allocation out of performance concerns.

I agree that many C programmers wouldn't touch the feature for performance reasons. But let's not pretend that every C program is a video driver, an AAA game or a web engine. Many, many large C programs would benefit immensely from `defer` semantics -- otherwise, why would the GCC feature exist -- and they are performance-tolerant enough that a little heap allocation would be a reasonable tradeoff for increased safety.

But I'm not really defending `defer` in the first place...

> Block-scoped defer is the only reasonable semantics.

I agree with you completely. :) I was never defending function-scoped `defer`, but answering claims about the necessity of stack allocation. There are possible defer implementations that wouldn't blow the stack: that's my only point.


> But let's not pretend that every C program is a video driver, an AAA game or a web engine.

While true, the culture in C and C++ circles pretends otherwise, which is why those languages are unsafe by default, with safety as an opt-in.

The sky would fall if we lost that 1 µs.


Let's hope that the sky doesn't fall. :)

It's healthy for culture-conscious programmers to reflect, now and then, on just how small a segment they represent. Cultured programming is fine, but it's like opera: the ordinary programmer recognizes a few of the tunes, but they don't sing along. It's hard to appreciate just how much uncivilized business code is out there when nobody is getting HN likes for keeping that 1980s ERP running.


> Besides, C programmers just won't use the feature if it requires allocation out of performance concerns.

Never mind embedded platforms where a heap might be unavailable, or so scarce that its use beyond early initialization is strictly verboten. Or interrupt handlers, where you simply can't call an allocator. Block-scoped defer could still be useful on such systems (e.g. with locks).


Of course. And if you were writing your embedded system in C++, you'd avoid exception handling and other non-zero-cost features. That doesn't mean those features aren't useful in other contexts. I submit that the world of C and C++ programming is vastly larger and more diverse than the world of interrupt handlers and embedded systems.

...But as I pointed out in a sibling comment, I was never defending function-scoped defer. I agree with you. I was pointing out that the implementation of such a feature wouldn't require excessive stack allocation.


defer, as proposed here, isn't a library feature, though; it's part of the core language. I'd like to be able to use it, but if it can ever do a malloc (which is horrifically slow compared to not allocating at all), it's just infeasible.

But I looked through the spec again, and it actually just says that defer outside the top level is implementation-defined, so this is irrelevant anyway.


I admit that I didn't read the actual article. :) I was using "library" in a broad sense, just to mean the runtime code that you didn't have to write yourself.


C doesn't exactly have "runtime code that you didn't have to write yourself" though. There's libc, but you can easily disable it, and having any form of defer be unavailable then is just bad. Everything else comes from your code, the headers it includes (which don't contain function definitions), and statically linked things.


Sure. And just like libc, you could easily disable (by not using) a theoretical libc_defer that provided defer semantics. Kind of like libm, you use it when you need it.

This has been a fun conversation, and I really enjoyed chatting with you about this. :) Take care.




