> For example, though you can "acquire" locks, it's really just flipping a bit of state that already exists somewhere, more than it is granting you something new.
That's true of literally every resource acquisition, including malloc and fopen. It's all bits in memory.
And IMO, the defer model forces conceptual baggage of its own because of its strict definition of when it fires. You can always simply return the object and keep the resource alive into your calling function, pass the acquired resource to another thread, etc. Defer enforces much more code structure in order to guarantee cleanup (sketched below).
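A minimal C++ sketch of that structural cost, using a hypothetical `ScopeGuard` type as a stand-in for defer (`open_config` and the `armed` flag are made up for illustration): cleanup is welded to scope exit, so letting the resource escape means explicitly disarming it.

```cpp
#include <cstdio>

template <typename F>
struct ScopeGuard {          // hypothetical stand-in for `defer`
    F fn;
    bool armed = true;
    ~ScopeGuard() { if (armed) fn(); }
};
template <typename F> ScopeGuard(F) -> ScopeGuard<F>;

FILE* open_config(const char* path) {
    FILE* f = std::fopen(path, "r");
    if (!f) return nullptr;
    ScopeGuard guard{[&] { std::fclose(f); }};  // "defer fclose(f);"

    // ... any early-return validation here is covered by the guard ...

    guard.armed = false;  // disarm: ownership escapes to the caller
    return f;
}
```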
> That's true of literally every resource acquisition, including malloc and fopen. It's all bits in memory.
Hard disagree. Taking an action that causes a finite resource (fds, memory, ports, etc.) to be allocated and used up is fundamentally different from just flipping a bit in a resource that already exists. The lock doesn't come into being when you "acquire" it; it already existed. Nothing was allocated when you acquired it. There's no new resource. There's no "oops, I don't have enough {RAM/fd/whatever} to lock the lock for you" if the lock already exists.
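A small sketch of that asymmetry (the filename and allocation size are arbitrary): locking a mutex that already exists allocates nothing, while malloc and fopen each consume a finite resource and therefore have an exhaustion failure mode. On typical implementations std::mutex::lock doesn't allocate, though it can technically fail for other reasons.

```cpp
#include <cstdio>
#include <cstdlib>
#include <mutex>

std::mutex m;  // the lock's storage exists from this point on

void flip_existing_state() {
    m.lock();    // no allocation: transitions state that already exists
    m.unlock();  // ...and transitions it back
}

bool consume_finite_resources() {
    void* p = std::malloc(1 << 20);            // can fail: memory is finite
    if (!p) return false;                      // "oops, not enough RAM"
    FILE* f = std::fopen("scratch.tmp", "a");  // can fail: fds are finite
    if (!f) { std::free(p); return false; }
    std::fclose(f);
    std::free(p);
    return true;
}
```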
> And IMO, the defer model forces conceptual baggage of its own because of its strict definition of when it fires.
Are automatic storage and block scopes conceptual baggage too, then? I think that is only true if you reject the execution model that implies a stack. Even if C isn't a perfect model of what happens under the hood, it's a reasonably thin veneer over how your machine executes the code. Forcing an object model strictly adds new concepts that do not exist at the level of the bare CPU (or the abstract machine as defined in the C spec, for that matter). It's also not required for programming. In that respect it is superfluous, and if you don't want or need it, I think it's fair to call it conceptual baggage. Again, I don't need objects, and my CPU doesn't need objects; we just need a function call (or just a bare expression) to manipulate some state. There's no need to add to that.
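As a minimal sketch of that parallel (the names are made up, and `defer` is hypothetical since C doesn't have it yet): automatic storage already ties a lifetime to a block boundary, which is exactly the boundary defer-style cleanup reuses, and plain state manipulation needs nothing beyond a bare call.

```cpp
#include <cstdio>

int counter = 0;            // plain state, no object model required

void bump() { ++counter; }  // a bare function call manipulating state

int main() {
    {
        int scratch = 42;   // automatic storage: born at block entry
        bump();
        std::printf("%d %d\n", scratch, counter);
    }   // scratch's lifetime ends here; a hypothetical `defer`
        // registered in this block would fire at this same boundary
    return 0;
}
```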
> Taking an action that causes a finite resource (fds, memory, ports, etc.) to be allocated and used up is fundamentally different from just flipping a bit in a resource that already exists.
The lock itself is the finite resource. Only one consumer can lock it at a time. And there very much are systems where semaphores are kernel objects that can run out (though they tend to be older systems that we've thankfully moved away from, for the most part).
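For a concrete instance (POSIX-specific, and the semaphore name here is arbitrary): named semaphores are kernel objects, and sem_open is documented to fail with errors like EMFILE, ENFILE, or ENOSPC when the system has no more of them to hand out.

```cpp
#include <cstdio>
#include <fcntl.h>      // O_CREAT
#include <semaphore.h>  // sem_open, sem_close, sem_unlink (POSIX)

int main() {
    sem_t* s = sem_open("/demo_sem", O_CREAT, 0600, 1);
    if (s == SEM_FAILED) {
        std::perror("sem_open");  // e.g. out of fds or semaphore space
        return 1;
    }
    sem_close(s);
    sem_unlink("/demo_sem");
    return 0;
}
```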
> Are automatic storage and block scopes conceptual baggage too, then? I think that is only true if you reject the execution model that implies a stack.
RAII very much uses the stack model of functions and blocks at its core too; defer doesn't have a monopoly on that.
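A two-line illustration with the standard std::lock_guard: acquisition happens at block entry and release at block exit, the same stack-shaped boundary defer keys off of.

```cpp
#include <mutex>

std::mutex m;
int shared_value = 0;

void update() {
    {
        std::lock_guard<std::mutex> lock(m);  // acquired at block entry
        ++shared_value;
    }  // released here by the destructor, at block exit
}
```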