Yes, it's suboptimal to have a server that receives a request with a 32 bit length field indicating 3 gigs of incoming data and then bombs out trying to malloc 3 gigs.
I'm saying: what's an example of a server that reliably, for every allocation of every piece of metadata, every strdup, every hash table entry, every connection object, &c &c, has a recovery regime so that it doesn't have to fail ever when malloc does?
One way you could find such a program would be to compile a candidate and preload a malloc that randomly (1 in every 100 calls per allocation size, for instance) returns NULL. See if the program (a) continues to run and (b) passes some simple unit test suite.
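A minimal sketch of such a preloaded malloc, assuming a flat 1-in-100 failure rate rather than a per-size one, and glossing over the recursion hazard of looking up the real malloc with dlsym():

    /* faultmalloc.c
     * build: gcc -shared -fPIC -o faultmalloc.so faultmalloc.c -ldl
     * run:   LD_PRELOAD=./faultmalloc.so ./candidate-server
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdlib.h>

    static void *(*real_malloc)(size_t);

    void *malloc(size_t size)
    {
        if (!real_malloc)   /* NB: dlsym() itself may allocate; a real shim
                               needs a static-buffer fallback here */
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

        if (rand() % 100 == 0)      /* fail roughly 1 call in 100 */
            return NULL;

        return real_malloc(size);
    }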
My contention is:
(a) Those programs will be hard to find, because most "production ready" Unix code does not have that property, and
(b) The criterion I'm talking about matters a lot, because (1) memory exhaustion strikes at totally arbitrary points in a program's execution, not just at the points where you're prepared to handle it, and (2) attackers can pinpoint exactly the allocation they want to have fail.
As an example of an approach that doesn't seem to work well: in "memqueue", the function that creates HTTP headers in responses allocates an array of iovecs. If that malloc fails, the header creation function returns -1. The function that calls the header-creation function, http_respond, catches that error and itself returns -1. Nothing ever checks the error return from http_respond.
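Condensed, the shape of that code path is roughly the following (names and signatures are illustrative, not memqueue's actual ones):

    #include <stdlib.h>
    #include <sys/uio.h>

    #define NHDRS 8
    struct conn { int fd; };

    int create_headers(struct iovec **out)       /* allocates the iovec array */
    {
        *out = malloc(NHDRS * sizeof(**out));
        return *out ? 0 : -1;                    /* step 1: failure reported */
    }

    int http_respond(struct conn *c)
    {
        struct iovec *iov;
        if (create_headers(&iov) < 0)
            return -1;                           /* step 2: failure forwarded */
        /* ... fill iov, writev(c->fd, iov, NHDRS) ... */
        free(iov);
        return 0;
    }

    void handle_request(struct conn *c)
    {
        http_respond(c);                         /* step 3: return value ignored,
                                                    so the failure silently vanishes */
    }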
(Also, the loop in which memqueue reads entire requests into memory by continuously realloc()'ing a receive buffer has an integer wrap bug in it, though it's probably not triggerable. But incorrect is incorrect, right?)
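For the general shape of that hazard (a sketch, not memqueue's actual code): a grow-the-buffer loop has to guard both the size addition and the doubling, roughly like this:

    #include <stdint.h>
    #include <stdlib.h>

    /* Grow buf so it can hold used + need bytes; returns NULL on failure. */
    char *grow_buffer(char *buf, size_t *cap, size_t used, size_t need)
    {
        size_t want, newcap;
        char *p;

        if (need > SIZE_MAX - used)   /* without this, used + need can wrap,
                                         realloc() gets a tiny size, and the
                                         next copy runs off the end */
            return NULL;

        want = used + need;
        if (want <= *cap)
            return buf;

        newcap = *cap > SIZE_MAX / 2 ? SIZE_MAX : *cap * 2;  /* doubling can wrap too */
        if (newcap < want)
            newcap = want;

        p = realloc(buf, newcap);
        if (p)
            *cap = newcap;
        return p;
    }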
Also worth noting: under normal conditions on Linux, malloc will never fail--unless you set something like an rlimit on address space, mmap will happily hand you as much address space as you want, then OOM-kill you (or worse, someone else...) when you try to make it resident. So even if we could write malloc-failure-safe software, which we can't, it'd be almost impossible to end up in that condition.
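Easy to see on an ordinary desktop (behavior depends on the vm.overcommit_memory setting, so treat this as a sketch):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t gib = (size_t)1 << 30;
        int i;

        /* Reserve up to 256 GiB of address space a gigabyte at a time,
           without ever touching the pages. Under the default overcommit
           policy this usually succeeds far past physical RAM; the OOM
           killer only arrives once the memory is actually written to. */
        for (i = 0; i < 256; i++)
            if (malloc(gib) == NULL)
                break;

        printf("got %d GiB of address space without a malloc failure\n", i);
        return 0;
    }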
You don't have to receive a request asking for 3 gigs of data to hit this. You could be close to the edge, with not enough room left to receive this particular request.
There are many daemons that need to keep what they already have in memory and just fail the particular action they are doing instead of throwing everything away. DB servers, for example.
Memory exhaustion is an error that should be handled like other errors.
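The kind of handling I mean, as a rough sketch (handle_request, send_error, and drop_connection are made-up names, not memqueue's):

    #include <stdlib.h>
    #include <string.h>

    struct conn { int fd; };

    /* stubs for the sketch */
    void send_error(struct conn *c, int code) { (void)c; (void)code; }
    void drop_connection(struct conn *c)      { (void)c; }

    int handle_request(struct conn *c, const char *body, size_t len)
    {
        char *copy = malloc(len + 1);
        if (copy == NULL) {
            /* Treat exhaustion like any other per-request error:
               shed this request, keep everything the daemon holds. */
            send_error(c, 503);
            drop_connection(c);
            return -1;
        }
        memcpy(copy, body, len);
        copy[len] = '\0';
        /* ... do the actual work ... */
        free(copy);
        return 0;
    }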
In memqueue, http_respond logs the failed memory allocation (in http_cli_resp_hdr_create) and returns -1. There's nothing else I need to do in this case; the connection will get dropped without a response.
When realloc()'ing, I don't see the integer wrap bug. Can you point me to the line?
What I am suggesting is to not overthink system error handling. Just handle it; aborting is one type of handling, but not always what you want. Programs run in various environments, and to guarantee defined behavior we need to abide by the standard.
Then you've misunderstood me, or I've miscommunicated. My argument is that the default handling strategy should be to abort. I'm not saying that special case handling is evil. I'm saying that defaulting to manually checking malloc's return value is evil.
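In other words, something like the usual xmalloc() wrapper (the name is conventional, not anything from memqueue):

    #include <stdio.h>
    #include <stdlib.h>

    void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "out of memory allocating %zu bytes\n", size);
            abort();    /* default policy: die loudly, right here */
        }
        return p;
    }

The handful of allocations you genuinely know how to recover from can still call plain malloc() and check it.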
Also, your chunked encoding decoder seems to be using a signed strtol() routine to read an unsigned length variable. I could be misreading; I didn't look carefully.
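If it is strtol(), the usual fix is the unsigned variant plus explicit sign and range checks; a sketch, not memqueue's actual parser:

    #include <errno.h>
    #include <stdlib.h>

    /* Parse a hexadecimal chunk-size line; assumes no leading whitespace. */
    int parse_chunk_size(const char *line, size_t *out)
    {
        char *end;
        unsigned long v;

        if (line[0] == '-')           /* strtoul() quietly accepts and
                                         negates a leading minus sign */
            return -1;

        errno = 0;
        v = strtoul(line, &end, 16);  /* chunk sizes are hexadecimal */
        if (end == line || errno == ERANGE)
            return -1;                /* no digits, or out of range */

        *out = (size_t)v;
        return 0;
    }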
It depends on the goal in the end. As long as it's an explicit decision, and doesn't rely on the environment, the behavior can be expected to be defined.
For instance, I worked on an enterprise proxy where aborting on asserts wasn't acceptable. Why? Because the customer didn't want to interrupt his users even though in our opinion the proxy state was out of whack. This created a nightmare for us because it was hard to debug. We ended up fork()-ing and aborting on the side to debug the cores.
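For anyone curious, the trick looks roughly like this (soft_assert_failed is an invented name):

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Called instead of abort() when an assertion fails: the child dumps
       a core we can debug later, while the parent keeps serving traffic. */
    void soft_assert_failed(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            abort();                 /* child: leave the core behind */
        else if (pid > 0)
            waitpid(pid, NULL, 0);   /* parent: reap the child, carry on */
    }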