Rule #1 of systems programming: "Never check for an error that you don't know how to handle." This is, of course, a joke, but it has a kernel of truth.
I've worked on production code where, if allocation fails, the whole system fails too and restarts. The allocator never returns failure to the caller. There's usually some kind of supervisor process watching for the crash, which can take a dump or at least a stack trace.
The theory is that any attempt at recovery is going to be a disaster: never well tested, and probably doomed in the face of memory pressure anyway. On 64-bit systems the address space is essentially infinite, but reaching the point where we start to swap is horrible and a Pyrrhic victory at best. So instead, let's do a controlled crash and log enough information that we can analyze and fix the actual problem (which is usually a misunderstanding of load and resources, or maybe a bug or an attack where we're trying to allocate twenty gazillion bytes).
That's one philosophy. Another (which was prevalent on the Mac in the 80s, in user applications) is to have a reserve; you test the crap out of the code and give it enough memory in some kind of emergency mode to succeed at getting back to a normal state, or at least quitting in an orderly fashion (ditch dynamically loaded code, fonts, whatever) so that the user's work isn't totally lost.
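The reserve idea can be sketched like this, assuming a hypothetical `app_alloc` that all application allocations go through. When the heap runs dry, spending the reserve buys enough headroom to flip into an emergency mode whose only job is saving the user's work:

```c
#include <stdlib.h>

/* Sketch of the reserve approach: grab an emergency block at startup;
 * when an allocation fails, release the reserve and switch to an
 * emergency mode that saves state and quits in an orderly fashion.
 * Names and sizes here are illustrative, not from a real system. */
#define RESERVE_SIZE (64 * 1024)

static void *reserve = NULL;
static int emergency = 0;          /* set once the reserve is spent */

void reserve_init(void)
{
    reserve = malloc(RESERVE_SIZE);
}

void *app_alloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && reserve != NULL) {
        free(reserve);             /* give the heap room to breathe */
        reserve = NULL;
        emergency = 1;             /* caller should now save and quit */
        p = malloc(size);          /* retry with the reserve released */
    }
    return p;
}
```

Everything after `emergency` is set has to run in well-tested, minimal-allocation code paths, which is exactly the part that takes all the testing effort.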
There's a lot to be said for both designs, and of course these don't cover the case when you're writing library code that can be used by anyone, where reliably returning failure to the caller is important. Here you don't have a choice other than to check and be robust.
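For the library case, the shape is the familiar C convention of reporting failure through the return value and leaving the caller's data intact. `grow_buffer` below is a made-up example API, not from any real library; the key detail is that on failure the original buffer is untouched, so the caller still has valid state to work with:

```c
#include <stdlib.h>

typedef struct {
    char  *data;
    size_t cap;
} buffer;

/* Returns 0 on success, -1 on failure. Never crashes on the
 * caller's behalf; on failure b->data and b->cap are unchanged
 * (realloc leaves the original block valid when it fails). */
int grow_buffer(buffer *b, size_t new_cap)
{
    if (new_cap <= b->cap)
        return 0;                  /* already big enough */
    char *p = realloc(b->data, new_cap);
    if (p == NULL)
        return -1;                 /* caller decides what to do */
    b->data = p;
    b->cap  = new_cap;
    return 0;
}
```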
I worked on some safety-critical systems in C. The philosophy was to treat the code like a little person running around. The person (the code) always wanted to know where the fire escape was. The fire escape always needed to be very easy to use and very reliable. Every function had a little fire escape handler to clean up on error. Higher-level functions had to decide how to put out the fire (speaking metaphorically here, though we really could have started fires) or to raise the alarm to a higher level.
At the very top level, the only thing we could do was save a log and then reset (the least worst option). Too many resets too fast and we'd take ourselves offline and scream for a human (again, the least worst option).
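That fire-escape pattern is commonly written in C as a single `goto` cleanup label per function: one well-tested exit path that releases everything acquired so far, plus an error code propagated upward so a higher level can decide whether to recover or raise the alarm. A sketch, with illustrative names:

```c
#include <stdio.h>
#include <stdlib.h>

/* One fire escape per function: every failure jumps to the same
 * label, which frees whatever was acquired up to that point and
 * returns an error code for the caller to act on. */
int process_record(const char *path)
{
    int rc = -1;                 /* assume failure until proven otherwise */
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "rb");
    if (f == NULL)
        goto fire_escape;

    buf = malloc(4096);
    if (buf == NULL)
        goto fire_escape;

    /* ... real work would go here ... */
    rc = 0;                      /* success */

fire_escape:                     /* the one reliable way out */
    free(buf);                   /* free(NULL) is safe */
    if (f != NULL)
        fclose(f);
    return rc;                   /* caller decides: retry, reset, alarm */
}
```

The discipline is that the escape path touches only resources already initialized above it, which is why every local starts out NULL.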
So three, yes, three philosophies.