Both NULL and a unique pointer value can be safely passed to free() :-).
Answers to this question taught me about the candidate's taste and understanding of good API design :-).
Both NULL and "unique pointer" are correct answers, but all modern implementations chose to return only one of them. My follow-up question is "why?" :-).
When malloc returns NULL, it's saying that there was some error.
However, IIRC the only malloc error is ENOMEM. It's unclear why malloc(0) would run into that. (If malloc(0) did run into an ENOMEM, then NULL would be required, but the result of malloc(0) need not tell you anything about subsequent calls to *alloc functions. However, there is a possible malloc guarantee to consider.)
There's some interaction with realloc() which may favor NULL or a non-null pointer to 0 bytes, but that's too much work to figure out now.
Suppose that malloc(0) returns a not-NULL value. Is malloc(0) == malloc(0) guaranteed to be false? (I think that it is, which is how ENOMEM can happen.)
So, the "right" answer is probably malloc(0) returns a unique pointer because then the error check is simpler - a NULL return value is always a true error, you don't have to look at size.
The current malloc() overloads its return value: NULL means no memory available (or some internal malloc failure), and, as far as the standard goes, NULL is also allowed when size == 0.
So if you get NULL back from malloc, did it really mean no memory/malloc failure, or just a zero size passed in?
glibc (and most implementations) distinguish the two by still allocating an internal malloc heap header, just with an internal bookkeeping size of zero, and returning a pointer to the byte after that header.
The only valid things you can do with the returned pointer are to test it for != NULL, or pass it to realloc() or free(). You can never dereference it.
Returning a valid pointer to NO DATA is what all modern implementations do when a size==0 is requested.
In an interview situation, discussions around all these points are very productive: they tell me how the candidate thinks about API design (and how to fix a bad old one), whether they know anything about malloc internals (essential, since overwriting internal malloc header info is a common security attack), and how they deal with errors returned from code.
Remember, it was only my warmup question :-). Things get really interesting after that :-) :-).
While it would be good if malloc(0) always returned a pointer, you can't rely on malloc(0) returning NULL just for errors. There's not even a guarantee that malloc(0) does the same thing every time.
Note that "returning a pointer to the byte after the internal malloc header" would imply malloc(0) == malloc(0), breaking the unique-pointer guarantee, unless each malloc(0) actually performs a fresh allocation.
However, allocating in malloc(0) means that while(1) malloc(0); can eventually exhaust memory and crash, which would be a surprising thing.
malloc(0) != malloc(0) because each call allocates its own internal malloc header, so the returned pointers differ.
Asking for a malloc of size n means the malloc library internally allocates n + h bytes, where h is the internal malloc header size. So there is always an allocation of at least h bytes, just with the internal bookkeeping "size" field set to zero.
and yes, while(1) malloc(0); will eventually run out of memory, counter-intuitively :-).