> Most programs do not need precise control over these sizes; many programs that do try to achieve this control would be better off if they didn't.
Interesting that modern languages like Rust, Go, and Zig all lean toward integer types with well-defined sizes. I think we've learned that the less exact definitions in C can cause problems. For example, if you develop and test in an environment where int is 32 bits, you can easily end up with bugs that only exist when int is 16 bits.
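A contrived sketch of the kind of bug I mean (fine where int is 32 bits, undefined where it's 16):

```c
#include <stdio.h>

int main(void) {
    /* OK with a 32-bit int; signed overflow (undefined behavior) with a
       16-bit int, since 25000 + 25000 exceeds INT_MAX (32767) there. */
    int total = 25000 + 25000;
    printf("%d\n", total);
    return 0;
}
```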
Modern C (from C99 onward) introduces a few families of sized types -- [u]int[_least|_fast|][8|16|32|64]_t. The exact-width types are not guaranteed to exist (e.g., int8_t is optional, because implementing it may be unreasonable on a 24-bit DSP with no sub-word operations), but the _least and _fast types are generally what you want anyway, unless you're relying on unsigned wraparound behavior.
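A minimal sketch of the families, with comments reflecting what C99 guarantees:

```c
#include <stdint.h>

int8_t       exact;  /* exactly 8 bits, two's complement; optional    */
int_least8_t least;  /* narrowest type with at least 8 bits; required */
int_fast8_t  fast;   /* "fastest" type with at least 8 bits; required */
uint32_t     bits;   /* exactly 32 bits, wraps on overflow; optional  */
```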
In my experience, modern code is written either almost entirely in terms of sized values like int32_t (when assuming a "normal" platform) or almost entirely in terms of _least/_fast values (when going for maximum portability). The only place "naked" int/long is still common in this model is a handful of specific uses, such as loop indices, where it's idiomatic.
The _fast and _least types are especially interesting, pragmatically. "int_fast8_t" means "I need a signed value that can store values between -128 and 127, doesn't have defined behavior on overflow, and is allowed to fill a register / use a word worth of memory." "int_least8_t" means "I need a signed value that can store values between -128 and 127, doesn't have defined behavior on overflow, and I am likely to have a significant number of these in memory so please pack them as efficiently as reasonable." (And of course we have bit packing when we need to pack more efficiently than reasonable; bit-packing with int_fast8_t allows you to represent "I need this to take exactly 8 bits of memory, whatever it costs compute-wise, because I'm basically filling memory with these.")
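Spelled out as declarations (a sketch; the array size is just for illustration):

```c
#include <stdint.h>

/* Scratch arithmetic: let the compiler pick a register-friendly width. */
int_fast8_t accumulator;

/* A large in-memory table: the tightest packing that's still reasonable. */
int_least8_t samples[1 << 20];
```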
In general, I find that these types give modern C a good balance between declaring requirements on a type and not over-specifying its size.
Either is valid, but int_fast8_t is more meaningful, to my mind. Using int_fast8_t encourages the compiler to use the fastest representation, except where the value is explicitly bit-packed. Once the value is pulled into a register it probably doesn't matter (unless the compiler is doing something quite silly with unsigned wraparound for a uint*), but it means spilling to the stack, etc., will prefer word-sized operations (which may or may not be what you want, but is my default). Meanwhile, the bit-field width fully specifies the representation when the struct containing the value is written to memory, so there's no distinction between _fast and _least there.
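For instance (a sketch; note the standard only requires bit-fields of int/unsigned int/_Bool, so typedef'd integer types here are implementation-defined, though GCC and Clang accept them):

```c
#include <stdint.h>

/* The :8 width, not the declared type, pins each field to exactly
   8 bits in the struct's layout; _fast vs _least for the declared
   type no longer changes what gets stored. */
struct sample {
    int_fast8_t value : 8;
    int_fast8_t flags : 8;
};
```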
> And of course we have bit packing when we need to pack more efficiently than reasonable; bit-packing with *int_fast8_t* allows you to represent "I need this to take exactly 8 bits of memory, whatever it costs compute-wise, because I'm basically filling memory with these."
AFAICT, "int_fast8_t" is often a typedef for "int" (and "int_least8_t" a typedef for "signed char"), so it isn't guaranteed to occupy only 8 bits -- which conflicts with "I need this to take exactly 8 bits of memory, whatever it costs compute-wise, because I'm basically filling memory with these."
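Easy to check on any given platform (a minimal, runnable sketch):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The actual widths vary by ABI -- e.g., glibc on x86-64 makes
       int_fast8_t a signed char but int_fast16_t a long. */
    printf("int_fast8_t:  %zu bytes\n", sizeof(int_fast8_t));
    printf("int_least8_t: %zu bytes\n", sizeof(int_least8_t));
    printf("int_fast16_t: %zu bytes\n", sizeof(int_fast16_t));
    return 0;
}
```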
I'm fond of Ada's approach: generally you just specify the range you want and let the compiler figure out the best integer size to use. You only need to specify a size if you're doing low-level work or FFI.
Technically, the wrong type was used if at least 32 bits were required. It's a common failing when you learn a compiler or platform or two instead of the language. That mistake has been called many things, among them "all the world's a VAX".
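Under that reading, the portable spellings would be something like (a sketch):

```c
#include <stdint.h>

long          a;  /* the standard guarantees long is at least 32 bits */
int_least32_t b;  /* at least 32 bits, as compact as reasonable       */
int_fast32_t  c;  /* at least 32 bits, as fast as reasonable          */
int32_t       d;  /* exactly 32 bits -- only if exactness is required */
```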