If you're trying to implement a language with substantially different semantics from C (e.g. a different memory model, or no undefined behavior), the semantics of C make it really unsuitable as an IR.
You can't use C's casts (undefined behavior for out-of-range float -> int conversions, for example), arithmetic (undefined behavior for signed overflow), or shift operators (implementation-defined behavior for signed right shifts, undefined behavior for left shifts into the sign bit or for shift counts outside [0, n)). You can work around these by defining functions with the semantics your language needs, but they get gross pretty quickly: they are both much more verbose and more error-prone than an IR with the semantics you actually want, and they require optimizer heroics to reassemble them into the instructions you really want to generate. Alternatively, you can use intrinsics or compiler builtins, but then you're effectively locking yourself to a single backend anyway, and might as well use its IR.
The issues around memory models (especially aliasing, but also support for unaligned access, dynamic layouts, etc) are worse.
Even LLVM IR is too tightly coupled to the semantics of C and C++ to be easily usable as a really generic IR for arbitrary languages (Rust, Swift, and the new Fortran front end have all had some struggles with this, and they're more C-like than most languages). C is much worse in this regard.
The behavior of shift operations on signed integers will be fixed in C++20 and C2x, as part of the effort to require two's complement representation. It is a massive potential source of UB in currently standardized C and C++.
Even after C2x is finalized, people will be using compilers that don't implement C++20 or C2x for at least another decade, so you'll forgive me if I don't hold my breath =)