Windows doesn't bother checking every string after every modification, but neither does Linux with UTF-8. You can pass invalid UTF-8 to tons of APIs and almost all of them will just work, just like you can with UTF-16 in Windows.
Doing a string validation check in every single API call would waste cycles for no good reason.
Linux (the kernel) doesn't claim (AFAIK) to use UTF-8. It takes NUL-terminated strings in any charset you choose (or none) and either compares them with other NUL-terminated strings, or spits them back out again later.
Interpreting a string as encoding text in a particular character set is, as far as the kernel is concerned, a problem for userspace.
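A quick sketch of what that means for userspace (Python, with hypothetical filename bytes): the kernel can hand you bytes that aren't valid UTF-8, and it's up to you to carry them around losslessly. Python's own os interfaces do this with the surrogateescape error handler:

```python
# A filename as the kernel might hand it to us: raw bytes, not valid UTF-8.
# (Hypothetical example bytes, not from any real system.)
raw = b"report-\xff\xfe.txt"

# Strict UTF-8 decoding rejects it.
try:
    raw.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
assert not strict_ok

# The surrogateescape handler smuggles the undecodable bytes through as
# lone surrogates, so the original byte string can be recovered exactly.
name = raw.decode("utf-8", "surrogateescape")
assert name.encode("utf-8", "surrogateescape") == raw  # lossless round trip
```

This is why "Linux uses UTF-8" is really a userspace convention: the bytes themselves flow through the kernel untouched.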
Linux supports UTF-8 the same way Windows supports UTF-16: by trusting that your encoding is correct, unless you call APIs that are somehow encoding-dependent. The Windows API is a lot more complete than the Linux API, so comparing them is rather pointless.
Windows supports UTF-16; it doesn't guarantee UTF-16 correctness. The native functions with the W suffix all take UTF-16, so unless you want to render your own fonts, you're going to need to provide either that or ASCII.
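Concretely, "supports but doesn't guarantee correctness" means Windows will happily store, say, an unpaired surrogate in a filename, even though that isn't valid UTF-16. A sketch of what that looks like at the byte level (Python, illustrative bytes only):

```python
# Two bytes that encode a lone high surrogate (U+D800) in UTF-16-LE.
# Windows accepts such code units in names, but it is not valid UTF-16.
data = b"\x00\xd8"

# A strict UTF-16 decoder rejects the unpaired surrogate.
try:
    data.decode("utf-16-le")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
assert not strict_ok

# But it round-trips losslessly if lone surrogates are permitted, mirroring
# how Windows stores the 16-bit code units without validating pairing.
text = data.decode("utf-16-le", "surrogatepass")
assert text == "\ud800"
assert text.encode("utf-16-le", "surrogatepass") == data
```

So the W functions define the interchange format (16-bit code units), not a promise that every string is well-formed UTF-16.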