Especially since you don't even know what you're attempting to optimise for.
Latency? Linux's p99 is fine; nobody is going to care that a request took 300μs longer. Even in aggregate across a huge fleet of machines, waiting an extra 3ms is totally, totally fine.
Throughput? You'll most likely bottleneck on something else anyway. Getting a storage array to hydrate at a 100Gbps line rate (roughly 12.5GB/s) is hard enough, and you still want authentication, distribution of chunks, and metadata operations on top of that, right?
You're forgetting that solving that throughput issue with raw hardware is likely an additional cost of a couple million dollars per year, which, in TCO terms, is a couple of developers.
The engineering effort to replace the foundation of an OS? Probably an order of magnitude more. It definitely carries significantly more risk, plus the potential political backlash from upending some other company's weird workflow.
Hardware really isn't that expensive.
Of course, you could just bypass the kernel with much less effort and avoid all of this shit entirely.
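For a sense of what that looks like in practice, here's a minimal sketch of a userspace receive loop using DPDK, one of the common bypass toolkits. This is an illustration under assumptions, not production code: one NIC already bound to a DPDK driver, a single queue, default port config, and all the interesting auth/chunking logic elided.

    /* Minimal DPDK receive loop. The NIC is owned by userspace and
     * polled directly, so the kernel never touches a packet.
     * Sketch only: single port, single queue, default config. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_RING_SIZE    1024
    #define NUM_MBUFS       8191
    #define MBUF_CACHE_SIZE 250
    #define BURST_SIZE      32

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        uint16_t port = 0; /* first port bound to a DPDK driver */
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "MBUF_POOL", NUM_MBUFS, MBUF_CACHE_SIZE, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool alloc failed\n");

        struct rte_eth_conf port_conf = { 0 };
        if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0 ||
            rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                                   rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
            rte_eth_tx_queue_setup(port, 0, RX_RING_SIZE,
                                   rte_eth_dev_socket_id(port), NULL) < 0 ||
            rte_eth_dev_start(port) < 0)
            rte_exit(EXIT_FAILURE, "port setup failed\n");

        /* Busy-poll the NIC: no syscalls or interrupts per packet. */
        for (;;) {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb; i++) {
                /* ...authentication / chunking / metadata work here... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
    }

The point being: the NIC gets polled straight from userspace (no syscalls, no interrupts, no kernel networking stack in the data path), and you get that without replacing or forking anything underneath you.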
Linux is not some random on-the-side feature that is merely "good enough".