Once upon a time Linux solved the "syscalls are slow" problem by making syscalls fast, rather than introducing convoluted APIs as a workaround, like other operating systems did.
Perhaps one day we can get back to that model, or maybe now that Linux has gone corporate those days are gone. Even when you can make something faster with a simple, straightforward solution, there's always that one corporate sponsor who doesn't benefit. Before, Linux would have just told them to sod off.
That's (at best) a gross oversimplification of the process, idealizing the past while ignoring how expectations for the kernel have increased (i.e., security and privacy), all colored by a layer of bad faith that, to be frank, seems to originate with some non-specific grievances.
I'm simply channeling Linus Torvalds. Here's a sample of a rant/boast from 2000 where Linus defends the "heavy-weight" threading approach against claims that a proper user-space threading architecture could provide better performance by requiring fewer syscalls and less memory:
> Yes, you could do it differently. In fact, other OS's _do_ do it differently. But take a look at our process timings (ignoring wild claims by disgruntled SCO employees that can probably be filed under "L" for "Lies, Outright"). Think about WHY our system call latency beats everybody else on the planet. Think about WHY Linux is fast. It's because it's designed right.
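To put a rough number on what "system call latency" means here (this is my own illustration, not something from Linus's post), the sketch below times a cheap syscall in a tight loop. The figure you get depends heavily on the CPU, the kernel version, and, post-Spectre/Meltdown, on which mitigations are enabled.

```c
/* Rough syscall-latency probe: times a cheap syscall in a loop.
 * Build: cc -O2 -o sysbench sysbench.c
 * Results vary with CPU, kernel version, and Spectre/Meltdown
 * mitigations; treat the output as ballpark only. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 10 * 1000 * 1000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);          /* raw syscall, bypasses any libc caching */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    printf("%.1f ns per getpid() syscall\n", ns / iters);
    return 0;
}
```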
The same architectural approach was taken with fork: Linux optimized the heck out of fork, to the point where fork on Linux was faster than thread creation on Windows. On multiple occasions Linus has argued (and proven) that optimizing the simple but "heavy-weight" approach can reap dividends at least as great as those of more complex, more flexible architectures.
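To illustrate the kind of comparison being made (my own sketch, not a benchmark Linus published), here's a crude microbenchmark that times fork()+waitpid() against pthread_create()+pthread_join() on Linux. It only measures create-and-reap cost; it says nothing about memory footprint or scheduling behavior.

```c
/* Crude comparison of process vs. thread creation cost on Linux.
 * Build: cc -O2 -pthread -o spawnbench spawnbench.c
 * Numbers depend heavily on kernel version and hardware. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void *noop(void *arg) { return arg; }

static double now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void)
{
    const int iters = 2000;
    double t0, t1;

    t0 = now_ns();
    for (int i = 0; i < iters; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0)
            _exit(0);                 /* child: exit immediately */
        waitpid(pid, NULL, 0);        /* parent: reap the child */
    }
    t1 = now_ns();
    printf("fork+wait:           %8.1f us each\n", (t1 - t0) / iters / 1e3);

    t0 = now_ns();
    for (int i = 0; i < iters; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, noop, NULL);
        pthread_join(tid, NULL);
    }
    t1 = now_ns();
    printf("pthread_create+join: %8.1f us each\n", (t1 - t0) / iters / 1e3);
    return 0;
}
```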
Of course, that was then and this is now, and many things have changed, including the success of Linux. I'm just suggesting, or perhaps implying, that there's more eagerness to entertain replacing traditional syscall semantics and other subsystems with more complex frameworks than there used to be. In the context of Spectre, arguably it would have been easier for a younger Linus (and a younger Linux) to refuse to rearchitect syscalls (with the concomitant additional kernel and semantic complexity), tell everyone to sit tight and wait for AMD and Intel to come up with mitigating hardware fixes, and simply take the performance hit in the intervening years.