> The audio stack is fragmented on every operating system because the problem is a large one. On Windows, for example, the audio APIs are ASIO, DirectSound, and WASAPI. macOS arguably has the cleanest audio stack, CoreAudio, but nobody can say so definitively without seeing the code. PulseAudio was inspired by it.
The stacks of commercial operating systems are not actually better or simpler.
Maybe so for developers, but as a user on Windows or MacOS I don’t have to know or care about any of that stuff. Things just work, and work sensibly.
> Maybe so for developers, but as a user on Windows or MacOS I don’t have to know or care about any of that stuff. Things just work, and work sensibly.
Depends on the kind of user you are. If you're chasing the lowest possible latency (for instance because you play guitar through an effects stack on your computer), then Windows with ASIO or Linux with JACK gets you further on the exact same hardware than CoreAudio does.
You aim for 3 ms; 10 ms can be described as annoying, and 50 ms is really bad.
In reality, once you reach that much delay, you either move to analog processing or design your creative workflow around it (learn not to expect immediate feedback), drifting towards offline processing.
As a guitarist, 8-10 ms of processing latency is playable but feels somewhat strange, especially compared to plugging the guitar straight into a tube amp. 2 ms is entirely indistinguishable to me.
And with the "normal" system APIs (except CoreAudio, which fares a bit better), it's pretty hard to get below 15 ms in my experience, even with beefy computers.
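For anyone wondering where figures like these come from: the dominant term is usually buffer size divided by sample rate, once per buffer in the signal path. A quick sketch of that arithmetic (the numbers here are my own illustration, not from any specific API):

```python
def buffer_latency_ms(buffer_frames: int, sample_rate: int = 48_000) -> float:
    """One-way latency in milliseconds contributed by a single audio buffer."""
    return buffer_frames / sample_rate * 1000

# 128 frames at 48 kHz is about 2.7 ms per buffer; a round trip through
# an input buffer and an output buffer roughly doubles that.
print(round(buffer_latency_ms(128), 1))  # 2.7
print(round(buffer_latency_ms(512), 1))  # 10.7
```

So hitting 2-3 ms round-trip means driving the hardware at buffer sizes of 64-128 frames or less, which is exactly where the stock APIs tend to start dropping out and ASIO/JACK/CoreAudio keep going.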
As a desktop Linux user who doesn't really care about that stuff, things have always "just worked" for me. When I started using Linux back in 2009, I heard horror stories from the past about how PulseAudio was a nightmare that never wanted to run, but I've never had issues with it myself (nor on my mother's or wife's computers, which have also been running Linux for a few years).
Yet that is not true at all. The other day I noticed that whenever you use a Bluetooth headset with MS Teams, it connects directly to the headset with its own method, skipping the Windows mixer and other infrastructure. The net result is that I cannot play any other sounds through the headset while Teams is using it, unless I also set Windows to target the HFP profile on the headset.
Things can "just work" if your expectations are low and your specific requirements are common, and this is true regardless of OS. Where Linux shines, in general, is customization and uncommon requirements, which in this case would AFAIK be things like low-latency audio.