
this is a really good question, and the more I think about it, the less able I am to come up with an answer

assuming each in-application sound generator is pull-based (as the OS sound APIs typically are), surely each one could write into a single accumulator buffer that is then sent to the OS as-is, eliminating the need for any separate mixing step
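a minimal sketch of what I mean, in Python pseudocode (all names here are illustrative, not any real OS audio API): the OS callback pulls one block, each generator adds its samples straight into the shared buffer, and the summation itself is the mix

```python
FRAMES = 4  # samples per callback block, tiny for illustration

def sine_gen(buf):
    """Stand-in pull-based generator: adds its samples in place."""
    for i in range(FRAMES):
        buf[i] += 0.25  # placeholder for a computed sine sample

def noise_gen(buf):
    """Second stand-in generator, also accumulating in place."""
    for i in range(FRAMES):
        buf[i] += 0.1  # placeholder for a computed noise sample

generators = [sine_gen, noise_gen]

def audio_callback():
    """Invoked by the OS when it wants the next block of audio."""
    buf = [0.0] * FRAMES      # single accumulator buffer
    for gen in generators:    # every generator writes into the same
        gen(buf)              # buffer; summing *is* the mixing
    return buf                # handed to the OS as-is

print(audio_callback())
```

since every generator runs inside the OS pull, there is no intermediate buffer sitting around between generation and delivery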

I'd like to know more about the requirements that led to this extra in-application mixing latency

edit: obviously, the presence of push-based sound generation in the application would add another layer of mixing and latency: previously prepared buffers would have to be mixed into a signal for the OS milliseconds after the source buffers were first filled. however, that would be a self-imposed limitation I still see no reason for


