
Because the rest of the system is not necessarily designed to tolerate high frequency content gracefully. Any nonlinearities can easily cause that high frequency junk to turn back into audible junk.
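For instance, intermodulation through even a mild nonlinearity can produce a difference tone well inside the audible band. A minimal numpy sketch (the 30/32 kHz tones and the quadratic term are illustrative assumptions, not anything from the thread):

    import numpy as np

    fs = 192_000                              # rate high enough to represent the ultrasonic tones
    t = np.arange(fs) / fs                    # one second of samples
    x = np.sin(2*np.pi*30_000*t) + np.sin(2*np.pi*32_000*t)   # ultrasonic-only input
    y = x + 0.1 * x**2                        # mild second-order (quadratic) nonlinearity

    spectrum = np.abs(np.fft.rfft(y)) / len(y)
    freqs = np.fft.rfftfreq(len(y), d=1/fs)
    audible = (freqs > 20) & (freqs < 20_000)
    print(freqs[audible][spectrum[audible] > 1e-3])   # -> [2000.]  (difference tone)

Nothing below 20kHz went in, but a 2kHz component comes out.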

This is like the issues xiphmont talks about with trying to reproduce sound above 20kHz, but worse, since it would mean (trying to) play back high-energy signals that weren’t even present in the original recording.



That would mean that higher sampling rates (which preserve more inaudible high-frequency content) could cause similar problems. OK, xiphmont actually mentions that; sorry, I had only watched the video when I replied.


If I were designing a live audio workflow from scratch, my intuition would be to sample at a somewhat high rate (at least 48kHz, maybe 96kHz), do the math to figure out the actual latency / data rate tradeoff, and also filter the data as needed to minimize high-frequency content (again, being careful with the latency and fidelity tradeoffs).
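As a rough back-of-the-envelope (the 256-sample buffer, 255-tap FIR, and 20kHz cutoff below are made-up numbers, not measurements): both the hardware buffer and the low-pass filter's group delay contribute to latency, and the higher rate costs proportionally more bandwidth.

    from scipy import signal

    block = 256                                   # hypothetical hardware buffer size, samples
    for fs in (48_000, 96_000):
        taps = signal.firwin(255, 20_000, fs=fs)  # linear-phase FIR low-pass, 20 kHz cutoff
        group_delay = (len(taps) - 1) / 2         # samples of delay added by the filter
        latency_ms = 1000 * (block + group_delay) / fs
        data_rate_kBps = fs * 3 / 1000            # 24-bit mono, kB/s
        print(f"{fs} Hz: ~{latency_ms:.2f} ms latency, {data_rate_kBps:.0f} kB/s per channel")

At these (made-up) settings 96kHz roughly halves the buffer-plus-filter latency in milliseconds while doubling the data rate.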

But I have never done this and don't have any plans to do so, so I'll let other people worry about it. But maybe some day I'll carry out my evil plot to write an alternative to brutefir that gets good asymptotic complexity without adding latency. :)
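For what it's worth, the usual trick for that (not necessarily what brutefir does; this is just an illustrative offline numpy sketch) is to split the impulse response: run the first partition as plain direct convolution, which adds no buffering delay, and push the long tail through block FFT convolution, whose one-block input delay is hidden by the tail's own offset.

    import numpy as np

    def split_convolve(x, h, block=256):
        """Head/tail split of an FIR: the head runs directly (no buffering delay),
        while the tail can run through a block FFT engine, because its contribution
        to output sample n only needs input up to sample n - block."""
        h_head, h_tail = h[:block], h[block:]
        y = np.zeros(len(x) + len(h) - 1)

        head = np.convolve(x, h_head)         # direct part: what you'd compute per sample
        y[:len(head)] += head

        if len(h_tail):
            tail = np.convolve(x, h_tail)     # offline stand-in for overlap-add FFT blocks
            y[block:block + len(tail)] += tail
        return y

    # Sanity check: the split is exactly equivalent to the full convolution.
    x, h = np.random.randn(4096), np.random.randn(2048)
    assert np.allclose(split_convolve(x, h), np.convolve(x, h))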



