
Not even close. You can get 1 millisecond latencies with a well-configured kernel (PREEMPT), talking directly to ALSA. I'm talking about both Qualcomm and Exynos CPUs.

It just takes a SCHED_FIFO task with forced CPU affinity. Android does not make it easy to get one.
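For the curious, here is a minimal sketch of what "a SCHED_FIFO task with forced CPU affinity" means, using Python's `os` wrappers around the Linux syscalls (on Android you'd do the equivalent from native code; the priority value 80 is just an illustrative choice):

```python
# Sketch: pin the current process to one CPU, then ask the kernel for
# the SCHED_FIFO real-time policy.  Linux-only; SCHED_FIFO is normally
# denied without CAP_SYS_NICE/root, which is exactly what Android does
# not hand out easily.
import os

def request_rt(priority, cpu=None):
    if cpu is None:
        cpu = min(os.sched_getaffinity(0))   # pick any CPU we're allowed on
    os.sched_setaffinity(0, {cpu})           # force CPU affinity
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:                  # no CAP_SYS_NICE
        return False

if __name__ == "__main__":
    granted = request_rt(80)                 # SCHED_FIFO priorities run 1..99
    print("SCHED_FIFO granted" if granted else "SCHED_FIFO denied (need root)")
```

Once granted, a SCHED_FIFO task preempts every normal task on its CPU and runs until it yields, which is what keeps the audio callback from missing its deadline.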

There are some other hardware issues, like audio input and output running off separate clocks on Qualcomm chips, but that costs at most one extra buffer.
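To put a number on the clock-mismatch point: two converters nominally at the same rate but on different crystals drift apart slowly, and that drift is what the extra buffer (or an adaptive resampler) absorbs. A rough sketch, with a hypothetical 100 ppm mismatch:

```python
# Two audio clocks nominally at `rate` Hz whose crystals differ by
# `ppm` parts per million drift apart by this many samples per second.
def drift_samples_per_second(rate, ppm):
    return rate * ppm / 1e6

# e.g. a (hypothetical) 100 ppm mismatch at 48 kHz:
print(drift_samples_per_second(48000, 100))  # prints 4.8
```

A few samples per second of drift is slow, which is why a single buffer between the mismatched domains is enough.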

Speaking of which, you can get sub-millisecond latencies on a BeagleBoard, and the chips in phones are vastly more powerful.



> "It just takes a SCHED_FIFO task with forced CPU affinity. Android does not make it easy to get one."

Are you aware of any non-RTOS that "makes it easy to get one"?

> "Speaking of this, you can have sub-millisecond latencies on Beagleboard."

I did not make myself clear, so I'm taking the blame here.

I'm mostly interested in the use case. If it's just capturing audio alone, this number makes sense, and no fancy hardware is necessary.

The moment we introduce some kind of processing, or even just log the stream to disk, buffering becomes necessary and latency is introduced. Assume the audio stream is read by a user-mode service and then redirected out through headphones: are we still talking sub-millisecond latencies?
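The arithmetic behind that question is simple: every buffer stage holds some number of frames, and each stage adds frames / sample_rate of delay. A back-of-envelope sketch (the 48-frame period size is just an illustrative choice):

```python
def stage_latency_ms(frames, sample_rate):
    # one buffer of `frames` frames delays the signal by this many ms
    return 1000.0 * frames / sample_rate

def round_trip_ms(stage_frames, sample_rate=48000):
    # capture and playback paths each buffer, so sum every stage
    return sum(stage_latency_ms(f, sample_rate) for f in stage_frames)

# Two 48-frame stages (one in, one out) at 48 kHz:
print(round_trip_ms([48, 48]))  # prints 2.0 (ms), before any scheduling jitter
```

So even a minimal user-mode pass-through sits at a couple of milliseconds once both directions buffer, and each extra ring buffer in the stack adds its own term to the sum.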


Audio is an extraordinarily low-demand task, and hardware is not the issue, and hasn't been for a couple of decades (even DSP offloading has virtually disappeared). A smartphone has orders of magnitude more performance than extremely low latency audio requires. The iPhone has had ~7 ms latency for many generations, and that 7 ms is not some intrinsic limit but simply a decent balance between low power usage and ensuring no glitches.

The problem on some platforms is one of architectural choices and prioritization. On Android we know they started with the already arguably poor-latency Linux audio foundation of ALSA, then layered on and layered on (flingers and HALs and user-mode transitions), each layer adding its own ring buffers.

Even on Windows, on the fastest PC known to man, audio has generally poor latency (because it's architectural), which is why audio software makers ship their own hardware-to-application drivers (ASIO).

Low-latency audio was not important to the project, and they dug themselves in so deep that for many years we've been hearing recurring "we've finally solved that latency issue" claims.


Check out the Bela project[1], which has back-ends for a few different audio packages (e.g. PD and SuperCollider) that totally bypass the kernel and handle the audio callback directly (maybe from a bare-metal ISR, not exactly sure). That’s how they get sub-ms round-trip latencies.
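The programming model that enables this is the direct render callback: the engine hands user code an input block and expects the output block filled in before returning, with no intermediate ring buffer. A toy sketch of the idea (`render` here is a stand-in illustrating the shape of the hook, not Bela's actual C++ `render()` signature):

```python
def render(block_in, block_out, gain=1.0):
    # called once per audio block; must finish before the next block arrives
    for i, sample in enumerate(block_in):
        block_out[i] = gain * sample   # straight pass-through

inp = [0.5, -0.25, 0.125]
out = [0.0] * len(inp)
render(inp, out)
print(out)  # prints [0.5, -0.25, 0.125]
```

With the callback running straight off the hardware interrupt, the only latency left is the block size itself plus converter delay, which is how sub-millisecond round trips become possible.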

[1]: http://bela.io


Completely agree. The hardware itself doesn't introduce any noticeable latency; it's just a whole lot of buffers at various software layers, because avoiding stutter and saving power are always prioritized over latency. Some devices actually support low latency in their audio drivers by conforming to Android Professional Audio. I haven't yet had a chance to test any of those devices to see whether the improvement is noticeable.


What about all the delays from the use of Java? Even if your audio app is in C++ for performance, it's still competing for resources with many other Java apps easily using gigabytes of RAM just idling. I don't see any way to mitigate that.


Not a problem at all; that is just the typical Java FUD and badly coded apps.

The real-time audio APIs are in native code, and an app can request elevated priority while it's in the foreground.

Samsung has supported real-time audio on its S models for many years:

https://developer.samsung.com/galaxy/professional-audio

They are now deprecating it, having also contributed to the design of AAudio.


I can "feel" the lag in the Android experience, and it's the same lag I get from desktop apps written in Java. It's not just FUD. Almost always, when there's a memory-bloated application, it's written in Java. Hell, the ADK won't even run unless you increase the -Xmx setting to 4+ GB.

People rightfully pointed out that tablets are plenty powerful enough for DSP-type applications, but something is consuming all the resources. I'm just saying what that something is.


Don't confuse the language with the ability to program.

I've lost count of the number of times I have fixed junior code: work that should run in the background done on the main thread, hand-rolled for loops instead of System.arraycopy, memory allocated inside loops, and lots of other stuff due to a lack of proper teaching.
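The "allocating memory in loops" mistake is easy to sketch. In a garbage-collected runtime the naive version below hands the GC fresh work on every block, which is exactly what you cannot afford on an audio path; the second version reuses one buffer (function names and the doubling "DSP" step are illustrative only):

```python
def process_naive(blocks):
    out = []
    for block in blocks:
        tmp = [0.0] * len(block)       # fresh allocation every block: GC churn
        for i, s in enumerate(block):
            tmp[i] = 2.0 * s
        out.append(sum(tmp))
    return out

def process_preallocated(blocks, block_size):
    tmp = [0.0] * block_size           # one buffer, allocated once, reused
    out = []
    for block in blocks:               # assumes every block is block_size long
        for i, s in enumerate(block):
            tmp[i] = 2.0 * s
        out.append(sum(tmp))
    return out
```

Same results, but the second form does no allocation inside the loop; the Java analogue of a bulk copy step would be System.arraycopy rather than a hand-rolled for loop.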

As for Android, the real-time audio stack is fully native, and Google had to learn from Samsung how to do it properly.



