
The correct way to attack this isn't by attacking the theory. It's to gather a lot of people and ask them to press a button indicating whether the audio they hear is 16-bit or 24-bit.

If the results are no better than chance, then 24-bit doesn't matter, regardless of how sound the underlying argument is.
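
For that test to mean anything, the two versions have to be identical except for bit depth. A minimal sketch of how the stimuli could be prepared, assuming numpy and a source signal already decoded to floating point (the dither step is standard practice when reducing bit depth; the usage names are hypothetical):

    import numpy as np

    def quantize(signal, bits, dither=True):
        # Quantize a float signal in [-1, 1) to the given bit depth.
        # TPDF dither decorrelates the quantization error from the
        # signal, which is standard when truncating to fewer bits.
        steps = 2 ** (bits - 1)   # e.g. 32768 levels per side at 16-bit
        scaled = signal * steps
        if dither:
            rng = np.random.default_rng()
            # triangular-PDF dither = sum of two uniform noises
            scaled += rng.uniform(-0.5, 0.5, signal.shape)
            scaled += rng.uniform(-0.5, 0.5, signal.shape)
        return np.clip(np.round(scaled), -steps, steps - 1) / steps

    # hypothetical usage, 'master' being a float array from a 24-bit source:
    # version_16 = quantize(master, 16)
    # version_24 = quantize(master, 24)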

EDIT: The experiment would also be extremely difficult to design. For example, you'd need to run this test with music, not simple sounds. So the question is, which music? I think whatever is most popular at the time would be a good candidate, because if people are listening to music they hate, they won't care about the fine details of the audio. But that introduces uncertainty and noise into the results that are hard to control for.

Some people might deliver accurate results with https://www.youtube.com/watch?v=2zNSgSzhBfM but not with https://www.youtube.com/watch?v=4Tr0otuiQuU whereas for others it's the opposite.

Or, it could be the exact opposite: Maybe you can only detect whether a sound is 24-bit when it's a simple tone, and not music.

Age is also a factor. My hearing is worse than it was a decade ago.

The headphones used in the test are another factor. If you feed 24-bit input to headphones, there's no guarantee that the speakers are performing with 24-bit resolution. In fact, this may be the source of most of the confusion in the debate. I'm not sure how you'd even check whether speakers are physically moving back and forth "at 24-bit resolution" rather than at 16-bit resolution.



> For example, you'd need to run this test with music, not simple sounds. So the question is, which music? I think whatever is most popular at the time would be a good candidate

A quick summary would be that most "popular" music has been mastered with the following goal: the song should be recognizable and listenable on an FM radio with only a limited-bandwidth midrange speaker. One of the many things mastering engineers do to achieve this is eliminate almost all dynamic range through a process called "compression" (dynamic range compression, not digital compression).

They also limit the spectral range so that "unheard" sounds don't cause distortion when played through limited-bandwidth amplifiers and speakers.

This means that the kinds of musical pieces which could benefit from the increased dynamic range of 24-bit would be thoroughly excluded from the test.

And then you'd probably get the "expected" result, but only because you're now testing whether music mastered specifically to have no dynamic range benefits from increased dynamic range. The answer to that is given in advance.

Note: I'm not claiming 24-bit end-user audio has merit; I have little opinion on that. I'm just pointing out the flaw in the proposed experiment.
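
For readers who haven't met the term: a toy sketch of the gain computation inside a dynamic range compressor, assuming numpy and ignoring the attack/release smoothing real compressors apply:

    import numpy as np

    def compress(signal, threshold_db=-20.0, ratio=4.0):
        # Static compression curve: for every `ratio` dB the input
        # rises above the threshold, the output rises only 1 dB.
        level_db = 20 * np.log10(np.abs(signal) + 1e-12)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)   # gain reduction in dB
        return signal * 10 ** (gain_db / 20)

Push the threshold down and the ratio up, and the loud and quiet passages end up at nearly the same level, which is the mastering style described above.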

> If you feed 24-bit input to headphones, there's no guarantee that the speakers are performing with 24-bit resolution.

Not sure if you're just being imprecise in your language here or if you're genuinely confusing things. Speaker elements, as found in both speakers and headphones, are analogue. They operate according to the laws of physics and respond to changes in magnetic fields, for which there is practically no lower limit.

They have no digital resolution. A quick example: take your 16-bit music, halve the volume, and voilà! You are now operating at "17-bit resolution". Halve it again: 18-bit resolution. Etc.

There's probably some minimum level of accuracy, yes, but it just doesn't make sense to measure it in bits.
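
The arithmetic behind the halving example, using the rule of thumb that each bit of depth is worth about 6 dB of dynamic range:

    def dynamic_range_db(bits):
        # ~6.02 dB per bit (the exact figure is 20*log10(2) per bit)
        return 6.02 * bits

    print(dynamic_range_db(16))   # ~96 dB
    print(dynamic_range_db(24))   # ~144 dB
    # Halving the analogue volume shifts the signal down ~6 dB while
    # the step size stays fixed, so relative to the original full
    # scale the 16-bit grid now resolves one extra "bit" of range.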

If you're aware of this and were just trying to adjust the language to the problem at hand, I'm sorry for being patronizing, but I just wanted to make sure we keep things factual here.


24-bit resolution is important for capture, because it leaves headroom for mistakes. 16 bits is enough for mastering.
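
To put rough numbers on the headroom argument (using the ~6 dB per bit rule): if you track with peaks 18 dB below full scale to leave room for surprises, a 24-bit converter still has about 144 - 18 = 126 dB between those peaks and its quantization floor, while a 16-bit converter is down to about 96 - 18 = 78 dB.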


There's also the headroom needed for signal processing in the equipment. Equalization or volume control done poorly can lower your dynamic range, for example when turning the volume down in Windows and then turning it up on an external amp.
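
A quick numeric illustration of that failure mode, assuming numpy: attenuate digitally, pass through a 16-bit stage (roughly what a system mixer can do), then boost back up on the amp:

    import numpy as np

    t = np.arange(48000) / 48000
    signal = 0.9 * np.sin(2 * np.pi * 440 * t)      # near-full-scale tone

    att_db = 40                                      # OS volume way down
    quiet = signal * 10 ** (-att_db / 20)
    through_16bit = np.round(quiet * 32767) / 32767  # re-quantize to 16-bit
    restored = through_16bit * 10 ** (att_db / 20)   # amp gain back up

    err = restored - signal
    snr = 10 * np.log10(np.mean(signal ** 2) / np.mean(err ** 2))
    print(f"round-trip SNR: {snr:.0f} dB")           # ~57 dB, not the ~96 dB
                                                     # 16-bit could carry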


> The experiment would also be extremely difficult to design.

I disagree. I think all the factors you are concerned about can be eliminated with a large enough sample size, like in the thousands (or maybe tens of thousands).

You allow each person to select the genre of music they like, and you play a few clips from a few songs at each bit depth. Then they guess which is 24-bit and which is 16-bit.

I'm not paying to set it up, but it could all be done online without too much grief. It would be good to track other statistics (age, headphone brand, etc.) as well, and see if something falls out of that.
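
A sketch of what one trial of that online test could look like; play() and ask_first_or_second() stand in for whatever audio and UI plumbing the site would actually use (both hypothetical):

    import random

    def run_trial(clip_16bit, clip_24bit):
        # One forced-choice trial: both versions in random order,
        # participant guesses which one was 24-bit.
        pair = [("16-bit", clip_16bit), ("24-bit", clip_24bit)]
        random.shuffle(pair)
        for _, clip in pair:
            play(clip)                    # hypothetical playback call
        guess = ask_first_or_second()     # hypothetical UI prompt, 0 or 1
        return pair[guess][0] == "24-bit" # True if the guess was right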


Pretend your headphones only moved with 8-bit resolution. There is no possible way the experiment could derive a useful conclusion, but you might trick yourself into thinking it did. Especially if your sample size was 10,000 people.

More realistically, the participant might choose music for which no 24-bit recording exists.

It's very important to control for every variable. It's actually not possible to gather reliable info about which headphones the listener is wearing. Even if it were, it wouldn't be possible to know whether they're doing the experiment in a quiet room, or whether there's a traffic jam just outside their apartment window, or whether their dog is barking during the test. Stuff like that.

Crowdsourcing this is an incredibly cool idea, but it'd just be so easy to believe you've performed a reliable test even though some variable undermined it.

I forgot another variable: Whether the music was recorded at 16-bit resolution. Most musicians use 24-bit, but it's easy to imagine that some of their samples might've been quantized to 16-bit without them realizing it.


> It's very important to control for every variable.

It's not, actually. Say you have 10,000 listeners and you randomly assign each one to 16-bit vs 24-bit listening. You have enough listeners that any differences between the groups are due to chance and will very nearly even out. Now, if you find people are unable to distinguish 16-bit from 24-bit, you might want to try the test again with more control over the environment, but if you find a substantial difference in a large blind randomized test, that's a real finding.
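
For the forced-choice version of the test, the analysis is simple: compare the overall hit rate with the 50% expected from guessing. A sketch using only Python's standard library:

    from math import sqrt
    from statistics import NormalDist

    def z_test_vs_chance(hits, trials):
        # Normal approximation to a binomial test against p = 0.5.
        p_hat = hits / trials
        se = sqrt(0.25 / trials)    # std. error under the guessing null
        z = (p_hat - 0.5) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    print(z_test_vs_chance(5200, 10000))  # 52% correct: z = 4.0, p ~ 6e-5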


> More realistically, the participant might choose music for which no 24-bit recording exists.

Well, obviously we'd need to have a limited set of music selections for which we have 24-bit recordings.

As you suggest, I expect the biggest impact on playback fidelity will come from other factors, like noise in the playback system (likely a PC).

But the flip side of it is that's also a good real-world test. If the only time you can tell a difference is when you're in an acoustically dead room with top-end equipment, then the extra bit depth really isn't worth it.


> But the flip side of it is that's also a good real-world test. If the only time you can tell a difference is when you're in an acoustically dead room with top-end equipment, then the extra bit depth really isn't worth it.

Hey, that's a great point! Hadn't considered that.

Proving "most people can't tell the difference between 24-bit and 16-bit in real-world settings" is less compelling than proving "no one can ever tell the difference," but it's still very relevant.


If the results are no better than chance, then it remains possible that a small subset of the test group actually can appreciate an improvement. Content providers may like to cater for that small subset. Disclaimer: I am not in that hypothetical subset.


If the results are no better than chance, it means the study methodology is flawed OR there's no effect.


No, it means the study methodology is flawed OR the effect is too small to be detected with the sample size.

So you'd have to decide in advance what difference is meaningful and choose your sample size to ensure you can detect it.
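
That choice is a standard power calculation. A rough sketch for a one-proportion test against 50% guessing, standard library only:

    from math import ceil, sqrt
    from statistics import NormalDist

    def trials_needed(true_rate, alpha=0.05, power=0.80):
        # Trials needed to detect a true hit rate above 50% chance.
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        num = z_a * sqrt(0.5 * 0.5) + z_b * sqrt(true_rate * (1 - true_rate))
        return ceil((num / (true_rate - 0.5)) ** 2)

    print(trials_needed(0.52))   # ~4,900 trials for a subtle 52% rate
    print(trials_needed(0.55))   # ~780 for a more obvious 55% rate

A barely-audible effect (say, a true 52% hit rate) needs thousands of trials, which is exactly why the smallest meaningful difference has to be fixed before the sample size is chosen.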



