
> I'd probably make the bet that there exist certain 24-bit sound files that certain listeners can discern from the same sound file that has been downsampled to 16-bit.

I would take you up on that bet. This has been tried before, and no difference was found, even when dithering wasn't used! The noise floor of 16-bit audio is around -96 dB. Very few hi-fi systems can manage that. Even in the highly unlikely event that some listeners can distinguish it, any difference would likely be swamped by noise in the analog components.
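The ~96 dB figure is just quantization arithmetic: each bit adds about 6 dB of dynamic range. A quick back-of-envelope sketch:

```python
import math

# Dynamic range of an ideal N-bit quantizer: 20 * log10(2**N) dB.
# For 16-bit audio this is the ~96 dB noise floor mentioned above.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 2))  # ~96.33 dB
print(round(dynamic_range_db(24), 2))  # ~144.49 dB
```

So going from 16 to 24 bits only moves a noise floor that real playback chains already can't reach.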



I probably should have written 16/44.1 vs 24/192, since I was thinking mostly of the waveform 'beat' interactions as shown in the linked video. Do you feel those are also indistinguishable? I can't afford an actual bet on it, but I'm interested enough to explore a bit and see what I can find.


1) I don't believe ultrasonic beat frequencies are by themselves audible (though I could be wrong about this).

2) It is possible to exploit nonlinearities in air to make audible sounds from ultrasound, but IIRC levels above 100 dB SPL are required. I think Disney World has an attraction that uses this.

3) It is highly likely that an arbitrary sound sample will sound different on playback if you add e.g. an 80 kHz tone, as harmonic distortion of amplifiers and speakers tends to increase with frequency. This is generally considered a bad thing though, as it is a difference that would not be heard by a live listener.

[edit] Wikipedia link for #2: http://en.wikipedia.org/wiki/Sound_from_ultrasound

To understand how this works, consider that the speed of sound in air varies with pressure, and that sound itself is a pressure wave. Hand-wavingly, this means a sufficiently intense sound wave will alter its own propagation speed, which in turn causes all sorts of interesting effects.

An analogous effect for light is used in many green lasers: some piezoelectric crystals will vary their index-of-refraction when an electric field is applied; a sufficiently powerful laser will generate a strong enough electric field. This can be used to frequency double infrared lasers into visible light.


The frequency of a 'beat' is completely different from what we normally call the frequency of a sound. The beat frequency describes how quickly the amplitude (the envelope) of the sound wave varies; the frequency of a tone describes how quickly the pressure wave itself oscillates back and forth.
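The distinction is just the trig identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): the sum of two close tones is a carrier at the average frequency whose envelope varies at the difference frequency. A small sketch with two (arbitrarily chosen) audible tones:

```python
import math

# Two close tones sum to a carrier at the average frequency whose
# amplitude is modulated by a cosine at half the difference frequency.
# The perceived beat pulses at |f1 - f2| (the envelope peaks twice per
# cosine cycle).
f1, f2 = 440.0, 444.0  # Hz; beats at 4 Hz
for n in range(1000):
    t = n / 44100.0
    a, b = 2 * math.pi * f1 * t, 2 * math.pi * f2 * t
    direct = math.sin(a) + math.sin(b)
    envelope_form = 2 * math.sin((a + b) / 2) * math.cos((a - b) / 2)
    assert abs(direct - envelope_form) < 1e-9
```

No new spectral component appears: the signal still contains only f1 and f2, which is why a beat between two ultrasonic tones is not by itself an audible frequency.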

Besides, if a combination of sounds outside the audible spectrum DID combine to produce audible sounds (maths says they don't, but maybe non-ideal properties of air etc mean they might?), the resultant audible sounds would be picked up by the recording equipment anyway! So you'd never need to record the inaudible source sounds, just the resultant audible bit.
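To illustrate the "non-ideal properties" case: a nonlinearity (in air, an amplifier, whatever) can create a genuinely new component at the difference frequency, and that component is an ordinary audible sound a microphone would pick up. A sketch with a hypothetical weak quadratic distortion y = x + 0.1x² and two made-up ultrasonic tones:

```python
import math

# Two inaudible tones (40 kHz and 41 kHz) passed through a hypothetical
# quadratic nonlinearity acquire a real component at the audible
# difference frequency f2 - f1 = 1 kHz. We measure it by projecting the
# distorted signal onto a 1 kHz cosine.
fs, dur = 200_000, 0.01          # sample rate (Hz), duration (s)
n_samples = int(fs * dur)
f1, f2, f_diff = 40_000, 41_000, 1_000

amp = 0.0
for n in range(n_samples):
    t = n / fs
    x = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    y = x + 0.1 * x * x          # weak quadratic distortion
    amp += y * math.cos(2 * math.pi * f_diff * t)
amp *= 2 / n_samples             # amplitude of the 1 kHz component

print(round(amp, 3))  # ~0.1: a real 1 kHz difference tone now exists
```

Which is exactly the point above: once the audible tone physically exists in the room, the recording captures it directly, so there's no need to store the ultrasonic sources.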


Unless your listener can hear frequencies above 22.05kHz, it's theoretically impossible: the sampling theorem says 44.1kHz sampling can perfectly reproduce any signal containing only frequencies below 22.05kHz.
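The flip side of the theorem is that 44.1 kHz samples simply have no room for anything above 22.05 kHz: a tone above Nyquist produces samples identical to a folded-down in-band tone, which is why the content must be filtered out before sampling. A quick sketch with an arbitrary 30 kHz tone:

```python
import math

# At fs = 44.1 kHz, a 30 kHz sine yields exactly the same samples as an
# inverted 14.1 kHz sine (30 kHz folds to 44.1 - 30 = 14.1 kHz). The
# sample stream cannot distinguish them.
fs = 44_100
for n in range(1000):
    s_ultra = math.sin(2 * math.pi * 30_000 * n / fs)
    s_alias = -math.sin(2 * math.pi * 14_100 * n / fs)
    assert abs(s_ultra - s_alias) < 1e-9
```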

Any differences within the normal human audible frequency range must be caused by an imperfect DAC. Agreed?

(No, I don't have a perfect DAC either, but if the audible artifacts exist because a DAC produces a less accurate analog waveform below 22kHz when fed a 44.1kHz source rather than a 192kHz source, isn't that squarely the DAC's fault? It should also be made abundantly clear that this is a hypothetical. Is it actually a problem? Has anyone reconstructed the ideal analog waveform from a 44.1kHz sample stream, compared it to oscilloscope readings from a decent-quality DAC, and found theoretically audible differences?)


Do you have a perfect DAC we can use as a reference?



