
> I don't 100% get the band limited signal bit. How does band limiting imply that there's only a single possible reconstruction of the digital signal? I can kind of picture the fourier transform meaning that there's only one representation but it's a bit of a leap for me. Can the converter itself not create the signal incorrectly?

Nyquist's theorem shows that you can exactly reproduce a sine wave, provided that you take strictly more than two samples per cycle (and implicitly assume that what you sampled is a sine wave).
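To make the "implicitly assume" part concrete: below the Nyquist rate, two different sine waves can produce identical samples, so the sine-wave assumption is what picks a unique answer. A small NumPy sketch (mine, not from the comment; the rates are illustrative):

```python
import numpy as np

fs = 44_100          # sampling rate, Hz (CD rate)
f = 3_000            # an in-band tone, Hz
n = np.arange(64)    # sample indices

# A 3 kHz cosine and a 41.1 kHz cosine (fs - f) yield *identical* samples:
# cos(2*pi*(fs - f)*n/fs) = cos(2*pi*n - 2*pi*f*n/fs) = cos(2*pi*f*n/fs)
in_band = np.cos(2 * np.pi * f * n / fs)
alias   = np.cos(2 * np.pi * (fs - f) * n / fs)
```

Only the assumption that the original was band-limited below fs/2 lets the reconstruction pick the 3 kHz interpretation over the 41.1 kHz one.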

Any periodic signal (which audio, being AC-coupled, always is) can be represented exactly by its Fourier transform. The FT is simply a different representation of the same function, namely as a sum of sine waves. In general the FT of a signal has many terms (i.e., many spectral components, or harmonics). However, in a band-limited system, only certain harmonics are allowed to pass. If your audio system claims to have, say, 20Hz to 20kHz bandwidth, then any Fourier component outside that range will be greatly attenuated (if the attenuation puts a component below the noise floor, we can say it has been completely eliminated). It is the act of attenuating those out-of-band components that causes a square wave to go squiggly.
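You can see the "squiggly" effect by summing only the in-band terms of a square wave's Fourier series; the ripple (the Gibbs overshoot) is exactly what a band-limited system produces. A quick sketch, with a 1 kHz fundamental and a 20 kHz band edge chosen by me:

```python
import numpy as np

f0 = 1_000            # square-wave fundamental, Hz
band_limit = 20_000   # keep only harmonics inside a 20 kHz band
t = np.linspace(0, 2 / f0, 2000, endpoint=False)   # two periods

# Fourier series of a unit square wave: odd harmonics with amplitude 4/(pi*k).
# Summing only k*f0 <= 20 kHz gives the band-limited ("squiggly") version.
square_bl = np.zeros_like(t)
for k in range(1, band_limit // f0 + 1, 2):        # k = 1, 3, 5, ..., 19
    square_bl += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * f0 * t)
```

The band-limited copy overshoots the ideal +/-1 levels near each edge and ripples in between, which is the squiggle you see on a scope.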

This means that for any arbitrary signal that has been band-limited to (less than) half your sampling frequency, you can exactly reconstruct that band-limited copy from its samples.
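The standard way to state that reconstruction is Whittaker-Shannon (sinc) interpolation: each sample is weighted by a shifted sinc and summed. A self-contained sketch (my own test signal and rates; the infinite sum is truncated to a finite window, so the match is close rather than bit-exact):

```python
import numpy as np

fs = 8_000
T = 1 / fs
n = np.arange(-2000, 2000)      # finite window of the (ideally infinite) sum

def x(t):
    # A band-limited test signal: two tones well below fs/2 = 4 kHz.
    return np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

samples = x(n * T)

def reconstruct(t):
    # Whittaker-Shannon interpolation: sum of sample-weighted shifted sincs.
    # np.sinc is the normalized sinc, sin(pi*u)/(pi*u), which is what we need.
    return np.sum(samples * np.sinc((t - n * T) / T))

# Evaluate at an off-grid instant near the middle of the window:
t0 = 13.37 * T
err = abs(reconstruct(t0) - x(t0))
```

The error comes only from truncating the sum; a real DAC's reconstruction filter approximates the same sinc weighting in analogue form.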

The sampling system usually (for consumer audio, always) has an anti-aliasing filter between the signal source and the sampler (not having an AA filter is what allowed Tektronix sampling oscilloscopes to show multi-GHz signals in the 1960s, by effectively heterodyning the signal frequency down to something usable). That means the digitizer only ever sees the band-limited signal. The bandwidth of the AA filter is chosen so that the maximum frequency passed to the sampler is half (or less) of the sampling rate.
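In a digital simulation you can mimic that analogue AA filter with a windowed-sinc low-pass applied before decimation. A sketch (tone frequencies, cutoff, and filter length are my own choices): dropping from 48 kHz to 24 kHz, a 20 kHz tone would fold down to 24 - 20 = 4 kHz, right on top of real content, unless it is filtered out first.

```python
import numpy as np

fs = 48_000
cutoff = 10_000                 # passband edge, below the new Nyquist of 12 kHz
N = 257                         # filter length (odd, so the delay is an integer)

# Windowed-sinc FIR low-pass: ideal sinc impulse response, Hamming-windowed.
k = np.arange(N) - (N - 1) / 2
h = 2 * cutoff / fs * np.sinc(2 * cutoff * k / fs)
h *= np.hamming(N)
h /= h.sum()                    # normalize to unity gain at DC

t = np.arange(4096) / fs
keep = np.sin(2 * np.pi * 4_000 * t)    # in-band tone: survives
junk = np.sin(2 * np.pi * 20_000 * t)   # would alias to 4 kHz after decimation

filtered_keep = np.convolve(keep, h, mode='same')
filtered_junk = np.convolve(junk, h, mode='same')

def rms(s):
    return np.sqrt(np.mean(s[N:-N] ** 2))   # skip edge transients
```

After filtering, every second sample can be kept (decimation by 2) without the 20 kHz energy masquerading as a 4 kHz tone.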

This also might help: http://dsp.stackexchange.com/questions/7879/how-does-subpixe...

> Isn't there also an argument that frequencies above 19khz can be heard by some people so need to be accurately represented?

That argument has been put forward, yes. "Back in the day" Bell Labs performed actual experiments with human test subjects to produce "equal loudness" curves. The upshot is that the power of an audio source must increase with frequency for the tone to sound equally loud (starting from about 4kHz or so). Then you must consider the effect of loud sounds on the human ear. Hearing damage is cumulative and the rate of damage increases with sound power, so it's hard to put an exact upper bound on the power the ear can tolerate. However, at some point in the 18kHz to 22kHz range, the equal-loudness curve crosses the damage curve. At that point you would have to turn the audio source up so high that you would damage your hearing listening to it: so there is a definite upper frequency limit to human hearing. It may be slightly higher for some people than others, but 22.05kHz (the Nyquist limit for CD-rate sampling) is almost certainly many standard deviations above the mean.

It's difficult for a group of people who are always striving to make things better (engineers and scientists) to admit, but we have come to a point where audio reproduction is "good enough." For a modest amount of money, the sampling, storage and reproduction are, for all intents and purposes, "perfect." Your living room is never going to sound like a concert hall with a live orchestra, but it's not because you don't have enough bits or samples or bandwidth.
