
Another really useful, practical consequence of looking at things this (inner-product) way.

You have a function expressed as a weighted sum of basis vectors in your space: e.g.

f = Sigma_n x_n e_n = Sigma_n <f,e_n> e_n

(for simplicity, let's say the basis is self-dual, so the coefficients are x_n = <f,e_n>)

then take any approximation g = Sigma_n <g,e_n> e_n

now consider the residual:

f - g = Sigma_n w_n e_n , where w_n = <f,e_n> - <g,e_n>

so the error clearly has its own set of weights {w_n} in the same basis.

So this demonstrates that when you approximate, the error itself is composed of the same building blocks as the original (I know, this is obvious, but a lot of people miss it!).
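A minimal numerical sketch of this, using a toy orthonormal basis of R^4 (the basis, the signal, and the "keep the two largest coefficients" approximation are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
E = np.eye(n)                        # columns e_n: an orthonormal basis
f = rng.normal(size=n)               # some signal

# Expand f in the basis: coefficients are inner products <f, e_n>.
f_coeffs = E.T @ f

# A crude approximation g: keep only the two largest coefficients.
threshold = np.sort(np.abs(f_coeffs))[-2]
g_coeffs = np.where(np.abs(f_coeffs) >= threshold, f_coeffs, 0.0)
g = E @ g_coeffs

# The residual's weights in the SAME basis are just the
# coefficient differences: w_n = <f,e_n> - <g,e_n>.
w = E.T @ (f - g)
print(np.allclose(w, f_coeffs - g_coeffs))   # the claim, numerically
```

Nothing here depends on the basis being the standard one; any orthogonal matrix E works the same way.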

This explains why Gibbs ringing in signal processing looks sinusoidal (the residual of a truncated Fourier series is a sum of the discarded sinusoids), and why approximation errors in a Haar basis look "blocky", etc.
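To see the Gibbs case concretely, here is a sketch that hard-truncates the DFT of a sampled square wave (the sample count and cutoff N are arbitrary choices): the residual lives entirely in the discarded high-frequency sinusoids, and the partial sum overshoots near the jumps.

```python
import numpy as np

# Square wave on [0, 2*pi), with jump discontinuities.
t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
f = np.sign(np.sin(t))

# Keep only harmonics |k| <= N (hard spectral truncation).
N = 10
F = np.fft.fft(f)
k = np.fft.fftfreq(len(t), d=1 / len(t))   # integer frequencies
F_trunc = np.where(np.abs(k) <= N, F, 0.0)
g = np.real(np.fft.ifft(F_trunc))

residual = f - g
# The residual's kept-band coefficients are zero: the error is
# built only from the sinusoids we threw away, hence the ringing.
overshoot = g.max() - 1.0                  # Gibbs overshoot near the jump
print(overshoot > 0.05)
```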

Aside: this also generalizes nicely to frame theory, where you give up orthonormality (and hence exact energy conservation) but gain other things.
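For a quick taste of the frame case, here is a sketch with the classic "Mercedes-Benz" frame: three unit vectors in R^2 at 120 degrees. It is a tight frame with frame bound A = 3/2, so reconstruction still works via inner products, but the coefficient energy is scaled by A rather than conserved:

```python
import numpy as np

# Three unit vectors 120 degrees apart: a tight frame for R^2,
# overcomplete and not orthonormal.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
Phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3 x 2

f = np.array([0.7, -1.3])          # arbitrary test vector
coeffs = Phi @ f                   # frame coefficients <f, phi_k>

# Tight frame, bound A = 3/2: f = (1/A) * Sigma_k <f, phi_k> phi_k
f_rec = (2 / 3) * Phi.T @ coeffs
print(np.allclose(f_rec, f))

# Energy is scaled, not conserved: Sigma_k |<f,phi_k>|^2 = A * ||f||^2
print(np.allclose(np.sum(coeffs**2), 1.5 * np.sum(f**2)))
```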


