In the next examples, we will concentrate on the design of a low pass filter, but
certainly, the same techniques can be applied to any type of ideal filter.
So the first idea is the following.
We pick a cut off frequency, omega c.
We compute the ideal impulse response for the lowpass, analytically.
We truncate h[n] to a finite support, hat h[n].
This hat h[n] defines an FIR filter that we can use.
For the time being, we don't worry about causality, so we preserve the symmetry
around zero of the impulse response and we approximate the ideal filter
with an FIR of length 2N + 1 centered around zero.
And so, here hat h[n] is equal to omega c over pi times the sinc of omega c n over pi
for n less than or equal to capital N in magnitude, and zero everywhere else.
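As a quick sketch of this truncation step, the taps above can be computed directly with NumPy; the function name and parameter values here are illustrative, not part of the lecture:

```python
import numpy as np

def ideal_lowpass_taps(omega_c, N):
    """Truncated ideal lowpass impulse response, symmetric around n = 0.

    hat h[n] = (omega_c / pi) * sinc(omega_c * n / pi) for |n| <= N.
    Note np.sinc(x) = sin(pi x) / (pi x), which matches this formula.
    """
    n = np.arange(-N, N + 1)
    return (omega_c / np.pi) * np.sinc(omega_c * n / np.pi)

# Example: cutoff pi/4, keep 2*10 + 1 = 21 taps centered at n = 0.
h = ideal_lowpass_taps(omega_c=np.pi / 4, N=10)
```

The resulting vector is symmetric around its middle sample, which is the noncausal, zero-centered FIR described above.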
A justification of this method is the following: if we compute the mean square
error between the original filter and the approximation,
we have that the norm of the error in the frequency domain
is the integral from minus pi to pi of the frequency response of the ideal
filter minus the frequency response of the approximated filter, squared.
This is equal to the norm squared of the difference between the two frequency
responses.
And thanks to Parseval's theorem, this is also the norm squared
of the difference between the impulse responses in the time domain.
And so we can write the mean square error
simply as the sum from n equal to minus infinity to plus infinity
of the squared difference between the two impulse responses.
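We can verify Parseval's relation numerically for the truncation error: the time-domain energy of the error sequence equals 1/(2 pi) times the integral of its squared magnitude spectrum. The parameter values below are illustrative, and the error sequence is restricted to a long but finite range |n| <= K for the computation:

```python
import numpy as np

omega_c, N, K = np.pi / 3, 8, 500
n = np.arange(-K, K + 1)
h = (omega_c / np.pi) * np.sinc(omega_c * n / np.pi)

# Truncation error: the ideal taps with the kept window |n| <= N zeroed out.
err = h.copy()
err[np.abs(n) <= N] = 0.0

# Time-domain side of Parseval: sum of |err[n]|^2.
time_energy = np.sum(err ** 2)

# Frequency-domain side: (1/2pi) * integral of |E(e^{jw})|^2 over one period,
# computed with a rectangle rule on a dense grid (exact for this trig polynomial).
M = 4096
w = np.linspace(-np.pi, np.pi, M, endpoint=False)
E = err @ np.exp(-1j * np.outer(n, w))   # DTFT of err on the grid
freq_energy = np.mean(np.abs(E) ** 2)
```

The two energies agree to numerical precision, which is exactly the equivalence invoked in the argument above.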
The impulse response of the ideal lowpass is a sinc and,
as you remember, its envelope decays as 1/n.
So the mean square error is clearly minimized if we pick the values for
hat h[n] from an interval that is symmetric around zero,
because in this way we are removing the largest terms from that summation.
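This claim is easy to check numerically: keeping a window of 2N + 1 taps centered at zero removes less energy from the sum than keeping a window of the same length centered elsewhere. The function name, the offsets, and the summation limit K are illustrative assumptions:

```python
import numpy as np

def mse_of_truncation(omega_c, N, offset, K=5000):
    """MSE of keeping the 2N+1 taps centered at `offset`.

    offset = 0 is the symmetric choice from the lecture; the sum over the
    discarded taps is computed over the finite range |n| <= K (K >> N).
    """
    n = np.arange(-K, K + 1)
    h = (omega_c / np.pi) * np.sinc(omega_c * n / np.pi)
    keep = np.abs(n - offset) <= N
    return np.sum(h[~keep] ** 2)

# The symmetric window discards the smallest sinc samples, so it yields
# the smallest error; shifting the window away from zero increases it.
errs = [mse_of_truncation(np.pi / 2, 10, off) for off in (0, 5, 20)]
```

The error grows as the window moves away from zero, because the discarded set then includes the large samples near the main lobe of the sinc.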