Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


A course by École Polytechnique Fédérale de Lausanne

Digital Signal Processing

340 ratings

From the lesson

Module 4, Part 2: Filter Design

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences

- Martin Vetterli, Professor, School of Computer and Communication Sciences

In order to understand intuitively what happens when we approximate a filter by

truncation, we have to take a little bit of a detour, and

look at the problem from a different perspective.

So we could consider the approximated filter ĥ[n] as the product of the original impulse response h[n] times a sequence w[n], which is just the indicator function for the interval −N to N.

So w[n] is just a series of points of value 1, centered at 0 and going from −N to N.
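As a quick numerical sketch of this truncation-as-windowing idea (the names h, w, h_hat and the cutoff are my own illustrative choices, not from the lecture), take an ideal lowpass impulse response and multiply it by the indicator window:

```python
import numpy as np

# Illustrative sketch: truncating an ideal lowpass impulse response by
# multiplying it with the indicator w[n] of the interval [-N, N].
# The names (h, w, h_hat) and the cutoff omega_c are my own choices.

omega_c = np.pi / 3          # cutoff of the ideal lowpass
N = 8                        # half-width of the indicator window
n = np.arange(-20, 21)       # a finite view of the infinite index axis

h = omega_c / np.pi * np.sinc(omega_c * n / np.pi)   # ideal impulse response
w = (np.abs(n) <= N).astype(float)                   # indicator of [-N, N]
h_hat = h * w                                        # FIR approximation

# Multiplying by w[n] keeps the samples with |n| <= N and zeroes the rest.
assert np.all(h_hat[np.abs(n) > N] == 0)
assert np.allclose(h_hat[np.abs(n) <= N], h[np.abs(n) <= N])
```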

So with this notation, the question now is: how can we express the Fourier Transform of the product of two sequences?

And for that, we have to study the modulation theorem.

The modulation theorem is really the dual of the convolution theorem.

You remember the convolution theorem states that the Fourier Transform of

the convolution of two sequences is the product in the frequency domain

of the Fourier Transforms.

The Modulation Theorem gives us a result for the Fourier Transform

of the product of two sequences and tells us that the Fourier Transform

is the convolution of the Fourier Transforms in the frequency domain.
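For a finite-length sanity check of this duality (a sketch of mine, not the lecture's derivation), we can use the DFT as a stand-in for the DTFT: the DFT of a pointwise product equals the circular convolution of the two DFTs, scaled by 1/N.

```python
import numpy as np

# Discrete analogue of the modulation theorem: DFT(x * y pointwise) equals
# the circular convolution of DFT(x) and DFT(y), divided by N.
# All names and parameters here are illustrative.

rng = np.random.default_rng(0)
Nlen = 16
x = rng.standard_normal(Nlen)
y = rng.standard_normal(Nlen)

X, Y = np.fft.fft(x), np.fft.fft(y)

# Circular convolution over the N frequency bins: sum_m X[m] Y[(k - m) mod N].
circ = np.array([np.sum(X * np.roll(Y[::-1], k + 1)) for k in range(Nlen)])

assert np.allclose(np.fft.fft(x * y), circ / Nlen)
```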

So, what is a convolution in the frequency domain?

Well in C∞, i.e., in the space of infinite support sequences,

we can define the convolution in terms of the inner product between two sequences.

So the convolution between x and y is the inner product between x conjugated and y time-reversed and delayed by n.

And so, if you apply the definition of the inner product in C∞,

that's exactly what you'll get.
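As a small check of this inner-product view (illustrative names; the sequences are real-valued, so the conjugation is a no-op here):

```python
import numpy as np

# (x * y)[n] written as the inner product of conj(x) with y time-reversed
# and delayed by n, compared against a direct convolution.

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0, 7.0])
full = np.convolve(x, y)                # direct convolution, length 3 + 4 - 1

for n in range(len(full)):
    # y[n - k] for each k in the support of x, zero outside y's support
    shifted = np.array([y[n - k] if 0 <= n - k < len(y) else 0.0
                        for k in range(len(x))])
    assert np.isclose(full[n], np.sum(np.conj(x) * shifted))
```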

We could adopt the same strategy in L2([-π, π]), which is the space where

DTFTs live, and define the convolution between two Fourier Transforms

as the inner product between the first Fourier Transform conjugated and the second Fourier Transform frequency-reversed and delayed by ω.

And if we apply the definition of the inner product for L2([-π, π]), we get that the convolution between two Fourier Transforms is 1/2π times the integral from −π to π of X(e^jσ) times Y(e^j(ω−σ)) dσ.

With this notation in place we prove that the DTFT of the product of

two sequences is the convolution of their Fourier Transforms by working backwards.

So we start with the inverse DTFT of the convolution of two Fourier Transforms.

We can write it out explicitly like so.

And if we expand the definition of the convolution inside

the integral that defines the inverse DTFT, we have the following expression.

Where here, you have the convolution.

And outside here you have the inverse DTFT.

Now we use the same trick that we used when we proved the convolution theorem.

Namely, we replace ω with (ω − σ) + σ in the exponential here.

And we managed to split the complex exponential in a way that will allow us to

separate the contribution due to X and to Y.

And indeed when we do so

we have a first part here, which is just an inverse DTFT of big X.

And here, we have the inverse DTFT of big Y.

The fact that the argument to the complex exponential has a -σ term doesn't really

bother us because the integral is between -π and π and of course,

the DTFT is 2π periodic, so an offset does not change the result of the integral.

And so finally, we have indeed what we were looking for: the product of the two time-domain sequences.

As an interesting aside, let's look back at the sinusoidal modulation result

with the help of the modulation theorem we just studied.

So the DTFT of a sequence x[n] multiplied by cos(ωc·n) turns out to be the convolution of the DTFT of x with the DTFT of cos(ωc·n), which is π[𝛿(ω − ωc) + 𝛿(ω + ωc)].

This 𝛿 here is the 𝛿 defined over the real line. So it's that functional that isolates the value of the function when it is used under the integral sign.

So because of the distributive property of the convolution,

we can split the above convolution product into two terms,

which we write out explicitly here as convolution integrals.

And we have, for instance in the first case, the integral of the product between the DTFT of x and 𝛿(σ − ω + ωc) dσ.

Similarly, here we have the complementary term on the negative axis: the integral of the product between the Fourier Transform of x and 𝛿(σ − ω − ωc) dσ.

Because of the sifting property of the 𝛿, these integrals will isolate the value for

the Fourier Transform for the value of σ where the argument to the 𝛿 is equal to 0.

So in the first case, the 𝛿 will kick in for σ = ω − ωc, and in the second term it will kick in for σ = ω + ωc.

And so in the end the final result is a well known modulated signal.

1/2 the Fourier Transform of the signal centered at ωc, plus 1/2 the Fourier Transform of the signal centered at −ωc.

So this is another way to arrive at the sinusoidal modulation result.
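This half-amplitude shifting is easy to see numerically. In the sketch below (all parameters are my own choices), we use the DFT and a complex exponential input so each spectral copy lands in a single bin:

```python
import numpy as np

# Multiplying x[n] by cos(omega_c n) places two half-amplitude copies of the
# spectrum at +/- omega_c. With a single complex exponential as input, each
# copy occupies exactly one DFT bin.

Nlen = 64
n = np.arange(Nlen)
k0, kc = 5, 12                       # bin of the signal, bin of the carrier
x = np.exp(2j * np.pi * k0 * n / Nlen)
modulated = x * np.cos(2 * np.pi * kc * n / Nlen)

M = np.fft.fft(modulated) / Nlen     # normalized so x's spectral line is 1

# Half-amplitude copies appear at bins k0 + kc and k0 - kc (mod N).
assert np.isclose(M[k0 + kc], 0.5)
assert np.isclose(M[(k0 - kc) % Nlen], 0.5)
```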

But now we should go back to the reason why we started this detour into

the modulation theorem, and

try to understand what the Gibbs phenomenon is all about.

So remember, in the beginning of our detour, we were at a point

where we had expressed the approximate filter, ĥ of n, as the product

between the ideal impulse response h[n] and an indicator function that serves the purpose of isolating the points of the ideal impulse response that we want to keep.

So if this is the ideal impulse response h[n], and this is w[n], you can see that this window basically kills all these guys here and kills all these guys here, and leaves us with an FIR approximation to the impulse response.

In the frequency domain,

this corresponds to the convolution of the Fourier Transforms of the two actors.

The Fourier Transform of the ideal impulse response is a rect function that we know very well.

The Fourier Transform of the indicator function, we have seen many times before

and it is actually the Fourier Transform of a zero-centered moving average filter.

So its formula is here, and it's sin(ω(2N + 1)/2) / sin(ω/2).
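This closed form can be checked directly against the defining sum of the DTFT of the indicator (a sketch; N and the frequency grid are arbitrary choices of mine):

```python
import numpy as np

# The DTFT of the indicator of [-N, N] is the Dirichlet kernel
# sin(omega (2N + 1) / 2) / sin(omega / 2).

N = 4
omega = np.linspace(0.1, np.pi - 0.1, 200)   # avoid omega = 0 (a 0/0 limit)

# Direct DTFT: sum of e^{-j omega n} over n = -N..N (real by symmetry).
direct = sum(np.exp(-1j * omega * n) for n in range(-N, N + 1)).real

closed_form = np.sin(omega * (2 * N + 1) / 2) / np.sin(omega / 2)

assert np.allclose(direct, closed_form)
```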

So here we're going to try and compute the convolution between the Fourier Transforms in a graphical way.

Here are the actors involved in the play.

We have H(e^jω), which is the Fourier Transform of the ideal filter.

We have W(e^jω), which is the Fourier Transform of the indicator function.

And here, we have the result which is the integral of the product of these two guys.

As ω moves along the axis here,

we will recenter the Fourier Transform of the indicator function,

take the product of the two, and compute the integral.

In the beginning, we're just integrating the wiggles of the window's Fourier Transform over the non-zero part of the ideal frequency response.

So what we have is oscillatory behavior, of small amplitude.

Things start to become interesting when the main part of the Fourier Transform of the indicator function starts to overlap with the support of the rect function.

As we approach the transition band of the filter,

we see that the value of the convolution starts to grow.

And as a matter of fact, it grows even more when the entire main lobe of the Fourier Transform of the window is under the rect function.

As we move along, the ripples that trail the main lobe start to get integrated, and so the behavior in the passband is again oscillatory as those little ripples enter and exit the main integration interval.

As we reach the other transition band,

we have exactly the symmetric behavior at the other edge of the band.
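A minimal numerical sketch of the resulting ripple (assuming an ideal lowpass with cutoff π/2, my own choice of parameters): the truncated filter's frequency response overshoots near the band edge, and the overshoot does not shrink as the window grows, which is the Gibbs phenomenon.

```python
import numpy as np

# Frequency response of the truncated ideal lowpass, evaluated on a dense
# grid. The peak overshoot above 1 stays near 9% for short and long windows.

omega_c = np.pi / 2
omega = np.linspace(0, np.pi, 2000)

def truncated_response(N):
    n = np.arange(-N, N + 1)
    h_hat = omega_c / np.pi * np.sinc(omega_c * n / np.pi)
    # Real frequency response of the symmetric FIR filter.
    return np.real(np.exp(-1j * np.outer(omega, n)) @ h_hat)

for N in (10, 100):
    assert truncated_response(N).max() > 1.05   # Gibbs overshoot persists
```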

The shape of the Fourier Transform of the approximate filter will depend

then on the shape of the Fourier Transform of the indicator function.

So if we look at it here, we see that we have what is called a main lobe, here in blue, that will determine how steeply the approximate filter will transition from the stopband to the passband.

The width of the main lobe is what determines the steepness,

whereas the amplitude of the so-called side lobes will

determine the amplitude of the error on either side of the transition band.

And this is what determines the Gibbs phenomenon.

In terms of our requirements, what we would like to have is a very narrow

main lobe so that the transition is very sharp and at the same time,

we'd like to have very small side lobes so that the Gibbs error is kept low.

And we would also like to have a short window so that the FIR will be efficient.

And these are very, very conflicting requirements.

There is a large body of literature that is concerned with developing the best possible window in order to approximate an ideal filter.

For instance, we have used a rectangular window to truncate the impulse response.

But if we use a triangular window, which weights the impulse response and tapers it to zero in a more gradual manner, what we have is that we will be able to attenuate the Gibbs error at the price of a wider main lobe.

So here, you have the comparison between a 19-tap rectangular window in gray and a 19-tap triangular window in red.

And you see that the side lobes are much lower for the triangular window, but the width of the main lobe has increased.
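These side-lobe levels can be measured numerically. In the sketch below (window length as in the lecture, but the measurement details are my own), the rectangular window's peak side lobe sits near −13 dB while the triangular window's is much lower:

```python
import numpy as np

# Comparing the spectra of a 19-tap rectangular window and a 19-tap
# triangular (Bartlett) window: lower side lobes, wider main lobe.

L = 19
rect = np.ones(L)
tri = np.bartlett(L)

def peak_sidelobe_db(w, nfft=4096):
    W = np.abs(np.fft.rfft(w, nfft))
    W /= W[0]                          # normalize the main-lobe peak to 1
    # Walk down the main lobe to its first local minimum, then take the
    # largest value beyond it: the peak side lobe.
    i = 1
    while W[i + 1] < W[i]:
        i += 1
    return 20 * np.log10(W[i:].max())

assert peak_sidelobe_db(rect) > -15    # rectangular: about -13 dB
assert peak_sidelobe_db(tri) < -20     # triangular: far lower side lobes
```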