Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


A course from École Polytechnique Fédérale de Lausanne (EPFL)

Digital Signal Processing



From the lesson

Module 4, Part 1: Introduction to Filtering

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences

- Martin Vetterli, Professor, School of Computer and Communication Sciences

Let's go back now to our friend the moving average and

compute its frequency response.

The moving average has an impulse response,

which is just the indicator sequence for the interval 0 to M − 1, divided by M.

We have already computed the DTFT of such a signal a few lessons back.

If we consider now the magnitude of the DTFT,

we find out that it's just (1/M) times the absolute

value of sin(Mω/2) / sin(ω/2).

An interesting thing to remember is that the magnitude response

is going to be 0 at multiples of 2π/M, except at 0.

So here you see that you have indeed M minus 1 zeros along the frequency axis.
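As a quick numerical sanity check (a sketch using NumPy, with an illustrative value of M), we can compare the DTFT of the moving-average impulse response against the closed form, and verify the zeros at multiples of 2π/M:

```python
import numpy as np

M = 20                                   # length of the moving average (illustrative)
h = np.ones(M) / M                       # impulse response: indicator of [0, M-1] over M

# Evaluate the DTFT directly on a frequency grid (omega = 0 excluded to avoid 0/0)
w = np.linspace(1e-3, np.pi, 1000)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(M))) for wk in w])

# Closed form: |H| = (1/M) |sin(M w / 2) / sin(w / 2)|
H_mag = np.abs(np.sin(M * w / 2) / np.sin(w / 2)) / M
assert np.allclose(np.abs(H), H_mag)

# The magnitude is exactly 0 at multiples of 2*pi/M (the roots of unity sum to zero)
zero = np.sum(h * np.exp(-1j * (2 * np.pi / M) * np.arange(M)))
assert abs(zero) < 1e-12
```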

We can plot the magnitude response for increasing values of M.

So, for instance, here you have the frequency response for M = 20.

And you could count the 19 zeros along the frequency axis.

And here you have the frequency response for M = 100.

And you can see that the frequency response is very concentrated

around the origin.

We will see the significance of that later on.

Let's go back to our denoising example that we studied in the time domain in

the previous module and look at its development in the frequency domain.

So remember, here we're talking time domain:

we had a smooth signal that was corrupted by some additive noise,

which we represent here in orange.

In the frequency domain, the spectrum of the smooth signal looks like this.

Most of its energy is concentrated around zero frequency, so it's a lowpass signal.

And the Fourier transform of the noise

looks like noise in the frequency domain as well.

So when we put things together, we have the spectrum of the smooth signal.

We have the spectrum of the noise.

And the sum is the noisy spectrum of the measured signal.

When we use a moving average to filter this noisy signal, in the frequency domain

we're multiplying this spectrum by the frequency response of the moving average.

And in magnitude, it looks like this.

So here we're using, for instance, a nine point average

as you can see from the number of zeros along the frequency axis.

So the product in magnitude determines a very deep

attenuation of the high frequencies.

And therefore,

most of the noise that was contained in these bands has been eliminated.

At the same time, though,

if you compare the result of the filtering operation with the original spectrum,

you see that the filtering has eliminated parts of the original spectrum that

actually had every right to be there.

So that proves that there's no free lunch in signal processing as well.

And in order to remove the noise,

sometimes you have to remove some of the good parts along with it.
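This trade-off is easy to reproduce numerically. The sketch below uses a made-up lowpass signal and noise level (not the lecture's exact data): the moving average reduces the overall error, even though it also slightly distorts the signal itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(400)
clean = np.sin(2 * np.pi * n / 200)               # smooth, lowpass signal
noisy = clean + 0.3 * rng.standard_normal(n.size) # additive noise

M = 9                                             # 9-point moving average
smoothed = np.convolve(noisy, np.ones(M) / M, mode="same")

# Averaging attenuates the high-frequency noise, so the mean squared error
# goes down, even though the filter also slightly alters the clean signal
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((smoothed - clean) ** 2)
assert err_after < err_before
```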

Before we move on, let's look once again at the same denoising operation in

the time domain, now that we know what's going on in the frequency domain.

We can see that when we use a moving average of, say, length 12,

we are indeed smoothing out the highest components of the noise,

but not so much the ones that are slower moving,

and that we are already starting to alter a little bit the curvature

of the signal, as explained by the spectrum we just derived.

So what about the phase?

The best way to understand the effects of the phase on a signal is

to distinguish three different cases.

The first one is zero phase, which means the spectrum is real.

The second case is linear phase,

where the phase is proportional to the frequency via a real factor, d.

And the third case is nonlinear phase, which covers all other possibilities.

To understand what phase does to a signal, let's take a very simple discrete

time signal made up of the sum of two sinusoids.

x[n] = (1/2) sin(ω0 n) + cos(2 ω0 n).

So it's two sinusoids, one at frequency ω0 and one at double that frequency.

And the sum looks like this signal that we plot here in this graph.

We can call this signal a zero phase signal because the phase associated with

each sinusoidal function is 0.

Now, let's add a phase term to each component.

And make sure that the phase is proportional to the frequency of

the sinusoid.

So what we add is theta 0 to the first component at frequency omega 0.

And we add 2 theta 0 to the second component whose frequency is 2 omega 0.

The value of theta 0 is actually irrelevant.

But we can see that when we add a phase term to the sinusoidal components

that is proportional to the frequency of the sinusoid,

the net effect in the time domain is just a shift of the signal.

The shape of the signal remains exactly the same.
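The linear-phase case can be checked numerically. In this sketch, ω0 and the integer delay d are arbitrary choices: giving each component a phase proportional to its own frequency produces exactly the original signal shifted in time.

```python
import numpy as np

n = np.arange(200)
w0 = 2 * np.pi / 100                 # arbitrary fundamental frequency
x = 0.5 * np.sin(w0 * n) + np.cos(2 * w0 * n)

d = 10                               # integer delay, in samples
theta0 = w0 * d                      # phase proportional to frequency: theta = omega * d
# Each component gets a phase proportional to its own frequency
y = 0.5 * np.sin(w0 * n + theta0) + np.cos(2 * w0 * n + 2 * theta0)

# The result is exactly the original signal advanced by d samples
assert np.allclose(y, 0.5 * np.sin(w0 * (n + d)) + np.cos(2 * w0 * (n + d)))
# Since x is periodic with period 100, this is also a circular shift of x
assert np.allclose(y, np.roll(x, -d))
```

Changing either phase term to something not proportional to its frequency breaks this equality: the waveform's shape changes, even though the magnitude spectrum does not.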

Now suppose we add a phase that is not proportional to the frequency:

in this case, we change the phase of the first term to 0

and we leave a phase term of 2 θ0 on the second term.

While the frequencies of the two components have not changed,

the shape of the signal in the time domain has changed significantly.

Please note that in all three cases,

the spectrum of x[n] remains exactly the same in magnitude.

To understand why a linear phase term is simply a delay in the time domain,

consider the following system,

where we have a filter D that simply produces a delayed version of its input.

The input-output relationship is y[n] = x[n − d].

And in the frequency domain, we can take Fourier transforms left to right,

and we obtain that Y(e^(jω)) = e^(−jωd) X(e^(jω)).

This is really equivalent to saying that the frequency response of this delaying

filter is simply e^(−jωd).

So again, we have a linear phase term, where the phase introduced by the filter

is proportional to the frequency via a factor d,

which actually represents the delay in the time domain.

In general, if we can split the frequency response of the filter into the product

of a purely real term and a pure phase term,

it means that the filter operates by combining the action of a zero-phase,

and therefore zero-delay, component that only affects the magnitude of the input,

followed by a delay of d samples.

And this delay is inevitable in the case of causal filters, for instance.

Let's consider the moving average again.

The frequency response is the product of a real term here

and a pure phase term here,

where the delay d is exactly (M − 1)/2,

which represents half the length of the support of the impulse response.

And this is indeed the delay introduced by the moving average.
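We can verify this numerically (a sketch with an illustrative M): multiplying H(e^(jω)) by e^(+jωd), with d = (M − 1)/2, undoes the linear phase and leaves a purely real term.

```python
import numpy as np

M = 9
h = np.ones(M) / M                   # moving-average impulse response
w = np.linspace(1e-3, np.pi, 500)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(M))) for wk in w])

d = (M - 1) / 2                      # delay of the moving average, in samples
# Undoing the linear-phase factor e^{-j w d} leaves a purely real term
R = H * np.exp(1j * w * d)
assert np.max(np.abs(R.imag)) < 1e-12
```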

So we've looked at the moving average.

And by now, you know that what comes next is the leaky integrator.

And indeed, let's consider the impulse response,

which you remember is (1 − λ) λ^n multiplied by the unit step,

so an exponentially decaying sequence.

If we take the Fourier transform of that,

we've done that in detail before, and here is the result.

H(e^(jω)) = (1 − λ) / (1 − λ e^(−jω)).

Now to find the magnitude and the phase of this animal,

we get to use a little algebra.

And we need to recall this very simple result.

If you have 1 / (a + jb), you can always rewrite

that as (a − jb) / (a² + b²).

So if the number x is actually 1 / (a + jb),

the magnitude squared of x is 1 / (a² + b²),

and the phase of x is the inverse tangent of −b/a.

This applies to the leaky integrator in the sense that we can rewrite

the denominator of the frequency response as (1 − λ cos ω),

which is its real part, plus j λ sin ω,

which is its imaginary part.

And by applying the formula that we just saw, we find out that

the magnitude squared of the leaky integrator is

(1 − λ)² / (1 − 2λ cos ω + λ²),

and the phase of the leaky integrator is minus the inverse tangent

of [λ sin ω / (1 − λ cos ω)].

So you can see that the phase is definitely nonlinear in this case.
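These closed forms can be checked numerically (a sketch with an illustrative value of λ), comparing them against the frequency response evaluated directly:

```python
import numpy as np

lam = 0.9                                       # illustrative leaky-integrator parameter
w = np.linspace(-np.pi, np.pi, 801)
H = (1 - lam) / (1 - lam * np.exp(-1j * w))     # frequency response

# Closed-form magnitude squared and phase derived above
mag2 = (1 - lam) ** 2 / (1 - 2 * lam * np.cos(w) + lam ** 2)
phase = -np.arctan2(lam * np.sin(w), 1 - lam * np.cos(w))

assert np.allclose(np.abs(H) ** 2, mag2)
assert np.allclose(np.angle(H), phase)
```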

If we plot the magnitude, we can see that we have a characteristic that is

very similar to that of the moving average,

although we don't have any zeros and

the characteristic is monotonic rather than oscillatory.

As lambda goes closer to 1,

we see that once again, the frequency response concentrates around the origin.

And for lambda very, very close to 1, we have something that resembles

the frequency response of a moving average computed over a large number of taps.

The phase looks like this, it's a nonlinear characteristic.

And as lambda changes, the steepness of the transition between positive and

negative phase increases as well.

When we look at the magnitude and the phase response in combination,

what interests us is the part of the filter where the attenuation is not too large,

because that's where the frequencies will pass through,

whereas elsewhere they will be essentially suppressed.

And in that area, where the frequency response has a magnitude that is

sufficiently close to 1, the phase is actually more or less linear.

So we can use the leaky integrator without incurring excessive

phase distortion in the pass band.

Finally, let's revisit another classic, namely the Karplus-Strong algorithm.

And let's try to analyze its behavior using the convolution theorem.

Remember, the Karplus-Strong algorithm is

initialized with a finite support signal x of support M,

and then we use a feedback loop with a delay of M taps

to produce multiple copies of the original finite support signal,

scaled by successive powers of a decay factor α.

So, for instance,

if we initialize the algorithm with one period of a sawtooth wave,

what we get in the end is multiple repetitions of that period,

scaled by an exponentially decaying envelope.

In the time domain, we can write this out explicitly as such.

The first period of the output signal is just

a copy of the non-zero values of the finite support signal,

followed by another copy of the non-zero values of the finite support signal,

scaled by α,

followed by another copy scaled by α², and so on and so forth.

The key observation for analyzing the algorithm within

the paradigm of the convolution theorem is to see that the output

can be expressed as the convolution of the finite support

input with a sequence w[n] that we build in the following way.

w[n] = α^k for values of the index n that

are the kth multiple of M (that is, n = kM), and it is 0 otherwise.

So if I were to draw the sequence, it would look like so:

it would be 1 at n = 0, α at n = M,

α² at n = 2M, and 0 in between.

So it is a series of exponentially decaying deltas spaced M points apart.
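The equivalence between the feedback recursion and convolution with w[n] can be sketched numerically, with illustrative values of M, α, and a random initialization:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                 # support of the initialization = loop delay
alpha = 0.9                           # per-period decay factor
K = 5                                 # number of periods to generate
x = rng.standard_normal(M)            # finite-support initialization signal

# Karplus-Strong via the feedback loop: y[n] = x[n] + alpha * y[n - M]
N = K * M
y = np.zeros(N)
for n in range(N):
    xn = x[n] if n < M else 0.0
    y[n] = xn + (alpha * y[n - M] if n >= M else 0.0)

# Same output as convolving x with w[n] = alpha^k at n = k*M, 0 elsewhere
w = np.zeros(N)
w[::M] = alpha ** np.arange(K)
y_conv = np.convolve(x, w)[:N]
assert np.allclose(y, y_conv)
```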

With this, we can write the convolution theorem.

And we say, okay, the output is simply, in frequency,

the product of the Fourier transform of the finite support signal

times the Fourier transform of the signal w[n].

Well, and now we can proceed exactly like we did a few lectures back.

If we consider the example of a sawtooth wave, we know that

the Fourier transform of one period is this one.

Whereas the Fourier transform of the sequence w[n] is the rescaled

Fourier transform of the exponentially decaying sequence,

which means we will have several peaks in the [−π, π] interval.

Let's put them together graphically.

We have the Fourier transform in magnitude of the sawtooth period.

We have the Fourier transform in magnitude of the staggered exponential sequence.

We take the product of the two.

And we obtain the same spectrum that we derived when we talked about the DTFT of

the Karplus-Strong signal.