Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


A course from École Polytechnique Fédérale de Lausanne

Digital Signal Processing

284 ratings

From this lesson

Module 4: Part 2 Filter Design

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences

- Martin Vetterli, Professor, School of Computer and Communication Sciences

Hi, in this module we want to talk about a couple more ideal filters.

Namely, the fractional delay and the Hilbert filter.

These ideal filters, we will use them later in a variety of applications, and

when I say use them, I, of course, mean I will use an approximation of these

filters, and we will study, very soon, how to approximate ideal filters.

So, we have seen the ideal low-pass, and the various transformations

that we can use to turn that into a whole different set of ideal filters.

Let's start with something that is related to the low pass.

It's called a fractional delay.

To understand what the fractional delay does let's consider a simple delay,

what we have seen I think in the first module of this class.

We take an input signal, x[n].

And we put it through a block that delays this input by

a certain number of samples, d.

So d now is an integer, right?

So, if we consider the delay as a filter, we have an input, x[n], and

we have an output, which is simply a delayed version of the input.

If we consider this as a filter,

we can analyze the behavior of the delay in the frequency domain.

And we can derive the transfer function for the filter.

This is very easy.

If the input is a generic sequence x[n], and we indicate its DTFT

as X of e to the j omega, the output is the Fourier transform of the signal here.

We apply the shift property of the Fourier transform and

we find out that this is e to the minus j omega d times X of e to the j omega.

And so from this relationship, we can find the transfer function of the system

as the output divided by the input, and we get this formula here.

So the transfer function of a simple delay is e to the minus j omega d.
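This can be checked numerically. The sketch below (not from the lecture; all parameters are made up) uses the DFT, which samples the DTFT, and `np.roll` to implement a circular integer delay; delaying by d multiplies the spectrum by e to the minus j omega d at every DFT frequency.

```python
import numpy as np

# Verify the delay's transfer function H(e^jw) = e^{-jwd} on the DFT grid.
N, d = 64, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = np.roll(x, d)                       # y[n] = x[n - d] (circular delay)

w = 2 * np.pi * np.arange(N) / N        # DFT frequencies
X = np.fft.fft(x)
Y = np.fft.fft(y)
H = np.exp(-1j * w * d)                 # predicted transfer function

print(np.allclose(Y, H * X))            # True
```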

So we have said that for a standard delay d is an integer.

But in this transfer function formula here,

there is no requirement for d to be an integer.

So the question is: what happens if we replace d,

which up to now has been an integer, by a real number d?

Surprising as it may seem, using this real quantity for

the delay will result in what is called a fractional delay:

namely, the filter with a non-integer d will compute an output

which is the input delayed by an integer number of samples plus a fractional part.

Let's start by looking in more detail at the frequency response of

the fractional delay.

The magnitude response is identically 1.

This is of course the magnitude of a complex exponential which is 1 independent

of frequency.

So the filter can be classified as an allpass filter.

It doesn't alter the frequency distribution of the input,

which is consistent with what we would expect from a simple delay.

The phase response is linear,

which once again is consistent with the response of the delay.

And the slope of this line will be proportional to the delay.

The impulse response can be obtained by taking the inverse DTFT of the frequency

response, so we take one over 2 pi times the integral from minus pi to pi of e to the minus j omega d

times e to the j omega n, and if we work through the integral, which is elementary,

we end up with an impulse response which is the ratio of the sine of pi

times (n minus d) divided by pi times (n minus d).

Now, this function here you should recognize by now as a sinc function.

Namely, it's sinc(n- d).

Now remember that the sinc function is equal to 0 for

all integer values of its argument, except when

the argument is 0, at which point the sinc is equal to 1.

The shape of the sinc, if you want, is like this.

So here it is equal to 1 and it will cross the x axis for

all integer values of the argument.

So here, if d is an integer as in the case of the classic delay that we have seen so

far, the sinc collapses to a simple delta function.

So for instance, if d is equal to 3,

the sinc of (n minus 3) will have a single nonzero value for

n equal to 3, where it will be equal to 1 here, and

it will be 0 everywhere else.

But if d is not an integer, then the impulse response will have an infinite

number of nonzero values, and it will look like a sinc function.

So here is, for instance, the case for d equal to 0.5, so

a fractional delay of a half of sample.

And if you were to visualize the continuous version of the sinc

function it would probably look like this, okay?

So it's something like this, symmetric.

And so here we have the samples of this impulse response.

And you can see it's an ideal filter because once again you have

an impulse response that is infinite and two-sided.

We can, say, change the fractional delay value to 0.3.

And you have a different shape for the impulse response,

0.1 would look like this.

Now, if you look at the shape of this impulse response you would see that

the peak of the impulse response is in the vicinity of the integer part of the delay.

So for instance here, where the delay is 0.1,

you have that the peak of the impulse response is in 0.

And the rest of the impulse response will use all the remaining samples in

the signal to build an intermediate value between the original samples.

This is a little bit complicated to explain now but

it will be very clear once we study the sampling theorem.

And the relationship between continuous-time models of signals and

discrete-time signals.

For now suffice it to say that we can actually interpolate in discrete time and

find intermediate values of a discrete time sequence

using just discrete times filters like the fractional delay.

However, the price we pay is that we, in theory, need to use an ideal filter.

So something that we cannot really compute in practice.

We can, however, approximate the fractional delay, and

obtain arbitrarily good approximations of intersample values for sequences.
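A minimal sketch of such an approximation (all parameters here are made up): truncate the ideal impulse response h[n] = sinc(n - d) to 2M+1 taps and apply it to a cosine, for which the fractionally delayed values are known in closed form. Note that `np.sinc(t)` computes sin(pi t)/(pi t), i.e. exactly the sinc of the lecture.

```python
import numpy as np

# Truncated ideal fractional delay: h[n] = sinc(n - d), |n| <= M.
d, M = 0.5, 32
n = np.arange(-M, M + 1)
h = np.sinc(n - d)                      # 2M+1 taps of the ideal response

# Apply it to a cosine and compare with the exact half-sample delay.
w0 = 0.1 * np.pi
t = np.arange(200)
x = np.cos(w0 * t)
y = np.convolve(x, h, mode='same')      # approximately x delayed by 0.5

ref = np.cos(w0 * (t - d))              # exact fractionally delayed cosine
err = np.max(np.abs(y[M:-M] - ref[M:-M]))   # ignore border effects
print(err)                              # small; shrinks as M grows
```

Because the ideal response decays only as 1 over n, the truncation error decreases slowly with M; windowing the taps would improve it, but this plain truncation already tracks the intersample values closely.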

The Hilbert filter is another ideal filter, whose approximated versions

are used in practice, especially in communication systems.

So to understand the Hilbert filter, let's consider this weird problem if you will.

We have a sinusoid, a cosine of omega 0 n, for a given frequency omega 0.

And we ask ourselves whether we can build a machine that turns a cosine into a sine.

In other words, introduces a phase shift of pi over 2 or 90 degrees,

call it as you want.

To turn a cosine into a sine.

So if we are given a sinusoid at frequency omega 0, we can use the convolution theorem and

try to find the frequency response of a filter that produces such an effect.

We remember that the Fourier transform or

the formal Fourier transform of a cosine is the sum of two deltas,

delta of omega minus omega 0 plus delta of omega plus omega 0.

These are Dirac deltas in the frequency domain.

So we multiply this input by the frequency response of this quirky machine that we're

trying to design.

And we want the output to be the Fourier transform of a sine

at the same frequency omega 0, whose formal Fourier transform we know.

So it's minus j times the periodic Dirac delta at omega

minus omega 0, minus the periodic Dirac delta at omega plus omega 0.

So once we have this formula in place,

we can derive the value of the transfer function of the filter at two specific

points on the frequency axis, namely omega 0 and minus omega 0.

And for this equation to hold we have that the frequency response of

that filter will have to be minus j for a frequency equal to omega 0 and

plus j for a frequency equal to minus omega 0.

Now if we want this relationship to hold for all frequencies, then we figure

out that the frequency response of the filter will have to have this pattern for

all frequencies between 0 and pi and between 0 and minus pi.

So the final frequency response that

produces this transformation from cosine into sine

is a filter whose frequency response is identically minus j for

omega between 0 and pi, and plus j for omega between minus pi and 0.

And this, of course, like all filters,

like all discrete-time Fourier transforms, is 2 pi periodic.

So you repeat this pattern every 2 pi.

If we want to visualize the frequency response of the filter,

it will look like so.

Now please remark here that we're not showing the magnitude,

we're showing the actual value.

And this is an imaginary axis for a change.

So we have j for negative frequencies, and minus j for positive frequencies.

If we were to look at the magnitude of this filter, then we will have that

the magnitude is identically 1 and so again we have an allpass filter.

Whereas the phase, if we go back to this, will be equal to pi over 2 for

negative frequencies and minus pi over 2 for positive frequencies.

Although the frequency response of the filter is purely imaginary,

surprisingly enough, the impulse response is actually a real-valued sequence.

We can obtain this by taking the inverse Fourier transform.

This is even simpler than in the fractional delay because we have

an integral that we can split into two intervals.

So between minus pi and 0 we will integrate j times e to the j omega n, and

on the positive frequency axis we will integrate minus

j times e to the j omega n.

If we work out this integral, we obtain 2

divided by pi n for n odd, and 0 for n even.

So if we plot the impulse response, it will look like this.

It will go down with hyperbolic decay, so 1 over n.

And every other sample is equal to 0.

Again, this is an ideal filter,

because the impulse response is infinite and two-sided.

Again, the decay is inversely proportional to the index.

Which means we can get reasonably good approximations with a finite

number of samples, if we want to implement the Hilbert filter.
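A sketch of such an approximation (parameters are made up): truncate the ideal impulse response, 2 over pi n for n odd and 0 for n even, and check that the resulting filter does turn a cosine into (approximately) a sine.

```python
import numpy as np

# Truncated ideal Hilbert filter: h[n] = 2/(pi*n) for n odd, 0 for n even.
M = 101
n = np.arange(-M, M + 1)
h = np.zeros(2 * M + 1)
odd = (n % 2 != 0)
h[odd] = 2.0 / (np.pi * n[odd])         # nonzero on odd indices only

w0 = 0.3 * np.pi
t = np.arange(400)
x = np.cos(w0 * t)
y = np.convolve(x, h, mode='same')      # approximately sin(w0 * n)

err = np.max(np.abs(y[M:-M] - np.sin(w0 * t)[M:-M]))
print(err)                              # small; shrinks as M grows
```

As with the fractional delay, the 1 over n decay means plain truncation converges slowly, which is why practical Hilbert filter designs usually apply a window to the taps.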

Now, why would we want to implement a Hilbert filter?

Well, this is a useful building block in a demodulator.

In order to understand how to use the Hilbert filter for demodulation,

let's look at the effect of the Hilbert filter on an arbitrary input signal.

The filter will introduce a phase shift in the signal, and

a different phase shift for the positive and negative frequencies.

To understand the behavior of the filter, let's look at the representation of

the input spectrum by displaying both the real and

the imaginary part on a three dimensional plot.

So here suppose that the input is real valued so we have a classic pattern, where

the real part of the spectrum is symmetric and the imaginary part is antisymmetric.

We will plot the real part here on the vertical plane and

the imaginary part on the horizontal plane.

And this is the frequency axis.

The Hilbert filter will introduce a 90 degree clockwise rotation of the spectrum

for the positive frequencies, and

a 90 degree counter-clockwise rotation for the negative frequencies.

Let's look at the real part first, so

imagine that the real part of the spectrum has this triangular shape, when we apply

the Hilbert filter this part will be rotated by 90 degrees in this direction.

It will become imaginary.

And this part here corresponding to the negative frequencies

will be rotated 90 degrees in this direction and will become imaginary.

Graphically, if we were to show this rotation as it unfolds we start

with a triangular shape and then we rotate it until it becomes like so.

So the real part of the spectrum has now become the imaginary part of the spectrum.

And from symmetric it will become antisymmetric.

Similarly the imaginary part of the spectrum will be rotated in the same way,

and from antisymmetric here, will become real and symmetric like so.

So if we look at the effect on the combined spectrum,

we start with this real and imaginary part.

And after applying the Hilbert filter to this input, we end up with this

spectrum here, where the imaginary part and the real part have been exchanged and

modified so that they preserve their symmetry and antisymmetry.

So let's see how we can use the Hilbert filter to effectively perform

demodulation.

This here is a Hilbert demodulator.

The input signal is supposed to be a modulated signal.

So, an original signal x[n] multiplied by a cosine at omega C n,

where this is the carrier at frequency omega C.

So this signal is split into two identical parts.

One is passed through as is, and the other copy is passed through the Hilbert filter.

Then it's multiplied by j and summed back to the original input.

Finally, the result of the sum is multiplied by a complex exponential

at a frequency equal to the frequency of the carrier.

In the end, what we get is the demodulated signal.

So, assume this is the original signal before modulation.

When we modulate the signal, remember, you get two copies at positive omega C and

minus omega C, so the modulated signal's spectrum looks like so.

Okay, two copies of the original signal.

Here again we show the real part on the vertical plane, and

the imaginary part on the horizontal plane.

Remember the top branch of the demodulator: here is the signal,

and the top branch will have a Hilbert filter and then a multiplication by j.

We can interchange the Hilbert filter and the multiplication.

So, how does jy of n look in the frequency domain?

Well, multiplication by j is just counterclockwise rotation by 90 degrees,

and it's the same for positive and negative frequencies.

So we take this spectrum here, and we just rotate this by 90 degrees, so

that the imaginary part becomes real and the real part becomes imaginary.

But there is no change in symmetry or antisymmetry of the components.

So, when we do that we're just flipping the thing and now it'll look like so.

Now we convolve this with the Hilbert filter.

So now we will introduce this differential rotation between positive and

negative frequencies.

In particular we get clockwise in the positive frequencies and

counterclockwise in the negative frequencies.

Now, remember that we have two branches in the demodulator, the first one which is

the one we're showing here, Hilbert filter followed by multiplication by j.

And then we sum this result back to the original input.

If we look at the spectra in the two branches, we see that for

the negative frequencies, the spectra between the top branch and

the bottom branch are completely out of phase.

So both the real parts and the imaginary parts are negatives of each other.

On the other hand, for positive frequencies, the spectra are in phase, so

they will sum up constructively rather than destructively.

So when we sum these two signals together,

the resulting spectrum is a one-sided spectrum where we have, as you can see,

the same shape as the original signal x of n.

But simply translated in frequency and centered at omega C.

So now we can bring this back to base band by shifting the spectrum

just by omega C and we know that we can just shift the spectrum by multiplying

by a complex exponential at a proper frequency.

And so when we do that, we bring back the spectrum here and

we have completed the demodulation process.
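The whole demodulator just described can be sketched numerically (all signal parameters below are made up). Here the ideal Hilbert filter is applied in the DFT domain, multiplying the positive frequencies by minus j and the negative frequencies by plus j; with bin-aligned frequencies this reproduces the ideal behavior exactly.

```python
import numpy as np

N = 1024
n = np.arange(N)
w0 = 2 * np.pi * 16 / N                 # baseband frequency (DFT-bin aligned)
wc = 2 * np.pi * 200 / N                # carrier frequency
x = np.cos(w0 * n)                      # original signal
r = x * np.cos(wc * n)                  # modulated signal (demodulator input)

# Ideal Hilbert filter on the DFT grid: -j for 0 < w < pi, +j for -pi < w < 0.
H = np.zeros(N, dtype=complex)
H[1:N // 2] = -1j
H[N // 2 + 1:] = 1j
y = np.fft.ifft(H * np.fft.fft(r))      # Hilbert-filtered branch

s = r + 1j * y                          # sum of the two branches: one-sided spectrum
demod = np.real(s * np.exp(-1j * wc * n))   # shift back to baseband

print(np.max(np.abs(demod - x)))        # essentially zero (floating-point error)
```

The intermediate signal s is exactly the analytic signal of r: its spectrum lives only on the positive frequencies, which is why the final multiplication by a complex exponential brings a clean copy of x back to baseband.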