Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


Course from École Polytechnique Fédérale de Lausanne

Digital Signal Processing

From this lesson

Module 5: Sampling and Quantization

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences
- Martin Vetterli, Professor, School of Computer and Communication Sciences

Let's look at some other interpolation possibilities.

So as always, we have to decide on the spacing between the samples,

that capital T s.

We need to make sure that at each location nTs, x(t) is equal to the sample x[n].

And we would like x of t to have a certain smoothness.

Maybe not infinitely differentiable as we had seen with polynomials.

But at least some smoothness.

The first example is piece-wise constant interpolation.

So take the sample at the origin, for example, and hold a continuous-time function at that value between minus one half and plus one half, and do the same around all the other samples.

So it's a staircase function that has the correct value at each sample in its neighborhood, but of course it is not continuous: it has discontinuity points.

What are the characteristics of this zero-order interpolation?

So x(t) is given by this formula: the floor of t plus one half gives the index of the sample to use, so x(t) = x[floor(t + 1/2)] for the piece-wise constant interpolation.

Equivalently, x(t) is simply written as a linear combination of the samples: x(t) is the sum over n of x[n] times rect(t - n).

So the interpolation kernel is this rect function,

sometimes called zero-order hold.

The interpolator has a short support of length 1,

however the interpolation is not even continuous.

We start with the same five samples as usual.

We put the first box function around minus two, then minus one, at zero, at one, at two.

And the sum is this piece-wise constant function with discontinuity points at the half-integers.
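As a concrete illustration, here is a minimal NumPy sketch of zero-order-hold interpolation; it is not from the course materials, and the sample values are made up:

```python
import numpy as np

def rect(t):
    """Unit box: 1 for |t| < 1/2, 0 elsewhere (the edge value is a convention)."""
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

def zero_order_hold(x, t, Ts=1.0):
    """Piece-wise constant interpolation: x(t) = sum_n x[n] rect(t/Ts - n)."""
    return sum(x[n] * rect(t / Ts - n) for n in range(len(x)))

x = np.array([1.0, 3.0, 2.0, -1.0, 0.5])   # five arbitrary samples at n = 0..4
t = np.array([0.0, 0.3, 1.2, 2.6])
print(zero_order_hold(x, t))               # each value equals the nearest sample
```

Each query point simply picks up the value of the closest sample, which is exactly the staircase behavior described above.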

The next simplest interpolation is first-order or piece-wise linear.

You simply draw a straight line between the samples.

This is the so-called connect the dots strategy.

x(t) is now the linear combination of an interpolation kernel

i1 shifted to the location of the samples and weighted by the samples x[n].

This interpolation kernel is also called the hat function or

the triangle function, because it's simply 1 minus the absolute value of t on the interval from minus one to one.

So the support is now of length two, longer than that of the previous interpolation kernel, and the interpolation is now continuous even though its derivative is not.

We can see this interpolation on our usual five sample discrete sequence.

So we have a hat function at minus 2, at minus 1, at 0, at 1, and at plus 2; the sum is this red function, which is piece-wise linear and continuous by construction.
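The hat-kernel construction translates directly into code; a minimal NumPy sketch with made-up sample values (not from the course):

```python
import numpy as np

def hat(t):
    """Triangle kernel i1(t) = 1 - |t| on [-1, 1], 0 outside."""
    return np.maximum(0.0, 1.0 - np.abs(t))

def linear_interp(x, t, Ts=1.0):
    """Piece-wise linear interpolation: x(t) = sum_n x[n] i1(t/Ts - n)."""
    return sum(x[n] * hat(t / Ts - n) for n in range(len(x)))

x = np.array([1.0, 3.0, 2.0, -1.0, 0.5])   # arbitrary samples at n = 0..4
print(linear_interp(x, np.array([0.5, 1.0, 2.5])))
```

At a midpoint between two samples the result is their average, which is just the connect-the-dots straight line.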

So we have seen i0 and i1, so presumably higher-order interpolators exist as well.

One that is interesting is third-order interpolation.

So x(t) is a linear combination of shifted versions of a kernel i3, weighted by the samples.

The interpolation kernel is put together from two cubic polynomials,

the support is of length 4.

And this one is continuous up to the second derivative.

So we can do our usual construction with our five samples.

The cubic interpolator at minus 2, minus 1, 0, 1, 2.

Then the sum which is this very nice,

smooth red function.
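The lecture doesn't spell out the two cubic pieces, so as an assumed stand-in the sketch below uses Keys' cubic convolution kernel with the common parameter a = -1/2 (support of length 4, built from two cubics, though this particular choice is only once continuously differentiable, so the course's exact kernel may differ):

```python
import numpy as np

def keys_cubic(t, a=-0.5):
    """Two-piece cubic kernel with support [-2, 2] (Keys' cubic convolution,
    an assumed stand-in). Satisfies i(0) = 1 and i(n) = 0 at nonzero integers."""
    t = np.abs(t)
    inner = (a + 2) * t**3 - (a + 3) * t**2 + 1        # piece for |t| <= 1
    outer = a * (t**3 - 5 * t**2 + 8 * t - 4)          # piece for 1 < |t| <= 2
    return np.where(t <= 1, inner, np.where(t <= 2, outer, 0.0))

def cubic_interp(x, t, Ts=1.0):
    """Third-order interpolation: x(t) = sum_n x[n] i3(t/Ts - n)."""
    return sum(x[n] * keys_cubic(t / Ts - n) for n in range(len(x)))

x = np.array([1.0, 3.0, 2.0, -1.0, 0.5])               # arbitrary samples
print(cubic_interp(x, np.array([0.0, 1.0, 2.0])))      # reproduces x[0], x[1], x[2]
```

Because the kernel vanishes at all nonzero integers, the interpolation passes exactly through the samples, while between them the two cubic pieces blend four neighboring samples into a smooth curve.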

So we have seen now several local interpolation schemes.

They all work the same way.

You have a kernel ic; it is shifted to the location of each sample and weighted by that sample, and that's how you interpolate x(t).

The requirement is that the interpolation kernel is equal to 1 at t equal to 0, and equal to 0 whenever t is a nonzero integer.

This is the interpolation property we have seen for Lagrange.

We have seen it of course for the box function, or the square.

We have seen it for the hat function, or the triangle.

And it was also the case for the cubic interpolation just before.
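All these cases reduce to the same numerical check; here is a small sketch over the box, triangle, and a standard two-piece cubic (Keys' kernel, an assumed stand-in for the course's cubic):

```python
import numpy as np

def rect(t):  return np.where(np.abs(t) < 0.5, 1.0, 0.0)
def hat(t):   return np.maximum(0.0, 1.0 - np.abs(t))
def cubic(t, a=-0.5):                                  # Keys' kernel, assumed
    t = np.abs(t)
    return np.where(t <= 1, (a + 2) * t**3 - (a + 3) * t**2 + 1,
           np.where(t <= 2, a * (t**3 - 5 * t**2 + 8 * t - 4), 0.0))

n = np.arange(-3.0, 4.0)                               # integers -3..3; n[3] is 0
for kernel in (rect, hat, cubic):
    v = kernel(n)
    assert v[3] == 1.0                                 # i(0) = 1
    assert np.all(np.delete(v, 3) == 0.0)              # i(n) = 0 at nonzero integers
print("interpolation property holds for all three kernels")
```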

Let's look at these three interpolation kernels again.

So first, the box or rectangle function, second,

the triangle function, third, the cubic interpolator.

You can see they become larger and smoother.

The key properties of these local interpolators are the following.

It is the same interpolation kernel independently of the number of samples and independently of the location; this was not the case for Lagrange interpolation.

Another advantage is the short support of the interpolation kernel,

the drawback is the lack of smoothness.

There is a remarkable result that links

the sinc interpolation scheme with the Lagrange interpolation scheme.

Namely, if you take the Lagrange interpolation basis of order capital N and consider its nth basis polynomial, then as N goes to infinity it tends to sinc of t minus n.

So, in the limit, local and global actually are the same interpolation.

So we have the sinc interpolation formula as a limit of Lagrange interpolation, namely that x(t) is equal to the sum over n of x[n] times sinc of (t minus nTs) divided by Ts.

This is a very elegant and very powerful formula.

Let us look at sinc interpolation at work.

So we'll have a sinc kernel centered on every sample.

So the first one at the origin.

Then at plus 1, plus 2, 3, 4, 5, and so on.

You see the sum of them now as a red curve, very smooth, very nice.

So we have now the interpolated version through the samples and

this is a sinc interpolation of a discrete time sequence.
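The formula above is easy to sketch numerically; a minimal NumPy version with made-up samples (np.sinc uses exactly the normalized convention sin(pi u)/(pi u)):

```python
import numpy as np

def sinc_interp(x, t, Ts=1.0):
    """Sinc interpolation: x(t) = sum_n x[n] sinc((t - n Ts)/Ts)."""
    n = np.arange(len(x))
    return np.sum(x[:, None] * np.sinc((t[None, :] - n[:, None] * Ts) / Ts), axis=0)

x = np.array([1.0, 3.0, 2.0, -1.0, 0.5])   # arbitrary samples at n = 0..4
t = np.linspace(-1.0, 5.0, 13)             # a grid that includes the sample instants
print(sinc_interp(x, t))                   # passes through x[n] at t = n Ts
```

At the sample instants every shifted sinc but one vanishes, so the interpolation reproduces the samples (up to floating point); in between, the infinitely smooth sincs sum to the smooth red curve.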

The proof that the Lagrange interpolator goes to the sinc function as N goes to infinity is rather technical; it is given in the book.

So please look it up if you're interested.

The intuition is that both sinc of t minus n and the limiting Lagrange basis function share the same infinite set of zeros, which is given here in the last two formulas of the slide.

We can explore this equivalence between the sinc function and

the Lagrange interpolator numerically.

So let's start with a sinc function, here in green, centered at the origin, and then a Lagrange interpolator of order 100.

And you can see it's a good fit around the origin, not so

good towards the end of the interval.

L 200, it's a better fit, still not perfect.

L 300, even better, and you can see where it is going and we know, or

we can prove, that in the limit these two functions will be the same.
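This numerical convergence is easy to reproduce; the sketch below compares the central Lagrange basis polynomial over the integer nodes -N..N with sinc(t) (the symmetric node layout is an assumption about the slide's setup):

```python
import numpy as np

def lagrange_basis(n, N, t):
    """Lagrange basis polynomial for node n over the integers -N..N:
    L_n(t) = prod over k != n of (t - k)/(n - k)."""
    L = np.ones_like(t)
    for k in range(-N, N + 1):
        if k != n:
            L *= (t - k) / (n - k)
    return L

t = np.linspace(-3.0, 3.0, 121)
for N in (10, 50, 200):
    err = np.max(np.abs(lagrange_basis(0, N, t) - np.sinc(t)))
    print(f"N = {N:3d}: max deviation from sinc on [-3, 3] = {err:.4f}")
```

For n = 0 the basis polynomial is exactly the partial product of (1 - t^2/k^2) for k = 1..N, i.e. a truncation of the sinc function's infinite product, which is one way to see why the deviation shrinks as N grows.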