Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


Course from École Polytechnique Fédérale de Lausanne

Digital Signal Processing

367 ratings


- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences
- Martin Vetterli, Professor, School of Computer and Communication Sciences

Instead of introducing vector spaces deductively, namely by listing the axioms that regulate the behavior of vectors in a vector space, let's start with some examples that you should be familiar with.

R2 and R3 are also known as the Euclidean plane and Euclidean space; these are very familiar entities. The extension of Euclidean space to an arbitrary number of dimensions is called RN, and it is the subject of linear algebra. Another extension uses complex numbers instead of real numbers, so we have CN, whose vectors are tuples of N complex numbers; again, this is the subject of linear algebra.

Other vector spaces are perhaps less well known. We have L2 of Z, the space of square-summable infinite sequences; we will see that this is useful for modeling infinite-length signals. And L2 of [a, b] is the space of square-integrable functions over an interval. Here the key point is that vectors can be arbitrarily complex entities, such as functions for instance, and this will be very useful in unifying our approach to signal processing.

Some vector spaces can be represented graphically; this is the case, for instance, of planar geometry. The intuition is that if you take the plane and set up a Cartesian reference system, like in this figure here, then you can identify any point on the plane with a pair of real-valued coordinates, like so.

This correspondence between points on the plane and vectors in R2 allows us to interpret the properties of vectors in R2 in terms of geometric properties on the Euclidean plane. This analogy is extremely useful, especially when extended also to three dimensions, because it will allow us to use our geometric intuition to understand the properties of arbitrarily complex vector spaces that don't admit an intuitive or graphical representation.

In three dimensions, we use triples of coordinates to locate a point in space uniquely, and, as before, we have a complete correspondence between Euclidean space and R3. So here we have our point in space, its 3D coordinates will be like so, and again the geometric intuition is pretty straightforward.

Let's look now at a function vector space. This is L2 of [-1, 1], the space of square-integrable functions on the interval [-1, 1]. Any element of this vector space is a function; so this is the vector notation, and written explicitly it will be a function of a variable t that goes from -1 to 1. One element, for instance, is the sine function: we can take x equal to sine of pi t, and the vector can be drawn by plotting the function over the interval.

Other vector spaces do not admit a graphical representation; for instance RN for N > 3. There is no way we can plot that in a form we can comprehend. And CN, the tuples of complex numbers, cannot be drawn for anything but N equal to 1, in which case we have the classical complex plane.

So from these examples we can see that vector spaces can be a very diverse bunch. What all these vector spaces have in common is that they obey a set of axioms that define the properties of the vectors and what we can do with these vectors.

So the ingredients for a basic vector space are a set of vectors V, and we say nothing about what is inside these vectors, and a set of scalars, which in our examples will always be the set of complex numbers. And we need to be able to at least resize vectors, that is, multiply them by a scalar to change their size, and combine vectors together, that is, sum them. The way we do this is by defining some axiomatic properties for the sum and for the scalar multiplication.

This is pretty dry, but we will go through these axioms, and then we will see that all the vector spaces from the examples actually do fulfill them.

So we want the addition to be commutative and associative; we want scalar multiplication to be distributive with respect to vector addition, and we want scalar multiplication to be distributive with respect to addition in the field of scalars. Scalar multiplication is associative as well. We want to have a null element in the vector space, so that the sum of any vector plus the null element, and, because of commutativity, the sum of the null element plus the vector, gives back the original vector. And then we want that for every element in the vector space there exists an inverse element for the addition, so that x plus -x, as we indicate the inverse for the addition, is equal to the null vector.

All right, so let's look at an example. Take RN, the space of tuples of N real numbers, and define the basic operations like so: scalar multiplication takes a tuple and multiplies each element in the tuple by a scalar alpha, and the sum is the tuple where each element is the sum of the corresponding elements of the two tuples. If we define the operations like so, then we can easily verify that these operations fulfill the axioms, and therefore RN is a valid vector space.
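These componentwise operations are easy to spot-check numerically. The sketch below, written for this course's purposes rather than taken from it, defines them on Python tuples and verifies a few of the axioms on random vectors in R4; it is a sanity check on sample points, not a proof.

```python
import random

def scale(alpha, x):
    # Scalar multiplication in R^N: multiply each component of the tuple by alpha.
    return tuple(alpha * xi for xi in x)

def add(x, y):
    # Vector addition in R^N: sum the corresponding components of the two tuples.
    return tuple(xi + yi for xi, yi in zip(x, y))

def close(u, v):
    # Compare tuples up to floating-point rounding.
    return all(abs(a - b) < 1e-12 for a, b in zip(u, v))

# Spot-check a few axioms on random vectors in R^4.
x = tuple(random.uniform(-1, 1) for _ in range(4))
y = tuple(random.uniform(-1, 1) for _ in range(4))
alpha = 2.5
zero = (0.0,) * 4

assert close(add(x, y), add(y, x))                                            # commutativity
assert close(scale(alpha, add(x, y)), add(scale(alpha, x), scale(alpha, y)))  # distributivity
assert close(add(x, zero), x)                                                 # null element
assert close(add(x, scale(-1.0, x)), zero)                                    # additive inverse
```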

We can, as we said, use geometric intuition in low dimensions to understand what these operations do. Consider scalar multiplication of a vector in R2: the vector is identified by a pair of coordinates, x0 and x1, and if we multiply each component in the pair by alpha, we are in fact stretching the vector by a factor alpha without changing its direction. Vector addition in two dimensions corresponds to summing the first elements of each tuple together, which becomes the first element of the sum, and then summing the second components together. Graphically, we take our first vector, we take the second vector, and we place it at the end of the first vector without changing its direction. The new vector is the arrow that goes from the origin to the end of this combination of vectors. This is the famous parallelogram rule.

In L2 of [-1, 1], scalar multiplication is intuitively defined as multiplying the function by a scalar alpha. So here we have the vector sine of pi t; we multiply this vector by the scalar 1.5, and what we get is a function with a larger amplitude. The sum is defined intuitively as well in L2 of [-1, 1], as simply the sum of the two functions over the interval. So we take our first vector, sine of pi t, and a second vector, 0.33 times sine of 25 pi t; we sum them together and we get this other element of the vector space.
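On a computer we can only approximate function vectors by sampling them on a fine grid; with that caveat, the pointwise operations above look like this minimal NumPy sketch (the grid size is an arbitrary choice, not part of the lecture):

```python
import numpy as np

# Sample the interval [-1, 1]; each sampled function approximates a vector in L2(-1, 1).
t = np.linspace(-1.0, 1.0, 1001)
x = np.sin(np.pi * t)                 # the vector x = sin(pi t)
y = 0.33 * np.sin(25 * np.pi * t)     # the vector y = 0.33 sin(25 pi t)

scaled = 1.5 * x                      # scalar multiplication: pointwise scaling
summed = x + y                        # vector addition: pointwise sum

# Both operations act pointwise, exactly as in the definitions above.
assert np.allclose(scaled, 1.5 * np.sin(np.pi * t))
assert np.isclose(summed[500], x[500] + y[500])
```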

So now we have a set of vectors and a set of scalars; we know how to multiply vectors by scalars, and we know how to put vectors together by addition. But we need something more: we need something to measure and compare vectors. This is provided by the inner product, which is an additional operator that we define for the vector space. The inner product takes a pair of vectors and returns a scalar, which is a measure of similarity between the two vectors.

If the inner product is zero, and this is a very important concept that we will examine in detail, we say that the vectors are maximally different or, in other words, orthogonal. As for scalar multiplication and addition, the inner product is defined axiomatically, and these are the properties that it has to fulfill.

We want the inner product to be distributive with respect to vector addition. We want the inner product to be commutative with conjugation; this, of course, applies when our set of scalars is complex valued. We want the inner product to be distributive with respect to scalar multiplication, where we conjugate the scalar if it affects the first operand in the inner product. We want the self inner product to be greater than or equal to zero, which means that the self inner product is necessarily a real number. And finally, the self inner product can be zero only if the element is the null element for the vector addition.
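These axioms can be checked numerically for the standard inner product on CN, which conjugates the first operand. The following is an illustrative sketch (the choice of CN as the test space and the random test vectors are mine, not the lecture's):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
z = rng.normal(size=3) + 1j * rng.normal(size=3)
alpha = 2.0 - 0.5j

def inner(u, v):
    # Standard inner product in C^N: conjugate the first operand, sum the products.
    return np.sum(np.conj(u) * v)

assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))         # distributive over addition
assert np.isclose(inner(x, y), np.conj(inner(y, x)))                  # commutative with conjugation
assert np.isclose(inner(alpha * x, y), np.conj(alpha) * inner(x, y))  # scalar conjugated on first operand
assert np.isclose(inner(x, x).imag, 0.0)                              # self inner product is real...
assert inner(x, x).real >= 0.0                                        # ...and non-negative
```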

Let's look at some examples of inner products to foster our intuition. In R2, the inner product between two vectors is defined as the product of the first components of each pair plus the product of the second components of each pair. This is not immediately intuitive, so let's take it apart a bit. First of all, if we look at the self inner product of a vector, we can see that it's equal to x0 squared plus x1 squared, the sum of the squared coordinates. Now if we look at the graphical representation of the vector, we can see that, by Pythagoras' theorem, this is equal to the squared length of the vector. We actually have a name for this: we call it the norm of the vector. So the norm of a vector is the square root of the self inner product, and it indicates the length of the vector. The self inner product is therefore a very good measure of the size of a vector in R2 and, by extension, in all vector spaces.
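In code, the R2 inner product and the norm it induces are a few lines; this small sketch (the 3-4-5 test vector is just a convenient example) confirms the Pythagorean reading:

```python
import math

def inner(x, y):
    # Inner product in R^2: product of first components plus product of second components.
    return x[0] * y[0] + x[1] * y[1]

def norm(x):
    # The norm is the square root of the self inner product.
    return math.sqrt(inner(x, x))

v = (3.0, 4.0)
assert inner(v, v) == 25.0   # x0^2 + x1^2
assert norm(v) == 5.0        # the length of the vector, by Pythagoras' theorem
```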

Let's go back to the general definition of the inner product. We could use some simple trigonometry to show that this formulation is actually equivalent to this one: the inner product between two vectors is equal to the norm of the first vector, times the norm of the second vector, times the cosine of the angle formed by the two vectors. Now, if the two vectors have equal norm, their inner product will be a good measure of similarity between them, because it will go from the norm squared, when the vectors coincide and therefore the angle alpha is zero, down to zero, when the vectors are at 90 degrees and therefore point in orthogonal directions. So orthogonality is indeed the maximal difference between two vectors on the plane.
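The cosine formulation can be inverted to recover the angle between two vectors; the sketch below illustrates this (the specific test vectors are arbitrary choices of mine):

```python
import math

def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def norm(x):
    return math.sqrt(inner(x, x))

def angle(x, y):
    # Recover alpha from <x, y> = ||x|| ||y|| cos(alpha).
    return math.acos(inner(x, y) / (norm(x) * norm(y)))

# Aligned vectors: the inner product equals the product of the norms (cos 0 = 1).
assert math.isclose(inner((1.0, 0.0), (2.0, 0.0)), 2.0)
# Orthogonal vectors: the inner product is zero (cos 90 degrees = 0).
assert inner((1.0, 0.0), (0.0, 1.0)) == 0.0
# A 45-degree pair.
assert math.isclose(angle((1.0, 0.0), (1.0, 1.0)), math.pi / 4)
```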

As another example, let's have a look at the inner product in L2 of [-1, 1]. This is usually defined as the integral of the product of the two functions that correspond to the two vectors. It's very easy to verify that this inner product fulfills the axioms. Let's now use this inner product to compute the norm of the vector x = sin of pi t. This is obtained from the self inner product, that is, the integral from -1 to 1 of sin squared of pi t dt. This is the area that is shaded in the picture, and it's equal to one.

Let's take another element of L2 of [-1, 1]: the vector y = t, the linear ramp between -1 and 1. If we compute the self inner product of this vector, we have the integral from -1 to 1 of t squared dt, which is equal to two thirds. So this vector does not have unit norm; if we want to normalize it, we can define an alternate version where we set y = t divided by the square root of two thirds. We divide the vector by its norm, and this vector will now have unit norm.

The reason we do this is that now we can use the inner product to compare, in L2 of [-1, 1], the function sine of pi t with the function t divided by the square root of two thirds. So here we have the first function, here we have the second function, and we compute the inner product between the two. We have to compute this area here; the result of the inner product is 0.78. Now, remember that we are comparing unit-norm vectors, so the inner product can range between one, in the case of maximal similarity, and zero, in the case of orthogonality. Here we have a value of 0.78, which indicates that the linear ramp and the sine function are actually pretty close.
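Both the two-thirds norm and the 0.78 similarity can be reproduced with the same midpoint-rule approximation of the L2 inner product (grid size and tolerances are assumptions of this check):

```python
import numpy as np

# Midpoint-rule approximation of inner products in L2(-1, 1).
N = 200000
dt = 2.0 / N
t = -1.0 + (np.arange(N) + 0.5) * dt

def inner(f, g):
    return np.sum(f(t) * g(t)) * dt

ramp = lambda t: t
assert np.isclose(inner(ramp, ramp), 2.0 / 3.0, atol=1e-6)   # self inner product of y = t

unit_ramp = lambda t: t / np.sqrt(2.0 / 3.0)                  # ramp normalized to unit norm
sine = lambda t: np.sin(np.pi * t)
assert np.isclose(inner(unit_ramp, unit_ramp), 1.0, atol=1e-6)
assert np.isclose(inner(sine, unit_ramp), 0.78, atol=0.005)   # the similarity quoted above
```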

We can also take the inner product of maximally dissimilar functions. Take our usual x equal to sin of pi t, which is an antisymmetric function, and take the inner product of this function with a symmetric function, like, for instance, the triangle function 1 minus the absolute value of t. If we take the integral of the product of these two functions, we have to sum the green area with the red area. These two areas have opposite signs, and so the integral, and therefore the inner product, will be equal to zero. The inner product has successfully captured the fact that we will never be able to express a symmetric function in terms of an antisymmetric shape. They are maximally different, they have nothing in common, their inner product is zero.

Another famous case of orthogonality between functions is given by sinusoids whose frequencies are harmonically related. We are still in L2 of [-1, 1]. Let's pick as the first function sin of 4 pi t, and as the second function sin of 5 pi t; these two frequencies are multiples of the fundamental frequency for the interval, which is pi. If we compute the inner product between the two, even graphically, we can see that we have to compute these areas here and sum them together. Now, for each red-colored area there is a corresponding green-colored area with the opposite sign, and so, as you sum them together, you end up with a total inner product of zero.

In general, if we have a vector space equipped with an inner product, we have a natural norm defined on the space, which is the square root of the self inner product. We can use this notion of norm to define a distance between vectors as the norm of the difference between the vectors. Again, in R2 this is very easy to visualize: take one vector, take the other vector; the difference between the two vectors is the vector that connects their end points, and the norm of this connecting vector is a quantitative measure of the distance between the original vectors. Note that distance is not like orthogonality; as a matter of fact, orthogonal vectors can have a very large distance, and their distance is certainly not zero.
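A minimal sketch in R2 makes the last point concrete: two orthogonal unit vectors are maximally different, yet their distance is the square root of two, not zero (the unit-vector example is mine):

```python
import math

def norm(x):
    # Natural norm induced by the inner product: square root of the self inner product.
    return math.sqrt(x[0] ** 2 + x[1] ** 2)

def distance(x, y):
    # Distance between vectors: the norm of their difference.
    return norm((x[0] - y[0], x[1] - y[1]))

e0, e1 = (1.0, 0.0), (0.0, 1.0)
assert e0[0] * e1[0] + e0[1] * e1[1] == 0.0          # orthogonal...
assert math.isclose(distance(e0, e1), math.sqrt(2.0))  # ...but at distance sqrt(2)
```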

In function vector spaces, the norm of the difference between vectors is usually known as the mean square error; it is computed, for instance in the case of L2 of [-1, 1], from the integral of the squared difference between the two functions. So here we have, as an example, our first vector, sine of 4 pi t; the second function is sine of 5 pi t, and their difference will be this weird function here. If we compute the area of this function squared, we have a value of two, so the distance between these two harmonically related functions is the square root of two.
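This value also follows numerically: each sinusoid has unit squared norm and their inner product is zero, so the integral of the squared difference is 1 + 1 = 2 (the grid and tolerance below are assumptions of this check):

```python
import numpy as np

# Midpoint-rule check of the distance between sin(4 pi t) and sin(5 pi t) in L2(-1, 1).
N = 200000
dt = 2.0 / N
t = -1.0 + (np.arange(N) + 0.5) * dt

diff = np.sin(4 * np.pi * t) - np.sin(5 * np.pi * t)
squared_distance = np.sum(diff ** 2) * dt            # integral of the squared difference
assert np.isclose(squared_distance, 2.0, atol=1e-6)
assert np.isclose(np.sqrt(squared_distance), np.sqrt(2.0), atol=1e-6)
```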