The axioms of a vector space tell us how to combine vectors together, so that a linear combination of vectors is the basic operation that we perform in vector spaces. We take two or more vectors x and y, we rescale them by scalar multiplication, and then we combine them together by vector addition. So we combine vectors together to obtain new vectors in the space. The initial question is whether we can find a minimal set of vectors so that we can express any vector in the space as a linear combination of these basis vectors. This set of building blocks will be called the basis. You can think back to the Lego block analogy that we drew before: the question would be, can we find a minimal set of building blocks with which we can build any other piece in the set? A brief remark about notation: when we want to indicate a family of vectors, and we want to be able to index them, we will use a superscript in parentheses, which indicates the ordinal number of the vector in the family. Let's start with a very simple example and introduce the canonical basis for the Euclidean plane. Remember that any vector in the Euclidean plane is a point on the plane, and this can be expressed as a pair of coordinates x0 and x1. Now, it's easy to see that if you take two vectors shaped like so, with e0 the pair (1, 0) and e1 the pair (0, 1), then any vector in R2 can be expressed as a linear combination of e0 and e1, and the coefficients, the scalars that you have to use in this linear combination, are the values themselves of the original vector. The fact that the scalars in the linear combination coincide with the elements of the vector is why we call this basis the canonical basis. As a graphical example, consider the vector x of coordinates 2 and 1, so this guy here.
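As a quick numerical check of the expansion just described, here is a minimal sketch in NumPy (the lecture itself uses no code): the coefficients of x = (2, 1) in the canonical basis are simply its own coordinates.

```python
import numpy as np

# Canonical basis vectors for the Euclidean plane.
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# For x = (2, 1), the expansion coefficients in the canonical basis
# are the coordinates of x itself: x = 2*e0 + 1*e1.
x = np.array([2.0, 1.0])
reconstruction = 2.0 * e0 + 1.0 * e1
print(np.allclose(reconstruction, x))  # True
```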
And we can see very easily that we can express this as a linear combination of two times the horizontal canonical basis vector plus one times the vertical canonical basis vector. So if we sum two times e0 plus this, we have the usual parallelogram rule, and we get back our x. However, there are other bases that we could choose for the Euclidean plane. For instance, the pair of vectors (1, 0) and (1, 1) also represents a basis for the plane, and any vector of coordinates x0 and x1 can be expressed as a linear combination of v0 and v1. How do we find the coefficients in the linear combination? Well, we have a system of two equations in two unknowns, namely alpha0 and alpha1, and we get that alpha0 will be equal to x0 minus x1, and alpha1 will be equal to x1. So again graphically, the same vector of coordinates 2 and 1 can be expressed as a linear combination of v0 and v1, and now the coefficients are different, because they will be 1 and 1. And you can see indeed that if you sum this guy plus this guy, again with the parallelogram rule, you get back x. There are in fact an infinite number of bases for R2, but not all pairs of vectors represent a basis for the plane. Here's a counterexample: if you take g(0), the vector of coordinates 1 and 0, so equal to the first canonical basis vector, and then you take g(1) as (-1, 0), then it's easy to show that it's not possible to express every point on the plane as a linear combination of g(0) and g(1). As a matter of fact, you cannot express any vector whose second component is not equal to 0. Graphically, it's easy to see why: g(0) and g(1) are actually collinear, they lie on the same line in the plane, so they can only capture a one-dimensional component of the plane. Any linear combination of g(0) and g(1) will be a point on this line, but you will not be able to reach the other points on the plane. We express this by saying that the vectors are not linearly independent.
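Both facts above are easy to verify numerically; this is a sketch in NumPy, not part of the lecture: we solve the two-by-two system for the coefficients in the basis v0, v1, and we check that the collinear pair g(0), g(1) fails linear independence (full column rank is equivalent to the uniqueness condition discussed here).

```python
import numpy as np

# The (non-orthogonal) basis v0 = (1, 0), v1 = (1, 1) from the example.
v0 = np.array([1.0, 0.0])
v1 = np.array([1.0, 1.0])
x = np.array([2.0, 1.0])

# Solve the 2x2 system [v0 | v1] @ alpha = x for the coefficients;
# the closed form derived above is alpha0 = x0 - x1, alpha1 = x1.
B = np.column_stack((v0, v1))
alpha = np.linalg.solve(B, x)
print(alpha)  # [1. 1.]

# The counterexample g(0) = (1, 0), g(1) = (-1, 0): the two vectors are
# collinear, so the matrix has rank 1 < 2 and the pair is not a basis.
G = np.column_stack(([1.0, 0.0], [-1.0, 0.0]))
print(np.linalg.matrix_rank(G))  # 1
```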
Okay, so the Euclidean plane, Euclidean space, and in general finite-dimensional spaces are well-behaved animals, and so it's kind of easy to believe that there will always be a basis for these spaces. But what about infinite-dimensional spaces? We have seen, for instance, the space of infinite-length sequences. What is the basis for that? Can we write linear combinations of an infinite number of basis vectors? Well, with some precautions, like for instance in the case of square-summable sequences, the answer is yes. The canonical basis for l2(Z), for instance, is a collection of an infinite number of vectors, in each of which there is only one nonzero component, which is equal to one. Even more interestingly, we can develop bases for vector spaces of functions. Take for instance the space of square-integrable functions over an interval. You can express any function in that vector space as a linear combination of basis vectors. What are these basis vectors? Again, there exists an infinite number of bases for the space, but one of the most famous ones is the Fourier basis. In this case, say for the interval [-1, 1], we can order the basis vectors, and we have that basis vector 0 is the constant 1 over the square root of 2, basis vector 1 is cosine of pi t, basis vector 2 is sine of pi t, and so on and so forth. Now, can we really represent any square-integrable function over the [-1, 1] interval as a linear combination of these basis functions? The answer is yes, and this leads to some surprising results. For instance, we can approximate a discontinuous function, such as a square wave, as a linear combination of continuous functions, a combination of sines. Consider the following sum, for k that goes from zero to capital N, of sin((2k+1) pi t) over (2k+1). If you consider the ordering of the basis functions that we showed in the previous slide, this is equal to the sum, for k = 0 to capital N, of basis function w(4k+2) divided by (2k+1). So it is a linear combination of basis functions.
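The partial sums of this series are easy to play with numerically. A small sketch (NumPy, not from the lecture): at t = 0.5 the sum becomes the alternating series 1 - 1/3 + 1/5 - ..., which converges to pi/4, the amplitude of the limiting square wave.

```python
import numpy as np

def square_wave_partial_sum(t, N):
    """Sum for k = 0..N of sin((2k+1) * pi * t) / (2k+1)."""
    k = np.arange(N + 1)
    return np.sum(np.sin((2 * k + 1) * np.pi * t) / (2 * k + 1))

# As N grows, the value at t = 0.5 approaches pi/4 (about 0.7854),
# illustrating the convergence toward the square wave.
for N in (0, 10, 1000):
    print(N, square_wave_partial_sum(0.5, N))
```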
The basis function of index 4k+2 is scaled by the factor 1 over (2k+1). In this figure here you see what happens when N = 0: you have just the first term of the sum, sin(pi t). As you increase the number of terms, so N = 1, you see that the shape starts to converge to that of a square wave, which, to remind you, looks exactly like this: 1 from 0 to 1 and -1 from -1 to 0. So we increase the number of terms, and the more we add, the closer we get to the function that we want to approximate. Now of course, you can see that as the number of terms grows, the approximation gets better and better, but something fishy is going on at the transition points. This wiggle that you see around the transition points is called the Gibbs phenomenon, and it's due to the fact that the convergence of the sum to the actual square wave only happens in the mean-square sense, not pointwise. We'll see this in more detail when we study filter design. So, after the example, let's formalize the concept of basis for a vector space H. We take a set of K vectors from H, and we call the set W. W will be a basis if two things happen. First, we need to be able to express any vector in the space as a linear combination of vectors from W. And two, the coefficients in this linear combination must be unique. Uniqueness of the representation is equivalent to linear independence of the vectors in the basis. In other words, if we take a linear combination of basis vectors and we obtain the null vector, this implies that the coefficients in the expansion are identically zero, and vice versa. Of all the possible bases for a space, the most important ones are orthogonal bases. In an orthogonal basis, the basis vectors are mutually orthogonal, and if we're particularly lucky, these vectors will also be unit norm, which gives us an orthonormal basis. We can express the key property of an orthonormal basis in compact form, like so.
The inner product between any two vectors in the basis is equal to the delta of the difference between the indices. In other words, if the vectors are not the same vector, the inner product will be equal to zero; if it's the self inner product of a basis vector, the inner product will be equal to one. The good news is that we have an algorithm, called the Gram-Schmidt orthonormalization procedure, that allows us to turn any basis for a space into an orthonormal basis, if we need it. So if we have a basis, the question is: how do we find the expansion coefficients, the scalars that we have to use in the representation of a vector as a linear combination of basis vectors? In general the answer could be quite involved, but if we have an orthonormal basis, then the procedure is extremely simple. If we take the inner product between basis vector number k and the original vector, we directly obtain expansion coefficient number k. You can verify this very easily by replacing x here with this expression, and then using the orthonormality of the basis. Another common question when using bases is how to change the representation of a vector from one basis to another. Take a vector x and consider its representation as a linear combination of basis vectors from a basis w(k). Now the question is to find the expansion coefficients of the same vector using a different basis v(k). If v(k) is orthonormal, the solution is extremely elegant. Expansion coefficient number h in the new basis can be obtained, as we have seen on the previous slide, as the inner product between basis vector number h of the new basis and the original vector. Now, let's replace x by its expression as a linear combination of basis vectors from the original basis, so we get this expression here. And now we exploit the distributivity of the inner product to obtain this expression here.
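Both ideas above can be demonstrated in a few lines; this is a sketch in NumPy, not part of the lecture: a textbook Gram-Schmidt pass turns the basis (1, 0), (1, 1) into an orthonormal one, and then each expansion coefficient is just an inner product.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - np.dot(u, w) * u      # subtract the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

# Orthonormalize the basis v0 = (1, 0), v1 = (1, 1) from earlier.
u0, u1 = gram_schmidt([[1.0, 0.0], [1.0, 1.0]])

# With an orthonormal basis, coefficient k is the inner product <u_k, x>,
# and the linear combination with these coefficients recovers x exactly.
x = np.array([2.0, 1.0])
a0, a1 = np.dot(u0, x), np.dot(u1, x)
print(np.allclose(a0 * u0 + a1 * u1, x))  # True
```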
In the resulting expression, we can see that each new expansion coefficient is actually a linear combination of the previous expansion coefficients. What is remarkable, however, is that the factors used in this linear combination do not depend on the vector that we're manipulating: they are just inner products between vectors in the original basis and the new basis. As a matter of fact, we can write this sum here as a matrix-vector multiplication, and this matrix here contains, as entries, all the possible inner products between vectors in the original and in the new basis. In other words, given a starting basis and an orthonormal arrival basis, you can compute this change of basis matrix once and for all, and every time you have to change the reference system for a vector, you just need to compute a matrix-vector multiplication. We will see very soon that this is really what's behind the discrete Fourier transform algorithm for finite-length signals. In the meantime, let's just show a simple example using an elementary rotation of the Euclidean plane. Let's start with the canonical basis e0, e1, which we know to be orthonormal. Any vector in the plane can be expressed as a linear combination of the two canonical vectors. Now, we rotate the plane, so we rotate the reference system, by considering a new basis v0 and v1, with v0 = (cos theta, sin theta) and v1 = (-sin theta, cos theta), where theta is the angle that we rotate the plane by. We want to find the coordinates of a vector in this new rotated reference system. It is easy to verify with basic trigonometry that the new basis is also orthonormal, so we can use the previous result and compute the change of basis matrix. It will be a two-by-two matrix, where the entries are given by the inner products between each vector in the rotated reference system and each vector in the canonical system. Now, because these vectors are very simple, they are just (1, 0) and (0, 1).
These inner products are going to be very easy to compute, and the change of basis matrix has this form here. The coefficients in the rotated reference system will be given by the product between the change of basis matrix and the coefficients in the original system. Since we were starting from the canonical reference system, these are also the coordinates of the point in the plane. R is called the rotation matrix, and this is what you would use, for instance, in computer graphics to rotate a set of coordinates, that is, to rotate an object on the plane. A key fact, which depends on the orthonormality of the rotated basis, is that if we take the rotation matrix and multiply it by its transpose, we obtain the identity matrix.
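The whole rotation example can be sketched numerically as follows (NumPy, not from the lecture; theta = pi/6 is an arbitrary choice): the change of basis matrix is built entry by entry from inner products, applied to the vector x = (2, 1), and checked against the orthogonality property R times R-transpose equals the identity.

```python
import numpy as np

theta = np.pi / 6                                    # arbitrary rotation angle
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]     # canonical basis
v = [np.array([np.cos(theta), np.sin(theta)]),       # rotated basis v0
     np.array([-np.sin(theta), np.cos(theta)])]      # rotated basis v1

# Change of basis matrix: entry (h, k) is the inner product <v_h, e_k>.
R = np.array([[np.dot(vh, ek) for ek in e] for vh in v])

# Coordinates of x = (2, 1) in the rotated reference system.
x = np.array([2.0, 1.0])
alpha = R @ x

# The new coefficients reconstruct x, and orthonormality of the rotated
# basis means R times its transpose is the identity matrix.
print(np.allclose(alpha[0] * v[0] + alpha[1] * v[1], x))  # True
print(np.allclose(R @ R.T, np.eye(2)))                    # True
```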