From now on, we are going to get progressively more technical, so here is another prerequisite warning. This is a matrix-vector multiplication, in explicit form and in compact form. If you're not familiar with these concepts, or not comfortable with this notation, perhaps it's time to revisit your linear algebra textbook a bit. Similarly, this is the standard parallelogram rule for vector addition in the plane. Again, if you're not familiar with this concept, a brief review of geometry and linear algebra will certainly help with what's to follow.

A generic discrete-time signal can be indicated by this expression here: we have a set of numbers ordered in a sequence. But we have already seen four different categories of signals: finite-length, infinite-length, periodic, and finite-support. It would be very complicated to develop the whole of signal processing theory if every time we had to stop and specialize what we're saying with respect to these four categories. So we need a common framework in which we can talk about signals without worrying about which category they belong to.

This common framework is provided by vector spaces and linear algebra. It is very convenient because it provides the same framework for different classes of signals, and also for continuous-time signals. It will provide an easy explanation of the Fourier transform, and of the sampling and interpolation theorems. It will be useful in approximation and compression applications, and it is fundamental in the design of communication systems. All these advantages form too long a list not to use vector spaces in signal processing.

The three take-home lessons that I would like you to remember from the next modules are that vector spaces are very general objects, and that vector spaces are defined by their properties, not by the shape of the vectors they contain.
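To fix the notation referred to above, here is a generic two-dimensional sketch of both operations (the specific numbers shown on the slide are not reproduced here):

\[
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix}
=
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\qquad
y_i = \sum_{j=1}^{2} a_{ij}\, x_j,
\]

or, in compact form, \( \mathbf{y} = \mathbf{A}\mathbf{x} \). Likewise, the parallelogram rule for vector addition in the plane is, componentwise, \( \mathbf{u} + \mathbf{v} = (u_1 + v_1,\; u_2 + v_2) \).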
And once you know that the properties of a vector space are satisfied, you can use all the tools developed for vector spaces in all such spaces. That is really the power of generalization of vector spaces, and it will help us deal with the different categories of signals.

If you are familiar with object-oriented programming, perhaps you can think of a vector space as an abstract interface class. Suppose you define a class called Polygon: every object derived from this class will have to have a number of sides, a length for each side, and the coordinates of the center of the polygon. Then you can define some methods that describe how you can manipulate these polygons. For instance, you can resize a polygon by changing the length of its sides, or you can translate the polygon on the plane by changing the coordinates of its center. These methods will apply to all derived objects, independently of the number of sides. So when you derive objects like triangles, all you need to do is instantiate the number of sides for a triangle, but the methods remain the same. A square will have four sides, and so on. Just like in the interface paradigm, a vector space dictates how you can combine vectors together and how you can re-scale them, but it does not pose any limit on the internals of the vectors, on their implementation.

A vector space, however, does more than just specify methods: it imposes a structure on the space. And here, perhaps, another tenuous analogy is with Lego. In Lego you have a basic building block, and what you can do is re-scale this building block, so that you have blocks of different sizes. Then you can combine these blocks together according to the rules of the game. We will see that in a vector space, the concept of a basis is the equivalent of the Lego minimal building block.
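The object-oriented analogy can be sketched in a few lines of Python. This is an illustrative toy, not part of the course material: the class and method names (Polygon, resize, translate, Triangle, Square) are hypothetical, chosen only to mirror the description above. The base class fixes the attributes and the methods; the derived classes merely instantiate the number of sides.

```python
from dataclasses import dataclass


@dataclass
class Polygon:
    """The 'interface': every polygon has these attributes and methods."""
    n_sides: int
    side_length: float
    center: tuple  # (x, y) coordinates of the center

    def resize(self, factor):
        """Re-scale the polygon by changing the length of its sides."""
        self.side_length *= factor

    def translate(self, dx, dy):
        """Move the polygon on the plane by shifting its center."""
        x, y = self.center
        self.center = (x + dx, y + dy)


class Triangle(Polygon):
    def __init__(self, side_length, center=(0.0, 0.0)):
        super().__init__(3, side_length, center)  # only the side count changes


class Square(Polygon):
    def __init__(self, side_length, center=(0.0, 0.0)):
        super().__init__(4, side_length, center)
```

Note that `resize` and `translate` are written once, on the base class, yet they work unchanged on any derived polygon, which is exactly the point of the analogy: the "methods" of a vector space (addition, scaling) apply regardless of what the vectors look like inside.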
And by decomposing an arbitrarily complex vector into a linear combination of basis vectors, we will be able to gain profound insight into the properties of the vectors and of the space.
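As a small concrete preview of this idea, here is a sketch (with illustrative names; it is not taken from the course) of decomposing a vector in the plane into a linear combination of two basis vectors, i.e. finding the coefficients alpha and beta such that v = alpha*b1 + beta*b2, using Cramer's rule for the resulting 2x2 system:

```python
def decompose(v, b1, b2):
    """Return (alpha, beta) such that v == alpha*b1 + beta*b2.

    v, b1, b2 are 2-tuples of floats; b1 and b2 must be linearly
    independent (i.e. they must form a basis of the plane).
    """
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1 and b2 do not form a basis")
    # Cramer's rule: replace one basis column by v at a time.
    alpha = (v[0] * b2[1] - b2[0] * v[1]) / det
    beta = (b1[0] * v[1] - v[0] * b1[1]) / det
    return alpha, beta


# Example with the "sum/difference" basis {(1, 1), (1, -1)}:
alpha, beta = decompose((3.0, 1.0), (1.0, 1.0), (1.0, -1.0))
# Reconstructing alpha*(1, 1) + beta*(1, -1) gives back (3, 1).
```

The coefficients (alpha, beta) are the coordinates of v in the new basis; choosing a basis adapted to the problem at hand is precisely what tools like the Fourier transform will do for signals.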