To model abstract vectors as n-vectors, we introduce coordinates in an abstract vector space. Suppose we have an n-dimensional vector space V, so that it has a basis B consisting of n elements b1, ..., bn. Then, for each vector x from this vector space, there exist numbers x1, ..., xn such that the linear combination x1*b1 + ... + xn*bn is equal to x. Indeed, a basis spans the whole vector space, so every vector of this space can be expressed in terms of the basis. The numbers appearing in this linear combination are called the coordinates of the vector x in the basis B. Note that the coordinates depend not only on the vector x but also on the basis B. Given a basis, we denote the column of all coordinates by x_B. For example, recall the standard basis of the vector space R^n: the standard column vectors are exactly the coordinate vectors of the elements of R^n in this basis. So for R^n, the entries of a vector are the same as its coordinates in the standard basis.

In general, you can express an abstract vector x as the following matrix product:

    x = x1*b1 + ... + xn*bn = (b1 b2 ... bn) x_B.

Look, the first factor of this product is just the row of symbols of the abstract vectors b1, ..., bn, and the second factor is the column of numbers x_B. After multiplication, you get the linear combination x1 multiplied by b1, et cetera. So this linear combination is, in compact form, just a matrix product.

Lemma. Given a basis and a vector, the column of coordinates is unique. This means that all the coordinates x1, ..., xn are uniquely determined by the vector and the basis, so the whole column of coordinates corresponds to the vector x.

Let us prove this very important lemma. Suppose there is another coordinate column of the same vector x, with entries x1', ..., xn'. We should prove that this other coordinate column is equal to the original one. We have two expressions of the vector x as linear combinations of the same basis B, but with different coefficients:

    x = x1*b1 + ... + xn*bn   and   x = x1'*b1 + ... + xn'*bn.

Subtract the second equality from the first; by rearranging the summands, you get the simpler form

    (x1 - x1')*b1 + ... + (xn - xn')*bn = 0.

But look, this is a linear combination of the basis B, that is, of a linearly independent set of vectors, which is equal to the null vector. By the definition of linear independence, such a linear combination must be trivial, so all its coefficients are zero: x1 = x1', et cetera, up to xn = xn'. This means the two columns of coordinates are the same column. So the coordinates are unique.

Now, in more abstract language, we can say that we get a function which, for each vector x, calculates its coordinate column. This is a map, or function, or operation which expresses each abstract vector as an n-vector. Moreover, conversely, given an n-vector and the basis B, you can construct the vector with these coordinates. So each column vector can be imagined as a coordinate vector, and for each abstract vector you can calculate the column vector of its coordinates. You have a correspondence between two sets: one is the abstract n-dimensional space, and the other is the set of columns of coordinates, that is, the vector space of n-vectors. So abstract vectors are in one-to-one correspondence with n-vectors.
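A quick numerical illustration may help here. This is a minimal sketch of my own (not from the lecture), assuming we take V = R^3 with a concrete non-standard basis; the coordinate column is then found by solving a linear system.

    import numpy as np

    # Columns of M are the basis vectors b1, b2, b3 of a (non-standard) basis B of R^3.
    M = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    x = np.array([2.0, 3.0, 4.0])    # a vector of R^3, written in the standard basis

    # The coordinate column x_B satisfies M @ x_B = x; the solution is unique
    # because the basis vectors are linearly independent (M is invertible).
    x_B = np.linalg.solve(M, x)
    print(x_B)                       # coordinates of x in the basis B
    print(np.allclose(M @ x_B, x))   # True: x is recovered as the linear combination

The uniqueness proved in the lemma is exactly what guarantees that this linear system has one and only one solution.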
This is a very important point, because it allows us to model each n-dimensional abstract vector space by n-space, and n-space is a universal model for every n-dimensional vector space. To formalize this, let us list the properties of this map Phi from an abstract n-dimensional vector space to the space of n-vectors.

The first two properties are called linearity. First, if you scale the vector x, then you scale its coordinate column by the same factor: (c*x)_B = c*x_B. Second, adding two vectors to each other is the same as adding their coordinate columns to each other: (x + y)_B = x_B + y_B. It follows that if we have two or several linearly dependent (or linearly independent) vectors, then their coordinate columns are also linearly dependent (or linearly independent, respectively). It follows that if you have a basis, then the coordinate columns of its vectors form a basis of the space of n-vectors, and so on. If one vector is equal to a linear combination of other vectors, then its coordinate column is equal to the linear combination, with the same coefficients, of the coordinate columns of those vectors, and so on. These properties of the map Phi are very useful in many situations. Such maps are called isomorphisms. So, in more abstract language, we can say that each n-dimensional vector space is isomorphic to the space of n-vectors. Or, from the mathematical modeling viewpoint, we can say that the set R^n is a universal model for all n-dimensional vector spaces.

Now, let us consider the following situation. A basis is not unique, so we can imagine two bases B and B' of the same vector space. All elements of the second basis can be expressed in terms of the elements of the first basis; that is, each element bj' has a coordinate column in the basis B. This is just a column of n numbers, and you can arrange all these columns side by side to organize an n-by-n matrix T, which is called a basis transformation matrix. This matrix contains all the information needed to express one basis in terms of the other:

    (b1' ... bn') = (b1 ... bn) T.

Not every matrix can be a basis transformation matrix for some pair of bases. For example, if T is a basis transformation matrix from B to B', then it must be non-degenerate.

The first example of a basis transformation matrix, for any vector space, is the identity matrix: it is the transformation from a basis B to the same basis B. This means that the columns of the identity matrix express the elements of the basis B in terms of themselves, so in the coordinates associated with the basis B, the elements of B look like the standard basis elements of the vector space R^n. The second property: if you have three bases of the same vector space, and you have a transformation from the first basis to the second one and from the second to the third one, then the total transformation from the first basis to the third is the product of the two matrices. From the first two properties one can deduce that the transformation from one basis back to the other gives the matrix inverse of the matrix of the transformation in the forward direction. So inverse transformations give inverse matrices. In particular, as I said before, a basis transformation matrix must be non-degenerate.

Now, if we have two bases, then each vector has two coordinate columns, x_B and x_B'. All information about the connection between the two bases is contained in the basis transformation matrix T, so in terms of T one can calculate the new coordinates from the old ones and vice versa.
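To make the bookkeeping concrete, here is a small sketch of my own (again assuming concrete bases of R^2, given by the columns of matrices) that builds basis transformation matrices and checks the three properties numerically.

    import numpy as np

    # Columns of B1, B2, B3 are three bases of R^2.
    B1 = np.eye(2)
    B2 = np.array([[1.0, 1.0],
                   [1.0, 2.0]])
    B3 = np.array([[2.0, 0.0],
                   [1.0, 1.0]])

    # Column j of T12 is the coordinate column of the j-th vector of B2
    # in the basis B1, i.e. B1 @ T12 = B2.
    T12 = np.linalg.solve(B1, B2)
    T23 = np.linalg.solve(B2, B3)
    T13 = np.linalg.solve(B1, B3)

    print(np.linalg.solve(B1, B1))        # identity: the transformation from B1 to B1
    print(np.allclose(T13, T12 @ T23))    # composition gives the product of the matrices
    print(np.allclose(np.linalg.solve(B2, B1), np.linalg.inv(T12)))  # inverse transformation, inverse matrix
    print(abs(np.linalg.det(T12)) > 0)    # a basis transformation matrix is non-degenerate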
The simple formula expresses the old coordinates in terms of the new ones (this is the first formula):

    x_B = T x_B'.

And to calculate the new coordinates (the most natural problem!), you can use the matrix inverse, or, which is the same, the matrix of the inverse transformation:

    x_B' = T^{-1} x_B.

These are very important formulas, which will be used not only in algebra problems but also in analysis problems.
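As a final check, here is a short numerical sketch of my own (for the same kind of concrete bases of R^2 as above) verifying both formulas.

    import numpy as np

    B_old = np.eye(2)                     # old basis B (columns)
    B_new = np.array([[1.0, 1.0],
                      [1.0, 2.0]])        # new basis B' (columns)
    T = np.linalg.solve(B_old, B_new)     # basis transformation matrix from B to B'

    x    = np.array([3.0, 5.0])           # a vector, written in the standard basis
    x_B  = np.linalg.solve(B_old, x)      # old coordinates
    x_Bp = np.linalg.solve(B_new, x)      # new coordinates

    print(np.allclose(x_B, T @ x_Bp))                 # old = T * new
    print(np.allclose(x_Bp, np.linalg.inv(T) @ x_B))  # new = T^{-1} * old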