At the end of this lecture, I would like to discuss one of the most important results in the theory of Markov chains, the so-called ergodic theorem. The theorem can be formulated as follows. Let X_t be an ergodic Markov chain; recall that this means there is only one equivalence class of states, and all states in this class are recurrent and aperiodic. Then the following limits exist:

p_j^* = lim_{n → ∞} p_ij(n),

where p_ij(n) is the probability of reaching state j from state i in n steps. An important point is that this limit does not depend on i, so it does not matter at all from which state you start. Moreover, the theorem states that these limits are strictly positive, and that the sum of p_j^* over j from 1 to M is equal to one. So the vector p^* = (p_1^*, ..., p_M^*) is a probability distribution with M elements. The main part of the theorem is the existence of these limits, and it turns out that the limiting values are very special numbers for an ergodic Markov chain. The meaning of these numbers is clear from the following corollary, which has two parts. The first part states that if the numbers p_1^*, ..., p_M^* are obtained from this formula (the one which is boxed now), then the distribution p^* is a stationary distribution; that is, if you multiply p^* by the matrix P, you get exactly p^* back. The second part of the corollary states that if you take the limit, as n tends to infinity, of the probability that the Markov chain X_n equals j, you get exactly p_j^*, and this limiting value does not depend on the initial distribution π^0. So you see that p^* is a very special distribution for an ergodic Markov chain.
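As a quick numerical sketch of the theorem (assuming numpy; the transition matrix here is a made-up two-state example, not one from the lecture), we can raise P to a high power and observe that all rows converge to the same vector p^*, which is exactly the statement that lim p_ij(n) does not depend on i:

```python
import numpy as np

# A hypothetical ergodic two-state transition matrix (illustrative only).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# P(n) = P^n; for large n every row approaches the same limit vector p*.
Pn = np.linalg.matrix_power(P, 50)
print(Pn)  # both rows are (numerically) identical: this common row is p*
```

For this particular matrix the common row works out to (0.375, 0.625); the point is only that the row index i has stopped mattering.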
Basically, it is not important in these corollaries that the chain is ergodic; what is important is that the distribution p^* is obtained via this formula. In fact, there are examples where such limits exist but the chain is not ergodic; this is also possible. So both corollaries hold only because this limit exists. Let me now show why these corollaries are fulfilled and prove them, starting with the first part. I should show that the vectors on the left-hand side and the right-hand side coincide. Let me take some i from 1 to M and show that the elements number i of the two sides coincide. The element number i of the left-hand side, p^* P, is the product of a vector and a matrix; therefore it equals the sum over j from 1 to M of p_j^* p_ji. Now I substitute the definition of p_j^* from the boxed formula into this sum. Since the limit does not depend on the first index, I can take any k from 1 to M and write p_j^* = lim_{n → ∞} p_kj(n). This gives the sum over j from 1 to M of lim_{n → ∞} p_kj(n), multiplied by p_ji. Since the sum is finite, I can move the limit outside the sum and get

lim_{n → ∞} Σ_{j=1}^{M} p_kj(n) · p_ji.

Here we have an element of the product of two matrices: the first matrix is P(n) and the second is P. You know that P(n) is the same as P^n, so this product is P^{n+1}, which is the same as P(n+1); this is according to the theorem we have already proven. Due to this representation, the expression above is the element number (k, i) of the matrix P(n+1), and applying the boxed formula once more, its limit equals p_i^*. So we started with the element number i of the vector p^* P and finally got the element number i of the vector p^*.
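The stationarity proved in the first part can also be checked numerically (a minimal sketch assuming numpy; again an illustrative matrix, not the lecture's example): take any row of a high power of P as p^*, and verify that p^* P returns p^* itself:

```python
import numpy as np

# Hypothetical ergodic transition matrix (illustrative only).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Any row of P^n works as p*, since the limit does not depend on i.
p_star = np.linalg.matrix_power(P, 50)[0]

# First corollary: p* is stationary, i.e. p* P = p* (up to round-off).
print(p_star @ P)
```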
Therefore, these two vectors coincide and the first part is proven. Now, let me prove the second part of this corollary. According to our definitions, we should show that the limit of π_j^n, as n tends to infinity, is equal to p_j^*, where the initial distribution π^0 can be arbitrary. So we can start with any distribution and will still get p^* as the limiting distribution. We have

π_j^n = Σ_{k=1}^{M} π_k^0 · p_kj(n);

here we use the fact, shown before, that the vector π^n is equal to the vector π^0 multiplied by the matrix P(n). What to do now? We change the order of the limit and the (finite) sum, and since π_k^0 does not depend on n, we can pass the limit onto p_kj(n). This gives the sum over k from 1 to M of π_k^0 multiplied by lim_{n → ∞} p_kj(n). Now let us return to the boxed formula: this limit is equal to p_j^*. You see that the value p_j^* does not depend on k at all; therefore, we can put p_j^* outside the sum. Finally, we get p_j^* multiplied by the sum over k from 1 to M of π_k^0, and since π_1^0, ..., π_M^0 form a probability distribution, the sum of all these elements is equal to one. So this expression is equal to p_j^*. Finally, we conclude that the limit of π_j^n, under any choice of initial distribution, is equal to p_j^*, and this observation completes the proof. So, once more, what is important in this corollary is that the values p_j^* are obtained as limits of p_ij(n). All other things are not important at all: X_t can be ergodic or not, and it is not even important that these values are positive.
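The second part of the corollary can be illustrated numerically as well (a sketch assuming numpy, with a made-up matrix): starting from several different initial distributions π^0, the vector π^n = π^0 P(n) approaches the same limit:

```python
import numpy as np

# Hypothetical ergodic transition matrix (illustrative only).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# pi^n = pi^0 P^n converges to the same vector for every initial pi^0.
for pi0 in ([1.0, 0.0], [0.0, 1.0], [0.5, 0.5]):
    pi_n = np.array(pi0) @ np.linalg.matrix_power(P, 50)
    print(pi_n)  # the same limiting distribution in all three cases
```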
It can be any probability distribution. What is really important here is that this distribution is obtained as a limit of the elements p_ij(n). Now I would like to provide an example of how one can use these corollaries in particular cases. Consider, as a full-length example, a Markov chain with two states, where the probability of moving from 1 to 2 is equal to 0.8, of going back from 2 to 1 is 0.6, from 1 to 1 is 0.2, and from 2 to 2 is 0.4. So this Markov chain has the following transition matrix:

P = ( 0.2  0.8
      0.6  0.4 ).

Now we conclude that this Markov chain is ergodic because it consists of only one class, and all states are recurrent and aperiodic. So the ergodic theorem applies: the limits of the corresponding n-step transition probabilities exist, all of them are positive, and both corollaries are fulfilled. Okay, let me find these limits from the first corollary. Namely, if I denote the elements of the vector p^* by (a, b), I can find a and b from the system which appears if I write the stationarity condition p^* P = p^* in the following form:

(a, b) · ( 0.2  0.8
           0.6  0.4 ) = (a, b).

This is basically a system of two equations with two unknown variables a and b: 0.2a + 0.6b = a, and 0.8a + 0.4b = b. These two equations are linearly dependent, so solving this system together with the normalization a + b = 1, we get that a is equal to 3/7 and b is equal to 4/7. Okay. Now we can apply the second part of this corollary and get that the probability that our Markov chain X_n is equal to 1 tends to 3/7, and the probability that X_n is equal to 2 tends to 4/7. These limiting values do not depend on the initial distribution at all. So this is an example of how one can use the ergodic theorem in practice; the most important role is played by these two corollaries.
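The small linear system above can be solved mechanically (a sketch assuming numpy): we rewrite p^* P = p^* as (P^T − I) x = 0 and append the normalization row a + b = 1, then solve by least squares, which here recovers the exact solution:

```python
import numpy as np

# Transition matrix of the lecture's two-state example.
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])

# Stationarity pi* P = pi* is equivalent to (P^T - I) x = 0;
# the extra row of ones enforces the normalization a + b = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
rhs = np.array([0.0, 0.0, 1.0])
pi_star, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(pi_star)  # approximately [3/7, 4/7]
```

The same answer as the hand calculation: a = 3/7 ≈ 0.4286 and b = 4/7 ≈ 0.5714.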
According to the first corollary, you can find these limiting values, and due to the second corollary, you can find the limiting distribution of the system. This is basically all that I would like to tell you today. Thank you for your attention, and I hope to see you soon at our next lectures.