0:00

So now we have seen where LTI systems come from. We've seen nonlinear models that turn into very well-behaved, pretty LTI systems, and we've seen nonlinear models that don't necessarily produce useful LTI models. But a lot of systems do produce useful LTI models, and they are really our most systematic way of designing controllers. They are extremely useful. So even though there aren't that many bananas in the universe, a lot of things act like bananas. So what we're going to

do now is start by understanding how these systems behave. What I'm going to do in this lecture is actually find the solutions to these systems, and once we have those solutions we can start talking about how they behave. And we're going to start by simply ignoring the input and ignoring the output. So we're going to start by just saying: I have x dot = Ax, and at time t0, which is the time when we wake up, we start somewhere. So this is the physical part of the system. Not the thing that we bought actuators for, not the thing that we bought sensors for. It's just x dot = Ax.

Let's see what happens in that case: how does the system behave, or drift if you will, when you're not messing with it. So we need to solve x dot = Ax. Let's start with a scalar version of this, where x is just a number. For the scalar version I'm going to write this as x dot = ax, with a lowercase a. So this is the scalar version.
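Before any math, the scalar system is easy to simulate. A minimal numerical sketch, where the values a = -0.5 and x(0) = 2 are arbitrary choices for illustration; the simulated endpoint can be compared against the exponential solution derived in a moment:

```python
import math

# Forward-Euler simulation of the scalar system x_dot = a*x.
# a = -0.5 and x0 = 2.0 are arbitrary choices for illustration.
a, x0 = -0.5, 2.0
dt, T = 1e-4, 3.0          # small step size, final time

x = x0
for _ in range(int(T / dt)):
    x += dt * a * x        # x(t + dt) ~ x(t) + dt * x_dot(t)

# Compare with the closed-form solution x(T) = e^{a(T - t0)} x0, with t0 = 0.
exact = math.exp(a * T) * x0
assert abs(x - exact) < 1e-3
```

With a small enough step, the brute-force simulation lands right on top of the exponential, which is the claim the lecture now verifies properly.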

And I start somewhere, x(t0) = x0. Well, you may not know this, but if you have taken or seen differential equations, the solution to this differential equation is actually given by x(t) = e^{a(t - t0)} x0. So here the professor shows up and says: this is the solution to this differential equation. Now, you clearly are critical-thinking people who don't just accept anything the professor says, so what you want to do now is make sure that this is indeed correct. So how do you ensure that what someone feeds you, saying "here's a solution to the differential equation," is indeed correct? How do we know?

Well, the first thing you have to do is make sure that the initial condition is right, meaning that my solution actually respects this initial condition. So what I'm going to do is simply plug in t0 and see what I get. Well, if I do that, I get x(t0) = e^{a(t0 - t0)} x0. The exponent is zero, right? So I get e^0 x0, and e^0 is always equal to one; the exponential evaluated at zero is one. So x(t0) = x0, which means that the initial condition is correct. So we're done with this part.

Now, clearly, we need to deal with the dynamics as well; we need to make sure that the dynamics is indeed correct. So now I'm going to take the time derivative of my proposed solution, d/dt of this thing, and see what I get. Well, for the time derivative of an exponential, all we do is pull out the coefficient: we pull out an a and write it in front. That's all we do, and this is why exponentials are so wonderful. So the time derivative of x with respect to t is a times what we have here. Well, this thing here, that's x, right? So the time derivative of my proposed x is equal to a times x. Well, that's where we started, right? So what we now know is that the dynamics is correct as well. And if the initial condition is right and the dynamics is right, then we know, thanks to the existence and uniqueness of solutions to differential equations, that this is indeed the right solution. Now here is the kicker for higher-order systems. So

now, x is in R^n, and we get the same solution: x(t) = e^{A(t - t0)} x0. Well, now we have x dot = Ax, the same thing; the only thing I did differently was write capital A instead of lowercase a. And the thing to keep in mind here is that this is what's called a matrix exponential, instead of a scalar exponential, which looks just a little scary. But we're not scared of matrix exponentials. In fact, what we do is look up the definition of an exponential. And the exponential e^{at} for scalars is simply this sum:

e^{at} = sum over k from 0 to infinity of (at)^k / k!

This is the definition of what the exponential is. Well, this is just multiplications, and we can write multiplications for matrices as well. So the definition of a matrix exponential is just this sum:

e^{At} = sum over k from 0 to infinity of A^k t^k / k!

Now, it turns out that it's actually not that important for us to be able to compute matrix exponentials very much. However, we need to know where they come from. And they come from this sum.
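The series definition can be checked directly in code. A small sketch, assuming numpy is available, that sums the first 30 terms of the series and compares against a case where the answer is known in closed form: for A = [[0, 1], [-1, 0]] we have A^2 = -I, so e^{At} = I cos(t) + A sin(t), a rotation matrix.

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Matrix exponential e^{At} via its defining truncated power series,
    sum over k of (At)^k / k!."""
    n = A.shape[0]
    result = np.zeros((n, n))
    term = np.eye(n)                      # k = 0 term: A^0 t^0 / 0! = I
    for k in range(terms):
        result = result + term
        term = term @ (A * t) / (k + 1)   # next term: multiply by At/(k+1)
    return result

# A chosen just for this check, because its exponential is a rotation matrix.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
t = 1.3
expected = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

assert np.allclose(expm_series(A, t), expected)
```

Thirty terms is already at machine precision for this A and t; in practice one would call a library routine such as scipy.linalg.expm rather than truncate the series by hand.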

And the reason why this is useful is that it actually allows us to compute the derivative of a matrix exponential. So let's take the time derivative of this whole sum. Well, the first thing I'm going to do is pull out the k = 0 term. For that term I get A^0 times t^0 divided by 0!, which is just the identity, a constant. And the time derivative of a constant is a big fat zero. So I pull out the first term and then take the derivative of the remaining terms with respect to t. All I get is an extra k from differentiating t^k. Well, now I can rewrite things: I can pull one A out in front and write everything in terms of k - 1 instead of k. But I'm summing from one to infinity, so if I shift my index, the sum runs from zero to infinity again and I get back exactly the sum I started with. So what this means is that the time derivative of e^{At} is simply A times e^{At}. So the matrix exponential behaves just like the scalar exponential. That's all I

wanted to show with this slide: even though this looks a little awkward, with these sums of matrix powers, all it means is that we can take derivatives of matrix exponentials and trust that they behave just like in the scalar case. In fact, e^{A(t - t0)} is such a fundamental object in linear systems theory that it has been given its own name. It's known as the state transition matrix, and sometimes I'm actually going to write it as Phi(t, t0). What we should then remember, and I will probably remind you of it, is that this is simply the matrix exponential. That's all it means, but it will show up quite a bit.

Okay. x dot = Ax. That means, in fact, that x

(t) = e^{A(t - t0)} x(t0), or in general I can write it in this form: x(t) = Phi(t, t0) x(t0), using the state transition matrix, which we now know is just a fancier name for this matrix exponential. And it turns out that it doesn't matter if it's t0 or not; for whatever time tau, we just multiply what x was at time tau by the state transition matrix. So Phi(t, tau) is simply code for x(t) = e^{A(t - tau)} x(tau). So the point is that we know what the solution to this equation actually is. And the way you would show that this is the solution is by using the following two properties, and I encourage you to go home and do this. The first is the thing we just established, which is that the time derivative of Phi is A times Phi. The other is that Phi(t, t) is the identity. Well, for Phi(t, t) I just plug in t instead of t0, so I get e to the power zero; in the scalar case that's one, in the matrix case that's the identity matrix. So that's the only difference when you go to matrices. Fine, so

now we actually have a control system. So we have x dot = Ax + Bu. What happens? Well, again the professor goes: here's my claim, this is what I claim the solution is:

x(t) = Phi(t, t0) x(t0) + the integral from t0 to t of Phi(t, tau) B u(tau) d tau.

This looks like a mouthful, doesn't it? It doesn't look pleasant at all. The first piece is the thing we had when we had no B matrix at all, and then we have this second piece here, which, if you want to be picky, is what's called a convolution. But we don't have to call it a convolution; all we need to know is that this is what we claim the solution is. But how do we actually verify that this is correct? Well, we do exactly what we did before: we have to check the initial condition and the dynamics.

So let's plug in t0 and see if we get the right thing. Instead of t, I'm going to write t0 everywhere: I get Phi(t0, t0) x(t0) plus an integral from t0 to t0. Okay, let's see what this is. Well, Phi(t, t) is equal to the identity matrix no matter what t is, right? So this first factor is the identity.
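The two state-transition-matrix properties in play here, Phi(t, t) = I and d/dt Phi(t, t0) = A Phi(t, t0), can be spot-checked numerically. A minimal sketch, assuming numpy is available and picking A = [[0, 1], [-1, 0]] purely because its exponential has a known closed form (a rotation matrix, since A^2 = -I):

```python
import numpy as np

# For this A, Phi(t, t0) = e^{A(t - t0)} is a rotation matrix.
# A is an arbitrary choice made just for this check.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def Phi(t, t0):
    s = t - t0
    return np.array([[np.cos(s), np.sin(s)],
                     [-np.sin(s), np.cos(s)]])

t, t0 = 2.0, 0.5

# Property 1: Phi(t, t) is the identity.
assert np.allclose(Phi(t, t), np.eye(2))

# Property 2: d/dt Phi(t, t0) = A Phi(t, t0), via central finite differences.
h = 1e-6
dPhi = (Phi(t + h, t0) - Phi(t - h, t0)) / (2 * h)
assert np.allclose(dPhi, A @ Phi(t, t0), atol=1e-8)
```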

Now, here's an integral from t0 to t0. So this is clearly zero, because I'm taking the integral over a single point. So this integral is zero. So what I have is that x(t0) is equal to what it should be, x(t0). So we're going to declare success on the initial condition. Now we need to deal with the dynamics, and that's harder. First of all,

we use the fact that if I take the derivative of the first piece, an A comes out, just like before. So the first component is no big deal. But then we have this awkward object: we have to take the derivative of an integral with respect to t, when t shows up both in the upper limit and inside the integrand. And this is not a trivial thing. In fact, what you need to do is use something known as Leibniz's rule. It tells us that if I have a general function f(t, tau) under the integral and I take the derivative of the whole thing with respect to t, I get two pieces. The first piece comes from the upper limit: I plug in t instead of tau and get rid of the integral, which gives f(t, t). The other piece: I pull d/dt inside the integral and take the derivative of the integrand with respect to t. In other words:

d/dt of the integral from t0 to t of f(t, tau) d tau = f(t, t) + the integral from t0 to t of (partial f / partial t)(t, tau) d tau.

So this is technically what we have to do to compute this. So let's do that.
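Leibniz's rule itself is easy to sanity-check numerically before applying it. A small sketch with an arbitrary test function f(t, tau) = sin(t - tau) * tau, comparing a finite-difference derivative of the integral against the two-piece formula; the integrals use a plain midpoint rule:

```python
import math

# Arbitrary test function, with t appearing in both integrand and limit.
def f(t, tau):
    return math.sin(t - tau) * tau

def df_dt(t, tau):
    return math.cos(t - tau) * tau    # partial derivative of f w.r.t. t

def integral(g, a, b, n=20000):
    """Plain midpoint-rule quadrature of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

t0, t = 0.0, 2.0

def F(t_):
    return integral(lambda tau: f(t_, tau), t0, t_)

# Left side: derivative of the integral, by central finite differences.
h = 1e-5
lhs = (F(t + h) - F(t - h)) / (2 * h)

# Right side: Leibniz's rule, f(t, t) plus the integral of df/dt.
rhs = f(t, t) + integral(lambda tau: df_dt(t, tau), t0, t)

assert abs(lhs - rhs) < 1e-5
```

For this particular f the integral works out to t - sin(t), so both sides should be close to 1 - cos(t).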

Well, f(t, t): let's pull out the integrand and evaluate it at tau = t. Then I get Phi(t, t) B u(t), which is simply B u(t), right? Because Phi(t, t) is the identity matrix. And then I get the time derivative of the integrand with respect to t. Well, I know that differentiating Phi(t, tau) with respect to t just pulls an A out in front. This is a little bit of a mouthful, I realize. So take a deep breath and redo this computation, just so that you believe it. But when you've done this, you can actually pull out the A and find that the time derivative of my proposed x is big A times this whole thing, plus B times u. And now, this whole thing is exactly the proposed solution itself, so instead of writing this rather awkward big expression, I'm just going to write x(t). Or in other words, dx/dt = Ax + Bu, which is where we started. So we can

declare success also on the dynamics. So to summarize, after all these pushups, and I realize that today's lecture was a little thorny in terms of all the integrals and derivatives, in fact much thornier than anything we've seen before: the reason I needed to do it was not because I think you need to be world champions at applying Leibniz's rule. I just want to be able to say the following. If I have x dot = Ax + Bu and y = Cx, then I can write y(t) = C x(t), where we computed the solution. So we actually know that the output is given by

y(t) = C Phi(t, t0) x(t0) + C times the integral from t0 to t of Phi(t, tau) B u(tau) d tau,

which is this thing in yellow here. And you know what, let's add another sweetheart to this. So all these pushups just ended up with us being able to write explicitly

what the solution is. Now, we're not going to be particularly interested in actually computing this very often, but we need to know it to move forward. So at the end of this application of Leibniz's rule, what we ended up with was an expression for the output, or for the state if you want to get rid of the C matrix, of this general LTI system. And the thing to remember about Phi here, known as the state transition matrix, is that it is simply given by the matrix exponential. What we're going to do now, in the next lecture, is see how this actually translates into us being able to say things about how the system behaves. And in particular, we're going to look at stability.
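Everything in this lecture can be bundled into one final numerical check. A sketch, assuming numpy is available, where A, B, C, x0, and the input u(t) = sin(t) are arbitrary choices: it evaluates the closed-form solution x(t) = Phi(t, t0) x0 + integral of Phi(t, tau) B u(tau) d tau and compares y = Cx against a brute-force Euler simulation of x dot = Ax + Bu.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential from its defining series, sum over k of M^k / k!."""
    out, term = np.zeros_like(M), np.eye(M.shape[0])
    for k in range(terms):
        out = out + term
        term = term @ M / (k + 1)
    return out

# Arbitrary stable system and input, chosen just for this check.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
u = lambda t: np.array([np.sin(t)])
x0, t0, T = np.array([1.0, 0.0]), 0.0, 2.0

Phi = lambda t, s: expm(A * (t - s))   # state transition matrix

# Closed-form solution: Phi(T, t0) x0 plus the convolution integral,
# approximated here by a simple Riemann sum.
taus = np.linspace(t0, T, 4000)
dtau = taus[1] - taus[0]
conv = sum(Phi(T, tau) @ B @ u(tau) for tau in taus) * dtau
x_formula = Phi(T, t0) @ x0 + conv

# Brute-force check: forward-Euler integration of x_dot = A x + B u.
x, dt = x0.copy(), 1e-4
for k in range(int((T - t0) / dt)):
    t = t0 + k * dt
    x = x + dt * (A @ x + B @ u(t))

assert np.allclose(C @ x_formula, C @ x, atol=1e-2)
```

The two answers agree to within the discretization error, which is the whole point of the lecture: the formula and the differential equation describe the same trajectory.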
