So with the last lecture, we were actually declaring success in terms of designing controllers: we do pole placement, we check controllability, and off we go. The big problem, though, is that we don't have x. When we set u = -Kx, x is right there in the formula, but we don't actually have it.

So, what about y? Ultimately, we don't have x; we have y coming out of the system. And somehow this y has to translate into a u. It's not enough to say x translates into u, because we actually don't have x. Well, here is the cool idea.

I'm going to put a little magic block here, and the output of that block should somehow become x. Meaning, I would like to be able to take y, push it through a magic block, and get the state out. Now, I'm not going to get x exactly; in fact, I'm going to put a little hat on top of it. This is my estimate of the state. Meaning, I'm taking my sensor measurements, y, and based on those measurements I'm going to estimate what x is. And I'm going to call that x hat. In fact, the magic block, the thing that allows us to get x hat from y, is called an observer.

So in today's lecture I'm going to be talking about these observers and how we actually design them. Well, it turns out the general idea behind observer design can be summarized under the predictor-corrector banner. So, let's say that we have x dot = Ax (forget about u for now, that doesn't matter) and y = Cx. Well, here is the idea.

The first thing we're going to do is make a copy of this system, and our estimator is going to be this copy. So I'm going to have x hat dot = A x hat, so my estimate is going to evolve according to the same dynamics as my actual state.

And this is known as the predictor, which allows me to predict what my estimate should be doing. But that's not enough. What I'm going to do now is add some notion of how wrong, or right, the estimate is relative to the model. And one thing to note is that the actual output is y, while the output I would have had if my estimate were exact is C x hat. So I'm going to compare y

to C x hat. And, in fact, what I do is add this difference, y - C x hat, which tells me how wrong I am, to my predictor, multiplied by some gain matrix here, L. So x hat dot = A x hat + L(y - C x hat). And this gives me a predictor and a corrector: this first part here is the predictor, and this second part here is the corrector.

And this kind of structure is known as a Luenberger observer, named after David Luenberger. But the point is that when you have this predictor-corrector pair, you have some way of hopefully figuring out the state, or at least a good estimate of the state, from the measurements, y, that show up here.
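To make this concrete, here is a minimal simulation sketch of the observer above. The particular A, C, and L matrices are my own illustrative choices, not from the lecture; the structure is exactly the predictor-corrector x hat dot = A x hat + L(y - C x hat).

```python
# Sketch of a Luenberger observer with forward-Euler integration.
# A, C, L below are illustrative assumptions, not from the lecture.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # example system dynamics: x' = Ax
C = np.array([[1.0, 0.0]])        # we only measure the first state: y = Cx
L = np.array([[5.0], [10.0]])     # observer gain (chosen so A - LC is stable)

dt, steps = 0.001, 5000
x = np.array([[1.0], [0.0]])      # true initial state (unknown to the observer)
xhat = np.array([[0.0], [0.0]])   # observer starts with a wrong guess

for _ in range(steps):
    y = C @ x                                           # measurement
    x = x + dt * (A @ x)                                # true dynamics
    xhat = xhat + dt * (A @ xhat + L @ (y - C @ xhat))  # predictor + corrector

print(np.linalg.norm(x - xhat))   # estimation error after 5 seconds: tiny
```

Even though the observer starts from the wrong initial guess, the correction term L(y - C x hat) pulls x hat toward the true state, and the estimation error shrinks toward zero.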

So the only questions now: well, one question is, does it work? The other question is, what is this L? So the first thing we should ask is, how do I actually pick a reasonable L? Well, the first thing we'll do

is define an estimation error, e, as the actual state minus my estimated state. And I should point out that we don't know e, because we don't know x, but we can still write down e as x - x hat.

Well, I would like e to go to 0, right? Because if I can make e go to 0, then x hat goes to x, which means that x hat is a good estimate of x. So what I would like to do is actually stabilize e: make e asymptotically stable.

So what we need to do first is write down the dynamics of my error. So e dot, well, that's x dot - x hat dot. Well, x dot is just Ax, and x hat dot, well, we have this form, A x hat + L(y - C x hat), and then we get a minus sign in front of everything. So my error dynamics are e dot = Ax - A x hat - L(y - C x hat). Now, y is equal to Cx, right? So what I actually have here is e dot = A(x - x hat) - LC(x - x hat). But x - x hat is e, so e dot = (A - LC)e.
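The algebra above can be checked numerically. This sketch (matrices again my own assumptions) computes e dot two ways, once from the two original differential equations and once from the reduced form (A - LC)e, and confirms they agree; it also checks that the chosen L puts all eigenvalues of A - LC in the left half plane, so the error is asymptotically stable.

```python
# Check that e = x - xhat obeys e' = (A - LC)e, and that this L stabilizes it.
# A, C, L, and the states below are illustrative assumptions.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0], [10.0]])

x = np.array([[1.0], [2.0]])       # arbitrary true state
xhat = np.array([[0.5], [-1.0]])   # arbitrary estimate
e = x - xhat

y = C @ x
edot_direct = (A @ x) - (A @ xhat + L @ (y - C @ xhat))  # x' - xhat'
edot_reduced = (A - L @ C) @ e                           # (A - LC)e

print(np.allclose(edot_direct, edot_reduced))            # True
print(np.linalg.eigvals(A - L @ C))                      # all real parts < 0
```

This is why picking L comes down to placing the eigenvalues of A - LC; in practice one can do this with a pole-placement routine (for instance, applying scipy.signal.place_poles to the transposed pair, by duality with state-feedback design).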