So, the last lecture was really satisfying, because there we finally understood how we can do control design using the state and, at the same time, figure out what the state is, and everything works thanks to this fantastic principle known as the separation principle. What it tells us is that we can completely decouple the control and observer designs, and here I have a rather important little parenthesis that says: in theory.

Now, there is a great American thinker who figured out that this "in theory" is actually kind of important. This is Yogi Berra, the baseball player, who presumably said: in theory, theory and practice are the same; in practice, they are not. Now, this is rather profound, and it has implications: just because the theory tells us something works, we still need to be aware of certain things.

So, the first thing we need to be aware of is that the controller is really only useful once the estimated state is close to the actual state, meaning that the controller doesn't do anything useful until the observer has converged. So, we want to make sure that the observer converges quickly. That means we want the observer to be faster than the controller, which in turn means that the eigenvalues we pick for the observer should be larger in magnitude (more negative real parts) than those for the controller. Now, one thing we saw, though, is that large eigenvalues give us large gains.

On the control side, this is kind of bad, because large gains mean large actuation signals, which can saturate the actuators. On the observer side, that's no big deal, because the observer is implemented entirely in software. There is nothing there to saturate, so we can actually make the observer eigenvalues large without running into issues like saturation.

So practically, we typically pick the eigenvalues in such a way that, first of all, the controller eigenvalues all have negative real part, of course. And then we make the observer eigenvalues bigger, meaning the observer is faster than the controller. Here is a completely made-up eigenvalue selection, but the important thing is that the slowest observer eigenvalue, which really dictates how quickly the observer converges, is significantly faster than the slowest controller eigenvalue. That is something we typically want when we're building our joint observer-controller design structures. Okay.
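This rule of thumb can be sketched in a few lines. The eigenvalue values below are a hypothetical selection, like the made-up one in the lecture, not values tied to any particular system:

```python
# Rule of thumb: all eigenvalues must have negative real part, and the
# slowest observer eigenvalue should be noticeably faster (more negative)
# than the slowest controller eigenvalue.
controller_eigs = [-1.0, -2.0]   # hypothetical controller eigenvalues
observer_eigs = [-5.0, -10.0]    # hypothetical observer eigenvalues

# Stability check: every eigenvalue sits in the open left half-plane.
assert all(e < 0 for e in controller_eigs + observer_eigs)

# The "slowest" eigenvalue is the one closest to the imaginary axis.
slowest_controller = max(controller_eigs)  # -1.0
slowest_observer = max(observer_eigs)      # -5.0

# The slowest observer mode decays like e^(-5t), five times faster than
# the slowest controller mode e^(-t), so the estimate converges first.
assert slowest_observer < slowest_controller
```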

Having said that, let's actually use this to control a humanoid robot. This is the Aldebaran Nao that we're going to be working on, and what we can control on this robot are the joint angles, meaning how the different joints are moving. And luckily for us, we actually have detailed models of these joint angles. In fact, for a given joint, the angular acceleration is theta double dot = (1/J)(K i - b theta dot). These are physical quantities: J is the moment of inertia; i is our input, so i is actually equal to u here, and it is the current applied to the motor; K is a torque constant that roughly translates currents into accelerations; and b is the viscous friction coefficient that is always present in these motors.

Now, luckily for us, when you buy a robot like this, someone has already figured out these physical parameters, and there are user manuals that describe what they are. Next, we need to put this in state-space form. The first thing we're going to do, as always, is say that x1 is theta and x2 is theta dot. We're also going to say that what we can measure on this robot is the angle itself, so y is going to be equal to theta. With this choice, we get a linear time-invariant system that looks like this: x dot = [0 1; 0 -b/J] x + [0; K/J] u, and y = [1 0] x, since we're pulling out the orientation. Now, one nice thing about this system is that it is completely controllable and completely observable, so what we have indeed learned in this class should be applicable.
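Controllability and observability are easy to check numerically for this two-state model. The parameter values below are made up for illustration; the real ones come from the Nao's documentation:

```python
import numpy as np

# Joint model from the lecture: theta_ddot = (1/J) * (K*i - b*theta_dot),
# with x1 = theta, x2 = theta_dot, u = i, and y = theta.
J, K, b = 0.01, 0.1, 0.05  # assumed values, for illustration only

A = np.array([[0.0, 1.0],
              [0.0, -b / J]])
B = np.array([[0.0],
              [K / J]])
C = np.array([[1.0, 0.0]])

# Controllability matrix [B, AB] and observability matrix [C; CA].
ctrb = np.hstack([B, A @ B])
obsv = np.vstack([C, C @ A])

assert np.linalg.matrix_rank(ctrb) == 2  # completely controllable
assert np.linalg.matrix_rank(obsv) == 2  # completely observable
```

Both rank conditions hold for any positive J, K, and b, which is why the design methodology from the class applies to every joint.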

Okay, so let's do that. One last thing, though: we actually don't want to hold or stabilize the Nao with all the joint angles at zero. We want to be able to move it around, so what we would actually like to do is track a reference angle; we would like each joint angle to be something in particular. So, I'm going to define a new variable e, which stands for error. It's not the estimation error, it's another error: the current angle minus the desired angle. And as the second component, I'm tossing in the angular velocity. I would like to drive e to zero, because if e = 0, then theta is equal to theta desired, meaning I'm holding the joint at the angle I would like, and theta dot is equal to zero, which means I'm actually holding it there, not just moving through it.

Â do. okay, then we need to write down the

Â dynamics for our new variable e. Well, e dot, well, it's simply, Ax+Bu,

Â because e dot is really, [SOUND] well, it's theta dot minus theta desired dot

Â theta double dot, right? But this thing is 0 because the, the desired heading is

Â constant, so all we're left with is theta dot, theta double dot, which is the same

Â as x dot, right? This is the same as x dot so what

Â we do is we plug in the equation for x dot and we get this.

Â Now, we don't want to express this in terms of x.

Â We want to express it in terms of e. And what we get if we plug in e is, we

Â get this expression instead. Now luckily for us, a times this vector

Â is actually equal to zero. And I encourage you to compute this so

Â that you trust me. But having done that, what we get is that e dot is equal to

Â Ae+Bu meaning we have same system dynamics as before but now, defined on

Â this error, where the error is the current orientation or angle of the joint

Â minus the desired angle of the joint. So, this is the dynamics we're caring

Â about. Well, we have to do the same thing to the

Â output. The output is Cx, well again, we replace

Â x with e plus this vector. So, this is Ce+C times this vector, and

Â remember that C was actually 1,0. So, if I take 1,0 times that, out comes

Â data desired. So, my output is C times e plus theta

Â desired. Now, this doesn't scare us one bit.
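The two claims above, that A times [theta desired; 0] vanishes and that C times the same vector returns theta desired, are the computation I'm encouraging you to do. A quick sketch, again with made-up parameter values:

```python
import numpy as np

# Same joint model as before; J, K, b are assumed values for illustration.
J, K, b = 0.01, 0.1, 0.05
A = np.array([[0.0, 1.0],
              [0.0, -b / J]])
C = np.array([[1.0, 0.0]])

theta_desired = 0.7  # an arbitrary constant reference angle
v = np.array([theta_desired, 0.0])

# The first column of A is zero, so A @ [theta_desired, 0] = 0,
# which is exactly why e_dot = A e + B u.
assert np.allclose(A @ v, 0.0)

# C picks out the first component, so y = C e + theta_desired.
assert np.isclose((C @ v)[0], theta_desired)
```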

We just plug it into our standard controller and observer design methodology. So, u = -K e hat; not e, because we don't know e, but e hat, which is our estimate of e. And e hat dot has the standard predictor part and the corrector part, where the corrector part is the current output minus what the output would have been. The only difference is that I have to keep track of this little extra theta desired, but that's no big deal; it acts exactly the same way. So, this is now my control structure.
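The whole structure can be put together in a short simulation sketch. Everything numerical below is an assumption: the parameter values, the eigenvalue choices, and the initial conditions are made up for illustration, with the gains placed by matching characteristic-polynomial coefficients (easy by hand for a two-state system):

```python
import numpy as np

# Joint model with assumed parameters (illustration only).
J, K, b = 0.01, 0.1, 0.05
A = np.array([[0.0, 1.0], [0.0, -b / J]])
B = np.array([[0.0], [K / J]])
C = np.array([[1.0, 0.0]])

# Controller eigenvalues at -4, -6: char. poly s^2 + 10 s + 24.
K_gain = np.array([[24 * J / K, (10 * J - b) / K]])
# Observer eigenvalues at -20, -30 (faster, as argued): s^2 + 50 s + 600.
l1 = 50.0 - b / J
L = np.array([[l1], [600.0 - l1 * (b / J)]])

# Sanity-check the placements.
assert np.allclose(np.sort(np.linalg.eigvals(A - B @ K_gain).real), [-6.0, -4.0])
assert np.allclose(np.sort(np.linalg.eigvals(A - L @ C).real), [-30.0, -20.0])

theta_desired = 0.7
x = np.array([[0.0], [0.0]])      # true state [theta, theta_dot]
e_hat = np.array([[0.5], [0.0]])  # deliberately wrong initial estimate

dt = 1e-3
for _ in range(int(3.0 / dt)):    # simulate 3 seconds with Euler steps
    y = (C @ x)[0, 0]
    u = -(K_gain @ e_hat)         # feedback on the *estimated* error
    # Predictor part plus corrector part; note the extra theta_desired
    # in the predicted output, exactly as in the lecture.
    innovation = y - (C @ e_hat)[0, 0] - theta_desired
    e_hat = e_hat + dt * (A @ e_hat + B @ u + L * innovation)
    x = x + dt * (A @ x + B @ u)

e = x - np.array([[theta_desired], [0.0]])
assert np.allclose(e, 0.0, atol=1e-3)    # joint holds the desired angle
assert np.allclose(e_hat, e, atol=1e-3)  # observer has converged
```

Running the same loop with different values of theta_desired per joint is, in spirit, what makes the robot wave in the demonstration that follows.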

And instead of me talking about it, why don't we move on to see an actual humanoid robot executing this control strategy.

So, now that we have designed an observer-based state feedback controller for controlling the joint angles of this humanoid robot, the Aldebaran Nao, we're ready to do it for real. I'm here with Amy LaViers. She was a graduate student at Georgia Tech, and what she has done is make the Nao move its arms, its head, and even its upper body in such a way that it executes a friendly wave towards, probably, you, who are watching this right now. What's happening is that we're running the same controller on all the different joints, with different desired angles, to get this effect. So, Amy, why don't we take the Nao for a little spin and see what it can do?

So, what's going on here is that we're sequentially commanding multiple desired angles, and that's how we're getting this effect. In fact, why don't we watch this again, because I think it's quite charming, to be honest. So, here we go: observer-based state feedback control in action. Oh, thank you very much, Amy. And thank you.
