So now these are the definitions, and this is what we're going to be using and building on. We can't solve these equations and prove their properties in closed form, because they are not analytically solvable. So we will use all the tools that are available to us. Linearization is still a very powerful one. I've got one slide on this, and I think you did some of it in homework as well. Linearization lets you say: here is the reference, this is how I should be pointing. I've got that reference history. You've done this in your homework too: you've got your reference motions that you can integrate, and now I want to look at departures relative to that reference. So you can always take a linearized approach. But as soon as you linearize, you only get local arguments. With the dual spinner, the linearized system came down to a linear form, essentially x double dot plus some k times x equals zero, right? And with the right spin rates omega, that k became positive or negative, and so the motion was either stable or unstable. But that stability argument was only a local argument. We had no mathematical proof that for an arbitrary tumble, for large departure motions, dual spinning would be stable. It only holds for very small neighboring motions. Now, what counts as small is very application specific. For some applications small might be plus or minus 120 degrees, which is not really small. For others it might be plus or minus microradians, and that's it. So it depends on the application.

The linearization approach, we've done some of this already; in your last homework you did it as well. You had this equation and you had to linearize it around the 90 degree point. There's a whole process for how you do this. You've got your reference, and to linearize you have to define your states relative to that reference. So we introduce deltas, that's our departure motion again: delta x is the actual state x minus x_r, and x_r could be fixed or vary with time. And then for the control, you might have a reference control. Let's say a UAV is hovering and wants to hold at 20 meters. Well, it's going to have to produce a thrust that counteracts one g of acceleration, otherwise it's going to start dropping, right? So there's a certain nominal thrust, let's say ten newtons, that you have to produce to hold that reference. And then if you're a little low, you might need a bit more thrust to come back up, or if you're too high, you might reduce the thrust slightly. So the ten newtons is the reference part. The delta u becomes the feedback part, because you can write the actual control as u equals u_r plus delta u: the feedforward part plus the feedback part. That's the structure. So the reference trajectories often generate the feedforward part of the control. If the UAV has to accelerate, well, to accelerate it needs more thrust, but you can compute that. So nominally, yes, ramp up from 10 to 20 newtons over that time period. That gives you the history you want; that again is your reference. So you, as the user, design this, and this is the reference control that would achieve the reference motion: hover, start to rise, stop. You can come up with an open-loop force history that would have to happen. That's what we are linearizing about.
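To put that decomposition in symbols (this is my own sketch of what is being described, not a copy of the slide; m and g are an assumed vehicle mass and gravitational acceleration, and the ten newtons is just the lecture's illustrative number):

\delta \mathbf{x} = \mathbf{x} - \mathbf{x}_r, \qquad \mathbf{u} = \mathbf{u}_r + \delta\mathbf{u}

\text{Hover example:}\quad u_r = m\,g \;(\approx 10\ \mathrm{N}\ \text{in this example}), \qquad \delta u = \text{feedback correction about the hover thrust}

The reference control u_r is the feedforward piece you can compute ahead of time from the reference motion; delta u is whatever the feedback law adds on top to fight departures.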
So if you look at an actual dynamical system that's nonlinear, with control applied, we want to linearize the departures: delta x is x minus x_r. You put dots on everything and say, okay, my delta x dot is x dot minus x_r dot. And x_r dot is just f of x_r and u_r; we've already got that one. But the x dot we are now going to linearize with a Taylor series. If you remember Taylor series: you have y equal to f of x and you want to linearize about x equals five. You put in f of five, plus the first partial times the delta, plus the second-order terms, and so on. That's what we're seeing here. We're doing first-order stuff, so your Taylor series expansion of the function is the function evaluated at the reference, plus the first partial with respect to the states, evaluated at the reference, that's why it's at x_r and u_r, multiplied by the small departure. But we not only have departures in states, we also have departures in controls. We won't just be putting in the open-loop control; we may have to stabilize it, so there's a delta u that happens as well. So we take the partial of f with respect to the control variable and multiply by the small control departure. Everything else is higher-order terms. If you do this, this term and this term always cancel, and you're left with an equation that's basically delta x dot equals this partial times delta x plus another partial times delta u. This form some of you may not have seen, but many of you will have seen this form: basically x dot equal to A x plus B u, right? That's the classic form, that's the plant matrix, that's the control matrix, that's where they come together. For a nonlinear system, this is how you find that A and that B. The partial of the actual nonlinear f function with respect to the states, evaluated at the reference, that's what gives you the A matrix. And the other partial, with respect to the controls, gives you the B matrix. So I expect you to be able to do Taylor series expansions, also in vectorial form. We've done this kind of partials before, when we did that one-over-R-cubed thing in class with the gravity gradient derivation and so forth. It's the same math being applied here. One of the homeworks has you solve this for some systems as well, so I'll let you guys work with that. So you can always do this and then argue linear stability. But just realize that with this approach, at best you've argued local stability, not global.
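Written out, the chain of steps described here is the standard first-order linearization (this is my reconstruction in symbols consistent with the transcript's x_r, u_r, and f notation, not the slide itself):

\dot{\mathbf{x}}_r = \mathbf{f}(\mathbf{x}_r, \mathbf{u}_r)

\delta\dot{\mathbf{x}} = \dot{\mathbf{x}} - \dot{\mathbf{x}}_r
= \mathbf{f}(\mathbf{x}, \mathbf{u}) - \mathbf{f}(\mathbf{x}_r, \mathbf{u}_r)
\approx \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}_r,\,\mathbf{u}_r}\,\delta\mathbf{x}
+ \left.\frac{\partial \mathbf{f}}{\partial \mathbf{u}}\right|_{\mathbf{x}_r,\,\mathbf{u}_r}\,\delta\mathbf{u}

\delta\dot{\mathbf{x}} = [A]\,\delta\mathbf{x} + [B]\,\delta\mathbf{u},
\qquad
[A] = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}_r,\,\mathbf{u}_r},
\quad
[B] = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{u}}\right|_{\mathbf{x}_r,\,\mathbf{u}_r}

Any stability conclusion drawn from the eigenvalues of [A] is then only valid locally, near the reference, which is the point made at the end of the paragraph above.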