Let's review [SOUND] a little bit: control stability arguments, state-space formulations, also definiteness of functions. We got through that all the way up to the Lyapunov function quickly, okay? I'd like to do this reasonably quickly and see what really sticks, and I asked you guys to review these things again before today's lecture. So, let's talk about the state-space formulation. If you have this, and let's say, let me use a different variable instead of always x. It gets boring. Come on, there we go. So, if we have theta double dot + omega_n squared sin(theta) = 0, the unforced planar pendulum problem, what's the dimension of my state vector x here? Lewis? Two, why two? Talk me through that. >> Make it two-by-one so you can have a system of first-order equations. >> Yes, so you have maybe x equal to theta and theta dot, right? This is a scalar equation, but it's a second-order differential equation, so you're going to need two initial conditions, really. That's another way to look at it. What's the dimension of the state vector? It has to be the same as the number of initial conditions, because the initial conditions should fully define this; it's a deterministic problem. Once you have all the initial conditions needed, you should be able to fully propagate. So, here, you need two, so the state vector has to be of dimension two. Let's say we have this, though. This is your classic orbit problem, the unperturbed two-body problem. Here, what's the dimension of the state vector that we had? Andre, what do you think? >> Six. >> Six, right, because here, in MATLAB, if you put this in vector form, right, you have your r and then you need your r dot. In our attitude problem, if I'm doing the complete attitude, forget the translation, pretend this thing is sitting in place in space. How many states do we need here, for the attitude problem? Jayla? >> 12? >> How did you get to 12? >> I just caught the end of your question. >> [LAUGH] So, if you want to write your attitude problem in this form, right? That's if you're doing integration, which you're doing also in this last problem, and you're doing in this next problem especially, where you're getting the full six-state version. So, how many states do you need in x if you have the complete attitude motion? >> You have the attitude, you need three degrees of freedom for the attitude, so- >> Okay. >> And then don't you need three for the position? >> Well, we're not doing translation. >> Okay. >> We're just doing rotation. All the code you're writing right now really is just rotation. If you're adding translation, you're doing extra work. You don't get credit for that. >> So for the attitude alone, wouldn't you need six? >> Six then, right? Because you need positions and rates, but the positions are easy, because it's r and r dot, or x and x dot, whatever you called that position vector and its derivative. That's typically how we write it. It's kind of easy, it's simple. The attitude, though, we use different coordinates. We use attitude coordinates specific to the orientation description, and then we don't use yaw, pitch, roll rates. We actually use omegas. That's always the angular velocity measure, so that's a six-dimensional vector that you'd have. Now, is it always going to be six-dimensional? [SOUND] What else? How would you increase this dimension? >> Increase the dimension? >> Yeah, how would you have more? >> If we were interested in higher dimensions? >> No, 3D attitude. >> Higher order, I guess, if you were interested in something like-
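(As a quick aside on the state-vector form just discussed, here is a minimal sketch of the pendulum equation rewritten as a first-order system, the same pattern you would use for the six- or seven-state attitude problem. The omega_n value and the use of scipy's solve_ivp are illustrative choices of mine, not anything prescribed in the lecture.)

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA_N = 1.0  # natural frequency in rad/s, illustrative value only

def pendulum(t, x):
    """Unforced planar pendulum, theta_ddot + omega_n^2 * sin(theta) = 0,
    written as a first-order system with state x = [theta, theta_dot]."""
    theta, theta_dot = x
    return [theta_dot, -OMEGA_N**2 * np.sin(theta)]

# Two initial conditions fully determine the motion, so the state vector is
# two-dimensional; an unperturbed two-body orbit needs r and r_dot (six states),
# and the attitude problem needs attitude coordinates plus the body rates omega.
x0 = [0.1, 0.0]
sol = solve_ivp(pendulum, (0.0, 20.0), x0, max_step=0.01)
print(sol.y[:, -1])  # [theta, theta_dot] at the final time
```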
>> Well, yes, if your differential equations need it, if you don't just have omega dot that you're solving but omega double dot, then you might have to go to higher order and you're looking at those kinds of responses. But what's the easy one? >> If you're using something like Euler parameters? >> If you're using quaternions, all of a sudden we move to four. Did that change my problem? Instead of six states for a three-degree-of-freedom problem, second order, so three times two is six, now we have seven states. >> [COUGH] >> What must be happening, right? Because we didn't just increase the degrees of freedom. >> Because of constraints. >> You have to have constraints, right? And that means in your code, after you integrate from one step to another, you better normalize those quaternions again, because otherwise little numerical errors creep in. If you're using a DCM, you're adding nine coordinates and you'd have to re-orthogonalize the DCM every time, which you could do. And if you use MRPs, we can avoid that. There are no constraints, but we have to deal with the switching just to avoid singularities, so there is still an if statement, right? So good, but that's kind of a quick rundown. This is what you should be doing. Now, let's talk about neighborhoods. What is a neighborhood? Is it Daniel? No, David. >> In what context? A neighborhood, it's like a region around an equilibrium point where stability can be proven or shown. >> Okay, so this is typically what we write a lot. What do I mean by this? Does this x_r have to be an equilibrium? >> No. >> We could also be writing it around references, right? So the reference problem is typically the tracking problem. If it's an equilibrium, it's more like a regulation problem: just drive everything to a steady state, right? Whatever that orientation is. Good. And then there's a neighborhood. How do we define these neighborhoods typically? What norm do we use? >> L2-norm? >> An L2-norm, right, which means I always draw them as balls, because that's essentially what they are. In 1D, 2D, 3D it's easy to visualize, but it extends to hyperballs, where you have a four, six, eight-dimensional space. That's what we're defining in Lyapunov theory typically, these spherical neighborhoods, yes. >> Your delta in your different coordinates could be different sizes, couldn't they? So- >> No, it's one delta. >> Really, okay, so- >> So, however you define this system, if you're mixing positions and angles, it's going to give you weird norms, and that's part of the issue with this stuff. Sometimes, you can rescale it and prove the stability for a bigger neighborhood. And we talked about this too: if we look at the Duffing equation, there was an equilibrium here and an equilibrium here. These were unstable. This one was stable; the phase space plots showed this. But if you fit the largest sphere inside, this is the region where we can prove stability. Out here, it's actually stable as well; it's just that you may not get that out of this result. So as we're doing stability arguments, we often have, hey, if this is the case, then it's stable. If we say if and only if, what do we mean then mathematically? >> A bigger stability region? >> No, if you have an if-and-only-if statement. >> Implications both ways? >> Right. If is a one-way direction. If A is true, then B: if the lights are on, you guys are awake. With just an if, you could be awake even if the lights are not on, but if they're on, you're awake for sure.
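(Circling back to the constraint point from a moment ago: after every integration step you clean up the attitude coordinates. A minimal sketch, assuming a four-component Euler parameter set with the unit-norm constraint and the usual |sigma| > 1 criterion for switching MRPs to the shadow set; the function names are my own.)

```python
import numpy as np

def normalize_quaternion(beta):
    """Re-impose the unit-norm constraint beta.T @ beta = 1 so that small
    numerical integration errors don't accumulate step after step."""
    beta = np.asarray(beta, dtype=float)
    return beta / np.linalg.norm(beta)

def switch_mrp(sigma):
    """Map an MRP set to its shadow set whenever |sigma| > 1, keeping the
    description away from the 360-degree singularity (the 'if statement')."""
    sigma = np.asarray(sigma, dtype=float)
    s2 = float(np.dot(sigma, sigma))
    return -sigma / s2 if s2 > 1.0 else sigma
```

You would call one of these right after each integration step; re-orthogonalizing a DCM would play the same role if you carried the nine direction cosines instead.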
If and only if means you're only awake if the lights are on. If you're awake, the lights must be on, right? Now, you can go both ways with those arguments, right, and that's a thing we use a lot in control, so we'll be looking for that a little bit. This is coming up in today's lecture as well, whether that's an if statement or an if-and-only-if part of a mathematical proof. So we'll be covering that. So neighborhoods, we have this L2 norm that we're dealing with, good. Then we have different types of stability that we discussed. What was the simplest type of stability? >> Lagrange. >> Lagrange, what does Lagrange mean in basic words? >> Bounded input, bounded output? >> Yeah, essentially, it's a bounded response. That means there is a neighborhood delta somewhere, this is your x_r, such that at some point you enter this tube, and that's typically set as t naught, just out of convenience. At some time, you've entered it, and then you remain within this bound forever, right? That's great. Now, the key question, though, is does this neighborhood delta, this bound that you have, is it a function of the initial conditions? What do you guys think? Robert. Remember? Andrew? >> No, it's not. >> No, it's not, that's the key thing, right? And the example we had was, think of a spring-mass system, a spring-mass-damper system. Just floating in space, it would oscillate and settle down at the zero equilibrium. So now, you're studying this equilibrium subject to a disturbance, and you're treating gravity as a disturbance; it's going to settle to this deflection. But it settles to the same deflection no matter how big you bumped it. It's always going to come to the same one. So it doesn't depend on initial conditions, right? It's just a quick visual way to think of this. What's the next stronger level of stability? We have Lagrange boundedness, then we have the Lyapunov, good. So Matt, talk me through the Lyapunov now. All right, we have some initial delta, [SOUND]. >> So for the Lyapunov, for any final epsilon, there's some initial delta such that if you start in the B sub delta, you will end up in the B sub epsilon. >> And stay within it, right? We talked about the separatrix motion, the astronaut spinning that key. It's kind of stable for a short period, but then it flips around, then it's stable over here, and then it flips again, and it continues this. So that would never fit the stability requirements, because it might be there for a short period of time, but at some point it leaves again and then it comes back. And that's a whole different kind of a thing, okay, CK. >> I had a question on that, because when it's flipping around, it's sort of flipping within these two, like, defined equilibria. So could you bound around that and call that- >> You could call it bounded. You could come up with boundedness arguments for that one, and say that for the system, the rates aren't spinning up like crazy. Something could be unstable, like if you look at the YORP effects people keep studying on asteroids and debris. There, the spin's going to get bigger, and bigger, and bigger, and bigger, and bigger, and eventually these things are going to break apart. This spinning thing is not going to do that. It is bounded in its response, and you could argue some type of Lagrange stability around it, that I know it's not going to 6 billion RPMs all of a sudden, right? Exactly. But now, what's the analog we use here for the Lyapunov stability? Because we say we can pick any epsilon, any epsilon, really, really small.
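(To pin down the delta-epsilon language, the two notions just discussed are commonly written about a reference motion x_r roughly as follows; the exact notation in the book may differ slightly.)

```latex
% Lagrange (bounded) stability: some finite bound holds for all time.
\exists\, \delta > 0 \;:\; \|\mathbf{x}(t) - \mathbf{x}_r(t)\| < \delta \quad \forall\, t \ge t_0

% Lyapunov stability: for any target tube of radius epsilon, there is an
% initial neighborhood of radius delta(epsilon) that keeps the motion inside it.
\forall\, \epsilon > 0 \;\; \exists\, \delta(\epsilon) > 0 \;:\;
\|\mathbf{x}(t_0) - \mathbf{x}_r(t_0)\| < \delta
\;\Rightarrow\; \|\mathbf{x}(t) - \mathbf{x}_r(t)\| < \epsilon \quad \forall\, t \ge t_0
```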
Now, the corresponding delta might be really, really small too, but you can find it, right? What kind of a mechanical system can you pick? I want to be oscillating within one degree, 0.1 degrees, 1 arc second, and for all of these, find an initial condition that puts you there. >> [INAUDIBLE] >> No damper, you're close. It's just a spring-mass system. If you look at a classic spring-mass system, you can deflect it, right? And with stability, we are always talking about a perturbation. You can't go, well, then I just don't deflect it. You can do anything but nothing. [LAUGH] Right? You can't do nothing, you have to do something. And so you bump it infinitesimally, and whatever infinitesimal bump you gave it, it's just going to wiggle, and there's no damping, right? It's just going to wiggle. So that's why we can never set epsilon here to zero. You can make it infinitesimally small, and then the perturbations are just infinitesimal. And if you're within that regime, I'm jittering within one arc second, fine, okay, great. That works. So Lyapunov stability is typically also just referred to as stability if you hear people in nonlinear control talking about this. That is something that guarantees that, yes, I can now pick the region that I'll be in by picking an appropriate initial condition set, right? So here, the delta depends on the epsilon you pick. So stability is good. That gets you there, and we now get some more control over how much deflection we'll have at the end. Boundedness, you don't have such control. What's the next level of stability, the stronger stability beyond Lyapunov stable? >> Asymptotic. >> Asymptotic, okay? Now, asymptotic basically means epsilon is going to go to zero. That would be a spring-mass-damper system, essentially, right? Everything is going to converge to zero. But here's also a challenge. If you go to linear systems, if things are stable, not marginally stable, stable, all the roots on the left-hand side, you have an exponentially decaying response in all those different modes in a linear system. That means you have this exponential decay, and if we plot this on a log scale, your errors are basically decaying in a straight line, right? This gives you a performance guarantee. I can come up with a half-life and say, okay, in 30 seconds, I want my errors to decay by half. Another 30 seconds, they're another half. Another 30 seconds, they're another half, right? That's great. Every linear system acts that way. Nonlinear systems, that's not true. Asymptotic stability just means this error will do something, and eventually it's going to go to zero, if that's your reference that you have, right? It gives no guarantee on performance, and that's a big tricky thing. So in nonlinear control papers, you often see people arguing forever about stability. And they've got this wonderful, great, crazy control and it's stable. Wonderful, but is it worth anything? Because this control may take 6 million years to converge. You've proven it converges after much effort, but are you that patient, right? And you can come up with weird nonlinear systems that you guarantee will get there, but man, the convergence rate is just atrocious, right? So there's actually an extra level of stability, people argue. Asymptotic stability we'll show you today, and we'll use it in class. There's an extra, even stronger argument in nonlinear control called exponential stability.
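(For reference, the exponential stability guarantee about to be described is usually written as a bound of roughly this form; the constants alpha and lambda, and whether a matching lower bound is included, vary by textbook.)

```latex
\|\mathbf{x}(t) - \mathbf{x}_r(t)\| \;\le\; \alpha\, \|\mathbf{x}(t_0) - \mathbf{x}_r(t_0)\|\, e^{-\lambda (t - t_0)},
\qquad \alpha,\ \lambda > 0
```

A decay rate of lambda = ln(2)/30 per second, for example, is exactly the 30-second half-life guarantee just mentioned.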
And that basically means, if you look in the book you'll find the definitions, that you can upper and lower bound the response by an exponentially decaying function. So whatever craziness happens in between may not follow exactly a linear response, but it's decaying fast enough that I can bound it by an exponential. So that gives you a performance guarantee that, hey, with this, I know that in 30 seconds the error is at most half as big, maybe it has decayed even more. We're not going to deal much with exponential stability in this class, but if you're curious, I just wanted to highlight that. So you can see, there are different levels of strength, and as we go from Lagrange, to stable, to asymptotically stable, to exponentially stable, there are always more and more and more things you have to argue. Whereas linear systems, once it's stable, you're done. Linear systems, are they locally stable or globally stable? Global or local? You guys are way too shy. >> Global. >> Global, thank you, yes. It's good. Spring-mass system, the math, nothing says you can only deflect it 1 meter; 1.5, you're really pushing it. Come on, what are you thinking? There's nothing in there about that. You can throw in any number you wish, and it'll oscillate. Whereas nonlinear systems, and we've seen that with the equation highlighted, there are actually multiple equilibria. Some of them are stable, some of them aren't. Your arguments may only be good locally. We've also talked about this planar pendulum. This is one that you look at in the homework as well, right? And what you're going to find is this system is actually stable. It's globally stable. So even in this one, I can come up with a bound, as CK was talking about, and say, hey, if you're good within 180 degrees, this system is stable. But it's not globally asymptotically stable. It's only local, as we argued last time, because with damping, if by some fluke you give it the perfect perturbation that stands it straight up, it will never converge. So it is one singular configuration that makes this whole thing not globally stable, but it's something, if you're a mission designer, you'd be losing sleep over, that one singular configuration, right? And you'd try to avoid this somehow, put in some mitigation strategy to not get stuck there.
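(To make that last point concrete, here is the earlier pendulum sketch with a damping term added; the damping coefficient and frequency are illustrative values of my own. Started exactly inverted it just sits there, started anywhere else it decays to the hanging equilibrium, which is exactly why the result is only local, not global, asymptotic stability.)

```python
import numpy as np
from scipy.integrate import solve_ivp

C, OMEGA_N = 0.2, 1.0  # illustrative damping coefficient and natural frequency

def damped_pendulum(t, x):
    """theta_ddot + c*theta_dot + omega_n^2*sin(theta) = 0 as a first-order system."""
    theta, theta_dot = x
    return [theta_dot, -C * theta_dot - OMEGA_N**2 * np.sin(theta)]

for x0 in ([np.pi, 0.0],          # perfectly inverted: an equilibrium, never falls
           [np.pi - 0.01, 0.0]):  # bumped ever so slightly: decays toward theta = 0
    sol = solve_ivp(damped_pendulum, (0.0, 200.0), x0, max_step=0.05)
    print(x0, "-> final theta:", round(sol.y[0, -1], 4))
```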