Now, let's look at the mapping to and from the direction cosine matrix. I'm not asking you to do this, but you can use the quaternion definition, plug in that beta i is q i times beta naught, and use the identity. This can be rewritten, so we get this. With the quaternions, we only had quadratic terms in here, which is kind of nice. Here we have quadratic terms plus a denominator, so it gets a little bit more complicated. Now you have fractions of quadratic polynomials. Still not quite the complexity of sines and cosines, but still, this is reasonably fast to compute. And as you would expect, there's a singularity, because as I go to 180 degrees, these terms go to infinity, and you get infinity over infinity, which gives you a finite answer, but it's a hard way to find it. [LAUGH] All right, so it's not good for 180 degrees. This is the matrix component version. And this is if you do matrix math, where I'm using an inner product, q transpose q, q dotted with q; this is a vector outer product; and that's the vector cross product term, all right. So you can look at these terms, find the patterns, and quickly prove to yourself this is equivalent to this. In homeworks when you're doing this, or even in research, if I have to do analysis with this C times something else and multiply it out, you do not want to use the component version. That version might be easy for programming, because then I can just tell the computer exactly what it is; it's very fast, and I'm not calling lots of extra math libraries. But if I'm doing analysis, the matrix version is the one that's going to be much more elegant and quick. Why is that? Let's look at this. Let's say we have some math that says C times q, right? What are we going to end up with? We have one minus q transpose q, that is one minus the q norm squared essentially, times identity, times q. What is identity times q, Russell? >> It's just q. >> It is q, right. What is q q-transpose q?
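To make that matrix form concrete, here is a minimal sketch in plain Python (no libraries; the names crp_to_dcm and tilde are my own labels, not from the lecture) that builds the DCM from the compact expression [C] = ((1 - q^T q) I + 2 q q^T - 2 [q~]) / (1 + q^T q):

```python
# Sketch: DCM from a CRP (Gibbs) vector q via
#   [C] = ((1 - q^T q) I + 2 q q^T - 2 [q~]) / (1 + q^T q)
# Plain Python; crp_to_dcm and tilde are hypothetical helper names.

def tilde(q):
    """Skew-symmetric cross-product matrix [q~]."""
    q1, q2, q3 = q
    return [[0.0, -q3, q2],
            [q3, 0.0, -q1],
            [-q2, q1, 0.0]]

def crp_to_dcm(q):
    s = sum(qi * qi for qi in q)      # inner product q^T q
    d = 1.0 + s                       # denominator; |q| -> infinity at 180 deg
    qt = tilde(q)
    return [[((1.0 - s) * (1.0 if i == j else 0.0)  # identity term
              + 2.0 * q[i] * q[j]                   # outer product term
              - 2.0 * qt[i][j]) / d                 # cross product term
             for j in range(3)] for i in range(3)]
```

Only quadratic terms and one division show up, which is why this is so cheap to evaluate; the singularity sits entirely in |q| = tan(Phi/2) blowing up as the principal rotation angle Phi approaches 180 degrees.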
What is that going to give me, Nick? >> The outer product? >> Not quite. I have q q-transpose q; I have three terms, because I've multiplied out this whole C times q. >> Sorry. >> Let's just go to the paper and answer the question. This is my question: if you have to evaluate q times q-transpose times q, what is that going to simplify to? This part gives the outer product, but this part is what? >> The inner product. >> Inner product, so it's just the q vector times the q norm squared, and you can factor that out. Earlier we had some other stuff; the other part was one minus something, what was that? Where does that go? Okay, there we go, it was one minus q squared. This gives you plus two q squared times the q vector: this is q q-transpose times q, and the q-transpose q is the norm squared, so you can factor out the q vector. So it's one minus q squared plus two q squared, which gives you one plus q squared. Those things just combine. Okay, I see some eyelids going up, let me do that slowly. >> I believe you. >> [LAUGH] It's (1 - q²) times identity, right, plus 2 q q-transpose, and then there was minus 2 q-tilde. If we take this times q, let's look at the easy part first: the q-tilde matrix times q, what's that going to give me? >> Zero. >> Zero, right, it's the cross product of q with itself. So we like that part. This here, identity times q, gave you (1 - q²) times the q vector. Here you have two times the q vector times q-transpose q, which you said is just the norm squared, the inner product. So you can factor out the q vector if you wish, and the scalar is one minus q squared plus two q squared, so the whole thing is q times (1 + q²); divide by the one plus q squared out front, and C times q is just q. This is way faster. I challenge any of you to do that in matrix component form this quickly. So in the homework, this is a capability that I'm asking you to practice. Later, in optimal control theory, you will see this math all the time: there will be transposes left and right, inner and outer products. Train your brain to spot these patterns and identities. So with any math like this, don't jump to the component form.
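The simplification above can also be checked numerically term by term; here is a small sketch (the name c_times_q is my own) that evaluates (1 - q^T q) q + 2 q (q^T q) - 2 (q x q) and divides by 1 + q^T q, exactly the steps just worked out:

```python
# Numeric check of the simplification worked out above:
#   (1 - q^T q) q + 2 q (q^T q) - 2 (q x q) = (1 + q^T q) q
# so C times q collapses to q once the 1/(1 + q^T q) prefactor is included.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def c_times_q(q):
    s = sum(qi * qi for qi in q)        # inner product q^T q
    qxq = cross(q, q)                   # [q~] q = q x q = 0
    num = [(1.0 - s) * q[i]             # identity term
           + 2.0 * q[i] * s             # outer product term: q (q^T q)
           - 2.0 * qxq[i]               # cross product term, vanishes
           for i in range(3)]
    return [ni / (1.0 + s) for ni in num]
```

Geometrically this is just the statement that the CRP vector lies along the principal rotation axis, so the rotation maps it onto itself.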
Do this kind of matrix math and you'll be able to solve fairly complicated nonlinear things in a very compact and very efficient way. Especially in an exam, that's what I would expect; if you do component form, you'll just run out of time. So, okay, we have this definition. There we go, that's the matrix definition. That's how, if you have CRPs, you go to the DCM. And the inverse? Well, there's no easy one. The easiest way to do the inverse is really to go through the quaternions: we use Shepperd's method, as we explained last time, and then you just take the beta i's over beta naught, that last step right there. I haven't seen any more compact, elegant method that does it in a completely nonsingular way. So just go there, and if beta naught goes down to zero, you know you have something that's going to be singular, and you can do a nice check in your code to make sure you don't feed back control on it. Now, here's another interesting identity. If you look at these properties, we know that the inverse of a DCM is the transpose of the DCM; it's an orthogonal matrix. So we get that, that's cool. But here, if we have an attitude q that takes you from N to B, right, that gives you the BN matrix. If I do minus q, the corresponding DCM is the transpose of the original, which means minus q, instead of being B relative to N, is all of a sudden N relative to B. It's the inverse rotation. This doesn't work with Euler angles, for example. If I give you 20 degrees, 30 degrees, 10 degrees, 3-2-1, to go backwards you can't just do minus 30, minus 20, minus 10; you don't end up in the same orientation. But if you deal with CRPs, it literally just means: if you have a set of CRPs q of 0.1, 0.2, 0.3, and this takes you B relative to N, and for some reason you need the inverse attitude to go from B to N using the CRP description, it's trivial. All you have to do is reverse the sign, and that's actually very handy in lots of little applications.
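Since the route for the inverse mapping is Shepperd's method followed by q_i = beta_i / beta_0, here is a sketch of that path in plain Python (dcm_to_crp is my own name, and the tolerance used for the beta-naught singularity check is an arbitrary illustrative choice):

```python
# Sketch: DCM -> CRPs via a Shepperd-style quaternion extraction,
# then q_i = b_i / b_0. Singular (180 deg) exactly where b_0 -> 0.
import math

def dcm_to_crp(C):
    tr = C[0][0] + C[1][1] + C[2][2]
    # Squares of the four quaternion components (Shepperd's method):
    b2 = [(1.0 + tr) / 4.0,
          (1.0 + 2.0 * C[0][0] - tr) / 4.0,
          (1.0 + 2.0 * C[1][1] - tr) / 4.0,
          (1.0 + 2.0 * C[2][2] - tr) / 4.0]
    k = b2.index(max(b2))       # largest term gives the best-conditioned divisor
    b = [0.0] * 4
    b[k] = math.sqrt(b2[k])
    if k == 0:
        b[1] = (C[1][2] - C[2][1]) / (4.0 * b[0])
        b[2] = (C[2][0] - C[0][2]) / (4.0 * b[0])
        b[3] = (C[0][1] - C[1][0]) / (4.0 * b[0])
    elif k == 1:
        b[0] = (C[1][2] - C[2][1]) / (4.0 * b[1])
        b[2] = (C[0][1] + C[1][0]) / (4.0 * b[1])
        b[3] = (C[2][0] + C[0][2]) / (4.0 * b[1])
    elif k == 2:
        b[0] = (C[2][0] - C[0][2]) / (4.0 * b[2])
        b[1] = (C[0][1] + C[1][0]) / (4.0 * b[2])
        b[3] = (C[1][2] + C[2][1]) / (4.0 * b[2])
    else:
        b[0] = (C[0][1] - C[1][0]) / (4.0 * b[3])
        b[1] = (C[2][0] + C[0][2]) / (4.0 * b[3])
        b[2] = (C[1][2] + C[2][1]) / (4.0 * b[3])
    if abs(b[0]) < 1e-8:        # the singularity check mentioned in the lecture
        raise ValueError("near 180 deg: CRPs are singular")
    return [b[1] / b[0], b[2] / b[0], b[3] / b[0]]
```

For a pure 90-degree rotation about the third axis, this returns q = [0, 0, 1], which matches tan(90/2 deg) = 1 along the rotation axis. The overall sign ambiguity of the quaternion drops out of the ratios, so either sign of beta gives the same CRPs.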
But that's a very convenient feature that we have. Now, why is that the case? These things drive me nuts if we just assert them, so let's look at this. If you transpose this, this is just a scalar, that's a scalar, that's a scalar; this is a scalar. In a matrix transpose, scalars don't matter. Identity transposed is just itself, that's easy. The outer product transposed: (A B) transpose is B-transpose A-transpose, so you get back the same outer product; it's actually a symmetric matrix, it gives you back the same thing. So transposing that doesn't do anything. Transposing the q-tilde here: if you transpose q-tilde, Jordan, what happens then? >> It's skew-symmetric, so you get the negative. >> You get a negative sign all of a sudden, right. So if I just took the C and transposed it, this doesn't matter, this doesn't matter, but this has a sign flip. Now let's look more carefully: how does that compare? If I do minus q, the scalar part here is q transpose q, and minus q transpose times minus q is the same thing as q transpose q, right, the minus signs don't matter. Here too, the outer product: minus q times minus q-transpose is the same thing as q q-transpose, doesn't matter. Here also it's quadratic, doesn't matter. The only place the sign matters is the q-tilde term, and that's how you can quickly validate that minus q gives you C transposed, the inverse rotation. So that's another good thing that we have. Next, let's do matrix additions and subtractions; that becomes handy.
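That sign argument can be checked piece by piece in a few lines of plain Python (tilde and outer are my own helper names): the inner product and the outer product are quadratic and so unchanged under q going to minus q, while the skew-symmetric q-tilde flips sign, which is exactly the transpose of C.

```python
# Check the three building blocks of [C] under q -> -q:
#   q^T q and q q^T are quadratic, so they are sign-invariant,
#   while [q~] flips sign -- the same flip a transpose produces,
# hence C(-q) = C(q)^T, the inverse rotation.

def tilde(q):
    q1, q2, q3 = q
    return [[0.0, -q3, q2], [q3, 0.0, -q1], [-q2, q1, 0.0]]

def outer(q):
    return [[q[i] * q[j] for j in range(3)] for i in range(3)]

q = [0.1, 0.2, 0.3]
m = [-qi for qi in q]                 # the "minus q" attitude

inner_same = abs(sum(x * x for x in m) - sum(x * x for x in q)) < 1e-15
outer_same = all(abs(outer(m)[i][j] - outer(q)[i][j]) < 1e-15
                 for i in range(3) for j in range(3))
tilde_flips = all(abs(tilde(m)[i][j] + tilde(q)[i][j]) < 1e-15
                  for i in range(3) for j in range(3))
```

Since the q-tilde term is the only antisymmetric part of the matrix, flipping its sign is the same operation as transposing C, which is the whole argument in miniature.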