0:00

As I've already said, the world is fundamentally dynamic, changing, and unknown to the robot, so it does not make sense to overplan and think very hard about how to act optimally given assumptions about what the world looks like. That may make sense if you're designing controllers for an industrial robot at a manufacturing plant, where the robot is going to repeat the same motion over and over and over again. You're going to do spot welding, and you're going to produce the same motion 10,000 times in a single day. Then you'll overplan; then you'll make sure that you're optimal. But if a robot is out exploring an area where it doesn't know exactly what's going on, you don't want to spend all your computational money on finding the best possible way to move, because it's not actually going to be best, because the world is not what you thought it was. So the key idea for overcoming this, which is quite standard in robotics, is to simply develop a library of useful controllers. These are controllers that do different things.
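As a very rough sketch of what such a library can look like in code (all of the behavior names, the returned commands, and the switching logic here are invented for illustration, not taken from the course), you can think of it as a map from behavior names to control laws, with a supervisor picking which one is currently active:

```python
# Each behavior maps the robot's current "situation" to a control decision.
# These are stubs that just name the idea; real behaviors would compute
# actual control inputs. All names here are illustrative inventions.

def go_to_goal(situation):
    return "steer toward the goal"

def avoid_obstacles(situation):
    return "steer away from the nearest obstacle"

def recharge(situation):
    return "steer toward the power outlet"

behaviors = {
    "go_to_goal": go_to_goal,
    "avoid_obstacles": avoid_obstacles,
    "recharge": recharge,
}

def pick_behavior(situation):
    # Switch among behaviors in response to what the environment throws at us.
    if situation.get("obstacle_close"):
        return "avoid_obstacles"
    if situation.get("battery_low") and situation.get("outlet_seen"):
        return "recharge"
    return "go_to_goal"

situation = {"obstacle_close": True}
active = pick_behavior(situation)
command = behaviors[active](situation)
```

The point is the architecture, not the stubs: the controllers live in a library, and a separate switching rule decides which one runs at any given moment.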

Like going to landmarks, or avoiding obstacles. We saw one that tried to make robots drive toward the center of gravity of their neighbors. Basically, we can have a library of these useful controllers, behaviors if you will, and then we switch among these behaviors in response to what the environment throws at us. If all of a sudden an obstacle appears, then we avoid it. If we see a power outlet and we're low on battery, then we go and recharge. So we're switching to different controllers in response to what is going on. What I would like to do now is start designing some behaviors, just to see how the bit of control design we learned in module one can be used to build some behaviors. So let's assume we have our differential-drive mobile robot. And to make matters a little easier up front, we're going to assume that the velocity, the speed, is constant: v-naught. We're not going to change how quickly the robot is moving; what we can change is how we're steering. So you're basically sitting in a car on cruise control, where the speed isn't changing, and you steer it. That's your job. And the question is, how should you actually steer the robot around? So,

this is the equation that governs how the input omega affects the state we're interested in, which in this case is phi, the heading of the robot: phi-dot is equal to omega. Okay, so let's say that we have our yellow triangle robot; it's a unicycle, or differential-drive, robot. It's headed in direction phi, so this is the direction it's pointing in. And, for some reason, we have figured out that we want to go in this other direction, phi-desired, or phi-sub-d. Maybe there is something interesting over here that we're interested in. So we want to drive in this direction. Well, how should we actually do this? Well, phi-dot is equal to omega, so our job is clearly that of figuring out what omega, the control input, should be equal to. Alright, so how do we do that? Well, you know what? We have a

reference, phi-desired. Well, in module one we called references r, right? We have an error, e, that compares the reference, phi-desired, to what the system is doing, in this case phi. So it's comparing the headings. So we have an error, we have a reference, and, you know what, we have dynamics: phi-dot is equal to omega. So we have everything we had towards the end of module one, and we even know how to design controllers for that. How should we do that? Well, we saw PID, right? That's the only controller we've actually seen. So why don't we try a PID regulator? That seems like a perfectly useful way of building a controller. So, you know what: omega is Kp times e, where Kp is the proportional gain. This responds to what the error is right now; make Kp large and it responds quicker, but you may induce oscillations. Then you have the integral of the error: you take the integral of e of tau, d-tau, times K-sub-i, which is the integral gain. And this integral has the nice property that it's summing up all these tiny little tracking errors we may have, and after a while the integral becomes large enough that it pushes the system to zero tracking error. That's a very good feature of the integral, even though, as we saw, a big Ki can actually also induce oscillations. And then we could have a D term, Kd times e-dot, where Kd is the gain for the derivative part. This makes the system very responsive, but it can become a little bit oversensitive to noise. So, will this work? No, it won't, and I will now tell you why. In this case

we're dealing with angles, and angles are rather peculiar beasts. Let's say that phi-desired is 0 radians, and my actual heading, phi, is 100 pi radians. Then the error is minus 100 pi radians, which means that this is a really, really large error.
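To make that concrete, here is a small sketch of what a naive proportional term would do with this error (the gain value is made up purely for illustration):

```python
import math

# Naive heading error: reference minus actual, with no wrapping.
phi_d = 0.0            # desired heading [rad]
phi = 100 * math.pi    # actual heading [rad]; physically the same direction as 0
e = phi_d - phi        # naive error: -100*pi, about -314.16 rad

Kp = 2.0               # illustrative proportional gain (made up)
omega = Kp * e         # commanded turn rate: about -628 rad/s, ginormous
```

Even though the robot is already pointing the right way, the naive error makes the controller command a huge spin.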

So omega is going to be ginormous. But that doesn't seem right, because 100 pi radians is the same as zero radians, right? So the error should actually be zero, and we should not be naive when we're dealing with angles. In fact, this is something we should be aware of: angles are rather peculiar beasts, and we need to deal with them carefully. There are famous robot crashes that have been caused by this, where the robot starts spinning like crazy even though it shouldn't, because it thinks it's 100 pi radians off instead of zero radians off. So what do we do about it? Well, the solution is to ensure that the error is always between minus pi and pi. Minus 100 pi, well, that's the same thing as zero. So we need to ensure that, whatever we're doing, we're staying within minus pi and pi. And there is a really easy way of doing that. We can use

a function: arctangent-two, or atan2. Any language has a library with it, and it operates the same way everywhere; it's a way of producing angles between minus pi and pi. C++ has it, Java has it, MATLAB has it, Python has it, whatever you use. So you can always do this, and how do you do it? Well, you take the angle, which is now maybe 1,000,000 pi, and you take atan2 of the sine of it, comma, the cosine of it. This is the same as saying that we're really doing arctan; I'm going to write it as tan-inverse of sine e over cosine e. But with arctan, or tan-inverse, it's not clear what it always returns, while with atan2, where you have a comma in it, you always get something that's within minus pi and pi. So here's what you need to do: whenever you're dealing with angles and you're acting on them, it's not a bad idea to wrap one of these atan2 lines around them, to ensure that you are indeed getting values that aren't crazy. So, with the little caveat that we're going to use e-prime, the wrapped error, instead of e, the PID regulator will work like a charm. Okay, so here is an

example problem. We've already seen this picture: this is the problem of driving the robot, which is the little blue ball, to the goal, which is the sun, apparently. Let's see if we can use this PID control design on omega to design controllers that take us to the sun, or to the goal. And since we're dealing with obstacles and we're dealing with goal locations, and we're also talking about behaviors, at the minimum we really need two behaviors: go-to-goal and avoid-obstacles. So what we're going to do over the next couple of lectures is develop these behaviors, and then deploy them on a robot and see if they're any good or not.
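The pieces discussed here can be sketched as a heading PID that uses the atan2-wrapped error e-prime. The gains, time step, and step count below are illustrative values I made up, not the course's tuned design, and a full go-to-goal behavior would also compute phi-desired from the direction to the goal (for instance atan2 of the position difference); that part is omitted.

```python
import math

def wrapped_pid_omega(phi_d, phi, state, Kp=4.0, Ki=0.1, Kd=0.0, dt=0.05):
    """One step of the heading PID, acting on e' = atan2(sin e, cos e).

    `state` carries the running integral of the error and the previous
    error for the derivative term. Gains and dt are illustrative only.
    """
    e = phi_d - phi
    e = math.atan2(math.sin(e), math.cos(e))  # wrap the error into (-pi, pi]
    state["E"] += e * dt                      # integral of the error
    e_dot = (e - state["e_prev"]) / dt        # derivative of the error
    state["e_prev"] = e
    return Kp * e + Ki * state["E"] + Kd * e_dot

# Drive the heading dynamics phi-dot = omega toward phi_d with Euler steps.
# Start at 100*pi + pi/2: a heading that "looks" huge but is really just pi/2 off.
phi, phi_d, dt = 100 * math.pi + math.pi / 2, 0.0, 0.05
state = {"E": 0.0, "e_prev": 0.0}
for _ in range(200):
    phi += wrapped_pid_omega(phi_d, phi, state, dt=dt) * dt
```

Because the error is wrapped, the controller turns the quarter-turn it actually needs instead of unwinding fifty full revolutions: the heading settles near 100 pi, which is the same direction as the desired heading of zero.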
