When we model a robot, we usually assume that we have control of the forces and torques at the joints, and that the resulting motion of the robot is determined by its dynamics. This is the model we will use starting in Chapter 11.4. It's simpler, however, and occasionally even more appropriate, to ignore the dynamics and assume that we have direct control of the joint velocities. This assumption might make sense if we trust local joint controllers to achieve the velocities we request. Also, for wheeled mobile robots it's common for a higher-level control system to command velocities of the wheels or chassis, leaving a lower-level control system to achieve those velocities. In Chapter 11.3, we study robot control when the controller directly commands velocities, not forces or torques.

We'll start with a robot with a single joint, since the ideas generalize easily. The first idea is to use open-loop control. Since we know the desired joint velocity at any instant, our controller could simply command this desired velocity at all times. This is called open-loop control, or feedforward control, because there is no sensing of the actual joint position to close a feedback loop. If there is ever any error in the joint position, however, this open-loop approach cannot recover.

Essentially all robot controllers employ feedback, and the simplest closed-loop controller commands a joint velocity equal to a gain K_p times the error theta_e, where theta_e is the desired joint angle theta_d minus the actual joint angle theta. The gain K_p is called a proportional gain, since the control theta-dot is proportional to the error. This type of control is called proportional control, or P control for short. The gain K_p should be positive to ensure stability. For example, if the goal configuration is 1 radian and the actual configuration is zero, the error is positive, and a positive gain K_p would command a positive velocity of the joint, pulling the joint toward the goal configuration.
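The P control law can be sketched in a few lines of simulation. This is a minimal sketch with assumed values for the gain, setpoint, and time step (none of these numbers come from the lecture); it integrates the commanded velocity theta-dot = K_p theta_e with simple Euler steps.

```python
# Minimal sketch of P control of a single joint under direct velocity control.
# All numeric values (gain, setpoint, time step) are illustrative assumptions.
Kp = 2.0        # proportional gain (1/s), must be positive for stability
theta_d = 1.0   # goal configuration (rad)
theta = 0.0     # actual configuration (rad)
dt = 0.001      # Euler integration time step (s)

for _ in range(int(5.0 / dt)):   # simulate 5 s, many time constants 1/Kp
    theta_e = theta_d - theta    # position error
    theta += Kp * theta_e * dt   # commanded velocity Kp*theta_e, times dt

print(theta)  # error has decayed essentially to zero: theta is near 1.0
```

Note that a negative K_p would flip the sign of the update, driving theta away from theta_d, which is the instability described above.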
If the gain K_p were negative, the joint would move away from the goal configuration, with increasing velocity the further it gets from the goal.

Let's take a look at the case where the desired velocity is zero. This is called setpoint control, because we are controlling the joint to a constant value. Then the rate of change of the error is just the negative of the joint velocity. Plugging in the P controller theta-dot equals K_p theta_e, we get the differential equation theta_e-dot equals minus K_p theta_e. This can be written in our standard first-order form with a time constant of 1 over K_p. The unit step error response is shown here. The larger K_p, the faster the error converges to zero. In practice, there are limits on how large we can choose K_p. With a large K_p, the joint might have excessive vibration, as small position errors produce large velocities. Also, actuators have limited maximum velocity, and if the control law is often hitting those limits, then the response of the controller is no longer well modeled by our simple linear differential equation.

Now assume the desired trajectory has a constant velocity c. Then the rate of change of the error can be expressed as theta_d-dot minus theta-dot, and plugging in c for theta_d-dot and the P controller for theta-dot, we get the first-order nonhomogeneous differential equation theta_e-dot equals c minus K_p theta_e. The dynamics are stable for a positive K_p, but the solution to the differential equation shows us that as t goes to infinity, the steady-state error approaches c over K_p, not zero. Although this error can be made small by choosing K_p large, as we just discussed, there are limits on how large we can reasonably choose K_p. The key limitation is that the P controller needs error to command a nonzero velocity. So, while proportional control can eliminate all error when stabilizing a setpoint, it cannot eliminate all error when the desired motion has a nonzero velocity.
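The nonzero steady-state error under a constant-velocity reference can be seen numerically. This is a sketch with assumed values for the gain, reference velocity, and time step; it tracks a reference theta_d that grows at constant rate c and shows the error settling at c over K_p rather than zero.

```python
# Sketch: P control tracking a constant-velocity reference theta_d(t) = c*t.
# All numeric values are illustrative assumptions.
Kp = 2.0       # proportional gain (1/s)
c = 0.5        # constant desired velocity (rad/s)
dt = 0.001     # Euler integration time step (s)
theta_d = 0.0  # reference position (rad)
theta = 0.0    # actual position (rad)

for _ in range(int(10.0 / dt)):  # 10 s is many time constants 1/Kp
    theta_d += c * dt            # reference advances at constant velocity
    theta_e = theta_d - theta    # tracking error
    theta += Kp * theta_e * dt   # P controller: theta-dot = Kp * theta_e

print(theta_e)  # settles near c / Kp = 0.25, not zero
```

The controller only commands velocity in proportion to the error, so it must maintain an error of c over K_p just to keep up with the reference, matching the steady-state analysis above.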
In the next video, we will introduce another feedback controller, called a proportional-integral controller, to address this issue.