How do robots determine their state in real time and extract information about their surroundings from noisy sensor measurements? In this module, you will learn how robots incorporate uncertainty into their estimates and learn from a dynamic, changing world. Special topics include probabilistic generative models and Bayesian filtering for localization and mapping.


From the course by University of Pennsylvania

Robotics: Estimation and Learning

255 ratings


From the lesson

Bayesian Estimation - Target Tracking

We will learn about the Gaussian distribution for tracking a dynamical system. We will start by discussing the dynamical systems and their impact on probability distributions. This linear Kalman filter system will be described in detail, and, in addition, non-linear filtering systems will be explored.

- Daniel Lee, Professor of Electrical and Systems Engineering

School of Engineering and Applied Science

In this lecture we will discuss the dynamical system and measurement models that underlie the Kalman filter.

The mathematical lenses of dynamical systems and probability help us model motion and noise.

To motivate us, we will use a position tracking example. In order to track a moving object, the robot must model the dynamical system of motion.

The dynamical system describes how the state of the object changes in time, as well as how the robot measures the state.

In a simple example, the state, x, will be indexed by time steps, t. The state comprises position, v, in meters, and velocity, dv/dt, measured in meters per second.

Due to dynamics, the state changes at each time step, going from the current time step t to the next time step t+1. This change is captured by A, the state transition matrix, sometimes notated as Φ.

The state transition matrix combines state information to describe the state at the next step in time. The transition restricts the current state to depend only on the previous state, making our mathematical lives easier.

With the state being position and velocity, we know that the position must change in time based on the velocity. The state transition matrix captures this with the given formulation.
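As a concrete sketch (my own illustration, not from the lecture), assuming a state of [position, velocity] and a hypothetical time step dt, the constant-velocity transition x_{t+1} = A x_t looks like:

```python
# A minimal sketch of the constant-velocity state transition, assuming the state
# is [position, velocity] and a hypothetical time step dt (values are illustrative).
def transition(state, dt=1.0):
    """Apply x_{t+1} = A x_t with A = [[1, dt], [0, 1]]."""
    pos, vel = state
    # Position advances by velocity * dt; velocity stays constant under this model.
    return [pos + dt * vel, vel]

state = [0.0, 2.0]          # at the origin, moving at 2 m/s
state = transition(state)   # -> [2.0, 2.0]
```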

Unfortunately, the robot will not directly measure x, but it may observe portions of x through its sensors. This portion is labeled z, where the relationship between the state and the measurement is given by the mixing matrix, C.

For completeness, the term u is included, which represents any external input not dependent on the state, x. We will not explore this extra term in this module; instead, it is set to zero.

Critically, both x and z contain noise, even in this model. The state x is noisy because the linear model does not capture all physical interactions. The observations z are noisy because sensors contain noise in their measurements.
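To make both noise sources concrete, here is a hedged simulation sketch; the matrices A = [[1, dt], [0, 1]] and C = [1, 0], and the noise levels, are assumptions for illustration, not values from the lecture:

```python
import random

def step(state, dt=1.0, motion_sigma=0.05):
    """x_{t+1} = A x_t + noise: propagate the state, then add Gaussian motion noise."""
    pos, vel = state
    return [pos + dt * vel + random.gauss(0.0, motion_sigma),
            vel + random.gauss(0.0, motion_sigma)]

def measure(state, obs_sigma=0.5):
    """z_t = C x_t + noise, with C = [1, 0]: only the position is observed."""
    return state[0] + random.gauss(0.0, obs_sigma)

random.seed(0)                 # fixed seed so the sketch is reproducible
x = [0.0, 1.0]                 # start at the origin, moving at 1 m/s
for t in range(5):
    x = step(x)                # the true state drifts with motion noise
    z = measure(x)             # the robot only ever sees this noisy reading
```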

Based on the dynamical model, we can construct a graph of the information that we receive in time. At any given time, we know two pieces of information: the previous state, x of t-1, and the current measurement, z of t. With this information, we want to compute the current state, x of t.

Remember, we never trust a single value to represent our state or our measurement, due to noise. To provide an estimate that captures this uncertainty, we will transform this dynamical model into a set of probability expressions.

Given the state dynamics, the probability of the current state is conditioned on the probability of the previous state. Essentially, this means that our current state cannot move too far from our previously known state; a large movement would be probabilistically very unlikely.

One tricky thing is to relate the sensor reading z to the true state x. Our sensors only give us a single measurement, but we want a probability distribution. To do this, we estimate the probability of drawing the observation conditioned on where we are now, the state x of t.

For instance, seeing a ball right in front of us provides a very certain estimate with very little variance. Conversely, an observation made at a distance will have a wide variance, since we are not sure exactly how far away it is.
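A toy numeric sketch of this idea, assuming a scalar position; the function and the noise levels are illustrative, not from the lecture:

```python
import math

def obs_likelihood(z, predicted, sigma):
    """p(z | x): Gaussian density of the reading z around the predicted position."""
    return math.exp(-0.5 * ((z - predicted) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

near = obs_likelihood(1.0, 1.0, 0.1)    # ball right in front: small sigma, peaked density
far = obs_likelihood(10.0, 10.0, 3.0)   # distant observation: large sigma, flat density
```

Even when both readings land exactly on the prediction, the close observation yields a far more peaked (more certain) likelihood than the distant one.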

We will continue to use the Gaussian distribution as a nice mathematical model. For the Kalman filter, that means choosing a Gaussian to model the state, with mean and covariance, notated N(μ, Σ).

Now, given both the dynamical system model and the probabilistic model, we can combine the two ideas. First, the linear dynamical system means that we can utilize the state transition matrix A to model the probability of the next state based on the current state. Additionally, the probability of an observation z of t can be modeled through its relationship with the matrix C of the dynamical system model.

We then append the notion of noise to the system, in both the motion and the observations, through the noise terms ν_m and ν_o. Applying the Gaussian probability distribution model, both the state and the noise are represented with means and variances.

Before combining the knowledge of state evolution with the measurements made, we can consolidate the expressions based on certain properties of the Gaussian distribution. First, we can apply a linear transformation through the matrices A and C. Applying a linear transformation to a Gaussian distribution yields another Gaussian distribution, with modified mean and variance.

Similarly, we can add two independent Gaussian distributions to form yet another Gaussian, where the new mean and variance are the sums of the original two.
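These two properties can be sketched in one dimension (my own illustration with hypothetical numbers; in the multivariate case the transform gives mean Aμ and covariance AΣAᵀ):

```python
# 1) Linear transform: if x ~ N(mu, var), then a*x ~ N(a*mu, a^2 * var).
def transform(mu, var, a):
    return a * mu, a * a * var

# 2) Sum of two independent Gaussians: the means add and the variances add.
def add_gaussians(mu1, var1, mu2, var2):
    return mu1 + mu2, var1 + var2

scaled = transform(1.0, 2.0, 3.0)              # mean scales by a, variance by a^2
summed = add_gaussians(1.0, 2.0, 3.0, 4.0)     # (1+3, 2+4)
```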

In the next section, we will show how these two probability distributions can be fused to provide a better estimate of the true state.