How do robots determine their state in real time and extract information about their surroundings from noisy sensor measurements? In this module, you will learn how robots incorporate uncertainty into their estimates and learn about a dynamic, changing world. Special topics include probabilistic generative models and Bayesian filtering for localization and mapping.


A course from the University of Pennsylvania

Robotics: Estimation and Learning



From the lesson

Bayesian Estimation - Target Tracking

We will learn about the Gaussian distribution for tracking a dynamical system. We will start by discussing dynamical systems and their impact on probability distributions. The linear Kalman filter will be described in detail, and, in addition, non-linear filtering systems will be explored.

- Daniel Lee, Professor of Electrical and Systems Engineering

School of Engineering and Applied Science

In this lecture we will discuss the Maximum A Posteriori estimation technique

in relation to the Kalman filter.

We will apply the Maximum A Posteriori estimate, abbreviated MAP, to a Bayes' Rule

formulation of the state and observation information from the previous lecture.

We will then solve the maximization

in order to establish the Kalman Filter update method.

Bayes' Rule showcases a certain relationship between random variables.

The probability of a random variable alpha conditioned on prior information beta can be expressed

as the probability of alpha scaled by a factor based on beta.
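Stated symbolically, this is Bayes' Rule (standard notation, not taken from the lecture slides):

```latex
p(\alpha \mid \beta) \;=\; \frac{p(\beta \mid \alpha)\, p(\alpha)}{p(\beta)}
```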

From our previous lecture,

we are provided with certain probabilities of state and measurement.

We want to use this information to recover the true state x of t and

we can apply Bayes' Rule in our formulation.

From the dynamical system, the probability of the state given only the previous

state can be represented with the prior information alpha.

Representing the information from our measurement model,

beta provides observational evidence.

Conditioned on a state, this evidence presents

a constrained probability distribution known as the likelihood.

Altogether, Bayes' Rule helps us to formulate an expression for

the posterior probability.

The posterior probability represents our best estimate of the state x of t,

given information from both the previous state, x of t minus 1, and the observation, z of t.
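Written out, the posterior combines the likelihood from the measurement model with the prior from the dynamical system; the normalizer p of z of t is independent of the state, so proportionality suffices:

```latex
p(x_t \mid x_{t-1}, z_t) \;\propto\; \underbrace{p(z_t \mid x_t)}_{\text{likelihood}}\;\underbrace{p(x_t \mid x_{t-1})}_{\text{prior}}
```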

The Maximum A Posteriori estimation technique provides the optimal

estimate of the distribution.

This estimate will provide a basis for

the new mean of the Gaussian distribution representing our state x of t.

The MAP estimate is formed as an optimization problem

over all values in the posterior distribution.

We drop the probabilities that are independent of the state such as

the distribution of all measurements z of t unconditioned on the state x of t.

Fully expanded, we see a maximization over the product of Gaussians.

A trick to calculate the MAP estimate is to take the logarithm of the product.

The logarithm represents a monotonic function.

So the optimal value of x sub t in a logarithmic function

remains the optimal value of x sub t in the original function.
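As a sketch of this step (the symbols here are assumed notation, not the lecture's slides: mu p and Sigma p for the predicted mean and covariance, H for the observation matrix, Sigma o for the measurement noise), the logarithm turns the product of Gaussians into a sum of quadratics, and maximizing the log-posterior becomes minimizing that sum:

```latex
\hat{x}_t
= \arg\max_{x_t}\, \log\!\bigl[\,p(z_t \mid x_t)\,p(x_t \mid x_{t-1})\,\bigr]
= \arg\min_{x_t}\Bigl[(z_t - H x_t)^{\top}\Sigma_o^{-1}(z_t - H x_t)
  + (x_t - \mu_p)^{\top}\Sigma_p^{-1}(x_t - \mu_p)\Bigr]
```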

We will make some simple substitutions for

the variances, or covariances in two dimensions, to condense the expressions.

In the new summation expressions,

we solve the optimization by taking the derivative and setting it to 0.

The summation expression makes it easy to collect terms and solve for x of t.

Using the Matrix Inversion Lemma,

we can establish what is known as the Kalman gain.

After expanding terms, we can see that the Kalman gain shows how to update our state

based on the difference between our measurement, z of t, and

our predicted measurement.

The most likely estimate of the state is very important, but

the uncertainty matters as well.

The covariance of the state must be updated using the Kalman gain.

The expression for updating the covariance is provided here.

Deriving this expression yourself is a good exercise for comprehension.
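The update just described can be sketched in code. This is the standard textbook form of the Kalman measurement update, with hypothetical names (Sigma_p for the predicted covariance, H for the observation matrix, Sigma_o for the measurement noise covariance) rather than the lecture's exact notation:

```python
import numpy as np

def kalman_update(x_pred, Sigma_p, z, H, Sigma_o):
    """One Kalman measurement update: gain, new mean, new covariance."""
    # Innovation covariance: predicted measurement spread plus sensor noise.
    S = H @ Sigma_p @ H.T + Sigma_o
    # Kalman gain, from the Matrix Inversion Lemma derivation.
    K = Sigma_p @ H.T @ np.linalg.inv(S)
    # Shift the state toward the measurement by the gain-weighted innovation.
    x_new = x_pred + K @ (z - H @ x_pred)
    # Shrink the covariance: the measurement constrains the uncertainty.
    Sigma_new = (np.eye(len(x_pred)) - K @ H) @ Sigma_p
    return x_new, Sigma_new
```

For example, with a scalar state, identity observation, and equal prior and sensor variances, the gain is 0.5 and the updated mean lands halfway between prediction and measurement.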

Overall, we can present a pictorial example.

Shown as the ball's position,

we see that the ball is moving from left to right, from t minus 1 to t.

The predicted estimate, shown as the probability of x of t given x of

t minus 1, has a large spread because the motion model is not fully trusted.

This spread is the result of the motion model noise distribution

with covariance sigma m applied to the state distribution.

The observation estimate has a spread given by sigma o

with a mean further to the right than the motion model.

This observation helps both to shift the distribution p of x of t

given x of t minus 1 toward the final probability of x of t,

and to constrain the uncertainty.

This concludes the linear Kalman filter model.
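The whole linear model can be sketched for a 1-D version of the ball example above. The numbers, the known constant velocity, and the standard deviations sigma_m and sigma_o here are illustrative assumptions, not values from the lecture:

```python
import numpy as np  # imported for consistency; plain floats suffice in 1-D

sigma_m, sigma_o = 0.5, 0.2  # assumed motion and observation noise std. devs.
x, P = 0.0, 1.0              # state mean and variance at t minus 1
v = 1.0                      # assumed known velocity, moving left to right

for z in [1.1, 2.0, 2.9]:    # noisy position measurements at t, t+1, t+2
    # Predict: the motion model shifts the mean and inflates the variance.
    x, P = x + v, P + sigma_m**2
    # Update: the Kalman gain blends prediction and measurement,
    # shifting the mean toward z and constraining the uncertainty.
    K = P / (P + sigma_o**2)
    x = x + K * (z - x)
    P = (1 - K) * P
```

After the three updates the variance P has shrunk well below its initial value of 1.0, illustrating how each observation constrains the uncertainty.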

The linear filter has certain limitations.

And in the next section we will explore ways to model non-linear behaviors.