
Kalman Filter

The Kalman filter is over 50 years old but is still one of the most widely used data fusion algorithms today. It is a popular technique for estimating the state of a system. A Kalman filter estimates a continuous state and gives a uni-modal distribution. Uni-modal means that when we use a KF for robot position estimation, it gives us only one hypothesis of where the robot is located, together with a certain probability.









The Kalman filter model assumes that the state of a system at time t evolves from the prior state of the system at time t-1 according to the equations:


\begin{aligned} x_t & = A * x_{t-1} + B * u_{t-1} + w_{t-1} \end{aligned} (1)

\begin{aligned} z_t = H * x_t + v_t \end{aligned} (2)

In other words "The Kalman filter addresses the general problem of trying to estimate the state $x \in \mathbb{R}^n$ of a discrete-time controlled process that is governed by the linear stochastic difference equation (1) with measurement $z \in \mathbb{R}^m$ that is (2).".

x - state vector of the values of all the variables you are considering in your system. It might, for example, be position and velocity, or a sick person's temperature and how fast the temperature is going up. It has n values.

u - l-dimensional vector of control inputs (acceleration, steering angle, ...). If we were modeling the location and velocity of an object, then in addition to watching it move we could also give it a shove. In the prediction stage we would update the object's location based on its velocity, and with u we would additionally change its position and velocity because we moved it. u can change with each iteration of the Kalman filter.

z - measurement vector with m values - in the case of a sick person it might hold the readings of several temperature probes.

A - state transition (n x n) matrix that relates the state at time step t-1 to the state at the current step t.

B - control input (n x l) matrix.

H or C - extraction matrix (m x n) that gets the measurement from the state. If we multiply H by a perfectly correct x, we get a perfectly correct z.

P - covariance matrix (n x n) of the state x. Down the diagonal of P we find the variances of the elements of state x: the bigger the number, the bigger our window of uncertainty for that element. On the off-diagonals, at P[i][j], we find the covariance of x[i] with x[j]. P is changed over time by the KF.

$w_k$ - process noise $p(w) \sim N(0, Q)$
$v_k$ - measurement noise $p(v) \sim N(0, R)$
They are assumed to be independent of each other, white, and with normal probability distributions. Q, R are process and measurement noise covariance matrices.

R - covariance matrix (m x m) of the measurement vector z. The Kalman filter algorithm does not change R, because the process can't change our belief about the accuracy of our sensors - that's a property of the sensors themselves. We know the variance of our sensor either by testing it, or by reading the documentation that came with it, or something like that. Note that the covariances here are the covariances of the measurement error. A positive number means that if the first sensor is erroneously low, the second tends to be erroneously low, or if the first reads high, the second tends to read high; it doesn't mean that if the first sensor reports a high number the second will also report a high number.

Q - covariance matrix (n x n) of the process noise (error in the process). Typically hard to evaluate. I recommend just guessing a value and then tuning it.

Note1: matrices A, H, Q, and R might change in time (with each step), but here we assume them to be constant.

Note2: honestly, I have not seen them changed in time.
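As a concrete sketch, the quantities above could be set up for a 1D constant-velocity model with state [position, velocity] and an acceleration control input. All numeric noise values below are made-up placeholders, not values from this article:

```python
import numpy as np

dt = 0.1  # time step in seconds, chosen arbitrarily for illustration

# State x = [position, velocity]^T (n = 2), control u = [acceleration] (l = 1)
A = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition (n x n)
B = np.array([[dt**2 / 2],
              [dt]])         # control input (n x l)
H = np.array([[1.0, 0.0]])   # we measure position only (m x n, m = 1)

Q = 0.01 * np.eye(2)         # process noise covariance: guessed, then tuned
R = np.array([[0.25]])       # measurement noise covariance: sensor variance

x = np.array([[0.0],
              [0.0]])        # initial state estimate
P = np.eye(2)                # initial state covariance
```

With these shapes, equation (1) `A @ x + B @ u` yields an (n x 1) state and equation (2) `H @ x` yields an (m x 1) measurement.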

Some math

Let $\hat{\bar{x}}_t$ be the a priori state estimate at step t given knowledge of the process prior to step t.

Let $\hat{x}_t$ be the a posteriori state estimate at step t given measurement $z_t$.

Then a priori and a posteriori estimate errors are:

\begin{aligned} \bar{e}_t = x_t - \hat{\bar{x}}_t \\ e_t = x_t - \hat{x}_t \end{aligned} (3)

Note: bar = prior, hat = estimate

The a priori and a posteriori estimate error covariances are then:

\begin{aligned} \bar{P}_t = E[  \bar{e}_t * {\bar{e}_t}^T ] \\ P_t = E[  e_t * {e_t}^T ]\end{aligned} (4)

Let's define an equation that computes a posteriori state estimate $\hat{x}_t$ as a linear combination of a priori state estimate $ \hat{\bar{x}}_t$ and a weighted difference between an actual measurement $z_t$ and a measurement prediction $H *  \hat{\bar{x}}_t$ :

\begin{aligned} \hat{x}_t = \hat{\bar{x}}_t + K * (z_t - H *  \hat{\bar{x}}_t) \end{aligned} (5)

The term $ (z_t - H * \hat{\bar{x}}_t) $ in equation (5) is called the measurement innovation or residual - the discrepancy between the predicted measurement and the actual measurement.

K - an n x m gain (or blending factor) matrix that minimizes the a posteriori estimate error covariance $P_t$. To do this job, K has to be chosen correctly. The math behind the minimization of $P_t$ is out of the scope of this paper. One form of K that minimizes $P_t$ is as follows:

\begin{aligned} K_t = \bar{P}_t * H^T * (H * \bar{P}_t * H^T + R)^{-1} = \frac{\bar{P}_t * H^T}{H * \bar{P}_t * H^T + R} \end{aligned} (6)

You may see that as the measurement error covariance R approaches zero, the gain K weights the residual in (5) more heavily. And as the a priori estimate error covariance $\bar{P}_t$ approaches zero, the gain K weights the residual less heavily - K approaches 0.
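These two limits can be checked numerically. A minimal 1D sketch of equation (6), with made-up covariance values:

```python
import numpy as np

def gain(P_bar, H, R):
    # Kalman gain, equation (6): K = P_bar H^T (H P_bar H^T + R)^-1
    S = H @ P_bar @ H.T + R
    return P_bar @ H.T @ np.linalg.inv(S)

H = np.array([[1.0]])

# R -> 0: the measurement is trusted, K approaches 1
print(gain(np.array([[1.0]]), H, np.array([[1e-9]])))

# P_bar -> 0: the prediction is trusted, K approaches 0
print(gain(np.array([[1e-9]]), H, np.array([[1.0]])))
```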


The a posteriori state estimate $\hat{x}_t$ represents the mean of the state distribution, and the a posteriori estimate error covariance $P_t$ represents its variance. So:

\begin{aligned} p(x_t | z_t) = N(\hat{x}_t, P_t ) \end{aligned} (7)

The discrete Kalman Filter algorithm

The filter works in 2 separate phases. First it estimates the process state (the a priori estimate) and then obtains feedback in the form of a measurement (which is often noisy). The measurement is incorporated into the a priori estimate to obtain the a posteriori estimate. There are therefore time update equations and measurement update equations.

Time update (motion update or prediction):

Uses total probability and addition

\begin{aligned} \hat{\bar{x}}_t = A * \hat{x}_{t-1} + B * u_{t-1} \end{aligned} (8)

\begin{aligned} \bar{P}_t = A * P_{t-1} * A^T + Q \end{aligned} (9)
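Equations (8) and (9) translate directly into matrix code. A minimal numpy sketch (matrix shapes assumed consistent with the definitions above):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Time update: project the state estimate and covariance one step ahead.

    x: (n, 1) posterior state estimate from the previous step
    P: (n, n) posterior covariance from the previous step
    u: (l, 1) control input
    Returns the a priori estimate (x_bar, P_bar).
    """
    x_bar = A @ x + B @ u      # equation (8)
    P_bar = A @ P @ A.T + Q    # equation (9)
    return x_bar, P_bar
```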

In 1 dimension:
- the new mean (e.g. the mean of the position) is the old mean + the motion that was commanded (for example, move 1 cm forward)
- the new variance is the old variance + the variance of the motion (the command was to move 1 cm but the robot moved only 0.95 cm)

def predict(mean1, var1, mean2, var2):
    # Adding two Gaussians: means and variances both add
    new_mean = mean1 + mean2
    new_var = var1 + var2
    return [new_mean, new_var]

Measurement update:

Uses Bayes rule and multiplication

\begin{aligned} K_t = \bar{P}_t * H^T * (H * \bar{P}_t * H^T + R)^{-1} \end{aligned} (10)

\begin{aligned} \hat{x}_t = \hat{\bar{x}}_t + K_t * (z_t - H * \hat{\bar{x}}_t) \end{aligned} (11)

\begin{aligned} P_t = ( I - K_t * H) * \bar{P}_t \end{aligned} (12)
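Equations (10)-(12) can likewise be sketched in numpy (same shape conventions as before):

```python
import numpy as np

def kf_update(x_bar, P_bar, z, H, R):
    """Measurement update: correct the prediction with measurement z.

    x_bar: (n, 1) a priori state estimate
    P_bar: (n, n) a priori covariance
    z:     (m, 1) measurement vector
    Returns the a posteriori estimate (x, P).
    """
    S = H @ P_bar @ H.T + R                        # innovation covariance
    K = P_bar @ H.T @ np.linalg.inv(S)             # equation (10)
    x = x_bar + K @ (z - H @ x_bar)                # equation (11)
    P = (np.eye(P_bar.shape[0]) - K @ H) @ P_bar   # equation (12)
    return x, P
```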


In 1 dimension:

def update(mean1, var1, mean2, var2):
    # Multiplying two Gaussians (Bayes rule): precision-weighted average
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1 / (1 / var1 + 1 / var2)
    return [new_mean, new_var]
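A self-contained run of these two 1D routines (repeated here so the snippet stands alone) over made-up measurements and motions shows the pattern: the variance shrinks after each update and grows after each prediction.

```python
def predict(mean1, var1, mean2, var2):
    # Time update: means and variances add
    return mean1 + mean2, var1 + var2

def update(mean1, var1, mean2, var2):
    # Measurement update: precision-weighted average of two Gaussians
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1 / (1 / var1 + 1 / var2)
    return new_mean, new_var

measurements = [5.0, 6.0, 7.0, 9.0, 10.0]  # made-up sensor readings
motions = [1.0, 1.0, 2.0, 1.0, 1.0]        # made-up commanded moves
measurement_var = 4.0                      # made-up sensor variance
motion_var = 2.0                           # made-up motion variance

mu, var = 0.0, 10000.0                     # start with a very uncertain prior
for z, u in zip(measurements, motions):
    mu, var = update(mu, var, z, measurement_var)
    mu, var = predict(mu, var, u, motion_var)

print(mu, var)
```

After a few steps the variance settles near a steady value regardless of the huge initial uncertainty, which is typical Kalman filter behaviour.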

Cannon ball trajectory tracking example - http://greg.czerniak.info/guides/kalman1/

Matlab/Octave code

Object tracking example in 2D - http://studentdavestutorials.weebly.com/object-tracking-2d-kalman-filter.html

Video 1: https://www.youtube.com/watch?v=GBYW1j9lC1I
Video 2: https://www.youtube.com/watch?v=Jq8HcIar68Y&src_vid=GBYW1j9lC1I&feature=iv&annotation_id=annotation_888715

State vector $X_t$ = [$x_{pos}$, $y_{pos}$, $x_{vel}$, $y_{vel}$]' = [$x_t$, $y_t$, $\dot{x}_t$, $\dot{y}_t$]'

For x dimension we may say:

$x_t = x_{t-1} + \dot{x}_{t-1} * dt + \ddot{x}_{t-1} * dt^2/2 $

$\dot{x}_t = \dot{x}_{t-1} + \ddot{x}_{t-1} * dt $

u - a 1x1 element - the acceleration magnitude

A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];
B = [(dt^2/2); (dt^2/2); dt; dt];
C = [1 0 0 0; 0 1 0 0];

Note: look at the C matrix. You may see that we measure only the position (in the x and y directions), although position and velocity are in the state and acceleration is the control input.

R - covariance matrix (m x m) of the measurement vector z = error in the measurement matrix:

R = [$\sigma_x^2$, 0; 0, $\sigma_y^2$]

Note: zero elements say that errors in x and y directions are independent.

Q - covariance matrix of the error in process:

Q = [dt^4/4 0 dt^3/2 0; 0 dt^4/4 0 dt^3/2; dt^3/2 0 dt^2 0; 0 dt^3/2 0 dt^2]

Note1: again, the zero terms indicate independence of the elements - for instance, that x and y are independent, or that x and $\dot{y}$ are independent.

Note2: the non-zero terms follow from the same kinematics as matrix B: a random acceleration contributes dt^2/2 to position and dt to velocity, and the entries of Q are products of these contributions.
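This construction can be verified numerically: treating the x and y accelerations as independent, Q is the sum of the outer products of the per-axis noise-input columns (scaled here by a unit acceleration variance). A quick numpy check, assuming that construction:

```python
import numpy as np

dt = 0.1  # sample time, arbitrary for illustration

# Acceleration enters position as dt^2/2 and velocity as dt, per axis
Gx = np.array([[dt**2 / 2], [0.0], [dt], [0.0]])  # noise input for x
Gy = np.array([[0.0], [dt**2 / 2], [0.0], [dt]])  # noise input for y

# Independent x and y accelerations with unit variance
Q = Gx @ Gx.T + Gy @ Gy.T
print(Q)
```

The result reproduces the matrix above: dt^4/4 on the position diagonal, dt^2 on the velocity diagonal, dt^3/2 coupling each position with its own velocity, and zeros between the x and y axes.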


