
Kalman Filter

The Kalman filter is over 50 years old but remains one of the most widely used data fusion algorithms today. It is a popular technique for estimating the state of a system. A Kalman filter estimates a continuous state and yields a uni-modal distribution. Uni-modal = when we use a KF for robot position estimation, the KF gives us only one hypothesis of where the robot is located, with a certain probability.









The Kalman filter model assumes that the state of a system at time t evolves from the prior state of the system at time t-1 according to the equations:


\begin{aligned} x_t = A * x_{t-1} + B * u_{t-1} + w_{t-1} \end{aligned} (1)

\begin{aligned} z_t = H * x_t + v_t \end{aligned} (2)

In other words: "The Kalman filter addresses the general problem of trying to estimate the state $x \in \mathbb{R}^n$ of a discrete-time controlled process that is governed by the linear stochastic difference equation (1), with a measurement $z \in \mathbb{R}^m$ that is (2)."

x - state vector of all the variables you are considering in your system. It might, for example, be position and velocity, or a sick person's temperature and how fast that temperature is rising. It has n values.

u - l-dimensional vector of control input (acceleration, steering angle, ...). Suppose we were modeling the location and velocity of an object, and in addition to watching it move we could also give it a shove. In the prediction stage we would update the object's location based on its velocity, and then with u we would additionally change its position and velocity because we pushed it. u can change on each iteration of the Kalman filter.

z - measurement vector with m values; in the case of a sick person it might hold the readings of several temperature probes.

A - state transition (n x n) matrix that relates the state at time step t-1 to the state at the current step t.

B - control input (n x l) matrix.

H or C - extraction matrix (m x n) to get the measurement from the data. If we multiply H times a perfectly correct x, we get a perfectly correct z.

P - covariance matrix (n x n) of the state x. Down the diagonal of P we find the variances of the elements of x: the bigger the number, the bigger our window of uncertainty for that element. On the off-diagonals, at P[i][j], we find the covariances of x[i] with x[j]. P is updated by the KF over time.

$w_k$ - process noise $p(w) \sim N(0, Q)$
$v_k$ - measurement noise $p(v) \sim N(0, R)$
They are assumed to be independent of each other, white, and with normal probability distributions. Q, R are process and measurement noise covariance matrices.

R - covariance matrix (m x m) of the measurement vector z. The Kalman filter algorithm does not change R, because the process can't change our belief about the accuracy of our sensors - that's a property of the sensors themselves. We know the variance of our sensor either by testing it, or by reading the documentation that came with it, or something like that. Note that the covariances here are the covariances of the measurement error. A positive number means that if the first sensor is erroneously low, the second tends to be erroneously low, or if the first reads high, the second tends to read high; it doesn't mean that if the first sensor reports a high number the second will also report a high number.

Q - covariance matrix (n x n) of the process noise (error in the process). It is typically hard to evaluate; I recommend just guessing a value and then tuning it.
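The symbols above can be made concrete with a minimal sketch. This assumes a hypothetical 1-D constant-velocity system (the numbers are illustrative, not from any real sensor):

```python
import numpy as np

dt = 1.0  # time step

# State x = [position, velocity]  (n = 2)
A = np.array([[1.0, dt],
              [0.0, 1.0]])      # state transition (n x n)
B = np.array([[dt**2 / 2],
              [dt]])            # control input (n x l), l = 1 (acceleration)
H = np.array([[1.0, 0.0]])      # extraction matrix (m x n): we measure position only

x = np.array([[0.0], [1.0]])    # state: at position 0, moving 1 unit / step
u = np.array([[0.5]])           # control: accelerate by 0.5 units / step^2

# One step of the process model, eq. (1), here without the noise w for clarity
x_next = A @ x + B @ u

# The measurement the model predicts for that state, eq. (2), without the noise v
z = H @ x_next
```

Note how H picks the position out of the state: multiplying H by a perfectly correct x gives a perfectly correct z, as described above.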

Note1: the matrices A, H, Q, R might change in time (with each step), but here we assume them to be constant.

Note2: honestly I have not seen them to be changed in time.

Some math

Let $\hat{\bar{x}}_t$ be the a priori state estimate at step t given knowledge of the process prior to step t.

Let $\hat{x}_t$ be the a posteriori state estimate at step t given the measurement $z_t$.

Then a priori and a posteriori estimate errors are:

\begin{aligned} \bar{e}_t = x_t - \hat{\bar{x}}_t \\ e_t = x_t - \hat{x}_t \end{aligned} (3)

Note: bar = prior, hat = estimate

The a priori and a posteriori estimate error covariances are then:

\begin{aligned} \bar{P}_t = E[  \bar{e}_t * {\bar{e}_t}^T ] \\ P_t = E[  e_t * {e_t}^T ]\end{aligned} (4)

Let's define an equation that computes the a posteriori state estimate $\hat{x}_t$ as a linear combination of the a priori state estimate $\hat{\bar{x}}_t$ and a weighted difference between an actual measurement $z_t$ and a measurement prediction $H * \hat{\bar{x}}_t$:

\begin{aligned} \hat{x}_t = \hat{\bar{x}}_t + K * (z_t - H *  \hat{\bar{x}}_t) \end{aligned} (5)

The term $(z_t - H * \hat{\bar{x}}_t)$ is called the measurement innovation, or residual - the discrepancy between the predicted measurement and the actual measurement.

K - (n x m) gain or blending matrix that minimizes the a posteriori estimate error covariance $P_t$. To do this job, K has to be 'chosen' correctly. The full math behind the minimization of $P_t$ is out of the scope of this paper. One form of K that minimizes $P_t$ is as follows:

\begin{aligned} K_t = \bar{P}_t * H^T * (H * \bar{P}_t * H^T + R)^{-1} = \frac{\bar{P}_t * H^T}{H * \bar{P}_t * H^T + R} \end{aligned} (6)

You may see that as the measurement error covariance R approaches zero, the gain K weights the residual in (5) more heavily. And as the a priori estimate error covariance $\bar{P}_t$ approaches zero, the gain K weights the residual less heavily - K approaches 0.
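In the scalar case (H = 1) equation (6) collapses to K = P̄ / (P̄ + R), which makes the two limits easy to check numerically (the numbers below are illustrative):

```python
def gain(P_prior, R):
    # Scalar Kalman gain, eq. (6) with H = 1
    return P_prior / (P_prior + R)

# As R -> 0 the gain approaches 1: trust the measurement fully
k_small_R = gain(4.0, 1e-9)

# As the prior covariance -> 0 the gain approaches 0: trust the prediction fully
k_small_P = gain(1e-9, 4.0)
```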


The a posteriori state estimate $\hat{x}_t$ represents the mean of the state distribution, and the a posteriori estimate error covariance $P_t$ represents its variance. So:

\begin{aligned} p(x_t | z_t) = N(\hat{x}_t, P_t ) \end{aligned} (7)

The discrete Kalman Filter algorithm

The filter works in two separate phases. First it estimates the process state (the a priori estimate) and then obtains feedback in the form of a measurement (which is often noisy). The measurement is incorporated into the a priori estimate to obtain the a posteriori estimate. There are therefore time update equations and measurement update equations.

Time update (motion update or prediction):

Uses total probability and addition

\begin{aligned} \hat{\bar{x}}_t = A * \hat{x}_{t-1} + B * u_{t-1} \end{aligned} (8)

\begin{aligned} \bar{P}_t = A * P_{t-1} * A^T + Q \end{aligned} (9)

In 1 dimension:
- the new mean (e.g. mean of the position) is the old mean + the commanded motion (for example: move 1 cm forward)
- the new variance is the old variance + the variance of the motion (the command was to move 1 cm but the robot moved only 0.95 cm)

def predict(mean1, var1, mean2, var2):
    # Gaussian motion update: means add, and uncertainties add
    new_mean = mean1 + mean2
    new_var = var1 + var2
    return [new_mean, new_var]
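The same time update in matrix form, equations (8) and (9), can be sketched with NumPy for the 2-state position/velocity model (P, Q and the state values are illustrative guesses):

```python
import numpy as np

def predict(x, P, A, B, u, Q):
    """Time update: project the state and its covariance one step ahead."""
    x_prior = A @ x + B @ u       # eq. (8)
    P_prior = A @ P @ A.T + Q     # eq. (9)
    return x_prior, P_prior

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[dt**2 / 2], [dt]])
u = np.array([[0.0]])             # no control input this step
Q = 0.01 * np.eye(2)              # guessed process noise
x = np.array([[0.0], [1.0]])      # position 0, velocity 1
P = np.eye(2)                     # initial uncertainty

x_prior, P_prior = predict(x, P, A, B, u, Q)
```

Note that the covariance grows in the prediction step (the A P A^T term spreads the velocity uncertainty into the position, and Q adds on top), which matches the 1-D intuition above.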

Measurement update:

Uses Bayes rule and multiplication

\begin{aligned} K_t = \bar{P}_t * H^T * (H * \bar{P}_t * H^T + R)^{-1} \end{aligned} (10)

\begin{aligned} \hat{x}_t = \hat{\bar{x}}_t + K_t * (z_t - H * \hat{\bar{x}}_t) \end{aligned} (11)

\begin{aligned} P_t = ( I - K_t * H) * \bar{P}_t \end{aligned} (12)


In 1 dimension:

def update(mean1, var1, mean2, var2):
    # Gaussian measurement update: precision-weighted average of the two means
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1 / (1 / var1 + 1 / var2)
    return [new_mean, new_var]
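Putting the two 1-D routines together gives a complete, if minimal, filter loop. The measurements and motions below are made up for illustration:

```python
def predict(mean1, var1, mean2, var2):
    # Motion update: means add, uncertainties add
    return mean1 + mean2, var1 + var2

def update(mean1, var1, mean2, var2):
    # Measurement update: precision-weighted average of prediction and measurement
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1 / (1 / var1 + 1 / var2)
    return new_mean, new_var

measurements = [5.0, 6.0, 7.0]    # illustrative sensor readings
motions = [1.0, 1.0, 1.0]         # illustrative commanded moves
measurement_var = 4.0             # sensor noise variance
motion_var = 2.0                  # motion noise variance

mu, var = 0.0, 10000.0            # start almost completely uncertain
for z, m in zip(measurements, motions):
    mu, var = update(mu, var, z, measurement_var)
    mu, var = predict(mu, var, m, motion_var)
```

Because the initial variance is huge, the first update essentially jumps to the first measurement; after that the estimate tracks the measurements while the variance settles to a steady value.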

Cannon ball trajectory tracking example - http://greg.czerniak.info/guides/kalman1/

Matlab/Octave code

Object tracking example in 2D - http://studentdavestutorials.weebly.com/object-tracking-2d-kalman-filter.html

Video 1: https://www.youtube.com/watch?v=GBYW1j9lC1I
Video 2: https://www.youtube.com/watch?v=Jq8HcIar68Y&src_vid=GBYW1j9lC1I&feature=iv&annotation_id=annotation_888715

State vector $X_t$ = [$x_{pos}$, $y_{pos}$, $x_{vel}$, $y_{vel}$]' = [$x_t$, $y_t$, $\dot{x}_t$, $\dot{y}_t$]'

For x dimension we may say:

$x_t = x_{t-1} + \dot{x}_{t-1} * dt + \ddot{x}_{t-1} * dt^2/2 $

$\dot{x}_t = \dot{x}_{t-1} + \ddot{x}_{t-1} * dt $

u - 1x1 element - acceleration magnitude

A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];
B = [(dt^2/2); (dt^2/2); dt; dt];
C = [1 0 0 0; 0 1 0 0];

Note: look at the C matrix. You may see that we measure only the position (in the x and y directions), although position and velocity are modeled in the state (and acceleration enters through the control input).

R - covariance matrix (m x m) of the measurement vector z, i.e. the error in the measurement:

R = [$\sigma_x^2$, 0; 0, $\sigma_y^2$]

Note: zero elements say that errors in x and y directions are independent.

Q - covariance matrix of the error in process:

Q = [dt^4/4 0 dt^3/2 0; 0 dt^4/4 0 dt^3/2; dt^3/2 0 dt^2 0; 0 dt^3/2 0 dt^2]

Note1: again, the zero terms express independence of the elements - for instance that x and y are independent, or that x and $\dot{y}$ are independent.

Note2: the non-zero terms follow from the per-axis noise gain vector $[dt^2/2, dt]^T$ (the same shape as B): $(dt^2/2)^2 = dt^4/4$, $(dt^2/2) \cdot dt = dt^3/2$ and $dt \cdot dt = dt^2$.
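The matrices of this example can be assembled into a runnable sketch (in Python/NumPy rather than the tutorial's Matlab; the noise magnitudes and measurements are guesses for illustration):

```python
import numpy as np

dt = 0.1
# State: [x_pos, y_pos, x_vel, y_vel]
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[dt**2 / 2], [dt**2 / 2], [dt], [dt]])
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only

accel_noise = 0.5        # assumed acceleration noise magnitude
Q = accel_noise**2 * np.array([[dt**4/4, 0, dt**3/2, 0],
                               [0, dt**4/4, 0, dt**3/2],
                               [dt**3/2, 0, dt**2, 0],
                               [0, dt**3/2, 0, dt**2]])
R = np.diag([1.0, 1.0])  # assumed measurement noise variances

x = np.zeros((4, 1))     # initial state estimate
P = np.eye(4)            # initial uncertainty
u = np.array([[1.0]])    # constant acceleration command

def step(x, P, z):
    # Time update, eqs. (8)-(9)
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    # Measurement update, eqs. (10)-(12)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x = x + K @ (z - C @ x)
    P = (np.eye(4) - K @ C) @ P
    return x, P

# Feed a few made-up position measurements
for z in [np.array([[0.1], [0.0]]), np.array([[0.2], [0.1]])]:
    x, P = step(x, P, z)
```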


