# Understanding Kalman Filters

###### tags: `December Goals`

MATLAB YouTube Playlist: [Link](https://www.youtube.com/playlist?list=PLn8PRpmsu08pzi6EMiYnR-076Mh-q3tWr)

## Why Use Kalman Filters?

A Kalman filter is an optimal estimation algorithm. Examples:

1. *Estimating a system state that cannot be measured directly*

   ![](https://i.imgur.com/dA8vDr1.png =400x200)

   We need to monitor the internal temperature of a combustion chamber, but the sensor cannot be placed inside the chamber, so it is placed on a cooler surface. Here, a Kalman filter can be used to find the best estimate of the internal temperature from an indirect measurement: extracting information about `what we can't measure` from `what we can`.

2. *Estimating the state of a system by combining measurements from different sources that may be subject to noise*

   ![](https://i.imgur.com/tn1sRLx.png =400x200)

   In a tunnel it is difficult to estimate position through GPS. IMU readings (acceleration) can be used instead; position is obtained by double integration, but this is prone to drift because small errors accumulate. For a better position estimate, the IMU measurement can be combined with odometer readings. The sensors measuring the relative position of the car give fast updates but are prone to drift, while GPS gives the absolute location but updates less frequently and may be noisy.

   ![](https://i.imgur.com/jO80Psq.png =400x200)

   A Kalman filter can fuse these three measurements to find the optimal estimate of the car's position.

Summary:

![](https://i.imgur.com/HascOCZ.png =400x200)

## State Observers

State observation helps you estimate something that you can't see or measure directly. Notation: anything with a hat ($\hat{x}$) is an estimated state.

Consider the example where we need to know the internal temperature of a jet engine. The two temperatures are $T_{ext}$ and $T_{int}$. The internal temperature $T_{int}$ tells us how to regulate the fuel flow ($w_{fuel}$) to the engine.
But we have access only to $T_{ext}$.

![](https://i.imgur.com/Wp8PgGY.png =400x200)

We can derive a mathematical model of the real system. Feeding $w_{fuel}$ as input to the mathematical model gives the output $\hat{T}_{ext}$. Since we have all the governing equations, we can also calculate the internal state of the model, $\hat{T}_{int}$. This alone doesn't solve the problem, because the mathematical model is only an approximation of the real system and is subject to uncertainties. With a perfect model (no uncertainty) and identical initial conditions for the model and the real system, the measured and estimated values would match. In real life, the two values of the external temperature don't match, so a state estimator is used to estimate the internal states.

How the state estimator works:

* The goal is to match the estimated external temperature with the measured external temperature.
* If these two are equal, the model converges to the real system, and $\hat{T}_{int}$ converges to its true value.
* Thus, by driving the difference between $T_{ext}$ and $\hat{T}_{ext}$ to zero, $\hat{T}_{int}$ converges to $T_{int}$.

We need a feedback control system that regulates the error between the measured and estimated external temperatures to zero using a controller gain K.

![](https://i.imgur.com/KU8r6is.png)

* We can't directly measure $T_{int}$, but we know $w_{fuel}$.
* Run it through the mathematical model to get $\hat{T}_{ext}$.
* Use this along with $T_{ext}$ to estimate $\hat{T}_{int}$.

State observer, mathematically:

* input: $u$
* output: $y$
* states to be estimated: $x$

![](https://i.imgur.com/5ufOPRo.png)
![](https://i.imgur.com/1ODXUj6.png =400x300)

Even without the KC term from the feedback loop, the estimation error would decay exponentially (assuming a stable system). The significance of the feedback loop is that we can control the decay rate of the error by selecting an appropriate K.

![](https://i.imgur.com/nA2YvZG.png =400x200)

What is the optimal way of choosing the observer gain K?
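The observer structure above can be sketched numerically. This is a minimal toy example with a stable scalar discrete-time model and a hand-picked gain; none of these values come from the video:

```python
import numpy as np

# Illustrative scalar discrete-time system (not the jet-engine model):
#   x[k+1] = a*x[k] + b*u[k],   y[k] = c*x[k]
a, b, c = 0.95, 0.1, 1.0
K = 0.5  # observer gain, chosen by hand here

x, x_hat = 2.0, 0.0   # true state vs. initial estimate (deliberately wrong)
errors = []
for k in range(50):
    u = 1.0                      # known input
    y = c * x                    # measured output (noise-free in this demo)
    # Luenberger observer: predict with the model, correct with the output error
    x_hat = a * x_hat + b * u + K * (y - c * x_hat)
    x = a * x + b * u            # true system evolves with the same dynamics
    errors.append(abs(x - x_hat))

print(errors[0], errors[-1])
```

The error dynamics are $e_{k+1} = (a - Kc)\,e_k$, so with this K the estimate converges at rate $|a - Kc| = 0.45$ per step instead of the open-loop rate 0.95. Here K is hand-picked; choosing it optimally in the presence of noise is exactly what the Kalman filter does.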
Through Kalman filters.

## Optimal State Estimator

![](https://i.imgur.com/H4MHvGY.png =400x200)

Example: a race in which the car with the smallest error variance and an average final position closest to 1 km wins (zero bias + minimum variance).

![](https://i.imgur.com/RE6kttb.png)

* Team 1 loses: its average final position is biased, although it has small variance.
* Team 2 loses: even though its average final position is on the finish line, it has high variance.
* Team 3 wins: smallest variance and average final position on the finish line.

We can't rely purely on GPS readings since they can be noisy. Instead, we estimate the car's position with a Kalman filter.

The system:

* input: throttle ($u_k$)
* output: car's position ($y_k$)

Such a system would normally have multiple states, but here it is simplified so that the input to the car is velocity and the car's position is the only state.

![](https://i.imgur.com/AKQWx6M.png)

We want to know y as accurately as possible, but the GPS readings are noisy. This measurement noise is represented by the random variable $v$. The process noise (representing effects such as wind and changes in the car's velocity) is represented by $w$.

![](https://i.imgur.com/D7uZrCI.png)

These random variables don't follow a pattern, but probability theory can be used to determine their average properties. $v$ is assumed to be drawn from a Gaussian distribution with zero mean and covariance R, i.e. $v \sim N(0,R)$. Similarly, $w \sim N(0,Q)$.

![](https://i.imgur.com/AGr3QmQ.png)

The measurement is noisy and therefore not the accurate position of the car. We can run the input through a car model to predict the position, but this is not a perfect estimate either, since x is uncertain due to w. The Kalman filter combines the measurement and the prediction to find the optimal estimate of the car's position in the presence of process and measurement noise.

Working principle of the Kalman filter in terms of probability density functions: at time k-1, the car's position is described by a PDF centered at $\hat{x}_{k-1}$, with the true position lying somewhere around the mean.
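The way uncertainty grows when this belief is pushed one step through the noisy model can be checked with a quick Monte Carlo sketch (a scalar model with arbitrary illustrative numbers, not taken from the video):

```python
import numpy as np

# Scalar car model x[k] = a*x[k-1] + w[k], with process noise w ~ N(0, Q).
# (Input term omitted; all numbers here are arbitrary illustrations.)
a, Q = 1.0, 0.04

rng = np.random.default_rng(1)

# Belief about the position at time k-1: N(mean=10.0, variance=0.01)
samples = rng.normal(10.0, np.sqrt(0.01), size=100_000)

# Predict one step ahead: each sample moves through the model and
# picks up independent process noise.
predicted = a * samples + rng.normal(0.0, np.sqrt(Q), size=samples.size)

# Analytically, the predicted variance is a**2 * P + Q = 0.05,
# larger than the prior variance 0.01.
print(samples.var(), predicted.var())
```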
At the next time step, the prediction has a larger variance and therefore more uncertainty (a pothole, wheel slip, etc.). Another source of information about the car's position is the measurement, whose variance represents the measurement noise.

![](https://i.imgur.com/nrrBD8Y.png)

The optimal way to estimate the car's position is to combine the measurement and the prediction, which is done by multiplying the two probability density functions together. The product is a Gaussian function with a smaller variance, and its mean gives the optimal estimate of the car's position. This is the basic idea behind Kalman filters.

## Optimal State Estimator Algorithm

Set of equations needed to implement the Kalman filter algorithm (same example as the previous video). The optimal estimate is found by multiplying the prediction and measurement probability functions together, scaling the result, and computing the mean of the resulting probability density function. The multiplication of these two PDFs corresponds to the discrete Kalman filter equation, which is similar to the state observer equation.

![](https://i.imgur.com/d0DCGDE.png)

The Kalman filter is a type of state observer designed for stochastic systems, while the standard state observer is for deterministic systems.

$\hat{x}_k = A\hat{x}_{k-1} + Bu_k + K_k(y_k - C(A\hat{x}_{k-1} + Bu_k))$

Here $A\hat{x}_{k-1} + Bu_k$ predicts the current state using the state estimate from the previous time step and the current input.

![](https://i.imgur.com/zhmxraA.png)

These two state estimates are different. The predicted state estimate is written $\hat{x}_{k}^-$ (called the *a priori* estimate), since it is calculated before the current measurement is taken. The equation thus reduces to:

$\hat{x}_k = \hat{x}_{k}^- + K_k(y_k - C\hat{x}_{k}^-)$

![](https://i.imgur.com/cA4Q1cF.png)

The second part of the equation uses the measurement to update the a priori estimate. The result is called the *a posteriori* estimate.

The equations for a Kalman filter:

![](https://i.imgur.com/rtH6DZV.png)

The Kalman filter is a two-step process:
1. **Prediction:** The system model is used to calculate the a priori state estimate and the error covariance P. For this single-state system, P is the variance of the a priori estimate and can be seen as a measure of uncertainty in the estimated state, caused by the process noise.

   ![](https://i.imgur.com/MDqD0O1.png)

2. **Update:** The a priori estimates from the prediction step are updated to find the a posteriori state estimate and error covariance. The Kalman gain $K_k$ is calculated such that the a posteriori error covariance is minimized.

   ![](https://i.imgur.com/KNW2ogB.png)

The Kalman gain determines how heavily the measurement and the a priori estimate contribute to the calculation of $\hat{x}_k$; it depends on the measurement noise and the error in the a priori estimate. Two extreme cases:

* Measurement covariance is close to zero:

  ![](https://i.imgur.com/sROL0pQ.png)

* A priori error covariance is close to zero:

  ![](https://i.imgur.com/sfON6gu.png)

If we have two measurements, we multiply three probability density functions to find the optimal estimate.

![](https://i.imgur.com/0TiQNs8.png)

## Nonlinear State Estimators

Kalman filters are defined for linear systems. In a general system, the state transition function f or the measurement function g (or both) can be nonlinear, and the noise need not be additive.

![](https://i.imgur.com/9xAzHn4.png)

For such systems we use nonlinear state estimators instead of the standard Kalman filter. The problem with using a Kalman filter on a nonlinear system: the Kalman filter assumes Gaussian distributions. After a linear transformation, a Gaussian distribution remains Gaussian.
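This closure of Gaussians under linear maps is easy to verify numerically; a small sketch with arbitrary illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=200_000)  # x ~ N(0, 1)

y = 2.0 * x + 3.0  # linear (affine) transformation

# If y stays Gaussian, its skewness is 0 and its mean/std follow
# directly from the transformation: mean = 3, std = 2.
skew = np.mean(((y - y.mean()) / y.std()) ** 3)
print(y.mean(), y.std(), skew)  # approx 3.0, 2.0, 0.0

# A nonlinear transformation, by contrast, breaks Gaussianity:
z = x ** 2                      # chi-square with 1 dof, heavily skewed
skew_z = np.mean(((z - z.mean()) / z.std()) ** 3)
print(skew_z)                   # clearly nonzero
```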
![](https://i.imgur.com/56zT9HQ.png)

But after a nonlinear transformation the result may not be Gaussian, so the Kalman filter algorithm may not converge.

![](https://i.imgur.com/o0HjXTJ.png)

**Extended Kalman Filter (EKF):** An extended Kalman filter linearizes the nonlinear function around the mean of the current state estimate. At each time step, the linearization is performed locally and the resulting Jacobian matrices are used in the prediction and update steps.

![](https://i.imgur.com/B3V1xjz.png)

Drawbacks of the EKF:

![](https://i.imgur.com/1I6mnQQ.png)

**Unscented Kalman Filter (UKF):** Instead of approximating the nonlinear function like the EKF, the UKF approximates the probability distribution. It selects a minimal set of sample points (sigma points) such that their mean and covariance match those of the distribution. Each sigma point is propagated through the nonlinear system model; the mean and covariance of the transformed points are then calculated, and an empirical Gaussian distribution is computed from them.

![](https://i.imgur.com/aVYZjpy.png)

**Particle Filter (PF):** Approximates an arbitrary distribution, i.e. it is not limited to Gaussians. The number of particles (similar to sigma points) is much larger than in the UKF.

![](https://i.imgur.com/MXlFjnn.png)

Summary:

![](https://i.imgur.com/hnbcGyh.png)

Note: the [video description](https://www.youtube.com/watch?v=Vefia3JMeHE&list=PLn8PRpmsu08pzi6EMiYnR-076Mh-q3tWr&index=5) contains useful additional resources.

## How to use Kalman Filter in Simulink

We use a Kalman filter to estimate the pendulum's angular position in Simulink.

![](https://i.imgur.com/3643SHE.png)

For small angles, $\sin\theta \approx \theta$.

![](https://i.imgur.com/XNON3vX.png)

State-space representation:

![](https://i.imgur.com/g16iznJ.png)

## Kalman Filter and EKF (Cyrill Stachniss)

Here go my handwritten notes :)