---
title: Particle Filter - non-Gaussian distribution
tags: wiki
disqus: hackmd
---
Particle Filter - non-Gaussian Distribution
===
**Author:** Teng-Hu Cheng
**email:** tenghu@nycu.edu.tw
**Date:** Nov. 2024
**Website:** https://ncrl.lab.nycu.edu.tw/member/
# Table of Contents
[toc]
# Preliminary
A primer on Bayesian inference (in Chinese): https://tomohiroliu22.medium.com/機率與貝葉氏機器學習-學習筆記系列-01-貝氏推斷-bayesian-inference-d81b01dc5c89
# Assumptions
The following assumptions are made in this article:
- The state transition matrix follows the Markov property (Section 2).
- The posterior $P(x_k | y_k)$ is modeled under a Gaussian distribution (Section 3).
# **Scenario**
Here’s an example for **robot localization**:
- A robot is navigating a 2D environment.
- It has a laser rangefinder that measures the distance to walls.
- The environment is known (map available).
- The robot is uncertain about its position and orientation.

# Architecture
The architecture of the algorithm is outlined in the following sections.

# **1. Initial Setup**
- **Particles**: Each particle represents a possible robot pose, $x$, $y$, $\theta$.
- **Map**: The map provides the positions of walls.
- **Sensor Model**: The laser sensor provides distance measurements to the nearest obstacle.
# 2. **Prediction (State Transition Probability)**
Below is a commonly used state transition model.
For a 2D motion system with position $\mathbf{x}_k$:
$$\mathbf{x}_{k} =
\begin{bmatrix}
x_{1k} \\
x_{2k} \\
\end{bmatrix}
$$
The updated state becomes:
$$\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} $$
where the state transition matrix $\mathbf{F}$ is defined as
$$
\mathbf{F} =
\begin{bmatrix}
0.7 & 0.8 \\
0.3 & 0.2
\end{bmatrix}.
\quad$$
Note that $\mathbf{F}$ can be a transition matrix following the Markov property (each column sums to 1, so it maps a probability vector to a probability vector).
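The prediction step above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the $2\times 2$ matrix $\mathbf{F}$ from above and a small set of randomly initialized particles (the particle count and initial states are illustrative, not from the article):

```python
import numpy as np

# Column-stochastic transition matrix F from the article (Markov property:
# each column sums to 1).
F = np.array([[0.7, 0.8],
              [0.3, 0.2]])

def predict(particles):
    """Apply x_k = F x_{k-1} to every particle (particles has shape N x 2)."""
    return particles @ F.T

# Illustrative usage with N = 5 random particles.
rng = np.random.default_rng(0)
particles = rng.random((5, 2))   # initial particle states
predicted = predict(particles)   # one prediction step
```

Because the columns of $\mathbf{F}$ sum to 1, a particle whose entries form a probability vector stays a probability vector after the update.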
# 3. Update
Now assume that the robot's laser rangefinder reports a distance of ==3 meters== to the nearest wall.
## **3.1 Compute Weights**
### 3.1.1 non-Gaussian Observation Model
Use the following non-Gaussian likelihood function to compare the predicted sensor reading with the actual reading ($\mathbf{y}_k=3$ from Section 3) for the state $\mathbf{x}_k$ (i.e., from the state transition in Section 2):
$$P(y_k | x_k) = \kappa - \frac{1}{2} ( y_k - \mathbf{C} \mathbf{x}_k)^{T} \mathbf{R}^{-1} (y_k-\mathbf{C} \mathbf{x}_k)
$$
where:
- $\kappa$ is a positive constant chosen so that $P(y_k | x_k)$ remains positive.
- $y_k$: actual sensor reading (3.0 meters).
- $y_{\text{pred}}=\mathbf{C} \mathbf{x}_k$: predicted sensor reading for the particle.
- $\mathbf{R}$: covariance of the observation noise.
**Note: For the Gaussian case, see the [normpdf function in MATLAB](https://www.mathworks.com/help/stats/normpdf.html#d126e857153) for details.**
### 3.1.2 Weights
By using the observed output $\mathbf{y}_k$, and the weight $w_{k-1}^{(i)}$ computed at the previous time step $k-1$, calculate the intermediate weight update, denoted by $\hat{w}_k^{(i)}$, by using the density function of the measurement probability.
$$
\hat{w}_k^{(i)} = p(\mathbf{y}_k \mid \mathbf{x}_k^{(i)}) \cdot w_{k-1}^{(i)}, \quad i = 1, 2, 3, \dots, N$$
where $i$ denotes the $i^{th}$ particle, and the likelihood $p(\mathbf{y}_k \mid \mathbf{x}_k^{(i)})$ is computed as in Section 3.1.1.
After all the intermediate weights $\hat{w}_k^{(i)}, i = 1, 2, 3, \dots, N$ are computed, compute the normalized weights $w_k^{(i)}$ as follows:
$$
w_k^{(i)} = \frac{1}{\bar{w}} \hat{w}_k^{(i)}$$
where $\bar{w} = \sum_{i=1}^{N} \hat{w}_k^{(i)}.$
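Sections 3.1.1 and 3.1.2 can be sketched together: evaluate the non-Gaussian likelihood $\kappa - \frac{1}{2}(y_k - \mathbf{C}\mathbf{x}_k)^{T}\mathbf{R}^{-1}(y_k - \mathbf{C}\mathbf{x}_k)$ for each particle, multiply by the previous weight, and normalize. The values of $\kappa$, $\mathbf{C}$, and $\mathbf{R}$ below are illustrative assumptions:

```python
import numpy as np

def likelihood(y, x, C, R_inv, kappa):
    """Non-Gaussian likelihood: kappa - 0.5 * (y - Cx)^T R^{-1} (y - Cx)."""
    r = y - C @ x                       # innovation
    val = kappa - 0.5 * r @ R_inv @ r   # linear (non-exponential) form
    return max(val, 0.0)                # clip so the weight stays non-negative

def update_weights(particles, prev_weights, y, C, R_inv, kappa):
    """Intermediate weights w_hat = p(y|x) * w_prev, then normalize."""
    w_hat = np.array([likelihood(y, x, C, R_inv, kappa) * w
                      for x, w in zip(particles, prev_weights)])
    return w_hat / w_hat.sum()          # normalized weights w_k^(i)
```

A particle whose predicted reading $\mathbf{C}\mathbf{x}_k$ is closer to the measurement $y_k = 3$ receives a larger normalized weight.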
### Multiple sensor fusion for State Estimation
- [Sensor Fusion and Probabilistic Methods using Force and Vision](/mZi-r7LMTL2qflO7b9W0dw)
### 3.1.3 Find Pairs
After computing the weights in Section 3.1.2, we obtain the set of particles for time step $k$:
$$\{(\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N\}$$
# 4. Resample
## 4.1 Check the Conditions for Resample
Resampling is performed only if the condition given below is **satisfied**. Calculate the effective sample size $N_\text{eff}$:
$$N_\text{eff} = \frac{1}{\sum_{i=1}^{N} \left( w_k^{(i)} \right)^2} $$
- ==If $N_\text{eff} < \frac{N}{3}$, then generate a new set of $N$ particles from the computed set of particles==:
$$
\{ (\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N \}^\text{NEW} = \text{RESAMPLE} \left( \{ (\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N \}^\text{OLD} \right)$$
- ==If $N_\text{eff} \geq \frac{N}{3}$, the result is accepted (no resampling).==
Resample particles based on their weights:
- High-weight particles are likely to be duplicated.
- Low-weight particles are likely to be discarded.
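The effective-sample-size test above can be sketched as follows (the threshold fraction $\frac{1}{3}$ is the one used in this article):

```python
import numpy as np

def needs_resampling(weights, threshold_fraction=1/3):
    """Return True when N_eff = 1 / sum(w_i^2) drops below N * threshold."""
    n_eff = 1.0 / np.sum(np.square(weights))
    return n_eff < threshold_fraction * len(weights)
```

Uniform weights give $N_\text{eff} = N$ (no resampling needed), while a single dominant weight gives $N_\text{eff} \approx 1$, triggering a resample.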
## 4.2 Resample
- Resampling is done based on the normalized weights ${w}_k^{(i)}$.
- Particles with higher weights are more likely to be selected multiple times, while particles with very low weights are likely to be discarded.
- [Example of resampling](/LEKChqMrR-qBh2vERVyDBg)
## 4.3 Generate the New Particle Set
- Replace the old set of particles with the new set drawn during resampling.
- Assign equal weights to all particles in the new set (e.g., $\tilde{w}_i = 1/N$).
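Sections 4.2 and 4.3 can be sketched with systematic resampling, one common scheme (the article does not prescribe a specific one): draw $N$ evenly spaced positions on the cumulative weight distribution, so high-weight particles are duplicated and low-weight particles are dropped, then reset all weights to $1/N$:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Resample particles in proportion to their normalized weights."""
    rng = rng or np.random.default_rng()
    N = len(weights)
    # One uniform offset, then N evenly spaced positions on [0, 1).
    positions = (rng.random() + np.arange(N)) / N
    cumulative = np.cumsum(weights)
    idx = np.searchsorted(cumulative, positions)
    new_particles = particles[idx]        # duplicates high-weight particles
    new_weights = np.full(N, 1.0 / N)     # equal weights for the new set
    return new_particles, new_weights
```

Systematic resampling uses a single random draw, which keeps the variance of the duplication counts low compared with plain multinomial resampling.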
# **5. Results**
After the sensor update, the particle filter is more confident about the robot’s pose, as particles now concentrate around states consistent with the sensor measurement.
# References
1. [Particle Filter](https://aleksandarhaber.com/clear-and-concise-particle-filter-tutorial-with-python-implementation-part-2-derivation-of-particle-filter-algorithm-from-scratch)
2. [Joint Probability](https://www.youtube.com/watch?v=CQS4xxz-2s4)
3. [J. V. Candy, "Bootstrap Particle Filtering," in IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 73-85, July 2007, doi: 10.1109/MSP.2007.4286566.](https://ieeexplore.ieee.org/document/4286566)
4. [Resampling Methods for Particle Filtering](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079001)
# Example 1
## 1. Initial Setup
Given a set of particles $\{(\mathbf{x}_0^{(i)}, w_0^{(i)}) \mid i = 1, 2, 3, \dots, N\}$, each particle represents a possible robot state $\mathbf{x}_0^{(i)} = (x, y, v_x, v_y)$, defined as
$$\mathbf{x}_k =
\begin{bmatrix}
x_k \\
y_k \\
v_{x_k} \\
v_{y_k}
\end{bmatrix}
$$
and the weights are equal, $w_0^{(i)}=1/N$.
## 2. Prediction
By using the dynamics model, the predicted (a priori) state estimate becomes:
$$\hat {\mathbf{x}}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{B} \mathbf{u}_{k-1} + \mathbf{w}_{k-1}$$
where the state transition matrix $\mathbf{F}$ and control input matrix $\mathbf{B}$ are defined as
$$
\mathbf{F} =
\begin{bmatrix}
1 & 0 & \Delta t & 0 \\
0 & 1 & 0 & \Delta t \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},
\quad
\mathbf{B} =
\begin{bmatrix}
0.5 \Delta t^2 & 0 \\
0 & 0.5 \Delta t^2 \\
\Delta t & 0 \\
0 & \Delta t
\end{bmatrix}$$
where $\mathbf{u}_{k-1}$ is the control input (i.e., acceleration) at the previous time step $k-1$, and $\mathbf{w}_{k-1} \sim \mathcal{N}(0, \mathbf{Q}$), where $\mathbf{Q}$ is the process noise covariance matrix.
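The prediction step above can be sketched per particle as $\hat{\mathbf{x}}_k = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{B}\mathbf{u}_{k-1} + \mathbf{w}_{k-1}$ with $\mathbf{w}_{k-1} \sim \mathcal{N}(0, \mathbf{Q})$. The values of $\Delta t$ and $\mathbf{Q}$ below are illustrative assumptions:

```python
import numpy as np

dt = 0.1  # illustrative time step
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]])
Q = 0.01 * np.eye(4)  # assumed process-noise covariance

def predict(x_prev, u_prev, rng=None):
    """One prediction step: x_k = F x_{k-1} + B u_{k-1} + w,  w ~ N(0, Q)."""
    rng = rng or np.random.default_rng()
    noise = rng.multivariate_normal(np.zeros(4), Q)
    return F @ x_prev + B @ u_prev + noise
```

In a filter, `predict` is applied independently to each particle, so the process noise spreads the particle cloud between measurement updates.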
## 3. Update
### 3.1 **Measurement Model**
Define the linear measurement model $\mathbf{y}_k=\mathbf{C} \mathbf{x}_k$, where
$$\mathbf{C}=\begin{bmatrix}
1 & 0 & 0 & 0
\end{bmatrix}$$
### 3.2 **Intermediate Weights**
By using the relation between the predicted measurement in Section 3.1 and ==the observed output $\mathbf{y}_k=3$==, together with the weight $w_{k-1}^{(i)}$ computed at the previous time step $k-1$, the likelihood
$$p(\mathbf{y}_k \mid \mathbf{x}_k) = (2\pi)^{-\frac{1}{2}} \det(\mathbf{R})^{-\frac{1}{2}} e^{-\frac{1}{2} (\mathbf{y}_k - \mathbf{C} \hat {\mathbf{x}}_k)^\top \mathbf{R}^{-1} (\mathbf{y}_k - \mathbf{C} \hat {\mathbf{x}}_k)}$$
can be computed, so that the intermediate weight $\hat{w}_k^{(i)}$ is calculated as follows:
$$
\hat{w}_k^{(i)} = p(\mathbf{y}_k \mid \mathbf{x}_k^{(i)}) \cdot w_{k-1}^{(i)}, \quad i = 1, 2, 3, \dots, N$$
### 3.3 **Normalization**
After all the intermediate weights $\hat{w}_k^{(i)}, i = 1, 2, 3, \dots, N$ are computed, compute the normalized weights $w_k^{(i)}$ as follows:
$$
w_k^{(i)} = \frac{1}{\bar{w}} \hat{w}_k^{(i)}$$
where $\bar{w} = \sum_{i=1}^{N} \hat{w}_k^{(i)}.$
Then pair the states and weights as in Section 3.1.3 to obtain the new pairs $$\{(\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N\}.$$
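Sections 3.2 and 3.3 can be sketched together: evaluate the Gaussian likelihood of the scalar measurement $\mathbf{y}_k = 3$ under $\mathbf{C} = [1\;0\;0\;0]$, multiply by the previous weights, and normalize. The value of $\mathbf{R}$ is an illustrative assumption:

```python
import numpy as np

C = np.array([[1.0, 0.0, 0.0, 0.0]])  # measures the x-position only
R = np.array([[0.5]])                 # assumed 1x1 measurement-noise covariance

def gaussian_likelihood(y, x_hat):
    """p(y|x) = (2*pi)^{-1/2} det(R)^{-1/2} exp(-0.5 r^T R^{-1} r)."""
    r = y - C @ x_hat                             # innovation
    expo = -0.5 * r @ np.linalg.inv(R) @ r
    norm = (2 * np.pi) ** -0.5 * np.linalg.det(R) ** -0.5
    return float(norm * np.exp(expo))

def update_and_normalize(particles, prev_weights, y):
    """w_hat = p(y|x) * w_prev for each particle, then normalize to sum 1."""
    w_hat = np.array([gaussian_likelihood(y, x) * w
                      for x, w in zip(particles, prev_weights)])
    return w_hat / w_hat.sum()
```

A particle whose predicted x-position is near 3 receives a larger normalized weight than one farther away.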
## 4. Resample
### 4.1 Check conditions for resample
Using $N_\text{eff} = \frac{1}{\sum_{i=1}^{N} \left( w_k^{(i)} \right)^2}$ as defined in Section 4.1, the conditions to determine whether to resample or accept are:
- If $N_\text{eff} < \frac{N}{3}$, then generate a new set of $N$ particles from the computed set of particles:
$$\{ (\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N \}^\text{NEW} = \text{RESAMPLE} \left( \{ (\mathbf{x}_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \dots, N \}^\text{OLD} \right)$$
- If $N_\text{eff} \geq \frac{N}{3}$, the result is accepted (no resampling).
### 4.2 Resample
- Resampling is done based on the normalized weights ${w}_k^{(i)}$.
- Particles with higher weights are more likely to be selected multiple times, while particles with very low weights are likely to be discarded.
### 4.3 Generate the New Particle Set
- Replace the old set of particles with the new set drawn during resampling.
- Assign equal weights to all particles in the new set (e.g., $\tilde{w}_i = 1/N$).