
Bayesian Particle Filter

Author: Teng-Hu Cheng
email: tenghu@nycu.edu.tw
Date: Nov. 2024
Website: https://ncrl.lab.nycu.edu.tw/member/


Preliminary

For background on Bayesian inference, see this note (in Chinese):
https://tomohiroliu22.medium.com/機率與貝葉氏機器學習-學習筆記系列-01-貝氏推斷-bayesian-inference-d81b01dc5c89


Assumptions

The following assumptions are made in this article:

  • The posterior $P(x_k \mid y_k)$ is Gaussian-distributed (Section 3).
  • The state transition matrix is designed based on a linear motion model (Section 2).

Scenario

Here is an example of robot localization:

  • A robot is navigating a 2D environment.
  • It has a laser rangefinder that measures the distance to walls.
  • The environment is known (map available).
  • The robot is uncertain about its position and orientation.

Architecture

The figure below depicts the architecture of the algorithm.

(Figure: particle filter architecture)

1. Initial Setup

  • Particles: Each particle represents a possible robot pose $(x, y, \theta)$.
  • Map: The map provides the positions of walls.
  • Sensor Model: The laser sensor provides distance measurements to the nearest obstacle.
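
A minimal initialization sketch, assuming NumPy, a hypothetical 10 m × 10 m map, and N = 1000 particles with uniform initial weights (these values are illustrative, not taken from this article):

```python
import numpy as np

N = 1000                     # number of particles (illustrative)
MAP_X, MAP_Y = 10.0, 10.0    # assumed map extent in meters (hypothetical)

# Each particle is a pose hypothesis (x, y, theta) drawn uniformly over the map.
particles = np.empty((N, 3))
particles[:, 0] = np.random.uniform(0.0, MAP_X, N)       # x position
particles[:, 1] = np.random.uniform(0.0, MAP_Y, N)       # y position
particles[:, 2] = np.random.uniform(-np.pi, np.pi, N)    # heading theta

# All particles start with equal weight 1/N.
weights = np.full(N, 1.0 / N)
```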

2. Prediction (State Transition Probability)

Below are two commonly used state transition models:

2.1 General Form of a Linear Motion Model

The linear state transition model can be expressed as:

$$x_k = F x_{k-1} + B u_{k-1} + w_{k-1}$$

where:

  • $x_{k-1}$: The state vector at the previous time step.
  • $x_k$: The state vector at the current time step.
  • $F$: The state transition matrix, which defines how the state evolves over time.
  • $B$: The control input matrix, which defines how the control input affects the state.
  • $u_{k-1}$: The control input at the previous time step.
  • $w_{k-1}$: Process noise, typically modeled as Gaussian noise $w_{k-1} \sim \mathcal{N}(0, Q)$, where $Q$ is the process noise covariance matrix.

2.2 2D Constant Velocity Model

For a 2D motion system with position $(x, y)$ and velocity $(v_x, v_y)$, the state is

$$x_k = \begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix}$$

The updated state becomes:

$$x_k = F x_{k-1} + B u_{k-1} + w_{k-1}$$

where the state transition matrix $F$ and control input matrix $B$ are defined as

$$F = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.5\,\Delta t^2 & 0 \\ 0 & 0.5\,\Delta t^2 \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix}$$
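
A minimal sketch of this prediction step, assuming NumPy, an illustrative time step $\Delta t = 0.1$ s and process noise covariance $Q$, with particles stored row-wise as $(x, y, v_x, v_y)$:

```python
import numpy as np

dt = 0.1  # assumed time step in seconds (illustrative)

# State transition and control input matrices from Section 2.2.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]])
Q = np.diag([0.01, 0.01, 0.1, 0.1])  # assumed process noise covariance

def predict(particles, u):
    """Propagate each particle through x_k = F x_{k-1} + B u_{k-1} + w_{k-1}."""
    noise = np.random.multivariate_normal(np.zeros(4), Q, size=len(particles))
    return particles @ F.T + u @ B.T + noise

# Example: propagate 1000 particles with a hypothetical acceleration command.
particles = np.zeros((1000, 4))
u = np.array([0.2, 0.0])             # acceleration input in m/s^2 (illustrative)
particles = predict(particles, u)
```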


3. Update

Now assume that the robot's laser rangefinder reports a distance of 3 meters to the nearest wall.


3.1 Compute Weights

The weight of each particle is updated in the following steps: compute the measurement likelihood (Section 3.2), update and normalize the weights (Section 3.3), and pair each particle with its new weight (Section 3.4).

3.2 Compute Gaussian Likelihood

Use a Gaussian likelihood function to compare the actual sensor reading ($y_k = 3$ from the beginning of Section 3) with the predicted reading obtained from the state $x_k$ (i.e., from the state transition in Section 2):

$$p(y_k \mid x_k) = (2\pi)^{-\frac{1}{2}} \det(R)^{-\frac{1}{2}} \exp\!\left(-\tfrac{1}{2}(y_k - C x_k)^\top R^{-1} (y_k - C x_k)\right)$$

or, for a scalar measurement,

$$P(y_k \mid x_k) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_k - y_{pred})^2}{2\sigma^2}\right)$$

where:

  • $y_k$: Actual sensor reading (3.0 meters).
  • $y_{pred} = C x_k$: Predicted sensor reading for the particle.
  • $\sigma$: Sensor noise standard deviation (e.g., 0.2 meters).
  • $R$: The covariance of the observation noise.

Note: You can check out the "normpdf" function in MATLAB for details.

Example Calculations (omitting the constant factor $\frac{1}{\sqrt{2\pi\sigma^2}}$, which cancels when the weights are normalized):

  • Particle 1: $y_{pred} = 3.2$, likelihood $\propto \exp(-(3.0-3.2)^2/(2 \times 0.2^2)) = \exp(-0.5) \approx 0.61$.
  • Particle 2: $y_{pred} = 4.0$, likelihood $\propto \exp(-(3.0-4.0)^2/(2 \times 0.2^2)) = \exp(-12.5) \approx 3.7 \times 10^{-6}$.
  • Particle 3: $y_{pred} = 2.8$, likelihood $\propto \exp(-(3.0-2.8)^2/(2 \times 0.2^2)) = \exp(-0.5) \approx 0.61$.
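
A minimal sketch of this likelihood computation, assuming NumPy; the predicted readings are the ones from the example above:

```python
import numpy as np

def gaussian_likelihood(y, y_pred, sigma):
    """Scalar Gaussian measurement likelihood p(y_k | x_k) with noise std sigma."""
    return np.exp(-(y - y_pred) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

y_k = 3.0                             # actual laser reading (meters)
sigma = 0.2                           # sensor noise standard deviation (meters)
y_pred = np.array([3.2, 4.0, 2.8])    # predicted readings for three particles

likelihoods = gaussian_likelihood(y_k, y_pred, sigma)
# Up to the constant factor 1/sqrt(2*pi*sigma^2), these values are
# exp(-0.5) ~ 0.61, exp(-12.5) ~ 3.7e-6, and exp(-0.5) ~ 0.61, as in the example.
```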

3.3 Weights

By using the observed output $y_k$ and the weight $w_{k-1}^{(i)}$ computed at the previous time step $k-1$, calculate the intermediate weight update, denoted by $\hat{w}_k^{(i)}$, using the density function of the measurement probability:

$$\hat{w}_k^{(i)} = p(y_k \mid x_k^{(i)})\, w_{k-1}^{(i)}, \quad i = 1, 2, 3, \ldots, N$$

where $i$ denotes the $i$-th particle, and the Gaussian likelihood $p(y_k \mid x_k^{(i)})$ is computed as in Section 3.2.

After all the intermediate weights $\hat{w}_k^{(i)}, i = 1, 2, 3, \ldots, N$ are computed, compute the normalized weights $w_k^{(i)}$ as follows:

$$w_k^{(i)} = \frac{\hat{w}_k^{(i)}}{\bar{w}}$$

where $\bar{w} = \sum_{i=1}^{N} \hat{w}_k^{(i)}$.
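
A minimal sketch of this weight update and normalization, assuming NumPy; the likelihood values in the usage line are the approximate ones from Section 3.2:

```python
import numpy as np

def update_weights(prev_weights, likelihoods):
    """Compute w_hat_k = p(y_k | x_k) * w_{k-1}, then normalize so the weights sum to 1."""
    w_hat = likelihoods * prev_weights
    w_bar = np.sum(w_hat)
    if w_bar == 0:  # guard against numerical underflow of all likelihoods
        return np.full_like(prev_weights, 1.0 / len(prev_weights))
    return w_hat / w_bar

# Three particles with equal prior weights and the likelihoods from Section 3.2.
weights = update_weights(np.full(3, 1.0 / 3.0), np.array([0.61, 3.7e-6, 0.61]))
# weights ~ [0.5, 3.0e-6, 0.5]
```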

Note: The same weight update extends to multiple-sensor fusion for state estimation; assuming the sensors' measurement noises are independent, the likelihoods of the individual measurements are multiplied together when updating each particle's weight.

3.4 Find Pairs

After computing the weights in Section 3.3, we obtain the set of particles for time step $k$:

$$\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \ldots, N\}$$

4. Resample

4.1 Check the Conditions for Resample

This resampling step is performed only if the condition given below is satisfied. Calculate the effective sample size $N_{eff}$:

$$N_{eff} = \frac{1}{\sum_{i=1}^{N} \left(w_k^{(i)}\right)^2}$$

  • If $N_{eff} < \frac{N}{3}$, generate a new set of $N$ particles from the computed set of particles:

$$\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, \ldots, N\}_{NEW} = \mathrm{RESAMPLE}\left(\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, \ldots, N\}_{OLD}\right)$$

  • If $N_{eff} \geq \frac{N}{3}$, the current particle set is accepted.

Resampling draws particles according to their weights:

  • High-weight particles are likely to be duplicated.
  • Low-weight particles are likely to be discarded.
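
A minimal sketch of this check, assuming NumPy and already-normalized weights; the five-particle weight vector is purely illustrative:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum_i (w_k^(i))^2 for normalized weights."""
    return 1.0 / np.sum(weights ** 2)

# Resample only when the effective sample size drops below N/3;
# otherwise keep the current particle set.
weights = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
needs_resampling = effective_sample_size(weights) < len(weights) / 3
```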

4.2 Resample

  • Resampling is done based on the normalized weights $w_k^{(i)}$.
  • Particles with higher weights are more likely to be selected multiple times, while particles with very low weights are likely to be discarded.
  • A code sketch of one resampling scheme is given after Section 4.3.

4.3 Generate the New Particle Set

  • Replace the old set of particles with the new set drawn during resampling.
  • Assign equal weights to all particles in the new set (i.e., $\tilde{w}^{(i)} = 1/N$).
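
The RESAMPLE step can be realized in several ways; the sketch below uses systematic resampling (a common choice, assumed here rather than prescribed by this article), with NumPy and a purely illustrative three-particle set:

```python
import numpy as np

def systematic_resample(particles, weights):
    """Draw N new particles with probability proportional to their weights,
    then reset all weights to 1/N."""
    N = len(weights)
    positions = (np.arange(N) + np.random.uniform()) / N  # evenly spaced pointers
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                   # guard against round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices].copy(), np.full(N, 1.0 / N)

# Toy example: the low-weight particle is almost certainly dropped and replaced
# by a duplicate of a high-weight one; all new weights become 1/3.
particles = np.array([[3.2], [4.0], [2.8]])
weights = np.array([0.499, 0.002, 0.499])
new_particles, new_weights = systematic_resample(particles, weights)
```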

5. Results

After the sensor update, the particle filter is more confident about the robot’s pose, as particles now concentrate around states consistent with the sensor measurement.


Example 1

1. Initial Setup

Given a set of particles $\{(x_0^{(i)}, w_0^{(i)}) \mid i = 1, 2, 3, \ldots, N\}$, where each particle represents a possible robot state $x_0^{(i)} = (x, y, v_x, v_y)$, the state is defined as

$$x_k = \begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix}$$

and the weights are equal, $w_0^{(i)} = 1/N$.

2. Prediction

By using the dynamics model, the predicted (a priori) state estimate becomes:

$$\hat{x}_k = F x_{k-1} + B u_{k-1} + w_{k-1}$$

where the state transition matrix $F$ and control input matrix $B$ are defined as

$$F = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.5\,\Delta t^2 & 0 \\ 0 & 0.5\,\Delta t^2 \\ \Delta t & 0 \\ 0 & \Delta t \end{bmatrix}$$

$u_{k-1}$ is the control input (i.e., acceleration) at the previous time step $k-1$, and $w_{k-1} \sim \mathcal{N}(0, Q)$, where $Q$ is the process noise covariance matrix.

3. Update

3.1 Measurement Model

  • Linear model: Define the measurement model as $y_k = C x_k$ (linear model), where
    $$C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$$

3.2 Intermediate Weights

By using the relation between the predicted measurement in 3.1 Measurement Model and the observed output $y_k = 3$, together with the weight $w_{k-1}^{(i)}$ computed at the previous time step $k-1$, the intermediate weight update is calculated. From

$$p(y_k \mid x_k) = (2\pi)^{-\frac{1}{2}} \det(R)^{-\frac{1}{2}} \exp\!\left(-\tfrac{1}{2}(y_k - C\hat{x}_k)^\top R^{-1} (y_k - C\hat{x}_k)\right),$$

the likelihood can be computed, so that the intermediate weight $\hat{w}_k^{(i)}$ is calculated as follows:

$$\hat{w}_k^{(i)} = p(y_k \mid x_k^{(i)})\, w_{k-1}^{(i)}, \quad i = 1, 2, 3, \ldots, N$$

3.3 Normalization

After all the intermediate weights $\hat{w}_k^{(i)}, i = 1, 2, 3, \ldots, N$ are computed, compute the normalized weights $w_k^{(i)}$ as follows:

$$w_k^{(i)} = \frac{\hat{w}_k^{(i)}}{\bar{w}}$$

where $\bar{w} = \sum_{i=1}^{N} \hat{w}_k^{(i)}$.

Then, as in 3.4 Find Pairs above, form the new pairs $\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, 2, 3, \ldots, N\}$.

4. Resample

4.1 Check conditions for resample

The following conditions determine whether to resample or to accept the current particle set.

  • If $N_{eff} < \frac{N}{3}$, generate a new set of $N$ particles from the computed set of particles:

$$\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, \ldots, N\}_{NEW} = \mathrm{RESAMPLE}\left(\{(x_k^{(i)}, w_k^{(i)}) \mid i = 1, \ldots, N\}_{OLD}\right)$$

  • If $N_{eff} \geq \frac{N}{3}$, the current particle set is accepted.

4.2 Resample

  • Resampling is done based on the normalized weights $w_k^{(i)}$.
  • Particles with higher weights are more likely to be selected multiple times, while particles with very low weights are likely to be discarded.
  • A resampling example is included in the end-to-end code sketch at the end of this example.

4.3 Generate the New Particle Set

  • Replace the old set of particles with the new set drawn during resampling.
  • Assign equal weights to all particles in the new set (i.e., $\tilde{w}^{(i)} = 1/N$).
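
Putting the steps of this example together, the following is a minimal end-to-end sketch assuming NumPy; the time step, noise covariances, initial particle region, and control input are illustrative assumptions rather than values given in this article:

```python
import numpy as np

N, dt, sigma = 500, 0.1, 0.2                 # particle count, time step, sensor noise std
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]])
Q = np.diag([0.01, 0.01, 0.1, 0.1])          # assumed process noise covariance
C = np.array([1.0, 0.0, 0.0, 0.0])           # measurement model y_k = C x_k

# 1. Initial setup: particles over (x, y, vx, vy) with equal weights (assumed region).
particles = np.random.uniform([2.0, -1.0, 0.0, 0.0], [4.0, 1.0, 0.5, 0.5], size=(N, 4))
weights = np.full(N, 1.0 / N)

def particle_filter_step(particles, weights, u, y_k):
    # 2. Prediction: x_k = F x_{k-1} + B u_{k-1} + w_{k-1}
    noise = np.random.multivariate_normal(np.zeros(4), Q, size=N)
    particles = particles @ F.T + u @ B.T + noise
    # 3. Update: Gaussian likelihood of the scalar measurement, then normalize.
    y_pred = particles @ C
    w_hat = np.exp(-(y_k - y_pred) ** 2 / (2 * sigma ** 2)) * weights
    weights = w_hat / np.sum(w_hat)
    # 4. Resample (systematic) when N_eff = 1 / sum(w^2) drops below N/3.
    if 1.0 / np.sum(weights ** 2) < N / 3:
        cum = np.cumsum(weights)
        cum[-1] = 1.0                        # guard against floating-point round-off
        idx = np.searchsorted(cum, (np.arange(N) + np.random.uniform()) / N)
        particles, weights = particles[idx].copy(), np.full(N, 1.0 / N)
    return particles, weights

# One filter iteration with a hypothetical acceleration input and the measurement y_k = 3.
particles, weights = particle_filter_step(particles, weights, u=np.array([0.2, 0.0]), y_k=3.0)
estimate = np.average(particles, axis=0, weights=weights)  # weighted-mean state estimate
```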