February 29, 2024
Filtering Lecture Outline

# Crash Course on Bayes Filtering / Online State Estimation: Kalman Filtering and More

### Goals and Priorities

My goals for this lecture, in descending order of priority, are for you to come away with:

1. Foundational concepts underlying Bayes filtering and online state estimation
2. A unified cognitive schema of methods and terminology you may encounter that relate to the foundational knowledge and intuition you now have
3. Intuition for how these core concepts can be extended beyond Kalman filtering
4. Algorithmic understanding of Kalman Filtering (KF), Extended Kalman Filtering (EKF), Unscented Kalman Filtering (UKF), and Particle Filtering (PF)

I am ordering these priorities according to the challenges lab members have faced that initially motivated this lecture. Specifically, #1 stems from the mathematical overlap some of your work exhibits with Kalman filtering, even though your problem may not immediately scream "online state estimation" or "Bayes filtering." Understanding the core mathematics and principles will help you identify these connections between seemingly disparate problems.

Along those same lines, #2 is not only about unifying concepts that may not seem connected; it is also about helping you recognize terminology and problem formulations that fall under this same umbrella of Bayes filtering and online state estimation.

The reasoning for #3 is less motivated by apparent need and more by anticipated need. If you ever work on a problem in or related to this domain, you will likely not be implementing an approach from scratch. Instead, you will have to make design choices about which algorithm to use or adapt for your problem; the intuition you gain via #3 will help you make those choices.

Finally, if time permits, we will also discuss UKF and PF algorithmic details. The algorithmic details for KF are, in my opinion, necessary and fall under #1.
EKF is a simple extension that is fairly easy to implement once you understand the core idea of linearization. However, the UKF and PF algorithms require nontrivial extensions related to unscented transforms and sequential importance sampling, which we will definitely discuss but may not have the time to walk through.

### Lecture Outline

The lecture will proceed as follows:

#### Setup

1. Introduction of the fundamental problem of latent variable estimation
   <img src="https://hackmd.io/_uploads/BJ715Jpha.png" width="150" style="display: block; margin: 0 auto">
2. Interrogating the mathematics reveals two sub-problems: learning and inference
   $$o = h_\theta(s) + \epsilon \qquad (o, s) \rightarrow \theta? \qquad (o, \theta) \rightarrow s?$$
3. What if my latent variable is meaningfully changing from timestep to timestep?
   $s_{t} = f_{\phi}(s_{t-1}) + w_{\bullet}, \quad w_{\bullet} \sim \mathcal{N}(0, Q)$ vs. $s_t = w_{\bullet}, \quad w_{\bullet} \sim \mathcal{N}(0, Q)$

#### Focus

4. Filtering is to inference as system identification is to learning
5. Deep dive into Kalman filtering
6. Challenges lead to EKF, UKF, Ensemble Kalman Filtering (EnKF), and PF
7. Overview of EKF and EnKF
8. Dive into UKF and PF

#### Conclusion

9. Summary comparison
10. Final notes on discrete-state and input-driven systems

### Examples Relevant to the Lab's Current Work to Think About

![realLifeExamples](https://hackmd.io/_uploads/SJz0u1Tnp.png)
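Since we may not have time to walk through every algorithm in detail, here is a minimal sketch of the KF predict-update cycle for the linear-Gaussian special case of the models above, where the dynamics $f_\phi$ and observation map $h_\theta$ are matrices $F$ and $H$. This is not code from the lecture; the matrices `F`, `H`, `Q`, `R` and the toy constant-velocity example are illustrative assumptions.

```python
import numpy as np

# Linear-Gaussian state-space model (special case of the outline's equations):
#   s_t = F s_{t-1} + w_t,  w_t   ~ N(0, Q)   (dynamics)
#   o_t = H s_t + eps_t,    eps_t ~ N(0, R)   (observation)

def kalman_step(mu, P, o, F, H, Q, R):
    """One predict-update cycle; mu, P are the prior belief's mean and covariance."""
    # Predict: push the belief through the linear dynamics.
    mu_pred = F @ mu
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new observation o.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (o - H @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ H) @ P_pred
    return mu_new, P_new

# Toy constant-velocity example (illustrative, not from the lecture):
# state = [position, velocity], and we observe noisy position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
mu, P = np.zeros(2), np.eye(2)
for o in [1.0, 2.1, 2.9, 4.2]:           # noisy position readings
    mu, P = kalman_step(mu, P, np.array([o]), F, H, Q, R)
print(mu)  # estimated [position, velocity]
```

Note how the same two-step structure (predict through the dynamics, correct with the observation) is what EKF, UKF, and PF generalize when $f$ and $h$ are nonlinear.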