# Least Squares

Prerequisite background:

* <a href="https://www.bilibili.com/video/BV17h411z7A5/?vd_source=ff85b2bd66e06557f415b9e79256188e">Derivation of the least-squares concept</a>

:::info
$y = [x_1 \quad \cdots \quad x_n] \begin{bmatrix} \theta_1 \\ \vdots \\ \theta_n \end{bmatrix}$

Given a series of data for $x$ and $y$, we want to estimate the parameter vector $\theta$. This is expressed more conveniently in matrix form:

$\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} \phi_1^T \\ \vdots \\ \phi_n^T \end{bmatrix} \Theta$

$\phi_i^T = [x_1^i \quad \cdots \quad x_n^i] \in R^{1 \times n}, \quad \Theta=\begin{bmatrix} \theta_1 \\ \vdots \\ \theta_n \end{bmatrix} \in R^{n \times 1}$
:::

Define the error

$$\large e = y - \hat y = y - \Phi \hat\theta$$

The least-squares estimate requires the sum of squared errors to be minimal; that is, the estimate $\hat\theta$ is chosen to minimize the objective function

$$\large J = e^Te = (y - \Phi \hat\theta)^T(y - \Phi \hat\theta)$$

Taking the partial derivative of $J$ with respect to $\hat\theta$ and setting it to $0$ gives the normal equations $\Phi^T\Phi\hat\theta = \Phi^Ty$, and hence the least-squares estimate

$$\large \hat \theta = (\Phi^T \Phi)^{-1}\Phi^Ty$$

:::warning
$\text{Let} \quad \Phi_k = \begin{bmatrix} \phi_1^T \\ \vdots \\ \phi_k^T \end{bmatrix} \in R^{k \times n}, \quad Y_k= \begin{bmatrix} y_1 \\ \vdots \\ y_k \end{bmatrix} \in R^{k \times 1}$

where $k$ is the number of observed data points.
:::

This gives

$$\large \hat \theta_k = (\Phi_k^T \Phi_k)^{-1}\Phi_k^TY_k$$
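As a quick sanity check of the closed-form solution above, here is a minimal NumPy sketch; the dimensions, ground-truth parameters, and noise level are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: k = 100 observations of an n = 3 parameter model
k, n = 100, 3
theta_true = np.array([2.0, -1.0, 0.5])           # assumed ground truth

Phi = rng.normal(size=(k, n))                     # regressor matrix; rows are phi_i^T
Y = Phi @ theta_true + 0.1 * rng.normal(size=k)   # noisy measurements y_i

# Batch estimate theta_hat = (Phi^T Phi)^{-1} Phi^T Y,
# computed with solve() rather than an explicit inverse for numerical stability
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta_hat)                                  # close to theta_true
```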
If data keep streaming in, re-solving the full least-squares problem at every step consumes considerable computation and memory, so we want a recursive form that keeps $\hat \Theta_k$ continuously updated.

# Recursive Least Squares (RLS)

## Concept

Our goal is to start from the ordinary least-squares solution $\hat \Theta_k = (\Phi_k^T \Phi_k)^{-1}\Phi_k^T Y_k$ and derive

==$\large \hat \Theta_k = \hat \Theta_{k-1} \ + \text{correction term}$==

> Here the subscript $k$ denotes the estimate given $k$ data points. Step $k$ has one more data point than step $k-1$, so this extra data point can be used to correct the previous estimate.

## Summary

$$\large \begin{split} \hat \Theta_k &= \hat\Theta_{k-1}+K_k\varepsilon_k \\ K_k &= P_k\phi_k = \cfrac{P_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k}\\ \varepsilon_k &= y_k - \phi_k^T\hat\Theta_{k-1} \\ P_k &= \cfrac{1}{\lambda}P_{k-1} - \cfrac{1}{\lambda}\cfrac{P_{k-1}\phi_k\phi_k^TP_{k-1}}{\lambda + \phi_k^TP_{k-1}\phi_k} \end{split}$$

## Derivation

Start again from the ordinary least-squares solution $\hat \Theta_k = (\Phi_k^T \Phi_k)^{-1}\Phi_k^T Y_k$; below we transform $\Phi_k^T \Phi_k$ and $\Phi_k^T Y_k$ in turn.

==$\large \text{Let } R_k = \Phi_k^T \Phi_k$==

Expanding gives

$\large R_k = [\phi_1 \quad \cdots \quad \phi_k] \begin{bmatrix} \phi_1^T \\ \vdots \\ \phi_k^T \end{bmatrix} = \sum_{i=1}^{k} \phi_i\phi_i^T = \sum_{i=1}^{k-1} \phi_i\phi_i^T + \phi_k\phi_k^T = R_{k-1}+\phi_k\phi_k^T$

In practice, new data are usually more informative than old data, so we introduce a forgetting factor $\lambda \in (0, \ 1]$ into the recursion so that older data carry progressively less weight ($\lambda = 1$ recovers the standard, non-forgetting recursion):

$$\large R_k = \lambda R_{k-1}+\phi_k\phi_k^T$$

Now apply the matrix inversion lemma:

:::success
$$\large \begin{split} A &= B^{-1}+CD^{-1}C^T \\ A^{-1} &= B - BC(D+C^TBC)^{-1}C^TB \end{split}$$
:::

==$\large \text{Let } A=R_k, \ B=(\lambda R_{k-1})^{-1}, \ C=\phi_k,\ D=1$==

$$\large \begin{align} R_k^{-1} &= (\lambda R_{k-1})^{-1} - (\lambda R_{k-1})^{-1}\phi_k(1+\phi_k^T(\lambda R_{k-1})^{-1}\phi_k)^{-1}\phi_k^T(\lambda R_{k-1})^{-1} \\ &= \cfrac{1}{\lambda}R_{k-1}^{-1} - \cfrac{1}{\lambda}R_{k-1}^{-1}\phi_k\cfrac{1}{1+\cfrac{1}{\lambda}\phi_k^T R_{k-1}^{-1}\phi_k}\phi_k^T\cfrac{1}{\lambda}R_{k-1}^{-1} \\ &= \cfrac{1}{\lambda}R_{k-1}^{-1} - \cfrac{\cfrac{1}{\lambda^2}R_{k-1}^{-1}\phi_k\phi_k^TR_{k-1}^{-1}}{1+\cfrac{1}{\lambda}\phi_k^TR_{k-1}^{-1}\phi_k} \end{align}$$

==$\large \text{Let } P_k = R_k^{-1}$== (so that $P_{k-1} = R_{k-1}^{-1}$)

Multiplying the numerator and denominator of the fraction by $\lambda$:

$$\large \begin{align} P_k &= \cfrac{1}{\lambda}P_{k-1} - \cfrac{\cfrac{1}{\lambda}P_{k-1}\phi_k\phi_k^TP_{k-1}}{\lambda + \phi_k^TP_{k-1}\phi_k} \\ &= \cfrac{1}{\lambda}P_{k-1} - \cfrac{1}{\lambda}\cfrac{P_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k} \phi_k^TP_{k-1} \end{align}$$

==$\large \text{Let } K_k =\cfrac{P_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k}$==

$$\large P_k = \cfrac{1}{\lambda}P_{k-1}- \cfrac{1}{\lambda}K_k\phi_k^TP_{k-1} \tag{1}$$

==Multiplying both sides on the right by $\phi_k$==

$$\large \begin{align} P_k\phi_k &= \cfrac{1}{\lambda}P_{k-1}\phi_k - \cfrac{\cfrac{1}{\lambda}P_{k-1}\phi_k\phi_k^TP_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k} \\ &= \cfrac{\cfrac{1}{\lambda}P_{k-1}\phi_k(\lambda + \phi_k^TP_{k-1}\phi_k) - \cfrac{1}{\lambda}P_{k-1}\phi_k\phi_k^TP_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k} \\ &= \cfrac{P_{k-1}\phi_k}{\lambda + \phi_k^TP_{k-1}\phi_k} \\ &= K_k \tag{2} \end{align}$$

==Return to the original definition $\hat \Theta_k = R_k^{-1}\Phi_k^T Y_k$==

==$\large \text{Let } z_k = \Phi_k^T Y_k$==

$\large z_k = [\phi_1 \quad \cdots \quad \phi_k] \begin{bmatrix} y_1 \\ \vdots \\ y_k \end{bmatrix} = \sum_{i=1}^{k} \phi_iy_i = \sum_{i=1}^{k-1} \phi_iy_i + \phi_ky_k = z_{k-1}+\phi_ky_k$

Accounting for the forgetting factor $\lambda$:

$$\large z_k = \lambda z_{k-1}+\phi_ky_k \tag{3}$$

With identities (1), (2), and (3) in hand, the rest of the derivation is straightforward:

$$\large \begin{align} \hat \Theta_k &= R_k^{-1}z_k \\ &= P_kz_k\\ &= P_{k}(\lambda z_{k-1}+\phi_ky_k) \qquad \normalsize \text{substitute (3)}\\ &= \lambda P_{k}z_{k-1} + P_k\phi_ky_k \\ &= \lambda(\cfrac{1}{\lambda}P_{k-1}-\cfrac{1}{\lambda}K_k\phi_k^TP_{k-1})z_{k-1}+P_k\phi_ky_k \qquad \normalsize \text{substitute (1)}\\ &= P_{k-1}z_{k-1}-K_k\phi_k^TP_{k-1}z_{k-1}+P_k\phi_ky_k \\ &= \hat\Theta_{k-1}-K_k\phi_k^T\hat\Theta_{k-1}+K_ky_k \qquad \normalsize P_{k-1}z_{k-1}=\hat\Theta_{k-1}, \ \text{substitute (2)}\\ &= \hat\Theta_{k-1}+K_k(y_k-\phi_k^T\hat\Theta_{k-1}) \\ &= \hat\Theta_{k-1}+K_k\varepsilon_k \end{align}$$

We have thus arrived at the desired form $\large \hat \Theta_k = \hat \Theta_{k-1} \ + \text{correction term}$.
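To make the recursion concrete, here is a minimal NumPy sketch of the summary equations above; the initialization $P_0 = \delta I$ with a large $\delta$, and the simulated data, are common-practice assumptions for illustration rather than part of the derivation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step: returns updated (theta, P) per the summary equations."""
    phi = phi.reshape(-1, 1)                      # column vector phi_k
    denom = lam + (phi.T @ P @ phi).item()        # lambda + phi_k^T P_{k-1} phi_k
    K = (P @ phi) / denom                         # gain K_k, per (2)
    eps = y - (phi.T @ theta).item()              # innovation eps_k
    theta = theta + K.ravel() * eps               # theta_k = theta_{k-1} + K_k eps_k
    P = (P - K @ phi.T @ P) / lam                 # P_k, per (1)
    return theta, P

rng = np.random.default_rng(0)
n, k = 3, 200
theta_true = np.array([2.0, -1.0, 0.5])           # assumed ground truth
Phi = rng.normal(size=(k, n))
Y = Phi @ theta_true + 0.1 * rng.normal(size=k)

theta = np.zeros(n)
P = 1e6 * np.eye(n)                               # P_0 = delta * I, delta large
for phi_i, y_i in zip(Phi, Y):
    theta, P = rls_update(theta, P, phi_i, y_i, lam=1.0)

# With lam = 1 the recursion should match the batch solution
theta_batch = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta, theta_batch)                          # nearly identical
```

With $\lambda = 1$ and a large $\delta$, the recursion reproduces the batch solution up to the small regularization introduced by $P_0$; choosing $\lambda < 1$ instead lets the estimator track slowly varying parameters at the cost of noisier estimates.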