Coordinate calibration and sensor accuracy analysis
===

Method
---

#### *Rotation calibration*

Variables:

- $^w_tR_{vicon_t}$: rotation matrix of the Vicon at time $t$.
- $^w_tR_{sensor_t}$: rotation matrix of the sensor at time $t$.
- $^w_{t_0}R_{vicon_{t_0}}$: rotation matrix of the Vicon at time $t_0$.
- $^w_{t_0}R_{sensor_{t_0}}$: rotation matrix of the sensor at time $t_0$.

For each $^w_tR_{vicon_t}$ and $^w_tR_{sensor_t}$:

$$
^{t_0}_tR_{vicon_t} = \left({}^w_{t_0}R_{vicon_{t_0}}\right)^{-1}\times{}^w_tR_{vicon_t}\\
^{t_0}_tR_{sensor_t} = \left({}^w_{t_0}R_{sensor_{t_0}}\right)^{-1}\times{}^w_tR_{sensor_t}
$$

The rotations above are now expressed relative to the pose at $t_0$ instead of the world frame. Converting them to quaternions:

- $q_{vicon_t}$: quaternion of the Vicon at time $t$, relative to the Vicon pose at $t_0$.
- $q_{sensor_t}$: quaternion of the sensor at time $t$, relative to the sensor pose at $t_0$.

Each quaternion has components $(w, x, y, z)$, where

$$
\begin{cases}
w = \cos\dfrac{\theta}{2}\\
x = i\sin\dfrac{\theta}{2}\\
y = j\sin\dfrac{\theta}{2}\\
z = k\sin\dfrac{\theta}{2}
\end{cases}
$$

as the rigid body rotates by angle $\theta$ about the unit axis $(i, j, k)$.

From $q_{vicon_t}$ and $q_{sensor_t}$ we obtain the rotation axes $^{v_{t_0}}\vec{v}(i_v, j_v, k_v)$ and $^{s_{t_0}}\vec{s}(i_s, j_s, k_s)$, related by

$$
^{v_{t_0}}\vec{v} = {}^{v_{t_0}}_{s_{t_0}}R \times {}^{s_{t_0}}\vec{s}
$$

Finally, with thousands of pairs of $^{s_{t_0}}\vec{s}$ and $^{v_{t_0}}\vec{v}$, we set the cost function

$$
f := \sum\left({}^{v_{t_0}}\vec{v} - {}^{v_{t_0}}_{s_{t_0}}R \times {}^{s_{t_0}}\vec{s}\right)^2,\qquad
{}^{v_{t_0}}_{s_{t_0}}R = R(\alpha,\ \beta,\ \gamma)
$$

and solve $\min f(\alpha,\ \beta,\ \gamma)$.

#### *Translation calibration*

Variables:

- $^{t_0}_tR_{vicon_t}$: rotation matrix of the Vicon at time $t$, relative to $t_0$.
- $^{s_{t_0}}\vec{V_s}$: displacement vector of the sensor with respect to $t_0$.
- $^{v_{t_0}}\vec{V_v}$: displacement vector of the Vicon with respect to $t_0$.
- $^{v_{t_0}}\vec{V_{trans}}$: translation from the Vicon origin to the sensor origin at $t_0$.
- $^{v_t}\vec{V_{trans}}$: translation from the Vicon origin to the sensor origin at $t$.

$$
\begin{cases}
^{v_{t_0}}\vec{V_v} = {}^{v_{t_0}}_{s_{t_0}}R \times {}^{s_{t_0}}\vec{V_s} + {}^{v_{t_0}}\vec{V_{trans}} - {}^{t_0}_tR_{vicon_t} \times {}^{v_t}\vec{V_{trans}}\\
^{v_t}\vec{V_{trans}} = {}^{v_{t_0}}\vec{V_{trans}}
\end{cases}
$$

Finally, set the cost function

$$
f := \sum\left({}^{v_{t_0}}\vec{V_v} - {}^{v_{t_0}}_{s_{t_0}}R \times {}^{s_{t_0}}\vec{V_s} + \left({}^{t_0}_tR_{vicon_t} - I\right)\times{}^{v_{t_0}}\vec{V_{trans}}\right)^2,\qquad
{}^{v_{t_0}}\vec{V_{trans}} = V(x, y, z)
$$

and solve $\min f(x, y, z)$.

Result
---

##### data 1
![](https://i.imgur.com/trYgiPs.png)
![](https://i.imgur.com/y9DbeTY.png)
![](https://i.imgur.com/urfZ5km.png)
![](https://i.imgur.com/RSbFizQ.png)
![](https://i.imgur.com/xMf6VRo.png)
![](https://i.imgur.com/wcpDtrU.png)

##### data 2
![](https://i.imgur.com/rnPXLac.png)
![](https://i.imgur.com/EQ6RoMq.jpg)
![](https://i.imgur.com/ww3PoHO.png)
![](https://i.imgur.com/txZf39L.png)
![](https://i.imgur.com/4Vfcec9.png)
![](https://i.imgur.com/XMJMIMb.jpg)

---

Conclusion
---

We take the Vicon as ground truth and compare the T265 against it. The error may come from three sources:

1. Coordinate calibration error
2. Sensor measurement error
3. Initial measurement error

From the result figures, we can make these direct observations:

1. The orientation error stays within an acceptable range.
2. The trend of the translation is correct, with error within range.
3. The translation error is largest on the x axis.
4. The translation error of data 1 converges within a few seconds.

From the above observations, I draw some tentative conclusions:

1. Visual SLAM cannot close loops reliably at high speed.
2. The coordinate calibration might contribute some error, but judging from the orientation comparison, the angular error is not large enough to explain such a large translation error; the rest likely comes from sensor measurement inaccuracy.
3. Visual tracking may be weak when the sensor moves parallel to its focal axis.
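The rotation-calibration steps above can be sketched in code. This is a minimal sketch, not the actual implementation: it assumes the Vicon and sensor poses are available as time-synchronized lists of 3×3 rotation matrices, and all function names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def relative_axes(rotmats):
    """Rotation axis * angle of each pose relative to the first pose (t0)."""
    R0_inv = rotmats[0].T
    rotvecs = np.array([R.from_matrix(R0_inv @ Rt).as_rotvec()  # ^{t0}_t R
                        for Rt in rotmats[1:]])
    angles = np.linalg.norm(rotvecs, axis=1)
    return rotvecs, angles

def calibrate_rotation(vicon_rotmats, sensor_rotmats):
    """Minimize f = sum ||v - R(alpha, beta, gamma) s||^2 over Euler angles."""
    rv, av = relative_axes(vicon_rotmats)
    rs, asn = relative_axes(sensor_rotmats)
    keep = (av > 1e-3) & (asn > 1e-3)   # axis is undefined near the identity
    v = rv[keep] / av[keep, None]       # unit axes ^{v_t0} v
    s = rs[keep] / asn[keep, None]      # unit axes ^{s_t0} s

    def residual(euler):
        Rvs = R.from_euler("xyz", euler).as_matrix()
        return (v - s @ Rvs.T).ravel()  # v - R s for every sample pair

    sol = least_squares(residual, x0=np.zeros(3))
    return R.from_euler("xyz", sol.x).as_matrix()   # ^{v_t0}_{s_t0} R
```

Dropping near-identity poses matters in practice: when the rigid body has barely rotated, the quaternion axis is dominated by noise and would corrupt the fit.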
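Once the rotation is calibrated, the translation cost function is linear in $^{v_{t_0}}\vec{V_{trans}}$, so it can be minimized in closed form with ordinary least squares rather than an iterative optimizer. A minimal sketch under the same assumptions (synchronized displacement samples, relative Vicon rotations, and the calibrated $^{v_{t_0}}_{s_{t_0}}R$; names are illustrative):

```python
import numpy as np

def calibrate_translation(Vv, Vs, R_vicon_rel, R_vs):
    """Minimize f = sum ||Vv - R_vs Vs + (R_t - I) V_trans||^2.

    The residual is linear in V_trans, so stack (R_t - I) into A and
    (R_vs Vs - Vv) into b, then solve A V_trans = b by least squares.
    """
    A = np.vstack([Rt - np.eye(3) for Rt in R_vicon_rel])        # (3N, 3)
    b = np.concatenate([R_vs @ s - v for v, s in zip(Vv, Vs)])   # (3N,)
    V_trans, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V_trans   # ^{v_t0} V_trans = (x, y, z)
```

Note that if the trajectory contains no rotation, every $R_t - I$ block is near zero and $A$ is rank-deficient, so the calibration dataset must include rotational motion for $V_{trans}$ to be observable.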
---

Bonus Experiment
---

1. Move the robot slowly and keep it moving through the environment so the sensor can localize well; accuracy may improve over time.
2. Change the angle between the focal axis and the moving direction.