# Thesis revision items

- [x] In the introduction, citations should not simply be tacked on after the sentences (nobody cites that way). Add a short summary at the end of the chapter.
- [ ] Split the literature review into three paragraphs, each surveying the state of research on one topic, e.g. target-following research, visual recognition, and current robotics research...

![](https://hackmd.io/_uploads/H1cEn-zw2.png)

- [x] Sections 2.3-2.6 each need illustrative figures. (Record the camera point-cloud view.)
- [x] Section 2.7 needs result figures. (Record another Nav2 navigation video.)
- [x] Section 2.8 needs a network architecture diagram, an example recognition application, and the YOLOv5 training and usage steps (see the YOLOv5 sketch below). It differs from Chapter 3 in that these are simple explanatory examples; they need not match your own application. (Record the dataset labeling process and the model test results.)

![](https://hackmd.io/_uploads/SyZhr3bw2.png)
![](https://hackmd.io/_uploads/HkQprnZv2.png)
![](https://hackmd.io/_uploads/rJJABhZP3.png)

- [x] Figures 10-13: computer science has a standard way of presenting these.
- [x] 3.2.2: Why is a quaternion conversion needed? Planar motion should not require one. (It is needed because ROS2's message format expresses orientation as a quaternion; see the yaw-extraction sketch below.)

![](https://hackmd.io/_uploads/r1xJV83bvn.png)

- [x] Every variable in Equations 3-8 must be defined.

![](https://hackmd.io/_uploads/BJTP83Wwn.png)

- [x] Figure 20: define the variables clearly. What are box and val box?

![](https://hackmd.io/_uploads/HJRYUh-Pn.png)

- [x] Figure 21: the left and right sub-figures at the bottom are not explained.
  * https://blog.ovhcloud.com/object-detection-train-yolov5-on-a-custom-dataset/
- [x] Figure 34: put the vehicle model into the diagram. How u is converted into the vehicle's input needs a model.
- [x] Figure 34: the last node looks odd.
  * Explain how the PID's final output u(t) is turned into the vehicle's input and how it actually makes the vehicle move; this part needs to be explained again (see the cmd_vel sketch below).

![](https://hackmd.io/_uploads/Bkmtdvmw2.png)
![](https://hackmd.io/_uploads/SyVfb1Pw2.png)
![](https://hackmd.io/_uploads/ByQ7-kvP2.png)

https://blog.yanjingang.com/?p=5604

- [x] Page 50 needs equation numbers, and dt must be defined (a possible definition is sketched below).
- [x] The text below Equation 11 is not typeset as an equation.
- [x] Its variables are not defined.
- [ ] 4.4 needs a plot of the followed target's trajectory and a plot of the predicted trajectory (see the plotting sketch below). Show several different scenarios to demonstrate the following performance.
- [ ] Prediction performance.
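For the Section 2.8 item: a minimal sketch of the standard YOLOv5 workflow, training with the `ultralytics/yolov5` repo's CLI and loading the trained weights through `torch.hub`. The file names (`custom.yaml`, `best.pt`, `test.jpg`) are placeholders, not the thesis's actual files.

```python
import torch

# Training is done with the yolov5 repo's CLI, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data custom.yaml --weights yolov5s.pt
# where custom.yaml points at the labeled dataset
# (see the OVHcloud post linked above for the labeling steps).

# Inference with the trained weights, loaded via torch.hub:
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='runs/train/exp/weights/best.pt')
results = model('test.jpg')   # accepts paths, URLs, numpy images, ...
results.print()               # per-image detection summary
boxes = results.xyxy[0]       # tensor of (x1, y1, x2, y2, conf, class)
```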
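For the 3.2.2 item: even though the robot moves in the plane, `nav_msgs/Odometry` and `geometry_msgs/Pose` carry orientation as a quaternion, so the yaw angle has to be extracted before it can be used. A minimal sketch (the function name is mine):

```python
import math

def yaw_from_quaternion(x: float, y: float, z: float, w: float) -> float:
    """Extract yaw (rotation about Z) from a ROS2 quaternion (x, y, z, w).

    For planar motion this yaw angle is the only component actually used.
    """
    # Standard ZYX quaternion-to-Euler formula, yaw term only.
    siny_cosp = 2.0 * (w * z + x * y)
    cosy_cosp = 1.0 - 2.0 * (y * y + z * z)
    return math.atan2(siny_cosp, cosy_cosp)  # radians, in (-pi, pi]
```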
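For the Figure 34 / u(t) item: a common pattern for a differential-drive base in ROS2 (not necessarily the thesis's exact node) is to map the PID outputs onto a `geometry_msgs/Twist` published on `/cmd_vel`, which the base interprets as forward speed and turn rate. The node name, the `send()` helper, and the clamp limits below are illustrative assumptions:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class PidToCmdVel(Node):
    """Turn PID outputs into velocity commands for a differential-drive base."""

    def __init__(self):
        super().__init__('pid_to_cmd_vel')
        # ROS2 bases (e.g. under Nav2) subscribe to Twist on /cmd_vel.
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def send(self, u_linear: float, u_angular: float) -> None:
        # u_linear: PID output on the distance error -> linear.x (m/s)
        # u_angular: PID output on the heading error -> angular.z (rad/s)
        msg = Twist()
        msg.linear.x = max(min(u_linear, 0.5), -0.5)   # clamp to safe limits
        msg.angular.z = max(min(u_angular, 1.0), -1.0)
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = PidToCmdVel()
    node.send(0.3, 0.1)  # e.g. 0.3 m/s forward while turning at 0.1 rad/s
    node.destroy_node()
    rclpy.shutdown()
```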
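For the page-50 item: one possible way to typeset the discrete PID law with every symbol defined:

```latex
u(t_k) = K_p\, e(t_k)
       + K_i \sum_{j=0}^{k} e(t_j)\, \Delta t
       + K_d\, \frac{e(t_k) - e(t_{k-1})}{\Delta t}
```

Here $e(t_k)$ is the tracking error at the $k$-th sample, $K_p$, $K_i$, $K_d$ are the proportional, integral, and derivative gains, and $\Delta t$ (the "dt") is the (assumed constant) sampling period between control updates.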
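For the 4.4 item: one way to produce the requested figures is to log the target's (x, y) positions and the predictor's outputs, then overlay them. The CSV file names below are hypothetical placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical logs: N x 2 arrays of (x, y) positions in the map frame.
actual = np.loadtxt('target_trajectory.csv', delimiter=',')
predicted = np.loadtxt('predicted_trajectory.csv', delimiter=',')

plt.figure(figsize=(6, 6))
plt.plot(actual[:, 0], actual[:, 1], 'b-', label='target trajectory')
plt.plot(predicted[:, 0], predicted[:, 1], 'r--', label='predicted trajectory')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.axis('equal')
plt.legend()
plt.grid(True)
plt.savefig('trajectory_comparison.png', dpi=200)
```

Repeating the plot for each test run gives the several different scenarios the item asks for.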
### Papers the literature review could draw on

**Current ROS2 robotics research**

1. [A Development of Mobile Robot Based on ROS2 for Navigation Application](https://ieeexplore.ieee.org/document/9593984)
2. [The Marathon 2: A Navigation System](https://ieeexplore.ieee.org/document/9341207)
3. [Development of Following Vehicle Prototype using Robot Operating System](https://ieeexplore.ieee.org/document/9117418)
4. [Implementation of a Person Following Robot in ROS-gazebo platform](https://ieeexplore.ieee.org/document/9726010)
5. [Open-Source Tools for Efficient ROS and ROS2-based 2D Human-Robot Interface Development](https://ieeexplore.ieee.org/document/9568801)

**YOLO-based target following**

1. [20] [Vehicle Detection and Tracking using YOLO and DeepSORT](https://ieeexplore.ieee.org/abstract/document/9431784)
   > M. A. Bin Zuraimi and F. H. Kamaru Zaman, "Vehicle Detection and Tracking using YOLO and DeepSORT," 2021 IEEE 11th IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 2021, pp. 23-29, doi: 10.1109/ISCAIE51753.2021.9431784.
2. [21] [Pedestrian Target Tracking Based On DeepSORT With YOLOv5](https://ieeexplore.ieee.org/document/9692002)
   > Y. Gai, W. He and Z. Zhou, "Pedestrian Target Tracking Based On DeepSORT With YOLOv5," 2021 2nd International Conference on Computer Engineering and Intelligent Control (ICCEIC), Chongqing, China, 2021, pp. 1-5, doi: 10.1109/ICCEIC54227.2021.00008.
3. [22] [Simple Online Real-Time Tracking Algorithm with Improved YOLOV4 as Extractor](https://ieeexplore.ieee.org/document/9686388)
   > F. Li, X. Deng, F. Shi, X. Zhou, K. Xia and G. Hu, "Simple Online Real-Time Tracking Algorithm with Improved YOLOV4 as Extractor," 2021 International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB), Yilan County, Taiwan, 2021, pp. 266-270, doi: 10.1109/ICEIB53692.2021.9686388.

**Current image recognition research**

1. [14] [Deep learning for smart manufacturing: Methods and applications](https://www.sciencedirect.com/science/article/pii/S0278612518300037?via%3Dihub)
   > J. Wang, Y. Ma, L. Zhang, R. X. Gao, and D. Wu, "Deep learning for smart manufacturing: Methods and applications," Journal of Manufacturing Systems, vol. 48, no. C, pp. 144-156, 2018. [Online]. Available: https://doi.org/10.1016/j.jmsy.2018.01.003
2. [15] [Deep learning in video multi-object tracking: A survey](https://www.sciencedirect.com/science/article/pii/S0925231219315966?via%3Dihub)
   > G. Ciaparrone, F. L. Sánchez, S. Tabik, L. Troiano, R. Tagliaferri, and F. Herrera, "Deep learning in video multi-object tracking: A survey," Neurocomputing, vol. 381, pp. 61-88, 2020. [Online]. Available: https://doi.org/10.1016/j.neucom.2019.11.023
3. [16] [Infrared Image Recognition Technology Based on Visual Processing and Deep Learning](https://ieeexplore.ieee.org/document/9327574)
   > H. Feng, H. Xuran, L. Bin, W. Haipeng and Z. Decai, "Infrared Image Recognition Technology Based on Visual Processing and Deep Learning," 2020 Chinese Automation Congress (CAC), Shanghai, China, 2020, pp. 641-645, doi: 10.1109/CAC51589.2020.9327574.
4. [17] [Recognition of Objects in the Urban Environment using R-CNN and YOLO Deep Learning Algorithms](https://ieeexplore.ieee.org/document/9134080)
   > R. Sarić, M. Ulbricht, M. Krstić, J. Kevrić and D. Jokić, "Recognition of Objects in the Urban Environment using R-CNN and YOLO Deep Learning Algorithms," 2020 9th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 2020, pp. 1-4, doi: 10.1109/MECO49872.2020.9134080.
5. [18] [Library Attendance System using YOLOv5 Faces Recognition](https://ieeexplore.ieee.org/document/9650628)
   > Mardiana, M. A. Muhammad and Y. Mulyani, "Library Attendance System using YOLOv5 Faces Recognition," 2021 International Conference on Converging Technology in Electrical and Information Engineering (ICCTEIE), Bandar Lampung, Indonesia, 2021, pp. 68-72, doi: 10.1109/ICCTEIE54047.2021.9650628.
6. [19] [Garbage Classification System with YOLOV5 Based on Image Recognition](https://ieeexplore.ieee.org/document/9688725)
   > G. Yang et al., "Garbage Classification System with YOLOV5 Based on Image Recognition," 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 2021, pp. 11-18, doi: 10.1109/ICSIP52628.2021.9688725.