# Notes on collecting and processing indoor Lidar data

## Data collection

1. Sensor: Velodyne VLP-16 Lidar.
2. Data is collected by teleoperating/patrolling the robot around the indoor environment.
3. Lidar-based SLAM: LeGO-LOAM<sup>[1]</sup> is used to obtain the pose of the robot with respect to the environment.

### Steps:

1. Launch the lidar driver: `roslaunch velodyne_pointcloud VLP16_points.launch`
2. Launch SLAM: `roslaunch lego_loam run.launch`
3. Record the data with rosbag (see the example command after this list):
   - `velodyne_points`
   - `tf` and `tf_static`
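A minimal recording command, assuming the default topic names and an output bag named `recorded.bag`:

```bash
rosbag record /velodyne_points /tf /tf_static -O recorded.bag
```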
## Exporting pointclouds in KITTI format

1. Point clouds need to be exported in KITTI format, which is just a raw binary file per scan: an N×4 float32 array of (x, y, z, intensity) values.
2. The pose of each point cloud is extracted from the recorded tf messages and saved to `poses.txt` in KITTI format (one line per scan: the 12 row-major values of the 3×4 `[R|t]` matrix).
3. The first frame (or first few frames) might be dropped because tf cannot find the pose yet: with too little data in the tf buffer it cannot interpolate.

### Steps:

1. Play the rosbag with the simulated clock: `rosbag play --clock recorded.bag` (set `use_sim_time` to true first, e.g. `rosparam set use_sim_time true`, so that nodes follow the bag's clock).
2. Run `pointcloud_to_kitti.py`: `python3 pointcloud_to_kitti.py`, or `rosrun package_name pointcloud_to_kitti.py`, depending on where the file is placed. A sketch of what this node looks like follows below.
3. Terminate `pointcloud_to_kitti.py` with `CTRL-C` once `rosbag play` has finished.
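The actual `pointcloud_to_kitti.py` is not reproduced here; the following is only a minimal sketch of such a node. The frame names (`map`, `velodyne`), output paths, and tf timeout are assumptions to be adapted:

```python
#!/usr/bin/env python3
# Hypothetical sketch of pointcloud_to_kitti.py: saves each scan as a KITTI-style
# .bin file (N x 4 float32: x, y, z, intensity) and appends the matching pose
# (12 row-major values of the 3x4 [R|t] matrix) to poses.txt.
import os
import numpy as np
import rospy
import tf2_ros
import tf.transformations as tft
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

OUT_DIR = "velodyne"      # assumption: output folder for .bin scans
POSE_FILE = "poses.txt"
MAP_FRAME = "map"         # assumption: the SLAM fixed frame
LIDAR_FRAME = "velodyne"  # assumption: the sensor frame

os.makedirs(OUT_DIR, exist_ok=True)
rospy.init_node("pointcloud_to_kitti")
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)  # keep a reference alive
frame_idx = 0

def callback(msg):
    global frame_idx
    try:
        # Pose of the lidar in the map frame at the scan's timestamp.
        t = tf_buffer.lookup_transform(MAP_FRAME, LIDAR_FRAME,
                                       msg.header.stamp, rospy.Duration(0.2))
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        rospy.logwarn("tf not ready, skipping frame")  # first frame(s) drop here
        return
    pts = np.array(list(pc2.read_points(
        msg, field_names=("x", "y", "z", "intensity"), skip_nans=True)),
        dtype=np.float32)
    pts.tofile(os.path.join(OUT_DIR, "%06d.bin" % frame_idx))
    # Convert the tf transform to a 4x4 matrix and write its top 3 rows.
    q = t.transform.rotation
    T = tft.quaternion_matrix([q.x, q.y, q.z, q.w])
    T[:3, 3] = [t.transform.translation.x, t.transform.translation.y,
                t.transform.translation.z]
    with open(POSE_FILE, "a") as f:
        f.write(" ".join("%.6e" % v for v in T[:3, :].flatten()) + "\n")
    frame_idx += 1

rospy.Subscriber("/velodyne_points", PointCloud2, callback, queue_size=10)
rospy.spin()
```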
## Fine-tuning the poses obtained from Lidar SLAM

1. Poses from a real-time SLAM algorithm like LeGO-LOAM might not be accurate.
2. Accurate poses are needed to simplify the labelling process: lidar frames can be combined and labelled together if the poses are accurate enough.
3. Fine-tuning is done with an ICP-based scan-matching method (a sketch of the idea follows this list).

Code: [Colab notebook](https://colab.research.google.com/drive/10UMR1C7sOJNiSWTlbg5l66gDbYKH41c2)

Replace the old poses with the new poses obtained from the fine-tuning process.
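The linked notebook contains the actual refinement code; the sketch below only illustrates the general idea using Open3D's point-to-point ICP, registering each scan against the previous one with the SLAM relative motion as the initial guess. The voxel size, ICP threshold, and file layout are assumptions:

```python
# Sketch of ICP-based pose refinement (the linked Colab notebook is authoritative).
import glob
import numpy as np
import open3d as o3d

def load_scan(path):
    """Load a KITTI-style .bin scan as a downsampled Open3D point cloud."""
    pts = np.fromfile(path, dtype=np.float32).reshape(-1, 4)[:, :3]
    pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts.astype(np.float64)))
    return pc.voxel_down_sample(voxel_size=0.1)  # assumption: 10 cm voxels

def load_poses(path):
    """Read KITTI poses.txt (12 values per line) into 4x4 matrices."""
    rows = np.loadtxt(path).reshape(-1, 3, 4)
    poses = np.tile(np.eye(4), (len(rows), 1, 1))
    poses[:, :3, :] = rows
    return poses

scans = sorted(glob.glob("velodyne/*.bin"))
poses = load_poses("poses.txt")
refined = [poses[0]]
for i in range(1, len(scans)):
    src, tgt = load_scan(scans[i]), load_scan(scans[i - 1])
    # The SLAM relative motion between consecutive scans seeds the ICP.
    init = np.linalg.inv(poses[i - 1]) @ poses[i]
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=0.5, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # Accumulate the corrected relative transform onto the previous refined pose.
    refined.append(refined[-1] @ reg.transformation)

with open("poses_refined.txt", "w") as f:
    for T in refined:
        f.write(" ".join("%.6e" % v for v in T[:3, :].flatten()) + "\n")
```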
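To judge whether the refined poses are good enough for joint labelling, it can help to merge a handful of scans into a common frame and inspect the result visually. A small sketch (file names as above are assumptions):

```python
# Merge the first few scans with their refined poses to eyeball the alignment.
import glob
import numpy as np
import open3d as o3d

poses = np.loadtxt("poses_refined.txt").reshape(-1, 3, 4)
merged = o3d.geometry.PointCloud()
for i, path in enumerate(sorted(glob.glob("velodyne/*.bin"))[:20]):  # first 20 scans
    pts = np.fromfile(path, dtype=np.float32).reshape(-1, 4)[:, :3].astype(np.float64)
    T = np.vstack([poses[i], [0, 0, 0, 1]])  # promote 3x4 pose to 4x4
    pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
    merged += pc.transform(T)  # transform() is in-place and returns the cloud
o3d.visualization.draw_geometries([merged])
```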
## Labelling

1. Label using the Point Cloud Labeling Tool (the tool used to label the SemanticKITTI dataset<sup>[2]</sup>): [https://github.com/jbehley/point_labeler](https://github.com/jbehley/point_labeler)

---

## References:

1. [T. Shan and B. Englot, "LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4758-4765, doi: 10.1109/IROS.2018.8594299.](https://ieeexplore.ieee.org/document/8594299)
2. [J. Behley et al., "SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences," Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.](https://openaccess.thecvf.com/content_ICCV_2019/html/Behley_SemanticKITTI_A_Dataset_for_Semantic_Scene_Understanding_of_LiDAR_Sequences_ICCV_2019_paper.html)