Sensor: VLP16 Lidar
Data is collected by teleoperating the robot on patrols around the indoor environment.
Lidar-based SLAM method: LeGO-LOAM[1] is used to obtain the pose of the robot with respect to the environment.
roslaunch velodyne_pointcloud VLP16_points.launch
roslaunch lego_loam run.launch
Point clouds need to be extracted in KITTI format (each scan is just a binary file holding a float32 NumPy array of x, y, z, intensity rows).
The pose for each point cloud is extracted from the recorded tf and saved to poses.txt in KITTI format (each line is the first three rows of the 4x4 pose matrix, flattened row-major).
The first frame (or first few frames) may be skipped: early on the tf buffer holds too little data, so tf cannot interpolate and the pose lookup fails with an extrapolation error. A sketch of such a script appears after the steps below.
Play the rosbag with the simulated clock:
rosbag play --clock recorded.bag
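When replaying with --clock, nodes should run on simulated time; if the tf lookup fails for every frame, setting use_sim_time before starting the node is the standard ROS fix (not stated in the original notes):
rosparam set use_sim_time true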
Run pointcloud_to_kitti.py:
python3 pointcloud_to_kitti.py
or
rosrun package_name pointcloud_to_kitti
depending on where the script is placed (standalone file vs. installed in a ROS package).
Terminate pointcloud_to_kitti.py with CTRL-C once rosbag play has ended.
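The exact contents of pointcloud_to_kitti.py are not reproduced here; the following is a minimal sketch of what such a script might look like. The topic name, frame names, and output layout are assumptions (LeGO-LOAM's actual frame names depend on its configuration):

#!/usr/bin/env python3
# Minimal sketch of pointcloud_to_kitti.py (topic/frame names are assumptions).
import os
import numpy as np
import rospy
import tf2_ros
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2
from tf.transformations import quaternion_matrix

OUT_DIR = "kitti_dataset"   # output: OUT_DIR/velodyne/000000.bin ... and OUT_DIR/poses.txt
MAP_FRAME = "map"           # world frame published by the SLAM node (assumption)
LIDAR_FRAME = "velodyne"    # sensor frame of the VLP16 driver (assumption)

def main():
    rospy.init_node("pointcloud_to_kitti")
    os.makedirs(os.path.join(OUT_DIR, "velodyne"), exist_ok=True)
    buf = tf2_ros.Buffer(rospy.Duration(30.0))
    tf2_ros.TransformListener(buf)
    poses = open(os.path.join(OUT_DIR, "poses.txt"), "w")
    idx = [0]  # mutable frame counter shared with the callback

    def callback(msg):
        try:
            # Pose of the lidar in the map frame at the scan's timestamp.
            t = buf.lookup_transform(MAP_FRAME, LIDAR_FRAME,
                                     msg.header.stamp, rospy.Duration(0.2))
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            rospy.logwarn("tf lookup failed, skipping frame")  # first frames land here
            return
        # KITTI scan format: flat float32 array of (x, y, z, intensity) rows.
        pts = np.array(list(pc2.read_points(
            msg, field_names=("x", "y", "z", "intensity"), skip_nans=True)),
            dtype=np.float32)
        pts.tofile(os.path.join(OUT_DIR, "velodyne", "%06d.bin" % idx[0]))
        # KITTI pose format: first 3 rows of the 4x4 pose, flattened row-major.
        q = t.transform.rotation
        T = quaternion_matrix([q.x, q.y, q.z, q.w])
        T[:3, 3] = (t.transform.translation.x,
                    t.transform.translation.y,
                    t.transform.translation.z)
        poses.write(" ".join("%.6e" % v for v in T[:3, :].flatten()) + "\n")
        poses.flush()
        idx[0] += 1

    rospy.Subscriber("/velodyne_points", PointCloud2, callback, queue_size=10)
    rospy.spin()
    poses.close()

if __name__ == "__main__":
    main()

Returning early on ExtrapolationException is what silently drops the first scan(s) mentioned above, which keeps the .bin files and poses.txt lines aligned.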
Poses from a real-time SLAM algorithm like LeGO-LOAM might not be accurate.
Accurate poses are needed to simplify the labelling process: lidar frames can be combined and labelled together only if the poses are accurate enough.
Fine-tuning is done with an ICP-based scan-matching method.
Code: Colab notebook
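The notebook itself is not inlined here; below is a rough sketch of the idea using Open3D's point-to-plane ICP (the library choice, voxel size, and correspondence distance are assumptions, not necessarily what the notebook uses). Each SLAM relative pose serves as the initial guess for scan matching against the previous frame, and the refined relative poses are chained into a new trajectory:

import numpy as np
import open3d as o3d

def load_scan(path):
    # KITTI .bin -> Open3D point cloud (intensity is dropped for ICP).
    pts = np.fromfile(path, dtype=np.float32).reshape(-1, 4)[:, :3]
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(pts.astype(np.float64))
    return pc.voxel_down_sample(voxel_size=0.1)

def load_poses(path):
    # Each line: 12 values = first 3 rows of the 4x4 pose, row-major.
    rows = np.loadtxt(path).reshape(-1, 3, 4)
    poses = np.tile(np.eye(4), (len(rows), 1, 1))
    poses[:, :3, :] = rows
    return poses

poses = load_poses("poses.txt")
refined = [poses[0]]
for i in range(1, len(poses)):
    src = load_scan("velodyne/%06d.bin" % i)        # current scan
    tgt = load_scan("velodyne/%06d.bin" % (i - 1))  # previous scan
    tgt.estimate_normals()                          # point-to-plane needs target normals
    init = np.linalg.inv(poses[i - 1]) @ poses[i]   # SLAM relative pose as initial guess
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, 0.5, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    refined.append(refined[-1] @ reg.transformation)

# Write the refined trajectory in the same KITTI layout.
np.savetxt("poses_refined.txt",
           np.stack(refined)[:, :3, :].reshape(-1, 12), fmt="%.6e")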
Replace the old poses with the new poses obtained from the fine-tuning process.