# KPRNet: Improving projection-based LiDAR semantic segmentation
**Group 26**
- Ayush Kulshreshtha (5695821), A.Kulshreshtha-1@student.tudelft.nl [responsible for the new algorithm variant]
- Tejus Kusur (5779197), t.v.kusur@student.tudelft.nl [responsible for hyperparameter tweaking]
- Janvi Seth (4645987), J.Seth@student.tudelft.nl [responsible for code reproduction]
In this blog post, we present our attempt at reproducing the results presented in "*KPRNet: Improving projection-based LiDAR semantic segmentation*" by Deyvid Kochanov et al.[^1] We also present the results of changing the hyperparameters and the network architecture.
This work was done as part of the final project for the course CS4240 Deep Learning (2022-23 Q3) at TU Delft.
## Table of Contents
1. [Introduction](#Introduction)
2. [Motivation](#Motivation)
3. [Model](#Model)
4. [Results](#Results)
5. [Challenges](#Challenges)
6. [References](#References)
1. ## Introduction
Semantic segmentation is an image segmentation method used in computer vision in which every pixel of an image, or every point of a point cloud, is assigned a class label such as traffic light, car, or bicycle. This is particularly useful for self-driving cars, where it is essential to distinguish drivable from non-drivable surfaces and to detect obstacles.
Semantic segmentation differs from object detection, where entire objects are detected and enclosed in a bounding box. Typical LiDARs are inferior to cameras for object classification since they cannot capture the fine textures of objects and their points become sparse for distant objects[^2], necessitating computationally expensive semantic segmentation.
### The SemanticKITTI dataset
The SemanticKITTI dataset[^3] revolutionised LiDAR segmentation by providing an unprecedented number of point-wise annotated 3D point cloud scans with 28 classes, extending the KITTI Vision Odometry dataset to a complete 360° field-of-view.

*Figure 1: A single sequence from the SemanticKITTI dataset with their corresponding labels*
2. ## Motivation
Current LiDAR-based semantic segmentation methods can roughly be categorized into two approaches:
* Purely point-wise methods acting directly on the 3D point cloud
* Image segmentation CNN architectures designed for RGB images, applied by projecting LiDAR sweeps onto 2D range images, with per-point labels recovered in a post-processing step such as a non-learned CRF or KNN-based voting
This paper combines both approaches: an improved CNN architecture for 2D-projected LiDAR sweeps, and a learnable KPConv-based module that replaces the post-processing step.
3. ## Model
The method proposed in the paper combines a 2D semantic segmentation network with a 3D point-wise layer. The convolutional network takes as input a LiDAR scan projected onto a range image. The resulting 2D CNN features are projected back to their respective 3D points and passed to a 3D point-wise module, which predicts the final labels.
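As an illustration of the projection step, the following is a minimal NumPy sketch of a spherical projection of a LiDAR sweep onto a range image. The image size and field-of-view values are assumptions typical of the HDL-64E sensor used in SemanticKITTI, not values taken from the KPRNet code.

```python
import numpy as np

def project_to_range_image(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (H, W) range image (illustrative sketch)."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                               # horizontal angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))       # vertical angle

    # Normalise angles to [0, 1] and scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down) / fov) * H
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    range_image = np.zeros((H, W), dtype=np.float32)
    range_image[v, u] = depth          # points falling on the same pixel overwrite each other
    return range_image, (v, u)         # the pixel indices let us map 2D features back to points
```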
ResNeXt features with stride 16 are fed into an ASPP module and combined with the outputs of the second and first ResNeXt blocks, which have strides of 8 and 4. The result is passed through a KPConv layer, followed by BatchNorm, ReLU and a final classifier. As part of this project, we also tested ResNet-101 features in place of ResNeXt.

*Figure 2: The KPRNet architecture*
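To make the data flow concrete, below is a simplified PyTorch sketch of how the pieces connect. The `backbone_2d` and `kpconv_layer` arguments are placeholders for the actual ResNeXt/ASPP network and KPConv implementation; this is an illustrative sketch of the structure, not the authors' code.

```python
import torch
import torch.nn as nn

class PointHead(nn.Module):
    """Point-wise head: KPConv -> BatchNorm -> ReLU -> classifier (simplified sketch)."""
    def __init__(self, kpconv_layer, channels, num_classes):
        super().__init__()
        self.kpconv = kpconv_layer                  # placeholder for a real KPConv layer
        self.bn = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, point_features, points):
        x = self.kpconv(point_features, points)     # aggregate features over 3D neighbourhoods
        x = self.relu(self.bn(x.transpose(1, 2)).transpose(1, 2))
        return self.classifier(x)                   # (B, N, num_classes) point-wise logits

def kprnet_forward(backbone_2d, head, range_image, pixel_idx, points):
    """Run the 2D CNN on the range image, gather features back to the points, classify per point."""
    feat_2d = backbone_2d(range_image)              # (B, C, H, W) features from the 2D network
    v, u = pixel_idx                                # per-point pixel coordinates from the projection
    point_feat = feat_2d[:, :, v, u].permute(0, 2, 1)   # (B, N, C) features gathered per 3D point
    return head(point_feat, points)
```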
4. ## Implementation & Results
The code for this project is available at: https://github.com/ayush-13/KPRNET_Reproduction.
The project was run on a Google Cloud Platform VM using an NVIDIA T4 GPU for training and testing.
### Reproduction
The goal was to reproduce the full results of the authors' work on the SemanticKITTI test benchmark. We obtained the following results after submitting to the SemanticKITTI benchmark server:
| Class (IoU, %) | KPRNet (paper) | Reproduced KPRNet (ours) |
| -------------- | -------------- | ------------------------ |
| Car | 95.5 | 95.2 |
| Bicycle | 54.1 | 45.0 |
| Motorcycle | 47.9 | 50.8 |
| Truck | 23.6 | 18.8 |
| Other-vehicle | 42.6 | 36.7 |
| Person | 65.9 | 65.8 |
| Bicyclist | 65.0 | 55.7 |
| Motorcyclist | 16.5 | 13.6 |
| Road | 93.2 | 93.2 |
| Parking | 73.9 | 72.1 |
| Sidewalk | 80.6 | 79.7 |
| Other-ground | 30.2 | 27.8 |
| Building | 91.7 | 92.0 |
| Fence | 68.4 | 68.9 |
| Vegetation | 85.7 | 85.7 |
| Trunk | 69.8 | 69.0 |
| Terrain | 71.2 | 70.7 |
| Pole | 58.7 | 58.3 |
| Traffic-sign | 64.1 | 63.1 |
| **mean-IoU** | **63.1** | **61.2** |
The table shows that our reproduced results are very similar to the values reported in the paper.
**Qualitative analysis**
A qualitative analysis can be made through visualisation of the point clouds, generated using the semantic-kitti-api.

*Figure 3: Unlabelled point cloud visualisation*


*Figure 4: True labels [top] vs Trained Model output [bottom]*


*Figure 5: True labels [top] vs trained model output [bottom]. All visualisations shown here are from Sequence 08 of the SemanticKITTI dataset.*
### New Algorithm Variant
Undertaking the full training on the entire SemanticKITTI train set was beyond the compute capabilities available to us. The authors trained on a cluster of 8 GPUs with 16 GB of memory each, yielding a training time of roughly 12 hours. In order to make a training experimentation study feasible for us, the following changes were made:
1. The 2D semantic segmentation backbone of the network was changed to ResNet-101[^4], pretrained on the Cityscapes dataset[^5] (the motivation for this change came from our meeting with Olaf Booij, who pointed out various possibilities for the 2D backbone). A code sketch of this change is shown below the list.
2. A subset of the data was used for training, with only 2 sequences as opposed to the 10 sequences in the full training dataset.
3. All parallelization related code was removed and the code was made compatible to run on a single GPU.
4. The number of epochs was reduced from 120 to 15.
5. The optimizer was changed from SGD to Adam. The motivation behind this was the limited number of epochs, combined with the fact that Adam tends to converge faster than SGD (see the sketch below the list).
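A hedged sketch of what changes 1 and 5 could look like in code is given below. The checkpoint file name and the `build_kprnet` constructor are illustrative assumptions, not code from the original repository.

```python
import torch
import torchvision

# Change 1: swap the 2D backbone for a ResNet-101. torchvision only ships ImageNet
# weights, so Cityscapes-pretrained weights would be loaded from a separate
# checkpoint (hypothetical file name).
backbone = torchvision.models.resnet101(weights="IMAGENET1K_V1")
cityscapes_state = torch.load("resnet101_cityscapes.pth", map_location="cpu")
backbone.load_state_dict(cityscapes_state, strict=False)

# build_kprnet is a hypothetical constructor for our modified network.
model = build_kprnet(backbone_2d=backbone).cuda()   # change 3: single GPU, no DistributedDataParallel

# Change 5: replace SGD with Adam for the shortened 15-epoch schedule.
optimizer = torch.optim.Adam(model.parameters(), lr=10e-6)  # lr varied per experiment (see below)
```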
We acknowledge that many of these changes are detrimental to the performance reported in the paper. However, even with these changes, training took ~15 hours, which is right at the limit of what was feasible for us.
This modified network architecture was analyzed with 2 different hyperparameter settings, as defined below.
### Sensitivity to Hyperparameters
Finally, we decided to tweak and study the effects of hyperparameter change on our modified network structure.
1. Change in batch size: The original paper used a batch size of 3 per GPU across 8 GPUs. We found that on our single GPU, a batch size of 8 or higher ran out of GPU memory, while a batch size of either 2 or 4 resulted in a NaN loss during training in torch (we could not find a reasonable explanation for this relationship between loss and batch size). The first experiment therefore used a batch size of 3 and the second a batch size of 6.
2. Change in learning rate: We tweaked the learning rate to see how it affected training. The first run used a learning rate of 10e-6 and the second a learning rate of 10e-5, with the hypothesis that, given the lower number of epochs, a higher learning rate might lead to better training performance. The two settings are summarised in the sketch below.
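For reference, the two hyperparameter settings compared below can be summarised as follows (the variable names are illustrative; the values are the ones stated above):

```python
# Hyperparameter settings for the two runs (illustrative variable names).
experiment_1 = {"batch_size": 3, "learning_rate": 10e-6}
experiment_2 = {"batch_size": 6, "learning_rate": 10e-5}
```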
**Results**
| Metric        | Experiment 1 | Experiment 2 |
| ------------- | ------------ | ------------ |
| Avg. val. loss | 0.9475      | 0.8561       |
| **mean-IoU**  | **17.24**    | **19.02**    |
The per-class IoU was skewed towards a few classes, which may mean that the network only learnt to detect the features of a few object classes. However, no concrete conclusions can be drawn given the limited scope of our experiment.
5. ## Challenges and Future Scope
The biggest challenge faced during this project was the large size of the SemanticKITTI dataset (80 GB) and the computation needed to train the model. A significant portion of time was spent simply uploading the dataset to Google Cloud, while training took over 15 hours on the NVIDIA T4 GPU, even when only 2 of the 10 sequences in the dataset were used.
We aim to train and test this model on newer datasets such as nuScenes. The main obstacle is that the differences in point cloud labels across the datasets must first be overcome.
6. ## References
[^1]: D. Kochanov, F. K. Nejadasl, and O. Booij, “KPRNet: Improving projection-based LiDAR semantic segmentation,” Jul. 2020, doi: 10.48550/arXiv.2007.12668
[^2]: D. Feng et al., "Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges," in IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 3, pp. 1341-1360, March 2021, doi: 10.1109/TITS.2020.2972974
[^3]: J. Behley et al., “SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2019-October, pp. 9296–9306, Apr. 2019, doi: 10.1109/ICCV.2019.00939.
[^4]: K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[^5]: Y. Zhu, K. Sapra, F. A. Reda, K. J. Shih, S. Newsam, A. Tao, and B. Catanzaro, "Improving Semantic Segmentation via Video Propagation and Label Relaxation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8856-8865.