# SDC Localization Competition
###### tags: `Self-Driving Cars`, `ROS`
The goal of this competition is to develop a localization system for **estimating the pose** of a self-driving car given a point cloud map. You will be graded on the accuracy of your results and your contribution to solving the problem. You can write your code based on the [last assignment](https://hackmd.io/@Biomotion/rJFrBgrSY).
Note: This is **NOT** a team competition; you must complete this competition **ON YOUR OWN**.
## Schedule
### Competition: 11/4 - 11/17
### Report: 11/4 - 11/24
## Data
### [Download All](https://drive.google.com/drive/folders/1amMpofd-ZRvVjTkNUC43C3CUt1d2ECpP?usp=sharing)
We are using rosbags from the following two sources:
1. [ITRI Datasets](#ITRI-Datasets)
2. [nuScenes Datasets](#nuScenes-Datasets)
There will be **3 bags** in total: one from ITRI and two from nuScenes. The detailed sensor setup is described below. The ITRI bag is considered the **EASY** case and the nuScenes bags are considered **ADVANCED** cases.
### ITRI Datasets
- Files:
- sdc_localization_1.bag
- bag time: 20.1 seconds
- number of scans: 201
- Map: [Download](https://drive.google.com/drive/folders/1PrBcHc03Xzunfj-O2Is-t8-lu3BbUCUN?usp=sharing)

- Topics

- Sensors
- LiDAR: velodyne VLP-32C
- IMU: xsens MTi-G-710
```
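# sensor_msgs/Imu covariances, row-major 3x3 matrices (diagonal entries are variances)
# units: rad^2, (rad/s)^2, and (m/s^2)^2 respectively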
orientation_covariance: [0.017453292519943295, 0.0, 0.0, 0.0, 0.017453292519943295, 0.0, 0.0, 0.0, 0.15707963267948966]
angular_velocity_covariance: [0.0004363323129985824, 0.0, 0.0, 0.0, 0.0004363323129985824, 0.0, 0.0, 0.0, 0.0004363323129985824]
linear_acceleration_covariance: [0.0004, 0.0, 0.0, 0.0, 0.0004, 0.0, 0.0, 0.0, 0.0004]
```
- GPS: positional measurement simulated from the vehicle pose (standard deviation of noise: 1 meter)
### nuScenes Datasets
In the nuScenes datasets, **wheel_odometry** is available for fusing. You can consider using this measurement to improve your performance. Due to file size, we provide **a full version and a lite version** of each bag. The lite version contains only LiDAR, wheel odometry, IMU, GPS, and TF data. You can choose which one to use.
- Files:
- sdc_localization_2.bag
- bag time: 19.8 seconds
- number of scans: 396
- sdc_localization_3.bag
- bag time: 19.6 seconds
- number of scans: 389
- Map: [Download](https://drive.google.com/drive/folders/1PrBcHc03Xzunfj-O2Is-t8-lu3BbUCUN?usp=sharing)
The map is also delivered in **TILE format** due to file size. Please refer to [**this**](https://hackmd.io/@Biomotion/SJziDsDIt) article to make the best use of the data.
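A minimal sketch of loading and merging the tiles around your current pose estimate with PCL. The tile size and the `<x>_<y>.pcd` naming used here are illustrative assumptions; follow the convention from the article above.
```cpp
#include <cmath>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Load the 3x3 block of map tiles around (x, y) and merge them into one
// cloud. Tile size and file naming are ASSUMPTIONS for illustration only.
pcl::PointCloud<pcl::PointXYZ>::Ptr loadNearbyTiles(
    const std::string& map_dir, double x, double y, double tile_size = 100.0) {
  pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
  for (int dx = -1; dx <= 1; ++dx) {
    for (int dy = -1; dy <= 1; ++dy) {
      int tx = static_cast<int>(std::floor(x / tile_size)) + dx;
      int ty = static_cast<int>(std::floor(y / tile_size)) + dy;
      // Hypothetical naming: <map_dir>/<tile_x>_<tile_y>.pcd
      std::string file = map_dir + "/" + std::to_string(tx) + "_" +
                         std::to_string(ty) + ".pcd";
      pcl::PointCloud<pcl::PointXYZ> tile;
      if (pcl::io::loadPCDFile(file, tile) == 0)  // silently skip missing tiles
        *merged += tile;
    }
  }
  return merged;
}
```
Loading only the tiles near the vehicle keeps memory bounded compared to loading the whole map at once.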

- Topics

- [Sensors](https://www.nuscenes.org/nuscenes#data-collection)
- IMU
```
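# Row-major 3x3 covariances from sensor_msgs/Imu.
# Note: an all-zero angular_velocity_covariance means "covariance unknown"
# by the sensor_msgs/Imu convention, not a noise-free gyro.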
orientation_covariance: [0.000174532925, 0.0, 0.0, 0.0, 0.000174532925, 0.0, 0.0, 0.0, 0.000698131701]
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration_covariance: [0.0025, 0.0, 0.0, 0.0, 0.0025, 0.0, 0.0, 0.0, 0.025]
```
- Other Sensors
- Camera: ([sensor_msgs/CompressedImage](http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CompressedImage.html))
- /image_back/compressed
- /image_back_left/compressed
- /image_back_right/compressed
- /image_front/compressed
- /image_front_left/compressed
- /image_front_right/compressed
- Radar(RAW): ([conti_radar/Measurement](https://github.com/DengYu1203/conti_radar))
- /radar_back_left
- /radar_back_right
- /radar_front
- /radar_front_left
- /radar_front_right
- Radar(point cloud): ([sensor_msgs/PointCloud2](http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/PointCloud2.html))
- /nuscenes_radar_back_left
- /nuscenes_radar_back_right
- /nuscenes_radar_front
- /nuscenes_radar_front_left
- /nuscenes_radar_front_right
## Result
You are expected to estimate the **vehicle pose** at the time of **EVERY** LiDAR scan in the following format. For each pose, save both the **position** and the **orientation**. Positions are **in meters** and orientations are **in radians**. The yaw, pitch, and roll values are the Euler angles between the map and vehicle frames.
```
id,x,y,z,yaw,pitch,roll
```
Save your result as a csv file. It should look like the following:
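For example, with purely illustrative values (check the Kaggle page for the expected `id` values):
```
id,x,y,z,yaw,pitch,roll
1,12.35,-7.81,0.42,1.5708,0.0021,-0.0043
2,12.58,-7.65,0.43,1.5731,0.0018,-0.0040
```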

Note: All results represent the **vehicle pose in the map frame**, i.e. the relative pose between **base_link/car** and **map**. Remember to transform your pose or LiDAR points into the vehicle frame.
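A minimal sketch of turning a 4×4 map→base_link estimate into one result row, assuming tf2's roll-pitch-yaw Euler convention matches the grader's; verify against the easy bag first.
```cpp
#include <cstdio>
#include <Eigen/Dense>
#include <tf2/LinearMath/Matrix3x3.h>

// Write one CSV row (id,x,y,z,yaw,pitch,roll) from a map->base_link pose.
// The Euler convention used by the grader is an ASSUMPTION here.
void writePoseRow(std::FILE* fout, int id, const Eigen::Matrix4f& T) {
  tf2::Matrix3x3 rot(T(0, 0), T(0, 1), T(0, 2),
                     T(1, 0), T(1, 1), T(1, 2),
                     T(2, 0), T(2, 1), T(2, 2));
  double roll, pitch, yaw;
  rot.getRPY(roll, pitch, yaw);  // radians
  std::fprintf(fout, "%d,%f,%f,%f,%f,%f,%f\n",
               id, T(0, 3), T(1, 3), T(2, 3), yaw, pitch, roll);
}
```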
## Evaluation
To compare your results with the ground truth, we adopt the Root-Mean-Square Error (RMSE) metric (see the formula at the end of this section). Submit the result from each bag to the corresponding Kaggle competition:
- [2021 SDC Localization Competition I](https://www.kaggle.com/c/2021-sdc-localization-competition-i)
- [2021 SDC Localization Competition II](https://www.kaggle.com/c/2021-sdc-localization-competition-ii)
- [2021 SDC Localization Competition III](https://www.kaggle.com/c/2021-sdc-localization-competition-iii)
Note: For competitions II & III, the RMSE of the Z-axis will not be evaluated, since there is no ground truth for Z in the nuScenes datasets. **Please set the z values in your csv file to 0 for competitions II & III.** Otherwise, you will get a large error.
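In standard form (the exact averaging used by the Kaggle grader is an assumption here), with $\hat{\mathbf{p}}_i$ your $i$-th estimated position and $\mathbf{p}_i$ the ground truth over $N$ scans:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert\hat{\mathbf{p}}_i-\mathbf{p}_i\rVert^2}$$

For competitions II & III the z component drops out of the norm, which is why a nonzero z in your file only adds error.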
## Submission & Grading
- [Competition Ranking](#Competition-Ranking): 60%
- [Presentation and Report](#Presentation-and-Report): 40%
### Competition Ranking
- Please upload your localization result to the kaggle competition websites.
- Maximum daily submissions: **20**
- The result CSV should contain coordinate data for **EVERY** LiDAR timestamp.
- For each localization competition, you will get your score based on your ranking.
| RANK | SCORE |
| -------- | -------- |
| 1 | 20 |
| 2 | 18 |
| 3 | 16 |
| Top 20% | 14 |
| Top 40% | 12 |
| Others above baseline | 10 |
| Below/equal-to baseline | 0 |
- Your total competition score is the sum of the three competition results.
- **Upload your code and the best submission to E3**. We will check if the code can be compiled and executed.
- Please pack all nodes into a **roslaunch** file to execute your program (a minimal sketch is given at the end of this section).
- Playing the rosbag does not need to be included in the roslaunch file.
- You can use different roslaunch files for each competition if needed.
- You may add a **README** if your code needs extra libraries. Just make sure the TAs can compile it properly.
- Name your best submission files from the 3 localization competitions `submit_1.csv`, `submit_2.csv`, and `submit_3.csv`.
- Naming Rule: `<student_id>_localization_results.zip`
- **Upload Deadline: 11/17 23:59**
- **No late submissions will be accepted. Remember to upload your code before midnight.**
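A minimal roslaunch sketch; the package, node, and parameter names are placeholders, not the baseline's actual names:
```xml
<launch>
  <!-- Hypothetical names: substitute your own package and node. -->
  <arg name="map_dir" default="$(find my_localizer)/map"/>
  <node pkg="my_localizer" type="icp_localizer_node" name="icp_localizer" output="screen">
    <param name="map_dir" value="$(arg map_dir)"/>
    <param name="result_path" value="$(find my_localizer)/results/submit_1.csv"/>
  </node>
</launch>
```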
### Presentation and Report
- Required elements:
- Pipeline: how your program works
- Contribution: the difference between yours and others
- Problems and Solutions
- Other Discussion or Findings ... etc.
- Naming Rule: `<student_id>_localization_report.zip`
The contribution part depends on the ideas you implement or on how you solved the issues you met. If you simply use someone else's method **without your own ideas**, you will get ZERO points for the contribution part.
- **Report Deadline: 11/24 12:00** (note that it does not end at 22:00 as usual)
## Baseline Code
For those who could not complete Assignment 4, we provide an example implementation of ICP localization. You can start from this package and add your own ideas. **Please note that you will get zero points on the report if your work makes no difference from this package.** Also, we do not guarantee any level of performance from this package; it only shows one way to read the data and save your results.
Here is the link to our repo:
[https://github.com/biomotion/baseline_localizer](https://github.com/biomotion/baseline_localizer)
## Hints
- If you want better performance from ICP, a better initial guess helps a lot. You can use the [IMU](http://www.starlino.com/dcm_tutorial.html) to compute a local path. For more information, see chapters 6.1 and 6.2 of [this paper](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-696.pdf) for implementation details.
- You are not restricted to ICP for scan matching; you may adopt other methods if you expect better performance. A combined sketch of these hints follows below.
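As a sketch of both hints combined: feed PCL's ICP an initial guess obtained by rotating the previous pose with the yaw increment integrated from the IMU gyro. Gyro-bias handling, full quaternion integration, and the parameter values are simplifying assumptions here.
```cpp
#include <Eigen/Dense>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Refine an IMU-predicted pose with ICP against the map.
Eigen::Matrix4f alignWithGuess(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& map,
    const Eigen::Matrix4f& prev_pose,
    double yaw_rate,   // from sensor_msgs/Imu angular_velocity.z
    double dt) {       // time since the previous scan
  // Predict: apply the integrated yaw increment to the previous pose.
  Eigen::Matrix4f guess = prev_pose;
  Eigen::AngleAxisf dyaw(static_cast<float>(yaw_rate * dt), Eigen::Vector3f::UnitZ());
  guess.block<3, 3>(0, 0) = prev_pose.block<3, 3>(0, 0) * dyaw.toRotationMatrix();

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(scan);
  icp.setInputTarget(map);
  icp.setMaximumIterations(50);           // illustrative values; tune per bag
  icp.setMaxCorrespondenceDistance(1.0);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, guess);  // ICP starts from the IMU-predicted guess
  return icp.getFinalTransformation();
}
```
Swapping `pcl::IterativeClosestPoint` for `pcl::NormalDistributionsTransform` is one drop-in alternative if plain ICP underperforms.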