#### SDC Assignment 6 - 2022/11/24
## 12/8 Update: just upload the final report!
# Motion Prediction
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>About Trajectory Prediction</li>
<li>Getting Started</li>
<li>Part 1: Getting Familiar with the Motion Dataset</li>
<li>Part 2: Implementing a Motion Prediction Model</li>
<li>Part 3 Bonus: Validation on Kung Fu Road Data</li>
<li>Part 4: Report</li>
<li>FAQ</li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About Motion Prediction
Introduction
> From earlier assignments and competitions, the term "prediction" should already be familiar: the predict step of a Kalman Filter forecasts the ego vehicle's pose at the next time step, and fusing measurements makes the pose estimate more accurate. Beyond estimating its own pose, however, a self-driving car must also estimate the poses of the moving objects around it, which is critical for control and safety. Estimating the past poses of surrounding objects is called tracking; forecasting the future paths of surrounding moving objects is called trajectory prediction. These two topics are the focus of the upcoming assignments and competition.

Trajectory Prediction
> Prediction involves many possibilities and uncertainties, and a handful of simple hand-written rules is not enough to handle complex traffic scenes. By collecting large amounts of data for analysis and learning, however, prediction becomes tractable.

Goals of this assignment
* Get familiar with the motion dataset -> visualize the target agent's history trajectory and lane lines with Matplotlib.
* Implement a motion prediction model -> observe 5 seconds of history and predict the trajectory for the next 6 seconds.
* (Bonus) Validate on the Kung Fu Road data
<!-- GETTING STARTED -->
## Getting Started
_Programming Language: Python, PyTorch_
1. Download [motion-prediction repository](https://drive.google.com/file/d/1DRNjpKYVWDznyNZYhU271EOvaMzSY7OO/view?usp=share_link)
2. Enter the **container** (change `~/motion_prediction` to wherever you put the `motion_prediction` folder)
```bash
xhost +local:
docker run \
-it \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--rm \
--name prediction \
--user root \
-e GRANT_SUDO=yes \
-v ~/motion_prediction:/root/motion_prediction \
softmac/sdc-course-docker:prediction \
bash
```
<!-- Dataset -->
## Dataset
[1] The Argoverse 2 Dataset validation set provides 25K scenarios. Agent trajectories are sampled at 10Hz, with seconds (0, 5] used as observation and (5, 11] as the prediction horizon. Each scenario carries its own local map region. [more details](https://www.argoverse.org/av2.html)
- [Download Argoverse 2 Dataset - Validation](https://drive.google.com/file/d/1MkTVnHw8Xt9h3sTtjjOedSQl5cFt4awY/view?usp=share_link)
- For further reading, see the [Argoverse 2 API](https://github.com/argoai/av2-api)
[2] Kung Fu Road data for the bonus part:
- [Download kung-fu-motion-dataset](https://drive.google.com/file/d/1zPj2_MnP62KJTHCKhwckFGSWrq4eWfsg/view?usp=share_link)
Download both before starting!
```
motion_prediction/
├── data/
│   ├── argo-motion-dataset/
│   └── kung-fu-motion-dataset/
├── models/
├── weights/
├── config.yaml
├── ...
└── utils.py
```
<!-- Part 1 -->
## Part 1 - Getting Familiar with the Motion Dataset
_Visualize with Matplotlib_
- [ ] History trajectory
- [ ] Future trajectory
- [ ] Lane centerlines
- [ ] (Bonus) Lane boundaries, crosswalks, ...
- [ ] (Bonus) Trajectories of surrounding objects

In ```dataset.py```, complete the ```plt.plot(...)``` calls.
``` py
def plot_scenario(sample):
    ax = fig.add_subplot(111)
    ax.set_facecolor('#2b2b2b')

    ''' History Trajectory '''
    x = sample['x'].reshape(_OBS_STEPS, 6)
    # plt.plot(...)

    ''' Future Trajectory '''
    y = sample['y'].reshape(_PRED_STEPS, 5)
    # plt.plot(...)

    ''' Lane Centerline '''
    lane = sample['lane_graph'].reshape(-1, 10, 2)
    # plt.plot(...)

    plt.axis('equal')
    plt.xlim((-30, 30))
    plt.ylim((-30, 30))
    plt.show()
    check_interrupt()
```
``` bash
python dataset.py
```

<!-- Part 2 -->
## Part 2 - Implementing a Motion Prediction Model
You may design the model however you like; simple constant-velocity or constant-acceleration models are also acceptable.
The only requirement: from **5 seconds of observation**, the model must predict the trajectory for the **next 6 seconds**.
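A constant-velocity model already meets that requirement. Below is a minimal sketch, assuming the history rows are ordered (x, y, vx, vy, yaw, object_type) as in the baseline's comments:

```python
import numpy as np

def constant_velocity_predict(history, pred_steps=60, dt=0.1):
    """history: (50, 6) rows of (x, y, vx, vy, yaw, object_type) at 10 Hz.
    Extrapolates the last observed position with the last observed velocity."""
    x, y, vx, vy = history[-1, :4]
    t = dt * np.arange(1, pred_steps + 1)          # 0.1 s ... 6.0 s
    return np.stack([x + vx * t, y + vy * t], axis=-1)   # (60, 2)

# toy check: agent at the origin moving +x at 2 m/s
hist = np.zeros((50, 6))
hist[-1, :4] = [0.0, 0.0, 2.0, 0.0]
future = constant_velocity_predict(hist)
print(future[-1])   # after 6 s at 2 m/s: x = 12, y = 0
```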
In ```models/baseline.py``` -> if you start from the Baseline, complete the commented-out parts.
``` py
class Baseline(pl.LightningModule):
    def __init__(self):
        super(Baseline, self).__init__()
        ''' [1] history state (x, y, vx, vy, yaw, object_type) * 5s * 10Hz '''
        # self.history_encoder = MLP(..., 128, 128)
        self.lane_encoder = MapNet(2, 128, 128, 10)
        self.lane_attn = MultiheadAttention(128, 8)
        trajs = []
        confs = []
        ''' we predict 6 different future trajectories to handle different possible cases '''
        for i in range(6):
            ''' [2] future state (x, y, vx, vy, yaw) * 6s * 10Hz '''
            trajs.append(
                # MLP(128, 256, ...)
            )
            ''' we use the model to predict the confidence score of each prediction '''
            confs.append(
                nn.Sequential(
                    MLP(128, 64, 1),
                    nn.Sigmoid()
                )
            )
        self.future_decoder_traj = nn.ModuleList(trajs)
        self.future_decoder_conf = nn.ModuleList(confs)

    def forward(self, data):
        ''' [3] In deep learning, data['x'] means input, data['y'] means ground truth '''
        # x = data['x'].reshape(-1, ...)
        x = self.history_encoder(x)
        lane = data['lane_graph']
        lane = self.lane_encoder(lane)
        x = x.unsqueeze(0)
        lane = lane.unsqueeze(0)
        lane_mask = data['lane_mask']
        lane_attn_out = self.lane_attn(x, lane, lane, attn_mask=lane_mask)
        x = x + lane_attn_out
        x = x.squeeze(0)
        trajs = []
        confs = []
        for i in range(6):
            trajs.append(self.future_decoder_traj[i](x))
            confs.append(self.future_decoder_conf[i](x))
        trajs = torch.stack(trajs, 1)
        confs = torch.stack(confs, 1)
        return trajs, confs
```
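For orientation, the two commented-out blanks might be filled roughly as below. This is only a sketch: the `MLP(in_dim, hidden_dim, out_dim)` signature is inferred from the existing `MLP(128, 64, 1)` call, so a minimal stand-in `MLP` is defined here to make the snippet self-contained.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Minimal stand-in, assuming the signature MLP(in_dim, hidden_dim, out_dim)."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# [1] history: 50 steps * 6 features, flattened -> 128-d embedding
history_encoder = MLP(50 * 6, 128, 128)
# [2] future: 128-d embedding -> 60 steps * 5 state features
future_head = MLP(128, 256, 60 * 5)

h = history_encoder(torch.randn(4, 300))    # batch of 4 flattened histories
traj = future_head(h).reshape(4, 60, 5)     # decode to 6 s of future states
print(traj.shape)
```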
- [ ] (Bonus) Take surrounding dynamic objects into account; you can use the data the TAs have already preprocessed, available in ```dataset.py``` as ```sample['neighbor_graph']```.
**Training**
- You can adjust ```max_epochs``` in ```config.yaml```
- At least 50 epochs are needed for decent results
``` bash
python train.py --train
```
(optional) To resume training from the last checkpoint after an interruption:
``` bash
python train.py --train --ckpt epoch=01-train_loss=10.96.ckpt
```
**Inference and Visualize Argo**
``` bash
python train.py --viz --argo --ckpt epoch=01-train_loss=10.96.ckpt
```
<!-- Bonus -->
## Part 3 Bonus: Validation on Kung Fu Road Data
The 101 Kung Fu Road trajectories include straight driving, turns, lane changes, slowing to a stop, and more...
**Inference and Visualize Kung Fu Road**
``` bash
python train.py --viz --kungfu --ckpt epoch=01-train_loss=10.96.ckpt
```
<!-- Part 4 -->
## Part 4 - Report
- Part 1 30%
    - [ ] Screenshots of your dataset visualization
    - [ ] (Bonus +2% per attribute) Visualize lane boundaries, crosswalks, ...
    - [ ] (Bonus +5%) Visualize the trajectories of surrounding objects
- Part 2 70% + bonus
    - [ ] Screenshots of your Argo trajectory predictions
    - [ ] Describe your model design
    - [ ] Case study
        - Which scenarios succeed?
        - Which scenarios fail? What are the likely causes? If fixes are possible, which did you try?
    - [ ] (Bonus +10%) Take surrounding dynamic objects into account
- (Bonus Part +10%)
    - [ ] Screenshots of successful turn and deceleration predictions on Kung Fu Road
    - [ ] Case study:
        - Which scenarios succeed?
        - Looking at the cross-dataset performance, which scenarios fail? What are the likely causes? If fixes are possible, which did you try?
### *Deadline : 2022/12/8 !!!*
## FAQ
1. What is PyTorch Lightning?
    - PyTorch Lightning provides a friendlier interface on top of PyTorch.
    - [More details](https://www.pytorchlightning.ai/)
2. Argo Motion Dataset
    - To help you get familiar with the motion dataset, this assignment uses only the official Argoverse 2 motion dataset validation set as its training set. If you want the full [Argoverse 2 motion dataset training set](https://s3.amazonaws.com/argoai-argoverse/av2/tars/motion-forecasting/train.tar), you can download it yourself.
3. Further reading on prediction models
- [LaneGCN](https://arxiv.org/pdf/2007.13732.pdf)
- [VectorNet](https://arxiv.org/pdf/2005.04259.pdf)
4. Common metrics for motion prediction
    - Average Displacement Error (ADE): displacement error averaged over all predicted time steps.
    - Final Displacement Error (FDE): displacement error at the final predicted point.
    - minADE: e.g., among 6 predicted trajectories, take the one with the smallest ADE.
    - minFDE: e.g., among 6 predicted trajectories, take the one with the smallest FDE.
    - [More details](https://eval.ai/web/challenges/challenge-page/1719/evaluation)
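The four metrics above can be sketched in a few lines of NumPy; the `(K, T, 2)` layout for K candidate trajectories is an assumption for illustration:

```python
import numpy as np

def min_ade_fde(pred, gt):
    """pred: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    Returns (minADE, minFDE) over the K candidates."""
    # per-step Euclidean distance for every candidate: shape (K, T)
    dist = np.linalg.norm(pred - gt[None], axis=-1)
    ade = dist.mean(axis=1)   # average displacement error per candidate
    fde = dist[:, -1]         # final-point displacement error per candidate
    return ade.min(), fde.min()

# toy check: one candidate matches the ground truth exactly
gt = np.stack([np.arange(5.0), np.zeros(5)], axis=-1)   # straight line along x
pred = np.stack([gt, gt + 1.0])                         # perfect + offset candidate
min_ade, min_fde = min_ade_fde(pred, gt)
print(min_ade, min_fde)   # both 0.0 thanks to the perfect candidate
```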