### Segmentation Loss Odyssey
---
#### Distribution-based loss (measures the discrepancy between two probability distributions)
1. ***Cross entropy (CE)***: minimizing CE is equivalent to minimizing the KL divergence / maximizing the likelihood
2. ***TopK Loss***: uses a threshold to keep only the pixels that are hardest for the model to learn
3. ***Focal Loss***: down-weights easy examples to mitigate foreground-background class imbalance
4. ***Distance map penalized CE loss***: weights CE with a distance-based penalty term
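A minimal NumPy sketch of CE and Focal Loss for binary segmentation (the `alpha`/`gamma` defaults follow the common Focal Loss convention; the function names are my own):

```python
import numpy as np

def cross_entropy(p, y, eps=1e-7):
    """Pixel-wise binary cross entropy; p = predicted foreground prob, y = binary GT."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss: the (1 - p_t)^gamma factor down-weights easy (confident) pixels."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -np.mean(alpha_t * (1 - pt) ** gamma * np.log(pt))
```

With `gamma = 0` and `alpha = 0.5` the focal term disappears and the loss reduces to a scaled CE, which makes the "down-weighting easy examples" role of `gamma` explicit.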
---
#### Region-based Loss (key element is Dice Loss)
***My understanding is that these losses are derived from the confusion matrix, i.e., they directly optimize overlap metrics such as the Dice coefficient / IoU***
1. ***Sensitivity-specificity loss***
2. ***Dice Loss (its formula is analogous to the F1-score)***
3. ***IOU Loss***
4. ***Tversky Loss***
5. ***Generalized Dice Loss***
6. ***Focal Tversky Loss***
7. ***Asymmetric similarity loss***
8. ***Penalty Loss***
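A NumPy sketch of the two central region-based losses, written from their confusion-matrix terms (TP/FP/FN). With `alpha = beta = 0.5` Tversky reduces to Dice, which shows why Dice is the key member of this family:

```python
import numpy as np

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|); analogous to 1 - F1."""
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def tversky_loss(p, y, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss: alpha weights false positives, beta weights false negatives.
    alpha = beta = 0.5 recovers the Dice loss."""
    tp = np.sum(p * y)
    fp = np.sum(p * (1 - y))
    fn = np.sum((1 - p) * y)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

Choosing `beta > alpha` penalizes false negatives more, which is the usual setting for small foreground objects.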
---
#### Boundary-based Loss
***Minimize the distance between the GT and predicted segmentation boundaries***
1. ***Boundary (BD) Loss***: integrates over the boundary region to improve highly unbalanced segmentation
2. ***Hausdorff Distance (HD) Loss***
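A toy sketch of BD loss, assuming the standard formulation (mean of the predicted foreground probability weighted by the signed distance map of the GT, negative inside the object). The brute-force distance map is my own simplification for tiny arrays; real code would use `scipy.ndimage.distance_transform_edt`:

```python
import numpy as np

def signed_distance_map(gt):
    """Brute-force signed distance to the GT boundary (negative inside the object)."""
    h, w = gt.shape
    pad = np.pad(gt, 1)
    # boundary = foreground pixels with at least one background 4-neighbour
    nb = (pad[:-2, 1:-1] == 0) | (pad[2:, 1:-1] == 0) | \
         (pad[1:-1, :-2] == 0) | (pad[1:-1, 2:] == 0)
    boundary = np.argwhere((gt == 1) & nb)
    dist = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            d = np.min(np.sqrt(((boundary - [i, j]) ** 2).sum(axis=1)))
            dist[i, j] = -d if gt[i, j] == 1 else d
    return dist

def boundary_loss(p, gt):
    """BD loss: probability mass placed inside the GT (negative distances) lowers
    the loss; mass placed far outside (positive distances) raises it."""
    return np.mean(p * signed_distance_map(gt))
```

A prediction aligned with the GT therefore scores lower than one shifted off the object, even when both cover the same number of pixels.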
---
#### Connection among Dice/BD/HD Loss
- For Dice Loss, the mismatch is weighted by the sum of the number of foreground pixels in the segmentation and in the GT.
- For BD Loss, the mismatch is weighted by the distance transform map of the GT.
- For HD Loss, the mismatch is weighted by both the distance transform map of the GT and that of the segmentation.
---
#### Compound Loss
1. Combo Loss: weighted sum of weighted CE and Dice Loss
2. Exponential Logarithmic Loss: applies exponential and logarithmic transforms to both Dice Loss and CE Loss
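A sketch of Combo Loss under a common parameterization (`alpha` balances the two terms, `beta` skews the CE term toward false negatives; the exact names vary between implementations, so treat them as assumptions):

```python
import numpy as np

def combo_loss(p, y, alpha=0.5, beta=0.5, eps=1e-7):
    """Combo loss: alpha * weighted-CE + (1 - alpha) * Dice loss.
    beta > 0.5 penalizes false negatives more in the CE term."""
    p = np.clip(p, eps, 1 - eps)
    wce = -np.mean(beta * y * np.log(p) + (1 - beta) * (1 - y) * np.log(1 - p))
    dice = 1.0 - (2.0 * np.sum(p * y) + eps) / (np.sum(p) + np.sum(y) + eps)
    return alpha * wce + (1 - alpha) * dice
```

With `alpha = 1` and `beta = 0.5` this collapses to a scaled plain CE, so the two endpoints of `alpha` recover the individual losses being combined.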
---