# Machine Learning Notes Index
###### tags: `目錄`
## Hung-yi Lee - Machine Learning
### Machine Learning, 2016
- [ML Lecture 0-1 機器學習簡介](https://hackmd.io/@Kevin880723/HJ-JFpMCI)
- [ML Lecture 1: Regression](https://hackmd.io/@Kevin880723/rJg3K0MCL)
- [ML Lecture 2: Where does the error come from?](https://hackmd.io/@Kevin880723/Bk5GQU7RL)
- [ML Lecture 3: Gradient Descent](https://hackmd.io/@Kevin880723/SJYt8UdRL)
- [ML Lecture 4: Classification](https://hackmd.io/@Kevin880723/HJPbs9Yyw)
- [ML Lecture 5: Logistic Regression](https://hackmd.io/@Kevin880723/B1OO_n37D)
- [ML Lecture 6: Brief Introduction of Deep Learning](https://hackmd.io/@Kevin880723/S1DNrKeNv)
- [ML Lecture 7: Backpropagation](https://hackmd.io/@Kevin880723/B1mZS6xEw)
- [ML Lecture 8: Hello World of Deep Learning & Keras](https://hackmd.io/@Kevin880723/BJbkijBNv)
- [ML Lecture 9: Tips for Training DNN](https://hackmd.io/@Kevin880723/r1Khv6SVD)
- [ML Lecture 23-1: Deep Reinforcement Learning (Policy Gradient)](https://hackmd.io/@Kevin880723/ReinforcementLearning1)
- [ML Lecture 23-3: Reinforcement Learning (Critic, Q-learning)](https://hackmd.io/@Kevin880723/rk7zV-Vh_)
### Deep Reinforcement Learning, 2018
> The notes for this playlist continue from Lecture 23 of Machine Learning, 2016.
- [DRL Lecture 4, 5: Q-learning (Advanced Tips, Continuous Action)](https://hackmd.io/@Kevin880723/HJdYMgu3O)
- [DRL Lecture 6: Actor-Critic](https://hackmd.io/@Kevin880723/B1Qg0_dhO)
### Generative Adversarial Network (GAN), 2018
- [GAN Lecture 1 (2018): Introduction](https://hackmd.io/@Kevin880723/H1mxbSphO)
- [GAN Lecture 2 (2018): Conditional Generation](https://hackmd.io/@Kevin880723/HkP-CLr6u)
- [GAN Lecture 3 (2018): Unsupervised Conditional Generation](https://hackmd.io/@Kevin880723/ByaoawLTd)
- [GAN Lecture 4 (2018): Basic Theory](https://hackmd.io/@Kevin880723/H1EAQQDp_)
### Machine Learning, 2021
- [Transformer](https://hackmd.io/@Kevin880723/BJ0nZv51q)
## Stanford Machine Learning
- [Lecture 6: Training Neural Network 1](https://hackmd.io/@Kevin880723/SkUBP8tiF)
- [Lecture 9: CNN Architecture](https://hackmd.io/@Kevin880723/ry77dZdjF)
- [Lecture 10: Recurrent Neural Network](https://hackmd.io/@Kevin880723/r1nfloSiK)
- [Lecture 11: Detection and Segmentation](https://hackmd.io/@Kevin880723/S1okXYdsF)
## Hsuan-Tien Lin - Machine Learning Foundations
- [Lecture 01: The Learning Problem](https://hackmd.io/@Kevin880723/rJrCc5bHP)
- [Lecture 02: Learning to Answer Yes/No](https://hackmd.io/@Kevin880723/SJPDINIHw)
- [Lecture 03: Types of Learning](https://hackmd.io/@Kevin880723/HyhXdYtBv)
- [Lecture 04: Feasibility of Learning](https://hackmd.io/@Kevin880723/HyxUuPkIw)
- [Lecture 05: Training versus Testing](https://hackmd.io/@Kevin880723/SyvnjX38P)
- [Lecture 06: Theory of Generalization](https://hackmd.io/@Kevin880723/rJFCNDbPv)
- [Lecture 07: The VC Dimension](https://hackmd.io/@Kevin880723/BJPRr5M_v)
- [Lecture 08: Noise and Error](https://hackmd.io/@Kevin880723/HJMGo-BOv)
- [Lecture 09: Linear Regression](https://hackmd.io/@Kevin880723/Byy7ZsPuw)
- [Lecture 10: Logistic Regression](https://hackmd.io/@Kevin880723/BJiSJ5Kuw)
- [Lecture 11: Linear Models for Classification](https://hackmd.io/@Kevin880723/HJ_Vrf_Fw)
- [Lecture 12: Nonlinear Transformation](https://hackmd.io/@Kevin880723/H10AT5oFP)
- [Lecture 13: Hazard of Overfitting](https://hackmd.io/@Kevin880723/BkpG1RhKD)
- [Lecture 14: Regularization](https://hackmd.io/@Kevin880723/ryMdDf6tD)
- [Lecture 15: Validation](https://hackmd.io/@Kevin880723/H1PPphy9v)
- [Lecture 16: Three Learning Principles](https://hackmd.io/@Kevin880723/BkM3BO9cv)
## Hsuan-Tien Lin - Machine Learning Techniques
- [Lecture 01: Linear Support Vector Machine (SVM)](https://hackmd.io/@Kevin880723/rkFtQT2qD)
- [Lecture 02: Dual Support Vector Machine](https://hackmd.io/@Kevin880723/S1PJMH1iv)
- [Lecture 03: Kernel Support Vector Machine](https://hackmd.io/@Kevin880723/SJuoy75sv)
- [Lecture 04: Soft-Margin Support Vector Machine](https://hackmd.io/@Kevin880723/HkvO3TbnP)
- [Lecture 07: Blending and Bagging](https://hackmd.io/@Kevin880723/r1XQ3EgTP)
- [Lecture 08: Adaptive Boosting](https://hackmd.io/@Kevin880723/rJAv0NEpP)
- [Lecture 09: Decision Tree](https://hackmd.io/@Kevin880723/S192t9VTP)
- [Lecture 10: Random Forest](https://hackmd.io/@Kevin880723/r12FELv6v)
- [Lecture 11: Gradient Boosted Decision Tree](https://hackmd.io/@Kevin880723/BkC5g2Tav)
- [Lecture 12: Neural Network](https://hackmd.io/@Kevin880723/rkXIzNihw)
- [Lecture 15: Matrix Factorization](https://hackmd.io/@Kevin880723/rJOodeAhD)
## Deep Learning Papers
### 1. Style Transfer
- [Learning Linear Transformations for Fast Image and Video Style Transfer](https://hackmd.io/@Kevin880723/Syz_kP2sv)
### 2. Object Detection
- [You Only Look Once: Unified, Real-Time Object Detection (YOLOv1)](https://hackmd.io/@Kevin880723/rJHkavReO)
- [YOLO9000: Better, Faster, Stronger (YOLOv2)](https://hackmd.io/@Kevin880723/YOLO9000)
- [YOLOv3: An Incremental Improvement](https://hackmd.io/@Kevin880723/H1-EHcrXd)
### 3. Image Depth Estimation
- [Digging Into Self-Supervised Monocular Depth Estimation (monodepth2)](https://hackmd.io/@Kevin880723/MonoDepth2)
### 4. Domain Adaptation
- [Stagewise Unsupervised Domain Adaptation With Adversarial Self-Training for Road Segmentation of Remote-Sensing Images](https://hackmd.io/@Kevin880723/SyHETgo7F)
- [ProCST: Boosting Semantic Segmentation using Progressive Cyclic Style-Transfer](https://hackmd.io/@Kevin880723/SkvXINBL9)
- [Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation (ProDA)](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Prototypical-pseudo-label-denoising-and-target-structure-learning-for-domain-adaptive-semantic-segmentation-ProDA)
- [Metacorrection: Domain-aware meta loss correction for unsupervised domain adaptation in semantic segmentation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Metacorrection-Domain-aware-meta-loss-correction-for-unsupervised-domain-adaptation-in-semantic-segmentation)
- [Attention Guided Multiple Source and Target Domain Adaptation](https://hackmd.io/yhLizBeoRnmgV-es1uKKUg?view)
- [Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Unsupervised-Intra-domain-Adaptation-for-Semantic-Segmentation-through-Self-Supervision)
- [Classes matter: A fine-grained adversarial approach to cross-domain semantic segmentation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Classes-matter-A-fine-grained-adversarial-approach-to-cross-domain-semantic-segmentation)
- [Spherical Space Domain Adaptation with Robust Pseudo-label Loss](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Spherical-Space-Domain-Adaptation-with-Robust-Pseudo-label-Loss)
- [Generative Pseudo-label Refinement for Unsupervised Domain Adaptation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Generative-Pseudo-label-Refinement-for-Unsupervised-Domain-Adaptation)
- [Progressive Feature Alignment for Unsupervised Domain Adaptation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Progressive-Feature-Alignment-for-Unsupervised-Domain-Adaptation)
- [Category anchor-guided unsupervised domain adaptation for semantic segmentation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Category-anchor-guided-unsupervised-domain-adaptation-for-semantic-segmentation)
- [Image to Image Translation for Domain Adaptation](https://hackmd.io/9LNQXLWCQjOsb5gNXXgcHw?view#Image-to-Image-Translation-for-Domain-Adaptation)
### 5. Semantic Segmentation
- [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (ICCV 2021)](https://hackmd.io/@Kevin880723/SwinTransformer)
- [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://hackmd.io/@Kevin880723/SegFormer)
### 6. Model Pruning
- [Distilling the Knowledge in a Neural Network](https://hackmd.io/@Kevin880723/Sk85QarFt)
### 7. Lab Papers
- [Semantic Segmentation for Free Space and Lane Based on Grid-based Interest Point Detection](https://hackmd.io/@Kevin880723/ryHYDtXkK)
- [A Continual Learning Strategy for One-Stage Object Detection Architectures in Autonomous Driving Systems](https://hackmd.io/@Kevin880723/r1f6cJqZY)
<!-- [No-Reference Quality Metric for Depth Maps (not finished reading)](https://hackmd.io/@Kevin880723/rkb-h3cnO) -->
## Deep Learning Practice
- [PyTorch Notes](https://hackmd.io/@Kevin880723/PyTorch)
- [Notes on deep learning environment setup (some parts may be wrong; use only as a rough reference)](https://hackmd.io/@Kevin880723/ByG_rDuQd)
## Keywords
- Lecture 07: The VC Dimension: in practice, the amount of data needed is roughly 10 times $d_{VC}$ (see the worked example below).
- GAN Lecture 4 (2018): Basic Theory: the discriminator is updated 3-5 times for every single generator update (see the training-loop sketch below).
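
A quick worked example of the $10 \times d_{VC}$ rule of thumb above. The factor 10 is the lecture's heuristic, and the perceptron numbers below are an added illustration, not taken from the notes:

```latex
% Heuristic from Lecture 07: required sample size grows linearly with the VC dimension.
N \approx 10 \, d_{\mathrm{VC}}
% Illustrative assumption: a perceptron over d = 9 input features has
% d_{VC} = d + 1 = 10, so the heuristic suggests roughly N \approx 100 training samples.
```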
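
To make the discriminator/generator update ratio concrete, here is a minimal PyTorch sketch under assumed toy settings (the tiny MLPs, the 1-D Gaussian stand-in data, and names like `G`, `D`, `n_critic` are illustrative, not from the lecture). The inner loop updates the discriminator `n_critic` times before each single generator update:

```python
import torch
import torch.nn as nn

# Toy setup, purely for illustrating the update schedule.
latent_dim, data_dim, batch, n_critic = 8, 1, 64, 5  # n_critic: D updates per G update (3~5)

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def sample_real(n):
    # Stand-in for real data: samples from N(3, 1).
    return torch.randn(n, data_dim) + 3.0

for step in range(1000):
    # --- update the discriminator n_critic times ---
    for _ in range(n_critic):
        real = sample_real(batch)
        fake = G(torch.randn(batch, latent_dim)).detach()  # keep G fixed here
        loss_D = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

    # --- then update the generator once ---
    z = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(z)), torch.ones(batch, 1))  # G tries to make D output "real"
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```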
{"metaMigratedAt":"2023-06-16T02:32:29.292Z","metaMigratedFrom":"Content","title":"機器學習筆記目錄","breaks":true,"contributors":"[{\"id\":\"7a289c91-d63c-4dbf-9875-8f8fda7d411b\",\"add\":8663,\"del\":324}]"}