lifelong learning notes
===

# Lifelong learning resources

- [Awesome Incremental Learning / Lifelong learning (paper+code)](https://github.com/xialeiliu/Awesome-Incremental-Learning)
- [continual-learning code](https://github.com/GMvandeVen/continual-learning)

## papers

- Selfless Sequential Learning, ICLR 2019 ([code](https://github.com/rahafaljundi/Selfless-Sequential-Learning))
- On Training Recurrent Neural Networks for Lifelong Learning, NeurIPS Continual Learning Workshop 2018

# Three scenarios for lifelong learning, NeurIPS Continual Learning Workshop 2018

## 2 Three Continual Learning Scenarios

1. Task-Incremental Learning (Task-IL): task identity is given at test time
2. Domain-Incremental Learning (Domain-IL): task identity is not given, but the model does not need to infer it
3. Class-Incremental Learning (Class-IL): task identity is not given, and the model must infer it

## 3 Strategies for Continual Learning

### 3.1 Task-specific Components

Instead of optimizing the entire network for every task, only task-specific parts of the network are used for each task, which mitigates catastrophic forgetting. However, this approach only works in the Task-IL scenario, since the task identity is needed to select the right components (a minimal gating sketch appears under "Code sketches" below).

Papers
- (Context-dependent Gating, **XdG**) Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. 2018
    - Randomly assigns the nodes that participate in each task
- (**PathNet**) Evolution channels gradient descent in super neural networks. 2017
    - Uses evolutionary algorithms to learn which units each task should use
- (**HAT**) Overcoming catastrophic forgetting with hard attention to the task. 2018
    - Uses gradient descent to learn which units each task should use

### 3.2 Regularized Optimization

When task identity is no longer available at test time, one strategy is to regularize the parameters: estimate each parameter's importance, and penalize changes to the important ones (see the EWC sketch under "Code sketches" below).

Papers
- (**EWC**, Elastic Weight Consolidation) Overcoming catastrophic forgetting in neural networks. 2017
    - Estimates importance from the second derivatives of the loss with respect to the parameters
    - $L'(\theta) = L(\theta) + \lambda\sum\limits_i b_i(\theta_i-\theta_i^b)^2$
- (**SI**, Synaptic Intelligence) Continual learning through synaptic intelligence. ICML 2017

### 3.3 Modifying Training Data

Papers
- (**LwF**) Learning without Forgetting, ECCV 2016, TPAMI 2017
    - Uses the current task's data plus soft targets predicted by the model trained on previous tasks (see the distillation sketch under "Code sketches" below)
- (**DGR**) Continual learning with deep generative replay. NIPS 2017
    - Trains a separate generative model on each task's data, so data and hard targets for previous tasks can be generated
- Approaches that **combine DGR with distillation**, pairing generated input samples with soft targets
    - Incremental classifier learning with generative adversarial networks. 2018
    - A strategy for an uncompromising incremental learner. 2017
- Storing data from previous tasks; such "exact replay" can strengthen continual learning performance
    - **iCaRL**: Incremental classifier and representation learning. CVPR 2017
        - Uses Nearest-Mean-of-Exemplars classification (see the sketch under "Code sketches" below)
        - The CNN serves as a feature extractor: compute the mean feature vector over the selected subset of each old class's data (the exemplars) and over each new class's data; when a new image arrives, extract its features and predict the class whose mean is closest
    - Variational continual learning. 2017
    - FearNet: Brain-inspired model for incremental learning. ICLR 2018
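## Code sketches

The gating idea of §3.1 (XdG) can be made concrete with a small sketch. This is a minimal illustration under assumed names, not the paper's code: `make_task_masks` is a hypothetical helper that fixes one random binary gate per task over a hidden layer, and only the gated units participate in that task's forward pass.

```python
import numpy as np

def make_task_masks(n_tasks, n_hidden, keep_frac=0.2, seed=0):
    """One fixed random binary gate per task; ~keep_frac of the units stay active."""
    rng = np.random.default_rng(seed)
    masks = []
    for _ in range(n_tasks):
        mask = np.zeros(n_hidden)
        active = rng.choice(n_hidden, size=int(keep_frac * n_hidden), replace=False)
        mask[active] = 1.0  # only these randomly chosen units participate in this task
        masks.append(mask)
    return masks

# During training and testing on task t:
#   hidden = relu(x @ W1 + b1) * masks[t]
```

Because the mask for task `t` must also be applied at test time, the task identity is required, which is exactly why this strategy is limited to the Task-IL scenario.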
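For §3.2, here is a minimal sketch of the EWC-style quadratic penalty above, assuming a PyTorch model. `old_params` (the saved $\theta^b$ from the previous task) and `importance` (the per-parameter weights $b_i$, here a diagonal Fisher-style estimate) are hypothetical names for illustration, not from the paper's released code.

```python
import torch

def estimate_importance(model, data_loader, loss_fn):
    """Diagonal Fisher-style estimate: average squared gradients over old-task data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2 / len(data_loader)
    return fisher

def ewc_penalty(model, old_params, importance, lam=1.0):
    """lam * sum_i b_i * (theta_i - theta_i^b)^2, matching the formula in section 3.2."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (importance[n] * (p - old_params[n]) ** 2).sum()
    return lam * penalty

# When training on the new task:
#   total_loss = task_loss + ewc_penalty(model, old_params, importance)
```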
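The soft targets used by LwF, and by the DGR + distillation variants in §3.3, amount to a knowledge-distillation loss. A minimal sketch, assuming `old_logits` come from a frozen copy of the previous model (for LwF, on the current task's inputs; for DGR + distillation, on generated samples):

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """Match the new model's softened predictions to the old model's soft targets."""
    log_p = F.log_softmax(new_logits / T, dim=1)  # new model, temperature-softened
    q = F.softmax(old_logits / T, dim=1)          # old model's soft targets
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)
```

The temperature `T` and the `T * T` rescaling follow the usual distillation recipe; the exact values here are an assumption, not taken from these papers.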
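Finally, a minimal sketch of iCaRL's Nearest-Mean-of-Exemplars classification from §3.3. `extract_features` stands in for the CNN feature extractor and `exemplars[c]` for the stored exemplar images of class `c`; both names are assumptions for illustration.

```python
import numpy as np

def class_means(exemplars, extract_features):
    """Mean (L2-normalized) feature vector per class, computed from its exemplars."""
    means = {}
    for c, images in exemplars.items():
        feats = np.stack([extract_features(img) for img in images])
        feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        mu = feats.mean(axis=0)
        means[c] = mu / np.linalg.norm(mu)
    return means

def classify(image, means, extract_features):
    """Predict the class whose exemplar mean is closest in feature space."""
    f = extract_features(image)
    f = f / np.linalg.norm(f)
    return min(means, key=lambda c: np.linalg.norm(f - means[c]))
```

For new classes the same means are computed from their training data, so the classifier needs no task identity at test time, which is what makes iCaRL usable in the Class-IL scenario.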
###### tags: `lifelong learning`