# Meta Learning & Transfer Learning

###### tags: `applications`

## Lectures

UC Berkeley 20-minute primer on meta-learning: https://www.youtube.com/watch?v=h7qyQeXKxZE

## Papers

[Meta-Learning in Neural Networks: A Survey](https://drive.google.com/drive/folders/1fd3ncgjI5V0e1xs_ztfLsyHMKwicZF2l)

## Notes

**Hypothesis:** Good basis functions should transfer well to different tasks, but data-specific basis functions might not.

#### 2/6/22

**Informal definition of meta-learning:** given data/experience from previous tasks, learn a new task more quickly and/or more proficiently.

**Training objective:** During meta-learning, an outer (or upper/meta) algorithm updates the inner learning algorithm so that the model the inner algorithm learns improves an outer objective. This outer objective could be, for instance, the generalization performance or the learning speed of the inner algorithm.
- Relating to our study on bases: basis expressiveness and generalizability could serve as these "outer goals"

![](https://i.imgur.com/EtO6F58.png)

**Some related mentions:**
- Meta-learning can be viewed as a tool to improve generalization by searching for the algorithm (inductive bias) best suited to a given problem or problem family.
    - Idea: how does that relate to the expressiveness of bases?
- Bayesian meta-learning
    - Idea: our findings on uncertainty, model architecture/inductive bias, and model setup could be useful for Bayesian meta-learning.
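The inner/outer structure from the training-objective paragraph can be sketched in a few lines. This is a hypothetical toy example, not from the survey: a linear model on a family of 1-D regression tasks, with a first-order (FOMAML-style) outer update so we avoid second derivatives. All names (`sample_task`, `loss_grad`, the learning rates) are illustrative choices, and the outer objective here is query-set loss after one inner adaptation step.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, X, y):
    """MSE loss and its gradient for a linear model y_hat = X @ w."""
    err = X @ w - y
    return np.mean(err ** 2), 2 * X.T @ err / len(y)

def sample_task():
    """Toy task family: y = a * x with a task-specific slope a."""
    a = rng.uniform(-2, 2)
    def gen(n):
        X = rng.uniform(-1, 1, size=(n, 1))
        return X, a * X[:, 0]
    return gen

# Meta-parameters: the initialization shared across tasks
w_meta = np.zeros(1)
inner_lr, outer_lr = 0.1, 0.01

for step in range(2000):
    gen = sample_task()
    X_s, y_s = gen(10)   # support set, used by the inner algorithm
    X_q, y_q = gen(10)   # query set, used by the outer objective

    # Inner loop: one gradient step of plain SGD from the meta-initialization
    _, g_s = loss_grad(w_meta, X_s, y_s)
    w_task = w_meta - inner_lr * g_s

    # Outer loop (first-order approximation): update the meta-initialization
    # with the query-set gradient evaluated at the adapted weights
    _, g_q = loss_grad(w_task, X_q, y_q)
    w_meta -= outer_lr * g_q
```

Swapping the query-set loss for a different outer objective (e.g. a measure of basis generalizability) changes only the last two lines, which is what makes the "outer goals" framing above convenient.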