# Secular Bayesian Deep Learning
## Studentship scope
The main focus of this studentship is on practical and empirically successful ways of representing model uncertainty in deep neural networks. A growing body of empirical evidence suggests that Bayesian inference over model parameters is a suboptimal way of representing model uncertainty in deep learning, especially from the perspective of predictive accuracy. At the same time, exciting theoretical work has found justification for generalised forms of probabilistic learning that diverge from the Bayesian method. This studentship will have a theoretical component: formulating and developing methods for representing uncertainty in neural networks, motivated by generalisation theory and classical statistics. It will also have a practical component: finding scalable and easy-to-implement methods for uncertainty quantification and demonstrating their utility in real-world applications (a minimal example of such a baseline is sketched after the reading list below).
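To make "diverging from the Bayesian method" concrete, here is a minimal sketch (our notation, not part of the studentship text) contrasting the standard posterior with the tempered posterior studied by Wenzel et al (2020) and the generalised belief update of Bissiri et al (2013), both on the reading list below:

```latex
% Standard Bayesian posterior over network weights \theta given data D:
p(\theta \mid D) \;\propto\; p(D \mid \theta)\, p(\theta)

% Tempered ("cold") posterior of Wenzel et al (2020); T = 1 recovers Bayes,
% while T < 1 empirically improves predictive accuracy in deep networks:
p_T(\theta \mid D) \;\propto\; \exp\!\left( \tfrac{1}{T}
    \big[ \log p(D \mid \theta) + \log p(\theta) \big] \right)

% Generalised belief update of Bissiri et al (2013): the log-likelihood is
% replaced by an arbitrary loss \ell with a learning rate \eta; Bayes is
% recovered as the special case \ell = -\log p(x_i \mid \theta), \eta = 1:
\pi(\theta \mid D) \;\propto\; \exp\!\left( -\eta \sum_i \ell(\theta, x_i) \right) \pi(\theta)
```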
## Details
* Co-supervised by Sebastian Nowozin at Microsoft Research Cambridge and Ferenc Huszár at Cambridge University
* 4-year Industrial CASE studentship, funded partly by EPSRC, partly by Microsoft Research
* This studentship provides full funding only for applicants who qualify for Home/UK student fees in Cambridge; we won't be able to accept international students.
* Start date: ideally October 2021 (a later start, up to October 2022, may be possible).
## Reading List
* Wenzel et al (2020) [How Good is the Bayes Posterior in Deep Neural Networks Really?](https://arxiv.org/abs/2002.02405)
* Masegosa (2020) [Learning under Model Misspecification: Applications to Variational and Ensemble methods](https://arxiv.org/abs/1912.08335)
* Morningstar et al (2021) [PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified Bayesian Regime](https://arxiv.org/abs/2010.09629)
* Alaa and van der Schaar (2020) [Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions](https://arxiv.org/abs/2007.13481)
* Guo et al (2020) [Deep Bayesian Bandits: Exploring in Online Personalized Recommendations](https://arxiv.org/abs/2008.00727)
* Lacoste-Julien et al (2011) [Approximate inference for the loss-calibrated Bayesian](http://proceedings.mlr.press/v15/lacoste_julien11a/lacoste_julien11a.pdf)
* Bissiri et al (2013) [A General Framework for Updating Belief Distributions](https://arxiv.org/abs/1306.6430)
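As promised above, here is a minimal sketch of the kind of scalable, easy-to-implement uncertainty baseline the practical component is after: a deep ensemble, whose averaged predictive distribution is also the object analysed by Masegosa (2020). The dataset, architecture, and hyperparameters below are arbitrary illustrative choices, not part of the studentship brief.

```python
# Minimal deep-ensemble sketch: train a few networks that differ only in
# their random seed, average their predictive distributions, and use the
# entropy of the mixture as a simple measure of model uncertainty.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Ensemble members differ only in initialisation / data shuffling.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                  random_state=seed).fit(X, y)
    for seed in range(5)
]

def predict_with_uncertainty(x):
    """Average member predictive distributions; report mixture entropy."""
    probs = np.mean([m.predict_proba(x) for m in ensemble], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return probs, entropy

# Inputs far from the training data should receive higher entropy.
probs, entropy = predict_with_uncertainty(np.array([[0.5, 0.25], [4.0, 4.0]]))
print(probs, entropy)
```

This is a crude but useful proxy rather than a posterior over parameters, which is precisely the kind of gap between Bayesian theory and empirically successful practice that the studentship is meant to investigate.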