# Scalable Bayesian Optimization Using Deep Neural Networks
###### tags: `papers`, `nlm`
- GPs scale cubically with the number of observations N -> the neural linear model (NLM) scales linearly in N, making Bayes Opt tractable at scale while maintaining flexibility and uncertainty estimates
- the cubic cost falls instead on the basis-function dimensionality D (a D x D matrix solve), rather than growing with the number of observations as in a GP; see the sketch below
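A minimal sketch of the Bayesian linear regression step behind that scaling claim, assuming a fixed feature matrix `Phi` (in the paper these features would be the last hidden layer of a trained DNN); the function name, the `alpha`/`noise_var` hyperparameters, and the random-feature usage are illustrative assumptions, not the paper's code:

```python
import numpy as np

def nlm_posterior_predict(Phi, y, Phi_star, alpha=1.0, noise_var=0.1):
    """Bayesian linear regression on fixed basis features.

    Phi:      (N, D) features of training inputs (e.g. DNN last-layer activations)
    y:        (N,)   observed targets
    Phi_star: (M, D) features of test inputs
    Cost is O(N D^2 + D^3): linear in N, cubic in the basis dimension D,
    versus O(N^3) for an exact GP.
    """
    N, D = Phi.shape
    # Posterior precision over weights: A = alpha*I + Phi^T Phi / noise_var  (D x D)
    A = alpha * np.eye(D) + Phi.T @ Phi / noise_var
    L = np.linalg.cholesky(A)
    # Posterior mean of weights: m = A^{-1} Phi^T y / noise_var
    m = np.linalg.solve(L.T, np.linalg.solve(L, Phi.T @ y / noise_var))
    # Predictive mean and marginal variance at the test points
    mean = Phi_star @ m
    v = np.linalg.solve(L, Phi_star.T)        # L^{-1} Phi_star^T, shape (D, M)
    var = noise_var + np.sum(v * v, axis=0)   # noise + phi* A^{-1} phi*^T
    return mean, var

# Illustrative usage with random features standing in for DNN activations.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 50))   # N=500 observations, D=50 basis functions
y = Phi @ rng.normal(size=50) + 0.1 * rng.normal(size=500)
mean, var = nlm_posterior_predict(Phi, y, Phi[:5])
```

All expensive linear algebra happens on the D x D matrix A, so adding observations only grows the cheap `Phi.T @ Phi` accumulation, which is where the linear-in-N scaling comes from.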
Related applications:
- reinforcement learning: Riquelme et al., 2018 (https://arxiv.org/abs/1802.09127); Azizzadenesheli and Anandkumar, 2019 (https://arxiv.org/abs/1802.04412)
- active learning
- AutoML: Zhou and Precioso, 2019 (https://arxiv.org/abs/1904.00577)
Todo (lucy): understand the math?