## Lesson 1: PyTorch, a deep learning framework
A deep learning framework provides three areas of functionality:
* numerical array computing:
* a convenient API for manipulating tensors and arrays: elementwise operations, indexing, matrix products, Einstein summation notation.
* a common API across computing architectures: run the same code on CPU or GPU. Demo of backend selection on Colab, a speed comparison, and API tips for moving tensors to/from CUDA.
* EASY EXERCISE: simple-ish PyTorch array indexing.
* OR DIFFICULT EXERCISE: implement a mathematical expression using `einsum`
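The array-computing demo could open with something like this minimal sketch (the specific shapes and the einsum equation are placeholders, not the actual lesson code):

```python
import torch

# A batch of 4 matrices of shape (2, 3) and a weight matrix of shape (3, 5).
x = torch.arange(24, dtype=torch.float32).reshape(4, 2, 3)
w = torch.ones(3, 5)

# Elementwise operation followed by indexing: first row of each matrix.
y = (x * 2)[:, 0, :]                      # shape (4, 3)

# Batched matrix product two ways: operator vs. Einstein sum notation.
z1 = x @ w                                # shape (4, 2, 5)
z2 = torch.einsum('bij,jk->bik', x, w)    # same computation, spelled out

# Moving tensors between backends: identical code on CPU and GPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
z1 = z1.to(device)
```

The `@`/`einsum` pair makes a natural hook for the difficult exercise: give the equation in index notation and ask for the one-line `einsum` call.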
* automatic differentiation
* a bit of background on the chain rule and backpropagation
* demonstration of the computational graph
* PyTorch's `requires_grad` flag and the `torch.no_grad()` context manager
* zeroing gradients
* EASY EXERCISE: find the gradient of a quadratic; verify the gradient is zero at the minimum
* DIFFICULT EXERCISE: something with the reparametrization trick, maybe?
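The autograd material above, including the easy exercise, could be sketched roughly like this (the quadratic and the step size are illustrative choices):

```python
import torch

# f(x) = (x - 3)^2; its gradient 2 * (x - 3) vanishes at the minimum x = 3.
x = torch.tensor(1.0, requires_grad=True)

f = (x - 3) ** 2
f.backward()            # backpropagate through the recorded graph
print(x.grad)           # tensor(-4.) since 2 * (1 - 3) = -4

# Gradients accumulate across backward calls, so zero them between steps.
x.grad = None

# No graph is recorded inside torch.no_grad(), e.g. for a manual update:
with torch.no_grad():
    x -= 0.5 * (-4.0)   # one gradient step using the gradient found above

(x - 3).pow(2).backward()
print(x.grad)           # tensor(0.): the gradient is zero at the minimum x = 3
```

Setting `x.grad = None` (or calling `x.grad.zero_()`) between backward passes is the manual version of what `optimizer.zero_grad()` does later in the lesson.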
* library of deep learning utilities
* neural network layers/modules
* optimizers
* loss functions
* datasets and dataloaders
* mother of all training loops
* EASY EXERCISE: Fit a linear function
* DIFFICULT EXERCISE: Implement a residual block
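The utilities above come together in the training-loop skeleton; a minimal sketch fitting a linear function (the easy exercise), with arbitrarily chosen hyperparameters:

```python
import torch
from torch import nn

# Fit y = 2x + 1 with a single linear layer. The loop's five beats
# (forward, loss, zero_grad, backward, step) are the same for any model.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * x + 1

model = nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(200):
    pred = model(x)              # forward pass
    loss = loss_fn(pred, y)      # scalar loss
    opt.zero_grad()              # clear gradients from the previous step
    loss.backward()              # accumulate fresh gradients
    opt.step()                   # update the weights

print(model.weight.item(), model.bias.item())  # close to 2.0 and 1.0
```

Swapping `nn.Linear` for a custom `nn.Module` (e.g. the residual block of the difficult exercise) leaves the loop untouched, which is the point of the "mother of all training loops".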
## Take-home exercises, batch 1
* PyTorch indexing and array operations example sheet - easy and 🌶️ versions.
* simple 1D-to-1D ReLU network regression exercise set: exercises on reparametrization, initialization, and trying different learning rates
* classification: the logistic loss explained, with exercises
* 🌶️ practicing the `no_grad` context: implement gradient clipping and a custom optimizer
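One possible shape for the gradient-clipping half of the 🌶️ `no_grad` exercise (the function name and signature are placeholders, not a prescribed solution):

```python
import torch

# Rescale all gradients so their global norm is at most max_norm,
# doing the bookkeeping under no_grad so no graph is recorded.
def clip_gradients(params, max_norm=1.0):
    with torch.no_grad():
        grads = [p.grad for p in params if p.grad is not None]
        total = torch.sqrt(sum((g ** 2).sum() for g in grads))
        if total > max_norm:
            for g in grads:
                g.mul_(max_norm / total)   # in-place rescale
    return total

# Usage: after loss.backward(), before optimizer.step().
p = torch.tensor([3.0, 4.0], requires_grad=True)
(p ** 2).sum().backward()          # grad = 2p = [6, 8], norm 10
norm = clip_gradients([p], max_norm=1.0)
print(norm, p.grad)                # original norm; grad rescaled to norm 1
```

A custom optimizer follows the same pattern: read `p.grad` and mutate `p` in place inside `torch.no_grad()`.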