# <center><i class="fa fa-edit"></i> Machine Learning: Framing and Descending into ML </center>
###### tags: `Internship`
:::info
**Goal:**
- [x] Framing
- [x] Descending into ML
**Resources:**
[Framing](https://developers.google.com/machine-learning/crash-course/framing/video-lecture)
[Descending Into ML](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture)
[Machine Learning](https://hackmd.io/@Derni/HJQkjlnIP)
:::
### Framing
- Supervised ML
  - Goal: learn to combine inputs to produce useful predictions on new data
- Label: the variable to predict
  - Represented by y
- Features: input variables that describe the data
  - Represented by {x1, x2, …, xn}
- Example: a particular instance of data, x
- Labeled example: {features, label} like {x, y}
  - Used to train the model
- Unlabeled example: {features, ?} like {x, ?}
  - Used to make predictions on new data (see the sketch after this list)
- Model: maps examples to predicted labels y’
  - Defined by learned internal parameters
- Regression model: predicts continuous values
- Classification model: predicts discrete values
  - Ex: spam vs. not spam, cat vs. dog
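
A minimal sketch of these terms as plain Python data, assuming a hypothetical spam-filter task; the feature names and values are made up for illustration:

```python
# A labeled example pairs features with the label y; an unlabeled example
# has only features, and the trained model fills in the prediction y'.
# Hypothetical spam-filter features, not from the course.

labeled_example = {
    "features": {"num_links": 7, "contains_free": 1, "sender_known": 0},
    "label": "spam",  # y: used for training
}

unlabeled_example = {
    "features": {"num_links": 1, "contains_free": 0, "sender_known": 1},
    # no label: the model predicts y' for examples like this
}
```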
### Descending into ML
- Linear regression: approximates a linear relationship with y’ = b + w1x1
  - y’: predicted label
  - b: bias (y-intercept), also written w0
  - w1: weight of feature 1
    - Weight is similar to slope
  - More sophisticated models rely on multiple features: y’ = b + w1x1 + w2x2 + w3x3 (see the sketch below)

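A minimal sketch of that multi-feature prediction in plain Python; the weights, bias, and feature values below are toy numbers chosen only for illustration:

```python
def predict(features, weights, bias):
    """Linear model: y' = b + w1*x1 + w2*x2 + ... + wn*xn."""
    return bias + sum(w * x for w, x in zip(weights, features))

weights = [0.4, -1.2, 3.0]  # w1, w2, w3 (toy values)
bias = 0.5                  # b, also written w0
print(predict([1.0, 2.0, 0.5], weights, bias))  # 0.5 + 0.4 - 2.4 + 1.5 = 0.0
```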
- Training: learning good values for the weights and bias from labeled examples
- Loss: a number indicating how bad the model’s prediction was on a single example
- Empirical risk minimization: training by finding the weights and bias that minimize loss across all examples (see the sketch below)

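A minimal sketch of empirical risk minimization for the one-feature model y’ = b + w1x1, using the squared loss defined just below and plain gradient descent (covered in a later module; here it simply stands in for "adjust the parameters to reduce the average loss"). The data and learning rate are made up for illustration:

```python
# Toy dataset, roughly y = 1 + 2x plus noise (made-up numbers).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

w1, b = 0.0, 0.0      # start from arbitrary parameter values
learning_rate = 0.05

for _ in range(2000):
    # Gradients of the mean squared loss with respect to w1 and b.
    grad_w1 = sum(2 * ((b + w1 * x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * ((b + w1 * x) - y) for x, y in zip(xs, ys)) / len(xs)
    # Step each parameter against its gradient to reduce the loss.
    w1 -= learning_rate * grad_w1
    b -= learning_rate * grad_b

print(w1, b)  # approaches the least-squares fit: w1 ≈ 1.94, b ≈ 1.15
```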
- Squared loss function (“squared error”): a popular loss function for regression
  - Square of the difference between prediction and label: (y - y’)^2
  - For the loss on a whole dataset, sum the individual L2 losses
- Mean squared error (MSE): the average squared loss over the whole dataset (see the sketch below)
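
In symbols, $MSE = \frac{1}{N}\sum_{(x,y) \in D}(y - \text{prediction}(x))^2$. A minimal sketch in plain Python, with toy (x, y) pairs made up for illustration:

```python
def mse(examples, predict):
    """Mean squared error: average of (y - y')^2 over a dataset."""
    return sum((y - predict(x)) ** 2 for x, y in examples) / len(examples)

examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # toy (x, y) pairs
print(mse(examples, lambda x: 1.0 + 2.0 * x))    # perfect fit -> 0.0
print(mse(examples, lambda x: 2.0 * x))          # every y' off by 1 -> 1.0
```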

