# CS410 Homework 6: Linear Regression
> **Due Date: 11/2/2024**
> **Need help?** Remember to check out Edstem and our website for TA assistance.
## Assignment Overview
Linear regression is one of the most fundamental and widely used methods in machine learning.
## Learning Goals and Objectives
1. Applying knowledge from lecture to implement linear regression
2. Evaluating and preprocessing time-series data
3. Implementing binary classification via logistic regression
4. Implementing one-vs-rest logistic regression
5. Interpreting model results
## Introduction
==TODO maybe use this? https://x.com/Rainmaker1973/status/1527220014578905088==
## Getting Started
### Stencil
Please click [here](https://github.com/BrownCSCI410/assignment-6-regression) to get the stencil code. It should contain these files: `main.py`...
==TODO==
### Environment
You will need to use the virtual environment that you made in Homework 0 to run code in this assignment, which you can activate by using `conda activate csci410`.
==TODO==
## Linear Regression
Now that Steve's civilization is pretty developed, he's interested in predicting the life expectancy of the average individual in his new village. What a caring leader! After surfing the web, Steve stumbled upon some Life Expectancy data from the World Health Organization. Since Steve is still relatively new to machine learning, he's curious about using the trusty linear regression to model and predict the life expectancy of his villagers.
Linear regression is one of the simplest yet most foundational models for making predictions that assume a linear relationship between the input features and the target variable.
## Data
You can find the `life_expectancy.csv` dataset in your project folder!
This dataset, from the WHO, contains data for a range of countries on health indicators for the years 2000-2015. It contains statistics on life expectancy, as well as related health categories such as infant deaths, BMI, and polio rates, in addition to country-specific data such as GDP and population. In addition, it separates countries into developing vs. developed countries. The purpose of the dataset is to capture relations between life expectancy and the other information in the dataset, and for governments to be able to focus on improving the life expectancy in their country by locating which factors have the greatest effect. The dataset is located at this link:
https://www.kaggle.com/datasets/kumarajarshi/life-expectancy-who?resource=download
## Tasks
- Exploratory analysis (e.g., heatmaps and correlation matrices)
- Data preprocessing (e.g., filtering the data)
- Implementing the regression model
- Evaluating the model on test data, including a look at loss functions and the strengths of different loss functions
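As a sketch of the exploratory step, a correlation matrix can be computed with pandas and rendered as a heatmap with matplotlib. The column names and data below are stand-ins, not the actual `life_expectancy.csv` schema; in the assignment you would load the real file with `pd.read_csv`.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Stand-in data; in the assignment you would use pd.read_csv("life_expectancy.csv")
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "life_expectancy": rng.normal(70, 5, 100),
    "gdp": rng.normal(10000, 3000, 100),
    "bmi": rng.normal(25, 3, 100),
})

corr = df.corr()  # pairwise Pearson correlations between columns

fig, ax = plt.subplots()
im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
ax.set_xticks(range(len(corr)), corr.columns, rotation=45, ha="right")
ax.set_yticks(range(len(corr)), corr.columns)
fig.colorbar(im)
fig.tight_layout()
fig.savefig("heatmap.png")
```

Strongly correlated features show up as dark cells off the diagonal; those are good candidates both for prediction and for spotting redundant columns during preprocessing.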
### Step 1:
### Step 2:
#### Part 1: Loss Function
For this model, you will train and evaluate the model using the L2 loss (sum of squared errors). Mathematically, the L2 loss function is defined as:
$L_s(y_{predict}) = \sum^n_{i=1}(y_i-y_{predict}(x_i))^2$
Here $y_i$ is the actual value of the $i^{th}$ sample and $y_{predict}(x_i)$ is the predicted value for that sample given the learned weights. Your model will minimize this loss in closed form via matrix inversion.
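As a concrete illustration (not part of the stencil), the L2 loss is just a couple of lines of NumPy; the arrays below are made-up values:

```python
import numpy as np

def l2_loss(y, y_hat):
    """Sum of squared differences between actual and predicted values."""
    return np.sum((y - y_hat) ** 2)

# Toy example with made-up values
y = np.array([70.0, 65.0, 80.0])
y_hat = np.array([68.0, 66.0, 79.0])
print(l2_loss(y, y_hat))  # (2)^2 + (-1)^2 + (1)^2 = 6.0
```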
#### Part 2: LinearRegression
You will be implementing this model using the given stencil code. To train the model, since the columns of $X$ are linearly independent (so $X^TX$ is invertible), you can use matrix inversion to compute the weight vector $\mathbf{w}$ that minimizes the sum of squared errors. Given a matrix of data points $X$ and their labels $\mathbf{y}$, the equation for $\mathbf{w}$ is:
$\mathbf{w} = (X^TX)^{-1}X^T\mathbf{y}$
:::spoiler HINT
Use the NumPy function **np.linalg.pinv** to compute the (pseudo-)inverse!
:::
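For intuition, a minimal closed-form fit might look like the sketch below. The function names (`fit`, `predict`) are placeholders, not necessarily the stencil's API, and the toy data is made up:

```python
import numpy as np

def fit(X, y):
    """Closed-form least squares: w = (X^T X)^{-1} X^T y.
    np.linalg.pinv computes a pseudo-inverse, which is robust even
    when X^T X is ill-conditioned."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y

def predict(X, w):
    return X @ w

# Toy data generated from y = 2*x + 1, with a bias column prepended by hand
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 2 * x + 1
w = fit(X, y)
print(w)  # approximately [1., 2.]
```

Since the toy data is exactly linear, the recovered weights match the generating intercept and slope up to floating-point error.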
### Step 3:
Great! Now you have built your model! To test its performance, you have (smartly) decided to use the mean squared error (MSE), root mean squared error (RMSE), and $R^2$ functions. Here is what they look like mathematically:
- MSE: $\frac{\sum_{i=1}^n(y_i - y_{predict}(x_i))^2}{n}$
- RMSE: $\sqrt{\frac{\sum_{i=1}^n(y_i - y_{predict}(x_i))^2}{n}}$
- $R^2$: $1 - \frac{\sum_{i=1}^n(y_i - y_{predict}(x_i))^2}{\sum_{i=1}^n(y_i - \bar{y})^2}$
In these equations, $y_i$ is the actual value of the $i^{th}$ sample, $y_{predict}(x_i)$ is the predicted value for that sample given the learned weights, and $\bar{y}$ is the mean of all $y$ values.
Implement these functions (MSE, RMSE, and $R^2$) in `test_linreg.py` and use them to interpret the performance of the model.
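The three formulas above translate directly into NumPy; the sketch below uses hypothetical values, and your stencil functions may have different signatures:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error: average of squared residuals."""
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    """Root mean squared error: MSE in the original units of y."""
    return np.sqrt(mse(y, y_hat))

def r_squared(y, y_hat):
    """R^2: fraction of variance in y explained by the predictions."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Toy values for illustration
y = np.array([70.0, 65.0, 80.0, 75.0])
y_hat = np.array([68.0, 66.0, 79.0, 76.0])
print(mse(y, y_hat))  # (4 + 1 + 1 + 1) / 4 = 1.75
print(rmse(y, y_hat), r_squared(y, y_hat))
```

Note that RMSE is often easier to interpret than MSE because it is in the same units as the target (here, years of life expectancy), while $R^2$ is unitless and close to 1 when the model explains most of the variance.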
### Step 4:
include Writeup/interpretation of the model?
==TODO: Add feature selection testing==
==TODO: Add takeaways==
-->
<!--
## Logistic Regression
## Data
You will be using the same dataset as above
(@UTA, any ideas for how to visualize/break down the dataset to help the student understand why a logistic regression could possibly work here? Or may not work? Also, will definitely need elaboration on preprocessing/making target class out of a feature)
**Visualize data with violin plot? not sure**
## Tasks
### Step 1:
Implement binary logistic regression
(please add more details <3)
==TODO why is logistic regression accuracy so bad :(==
<!-- ## One vs Rest
As you may have noticed, Steve's model doesn't do so hot a lot of the time :-( This is largely because there are actually **three** classes of irises represented in the dataset he is using. Trying to conduct binary classification on more than two classes is very difficult. However, we can extend our knowledge of binary classification to handle multiple classes.
In **one-vs-rest** logistic regression, a separate binary classification model is trained for each class. For each model, a class of interest is treated as the positive class, and all other classes are grouped into the negative class...
-->
<!-- ### Step 2:
Implement One vs Rest logistic regression -->
## Submission
### Grading
### Hand-In
Submit the assignment via Gradescope under the corresponding assignment, either by **zipping up your homework folder** or through **GitHub** (recommended).
To submit through GitHub, follow these commands:
1. `git add -A`
2. `git commit -m "commit message"`
3. `git push`
Now, you are ready to upload the repo to Gradescope.
:::success
Congrats on submitting your homework; Steve is proud of you!!

:::