# 1. Understanding Data: Lab_1 - 23 May 2023
1. Why do you need to load two datasets for the problem of predicting life satisfaction based on GDP per capita?
Azlan:
Because there are two variables involved in making the predictions:
1. GDP per capita - independent variable (X-axis)
2. Life Satisfaction - dependent variable (y-axis)
The independent variable is the variable that is manipulated or controlled by the researcher.
The dependent variable is the variable that is observed, measured, or affected as a result of changes in the independent variable.
Danyar:
To build a life-satisfaction prediction model, we use two datasets because the OECD file contains the life-satisfaction variables, while the other dataset contains Gross Domestic Product per capita, which is a main factor for life satisfaction. By combining these two datasets we get a more accurate model.
Muhammad U
The reason is: using the two datasets together, we can find the relationship between each country's GDP per capita (contained in its own file) and the life-satisfaction indicators contained in oecd_bli.csv. The main objective is to predict life satisfaction, which can only be achieved by pairing each country's GDP with those indicators. Therefore, the two datasets have to be used together to predict life satisfaction.
Mohammed Fahad ***
By putting these two sets of data together, we can examine and model the link between the predictor variable (GDP per capita) and the target variable (life satisfaction), to make predictions or draw conclusions about how GDP per capita affected life satisfaction in 2020.
Moric
Loading two datasets to predict life satisfaction based on GDP per capita is useful because it allows for a comprehensive analysis and a better understanding of the relationship between these variables, i.e. the impact of GDP per capita on people's life satisfaction.
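To make the two-file point concrete, here is a minimal sketch of the loading-and-merging step. Only `oecd_bli.csv` is named in the answers above; the `gdp_per_capita.csv` file name, the `GDP per capita` column, and the `INEQUALITY == "TOT"` filter are assumptions based on the OECD Better Life Index / GDP data typically used for this exercise, so adjust them to the actual files.

```python
# Minimal sketch (assumed file/column names): life satisfaction and GDP per
# capita live in two different files, so they must be merged by country.
import pandas as pd

# oecd_bli.csv: Better Life Index indicators, one row per (country, indicator)
oecd_bli = pd.read_csv("oecd_bli.csv", thousands=",")
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"] == "TOT"]  # keep totals only
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")

# gdp_per_capita.csv (assumed name): GDP per capita, one row per country
gdp = pd.read_csv("gdp_per_capita.csv", thousands=",")
gdp.set_index("Country", inplace=True)

# Merge on country so each row holds both the feature and the target
country_stats = oecd_bli.merge(gdp, left_index=True, right_index=True)
X = country_stats[["GDP per capita"]].values     # independent variable (X-axis)
y = country_stats[["Life satisfaction"]].values  # dependent variable (y-axis)
```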
# 2. Comparing the plots below, what is the weakness of each model? Hint: the bias and variance trade-off for a generalized model


Danyar:
The first plot shows low variance, meaning the model's predictions of the target function change little as the training dataset changes. The second plot shows high variance: a model with high variance learns the training dataset very well but does not generalize to unseen data. As a result, such a model gives good results on the training dataset but high error rates on the test dataset. That is why the first plot looks simple with low error, while in the second plot, once more countries are added, the model's output varies widely; in other words, the model is overfitting. The main reasons for high variance are:
1. The model is too complex.
2. The model is trained on noisy data.
Muhammad Umar
In 46: there is high bias, shown by the red line (underfitting), while the green line shows overfitting and the blue line shows the best-fitted model that captures the data. Regarding weaknesses, the red line shows high bias, as it deviates from the data.
While in 49: the blue line strikes the balance between bias and variance by capturing the structure of the data without overfitting or underfitting.
Azlan:
Model weaknesses:
1st model: low variance and high bias
2nd model: high variance and low bias
Mohammed Fahad
Code Snippet 46:
High bias: The linear model is too simple, which leads to underfitting and a failure to capture complexity.
The linear assumption could lead to an underestimate of the real relationship.
Code Snippet 49:
High variance: The risk of overfitting goes up when a complicated model is used to fit the whole dataset.
Possible overemphasis on noise or outliers, which could make it harder to draw reliable conclusions on new data.
Moric
The potential weakness of model 46 is high bias, due to its assumption of a simple linear relationship, while the potential weakness of model 49 is high variance, potentially caused by overfitting and the inclusion of outliers.
model 46:
Bias: From the plot, it seems that model 46 fits a simple linear regression line (blue line) to the data points. The weakness of this model could be high bias, as it assumes a linear relationship between GDP per capita and life satisfaction.
model 49:
Variance: In model 49, there are multiple elements that indicate potential issues with variance.
Overfitting: The black line represents the regression line obtained from the full dataset, which suggests a more complex model. If the model is too complex, it may have overfit the training data by capturing noise or random fluctuations in the data. Consequently, the model may not generalize well to new data, leading to poor performance.
Outliers: The red squares in the plot represent certain countries' data points labeled as missing data. The model attempts to incorporate these points, which might be outliers or have missing values. If these points have a substantial influence on the regression line, the model's performance may be affected. Outliers can contribute to high variance in the model's predictions.
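The trade-off described in these answers can also be checked numerically. Below is a hedged sketch, assuming `X` and `y` are the GDP-per-capita and life-satisfaction arrays built earlier: a degree-1 fit (the straight line of snippet 46) should show similar, relatively high errors on both splits (high bias), while a degree-10 fit (matching the lab's PolynomialFeatures setting) should show a much lower training error than test error (high variance).

```python
# Hedged sketch: compare train/test error for a simple vs. a complex model.
# Assumes X (GDP per capita) and y (life satisfaction) are already built.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for degree in (1, 10):
    model = make_pipeline(PolynomialFeatures(degree=degree),
                          StandardScaler(),
                          LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # High bias: both errors stay high. High variance: train error is low
    # but test error blows up.
    print(f"degree={degree}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")
```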
# 3. How does overfitting occur for this regression problem?

Azlan:
Instead of capturing the true relationship between the features and the target variable, the regression model becomes overly complex and begins to fit noise or random fluctuations in the training data. This leads to poor generalisation and inaccurate predictions on new, unseen data. Models with too many parameters are inaccurate because of their large variance.
Danyar:
The reasons for overfitting are:
1. The model has too many parameters.
2. The model is trained on a small dataset.
3. The model is trained on noisy data.
In this case, the model has too many parameters. As a result, the model overfits the training data and will not generalize well to new data.
Muhammad Umar
Due to the high degree of the polynomial features included, the model becomes very flexible on the training data. Moreover, StandardScaler works well with other models, but when joined with PolynomialFeatures the pipeline becomes complex and picks up variations in the data, which leads to overfitting.
Mohammed Fahad
Overfitting happens when the regression model gets too complicated and fits the training data too closely. This can happen if:
1. the model is too complicated
2. there is not enough training data
3. there are too many features
4. there is no regularization
So the model may fit the training data very well, but it may not generalize to data it has not seen before. To avoid overfitting, find the right balance between model complexity, available data, feature selection, and regularization methods.
Overfitting can be avoided, and the model's ability to generalize improved, by testing how well it works on data it has not seen before and by gathering more representative data.
Moric
PolynomialFeatures with a degree of 10 (degree=10). This transformation expands the feature space by creating polynomial combinations of the original features. The high degree of 10 allows the model to capture intricate and complex relationships between the predictor variable (GDP per capita) and the target variable (life satisfaction).
LinearRegression. Despite the polynomial transformation, linear regression is still a linear model, but it is linear in the expanded features: the fitted curve is therefore a degree-10 polynomial in GDP per capita, not a straight line.
This excessive flexibility and complexity in the model can lead to overfitting. The model might become too tailored to the training data, capturing the idiosyncrasies and noise specific to the training set.
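Given that diagnosis, a common remedy is to keep the flexible feature expansion but regularize the coefficients. Here is a minimal sketch, assuming the pipeline Moric describes (PolynomialFeatures(degree=10) + StandardScaler + LinearRegression) and reusing the `X` and `y` arrays from the merged data; swapping LinearRegression for Ridge is one standard fix, not necessarily the one used in the lab.

```python
# Sketch: the lab's flexible pipeline vs. a Ridge-regularized variant.
# Ridge's L2 penalty shrinks the degree-10 coefficients, trading a little
# bias for a large reduction in variance.
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# The overfitting-prone model described above
overfit_model = make_pipeline(PolynomialFeatures(degree=10),
                              StandardScaler(),
                              LinearRegression())

# Same feature expansion, but with an L2 penalty on the coefficients
regularized_model = make_pipeline(PolynomialFeatures(degree=10),
                                  StandardScaler(),
                                  Ridge(alpha=1.0))  # alpha sets penalty strength

overfit_model.fit(X, y)
regularized_model.fit(X, y)
```

Increasing `alpha` flattens the fitted curve toward the straight line of snippet 46; choosing it by cross-validation is how the bias-variance balance from question 2 is tuned in practice.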