---
title: "Methods 2 -- Portfolio Assignment 3"
output:
  html_document:
    df_print: paged
  pdf_document: default
---
- *Type:* Group assignment
- *Due:* 30 April 2023, 23:59
- *Instructions:* All problems are exercises from _Regression and Other Stories_. Please edit this file here and add your solutions.
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, message = FALSE)
```
```{r}
library("gridExtra")
library("tidyverse")
library("rprojroot")
library("rstanarm")
library("ggplot2")
library("bayesplot")
theme_set(bayesplot::theme_default(base_family = "sans"))
library("foreign")
```
## 1. Exercise 10.5
#### a) Fit a regression of child test scores on mother’s age, display the data and fitted model, check assumptions, and interpret the slope coefficient. Based on this analysis, when do you recommend mothers should give birth? What are you assuming in making this recommendation?
```{r}
child_iq <- read.csv("child_iq.csv")
head(child_iq)
```
```{r}
# Fit a Bayesian linear regression (stan_glm) of child test scores on mother's age
model <- stan_glm(ppvt ~ momage, data = child_iq, refresh=0)
print(model)
```
```{r}
# Scatter plot of child test scores against mother's age, with a least-squares line
ggplot(child_iq, aes(x = momage, y = ppvt)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE, color = "purple") +
  labs(x = "Mother's Age", y = "Child Test Scores")
```
```{r}
shapiro.test(resid(model))
```
The scatter plot indicates a weak positive linear relationship between the independent variable (mother's age) and the dependent variable (test score). With a slope coefficient of 0.8, a one-year increase in mother's age is associated with an expected increase of 0.8 points in the child's test score. However, the Shapiro-Wilk normality test on the residuals yields a very small p-value of 2.538e-06, indicating a potential violation of the normality assumption. The model's estimated sigma of 20.4 is the residual standard deviation: after accounting for the effect of mother's age, the predicted test scores typically deviate from the observed values by about 20.4 points.
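As a visual complement to the Shapiro-Wilk test, a normal QQ plot of the residuals (a minimal additional check) shows where the departures from normality occur:
```{r}
# Sketch: QQ plot of the residuals against a normal reference line
qqnorm(resid(model))
qqline(resid(model), col = "purple")
```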
At first glance, the analysis might suggest that mothers should give birth at a later age. However, interpreting the slope this way presupposes a causal relationship between the variables and that the model assumptions are satisfied; neither is established here, since the data are observational and the residuals show evidence of non-normality.
#### b) Repeat this for a regression that further includes mother’s education, interpreting both slope coefficients in this model. Have your conclusions about the timing of birth changed?
```{r}
model2 <- stan_glm(ppvt ~ momage + educ_cat, data = child_iq, refresh=0)
print(model2)
```
```{r}
shapiro.test(resid(model2))
```
The intercept estimate is 69.3: the predicted PPVT score for a (hypothetical) child whose mother is 0 years old and in education category 0, which serves only to anchor the regression line. The slope estimate for mother's age is 0.3, meaning that for every one-year increase in a mother's age, the child's predicted PPVT score increases by 0.3 points, holding the other predictor constant.

The slope estimate for mother's education is 4.7, meaning that a child whose mother is one category higher on the education scale is predicted to score 4.7 points higher on the PPVT, holding age constant.

The estimated sigma is 20.1. This means that, on average, the model's predictions deviate from the actual PPVT scores by about 20.1 points, which is still a large amount of residual variability.

The Shapiro-Wilk test on the residuals again yields a very small p-value of 7.149e-06, suggesting that the residuals do not follow a normal distribution.
Comparing the two models, the coefficient on mother's age drops from 0.8 to 0.3 once education level is included as a predictor, suggesting that much of the apparent age effect is accounted for by mother's education. Maternal age therefore appears to be a weaker predictor in the presence of education level, which weakens any conclusion about the timing of birth based on age alone.
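As an optional check beyond the exercise, the two models' out-of-sample predictive performance could be compared with leave-one-out cross-validation; this is a sketch assuming both fits (`model` and `model2`) are still in memory:
```{r}
# Sketch: compare expected predictive performance of the two models via LOO-CV
loo1 <- loo(model)
loo2 <- loo(model2)
loo_compare(loo1, loo2)
```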
#### c) Now create an indicator variable reflecting whether the mother has completed high school or not. Consider interactions between high school completion and mother’s age. Also create a plot that shows the separate regression lines for each high school completion status group.
```{r}
# Indicator for whether the mother completed high school (educ_cat >= 2)
child_iq$mom_hs <- ifelse(child_iq$educ_cat >= 2, 1, 0)
model3 <- stan_glm(ppvt ~ mom_hs + momage + mom_hs:momage, data = child_iq, refresh = 0)
print(model3)

# Scatter plot colored by high-school completion, with one fitted line per group
colors <- ifelse(child_iq$mom_hs == 1, "black", "purple")
plot(child_iq$momage, child_iq$ppvt,
     xlab = "Mother's Age", ylab = "Child Test Scores",
     col = colors, pch = 20)
legend("topright", legend = c("High school", "No high school"),
       col = c("black", "purple"), pch = 20, cex = 0.5)
b_hat <- coef(model3)
# Completed high school: add the mom_hs terms to the intercept and slope
abline(b_hat[1] + b_hat[2], b_hat[3] + b_hat[4], col = "black", lwd = 2)
# Did not complete high school: baseline intercept and slope
abline(b_hat[1], b_hat[3], col = "purple", lwd = 2)
```
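To make the interaction concrete, the age slope implied for each group can be read off the fitted coefficients; a small sketch using `model3` from the chunk above:
```{r}
# Sketch: implied momage slope for each high-school completion group
b_hat <- coef(model3)
c(no_highschool = unname(b_hat["momage"]),
  highschool    = unname(b_hat["momage"] + b_hat["mom_hs:momage"]))
```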
#### d) Finally, fit a regression of child test scores on mother’s age and education level for the first 200 children and use this model to predict test scores for the next 200. Graphically display comparisons of the predicted and actual scores for the final 200 children.
```{r}
# Fit on the first 200 children, then predict scores for the next 200.
# A new name (model_d) avoids overwriting the fit from part (a).
train_data <- child_iq[1:200, ]
model_d <- stan_glm(ppvt ~ momage + educ_cat, data = train_data, refresh = 0)
test_data <- child_iq[201:400, ]
predicted_scores <- predict(model_d, newdata = test_data)
comparison_data <- data.frame(Actual = test_data$ppvt, Predicted = predicted_scores)

plot_actual <- ggplot(comparison_data, aes(x = seq_along(Actual), y = Actual)) +
  geom_point(color = "navy", alpha = 0.6) +
  geom_smooth(method = "lm", se = TRUE, color = "purple") +
  labs(x = "Child", y = "Actual Scores") +
  ggtitle("Actual Scores for the Final 200 Children") +
  theme_minimal()

plot_predicted <- ggplot(comparison_data, aes(x = seq_along(Predicted), y = Predicted)) +
  geom_point(color = "darkred", alpha = 0.6) +
  geom_smooth(method = "lm", se = TRUE, color = "purple") +
  labs(x = "Child", y = "Predicted Scores") +
  ggtitle("Predicted Scores for the Final 200 Children") +
  theme_minimal()

grid.arrange(plot_predicted, plot_actual, nrow = 1)
```
```{r}
r_squared <- cor(comparison_data$Predicted, comparison_data$Actual)^2
print(r_squared)
```
The squared correlation between predicted and actual scores is about 0.024, meaning that only around 2.4% of the variance in the held-out test scores is accounted for by the predictors (mother's age and education level). The model therefore has very limited capacity to capture the variability in the test scores using these predictors.
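Since the model is a Bayesian fit, rstanarm also provides `bayes_R2()` for a posterior version of this quantity; the sketch below applies it to the training fit (`model_d`, the first 200 children), so unlike the out-of-sample correlation above it is an in-sample measure:
```{r}
# Sketch: median posterior R-squared for the model fit to the first 200 children
median(bayes_R2(model_d))
```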
## 2. Exercise 10.6
#### _Regression models with interactions:_ The folder `Beauty` contains data (use file `beauty.csv`) from Hamermesh and Parker (2005) on student evaluations of instructors’ beauty and teaching quality for several courses at the University of Texas. The teaching evaluations were conducted at the end of the semester, and the beauty judgments were made later, by six students who had not attended the classes and were not aware of the course evaluations.
```{r}
beauty_df<-read.csv("../methods-2-resources/data/Beauty/data/beauty.csv")
head(beauty_df)
```
#### (a) Run a regression using beauty (the variable `beauty`) to predict course evaluations (`eval`), adjusting for various other predictors. Graph the data and fitted model, and explain the meaning of each of the coefficients along with the residual standard deviation. Plot the residuals versus fitted values.
```{r}
beautyfit <-stan_glm(eval ~ beauty + female + nonenglish, data = beauty_df, refresh = 0)
summary(beautyfit, digits = 4)
```
```{r}
ggplot(beauty_df, aes(beauty, eval)) +
  geom_point() +
  # Baseline: male, native English speaker
  geom_abline(intercept = coef(beautyfit)[1], slope = coef(beautyfit)[2], col = "blue") +
  # The female and nonenglish coefficients shift the intercept, not the slope
  geom_abline(intercept = coef(beautyfit)[1] + coef(beautyfit)[3],
              slope = coef(beautyfit)[2], col = "red") +
  geom_abline(intercept = coef(beautyfit)[1] + coef(beautyfit)[4],
              slope = coef(beautyfit)[2], col = "purple") +
  labs(title = "Beauty and Evaluation",
       subtitle = "baseline (blue), female (red), non-native English (purple)",
       x = "Beauty", y = "Evaluation")
```
```{r}
plot(fitted(beautyfit), residuals(beautyfit), main = "Residuals vs Fitted")
abline(h = 0, col = "blue")
```
The model predicting evaluation from beauty was fitted with the Bayesian regression function *stan_glm()* from *rstanarm*. To adjust for other predictors, a full model was fitted first, and predictors that made no substantive sense or whose point estimates were very close to zero were excluded, since coefficients near zero contribute minimally to explaining evaluation. The predictors in the final, plotted model are $\beta_1$ (`beauty`), $\beta_2$ (`female`), and $\beta_3$ (`nonenglish`).

The intercept $\beta_0 = 4.1$ is the predicted evaluation score for a male, native-English-speaking teacher with a beauty rating of zero. $\beta_1 = 0.15$ means that an increase of 1 in beauty rating predicts an increase of 0.15 in evaluation rating, holding the other predictors constant. Similarly, $\beta_2 = -0.20$ and $\beta_3 = -0.33$ are the predicted decreases in evaluation score for a teacher who is, respectively, female or a non-native English speaker, holding the other predictors constant. The residual standard deviation is $\sigma = 0.53$; this is the typical size of the deviations left unexplained by the model (a standard deviation, not a variance). The residuals-vs-fitted plot shows no obvious pattern, so there is no clear sign of non-linearity or unequal variance.
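The point estimates above can be supplemented with posterior uncertainty; a brief sketch using rstanarm's `posterior_interval()`:
```{r}
# Sketch: 90% posterior intervals for the coefficients and sigma
posterior_interval(beautyfit, prob = 0.90)
```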
#### (b) Fit some other models, including beauty and also other predictors. Consider at least one model with interactions. For each model, explain the meaning of each of its estimated coefficients.
```{r}
beautyint_1 <- stan_glm(eval ~ beauty*female, data = beauty_df, refresh = 0)
summary(beautyint_1, digits = 4)
```
beautyint_1:
The intercept is the predicted evaluation score for a male instructor with a beauty score of zero.

Because the model contains an interaction, the coefficient of beauty is the beauty slope for the reference group (males): for male instructors, each one-unit increase in beauty score predicts a 0.2-unit increase in evaluation score.

Likewise, the coefficient of female is the male-female difference at a beauty score of zero: such female instructors have predicted evaluation scores 0.2 units lower than their male counterparts.

The coefficient of beauty:female is the interaction effect, i.e., the difference between the beauty slopes of the two groups: for every one-unit increase in beauty score, the predicted evaluation score for females increases by 0.11 units less than for males (see the sketch below).
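This sketch recovers the implied beauty slope for each gender directly from the fitted coefficients:
```{r}
# Sketch: beauty slope implied by the interaction, for each gender
b <- coef(beautyint_1)
c(male   = unname(b["beauty"]),
  female = unname(b["beauty"] + b["beauty:female"]))
```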
```{r}
beautyint_2 <- stan_glm(eval ~ beauty + minority*female, data = beauty_df, refresh = 0)
summary(beautyint_2, digits = 4)
```
beautyint_2:
Because minority is interacted with female, its coefficient applies to the reference group (males): with a coefficient of 0.09, male professors from a minority are predicted to score 0.09 points higher than male non-minority professors, when the other predictors are held constant.

The coefficient of minority:female is the difference in the gender gap between minority and non-minority professors. With a coefficient of -0.36, being female is associated with a 0.36-point larger decrease in predicted evaluation score for minority professors than for non-minority professors, when the other predictors are held constant.
```{r}
beautyint_3 <- stan_glm(eval ~ beauty + female*nonenglish, data = beauty_df, refresh = 0)
summary(beautyint_3, digits = 4)
```
beautyint_3:
Because nonenglish is interacted with female, its coefficient applies to male instructors: a male professor who is not a native English speaker is predicted to score 0.44 points lower than a male native English speaker with the same beauty score.

The coefficient of female:nonenglish is the difference in this native/non-native gap between female and male instructors: the gap for female professors differs from the gap for male professors by 0.25 points (see the sketch below).
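Because the sign conventions are easy to confuse here, this sketch recovers the implied native/non-native gap for each gender directly from the fitted coefficients:
```{r}
# Sketch: implied effect of being a non-native English speaker, by gender
b <- coef(beautyint_3)
c(male   = unname(b["nonenglish"]),
  female = unname(b["nonenglish"] + b["female:nonenglish"]))
```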
## 3. Exercise 10.7
#### _Predictive simulation for linear regression:_ Take one of the models from the previous exercise.
#### (a) Instructor A is a 50-year-old woman who is a native English speaker and has a beauty score of -1. Instructor B is a 60-year-old man who is a native English speaker and has a beauty score of -0.5. Simulate 1000 random draws of the course evaluation rating of these two instructors. In your simulation, use posterior_predict to account for the uncertainty in the regression parameters as well as predictive uncertainty.
```{r}
# beautyint_3 (eval ~ beauty + female*nonenglish) was fitted above; note that
# age is not a predictor in this model, so it does not enter the simulation.
instructors <- data.frame(
  beauty = c(-1, -0.5),
  female = c(1, 0),
  nonenglish = c(0, 0)
)
simulation <- posterior_predict(beautyint_3, newdata = instructors, draws = 1000)
instructor_A_ratings <- simulation[, 1]
instructor_B_ratings <- simulation[, 2]
hist(instructor_A_ratings, main = "Simulated Ratings for Instructor A",
     xlab = "Course Evaluation Rating")
```
#### (b) Make a histogram of the difference between the course evaluations for A and B. What is the probability that A will have a higher evaluation?
```{r}
difference <- instructor_A_ratings - instructor_B_ratings
hist(difference, breaks = "FD", col = "lightblue",
     xlab = "Course Evaluation Difference (A - B)", ylab = "Frequency")
mean(difference)
sd(difference)
# Probability that A gets the higher evaluation: the share of simulated draws
# with a positive difference (avoids hard-coding a normal approximation)
mean(difference > 0)
```
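The last line estimates the probability that A receives the higher evaluation as the share of posterior-predictive draws in which the difference is positive. Using a normal approximation with the simulated mean (about -0.28) and standard deviation (about 0.77), this probability comes out to roughly 0.36, so instructor B is more likely to receive the higher evaluation.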