---
title: "Assignment 2 (A2): Language development in autistic and neurotypical children"
subtitle: 'Instructions'
author: Maxime Sainte-Marie
output:
  html_document:
    toc: yes
    number_sections: yes
    toc_float: yes
    theme: united
    highlight: espresso
    css: '../../varia/standard.css'
  pdf_document:
    toc: no
    number_sections: yes
    geometry: margin=1in
knit: (function(inputFile, encoding) {
  browseURL(
    rmarkdown::render(
      inputFile,
      encoding = encoding,
      output_dir = 'documents/assignments/instructions',
      output_file = "a2_language_development"))})
bibliography: '../../varia/bibliography.bib'
editor_options:
  chunk_output_type: console
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
# knitr::opts_knit$set(root.dir = rprojroot::find_rstudio_root_file())
if (!requireNamespace("cmdstanr", quietly = TRUE)) {
  remotes::install_github("stan-dev/cmdstanr")
  cmdstanr::install_cmdstan()
}
pacman::p_load(
  tidyverse,
  brms,
  patchwork
)
```
# Intro
Autism Spectrum Disorder (ASD) is often associated with language impairment. However, this phenomenon has rarely been traced empirically in detail:
1. relying on actual naturalistic language production
2. over extended periods of time.
Around 30 kids with ASD and 30 typically developing kids were videotaped (matched by linguistic performance at visit 1) for ca. 30 minutes of naturalistic interactions with a parent. Data collection was repeated 6 times per kid, with 4 months between each visit. Following transcription of the data, the following quantities were computed:
1. the number of words each kid uses in each video (same for the parent)
2. the number of unique words each kid uses in each video (same for the parent)
3. the mean number of morphemes per utterance (Mean Length of Utterance, MLU) displayed by each child in each video (same for the parent)
This data is in the file you prepared in the previous class, but you can also find it [here](https://www.dropbox.com/s/d6eerv6cl6eksf3/data_clean.csv?dl=0).
## Assignment structure
We will be spending a few weeks with this assignment. In particular, we will:
1. build our model, analyze the empirical data, and interpret the inferential results
2. use the model to predict the linguistic trajectory of new children and assess its performance on that basis.
As you work through these parts, you will have to produce a written document (separate from the code) answering the following questions:
1. Briefly describe the empirical data, your model(s), and their quality. Report the findings: how does development differ between autistic and neurotypical children (N.B. remember to report both population- and individual-level findings)? Which additional factors should be included in the model? Add at least one plot showcasing your findings.
2. Given the model(s) from the previous question, how well do they predict the data? Discuss both in terms of absolute error in training vs. testing, and in terms of characterizing the new kids' language development as typical or in need of support.
Below you can find more detailed instructions for each part of the assignment.
# Analysis
- Describe your sample (n, age, gender, clinical and cognitive features of the two groups) using plots, and critically assess whether the groups (ASD and TD) are balanced.
- Describe linguistic development (in terms of MLU over time) in TD and ASD children (as a function of group). Discuss the difference (if any) between the two groups.
- Describe individual differences in linguistic development: do all kids follow the same path? Are all kids reflected by the general trend for their group?
- Include additional predictors in your model of language development (N.B. not other indexes of child language such as types and tokens; that would be cheating). Identify the best model by conceptual reasoning, model comparison, or a mix of both. Report the model you choose (and name its competitors, if any) and discuss why it is the best model.
In working through this part of the assignment, keep in mind the following workflow:
1. Formula definition
2. Prior definition
3. Prior predictive checking
4. Model fitting
5. Model quality checks
6. Model comparison
```{r describe_data}
# ggplot2, dplyr, and readr are already attached via tidyverse in the setup chunk
library(Metrics) # rmse() for the prediction section below
# NB: adjust this path to wherever your preprocessed file lives
data <- read_csv("~/GitHub/a2-linguistic-development-nsjbd/ASD_data_preprocessed.csv")
# Age distribution
ggplot(data, aes(x = Age, fill = Diagnosis)) +
geom_histogram(position = 'dodge', bins = 20) +
ggtitle("Age Distribution by Diagnosis") +
xlab("Age in months") +
ylab("Frequency")
# Gender distribution
ggplot(data, aes(x = Gender, fill = Diagnosis)) +
geom_bar(position = 'dodge') +
ggtitle("Gender Distribution by Diagnosis") +
xlab("Gender") +
ylab("Frequency")
# ADOS score (clinical feature)
ggplot(data, aes(x = ADOS, fill = Diagnosis)) +
geom_histogram(position = 'dodge', bins = 20) +
ggtitle("ADOS Score Distribution by Diagnosis") +
xlab("ADOS Score") +
ylab("Frequency")
# Socialization score (cognitive feature)
ggplot(data, aes(x = Socialization, fill = Diagnosis)) +
geom_histogram(position = 'dodge', bins = 20) +
ggtitle("Socialization Score Distribution by Diagnosis") +
xlab("Socialization Score") +
ylab("Frequency")
```
### Age Distribution
The ASD children in the sample are slightly older compared to the TD (Typically Developing) children.
### Gender Distribution
The sample predominantly consists of males, across both ASD and TD groups.
### Clinical Features: ADOS Score
As expected, the ADOS scores for the ASD children are significantly higher than those for the TD children.
### Cognitive Features: Socialization Score
Consistent with expectations, the Socialization scores indicate that ASD children perform substantially worse than TD children.
### Balance Assessment
Given the age difference and the skewed gender distribution, the groups may not be perfectly balanced. The differences in ADOS and Socialization scores are expected given the nature of the conditions, but they are still worth keeping in mind in any subsequent analysis.
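As a rough numeric check of balance, the visit-1 group summaries can be tabulated. A minimal sketch, assuming gender is coded as "M"/"F" in the Gender column (check your data; the coding is an assumption here):
```{r balance_check}
# Group summaries at the first visit (gender coding "M"/"F" is an assumption)
data %>%
  filter(Visit == 1) %>%
  group_by(Diagnosis) %>%
  summarize(
    n = n(),
    mean_age = mean(Age, na.rm = TRUE),
    prop_male = mean(Gender == "M", na.rm = TRUE),
    mean_ADOS = mean(ADOS, na.rm = TRUE),
    mean_socialization = mean(Socialization, na.rm = TRUE)
  )
```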
```{r set_variables}
# Remove rows with missing CHI_MLU and keep only the relevant columns
data_filtered <- data[!is.na(data$CHI_MLU), c("ID", "Diagnosis", "Visit", "CHI_MLU")]
# Mean child MLU per visit, by diagnosis
ggplot(data_filtered, aes(x = Visit, y = CHI_MLU, color = Diagnosis)) +
  stat_summary(fun = mean, geom = "line", linewidth = 1.5) +
ggtitle("Linguistic Development (MLU) Over Time in TD and ASD Children") +
xlab("Visit Number") +
ylab("Mean Length of Utterance (MLU)")
data_filtered %>%
group_by(Diagnosis) %>%
summarize(mean_CHI_MLU = mean(CHI_MLU, na.rm = TRUE))
```
## Observations from the Linguistic Development Plot:
### Autism Spectrum Disorder (ASD) Children:
Across the clinical visits, the ASD children show only a small improvement in Mean Length of Utterance (MLU) compared to the TD children.
Interpretation: the ASD children's language development, as measured by MLU, improves over time but fluctuates between visits.
### Typically Developing (TD) Children:
Across the clinical visits, the TD children show substantial improvement in MLU compared to the ASD children.
Interpretation: TD children improve steadily from visits 1 through 5 and then show a small setback at visit 6.
```{r}
ggplot(data_filtered, aes(x = as.factor(Visit), y = CHI_MLU, fill = Diagnosis)) +
geom_boxplot() +
facet_wrap(~ Diagnosis) +
ggtitle("Individual Differences in Linguistic Development: Boxplot") +
xlab("Visit Number") +
ylab("Mean Length of Utterance (MLU)")
```
### Observations from the Individual Differences Plot:
This plot shows greater individual variability among the ASD children than among the TD children.
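Individual trajectories make this easier to judge than boxplots alone; a minimal sketch using the same filtered data as above:
```{r individual_trajectories}
# One line per child; faceting by group makes the within-group spread visible
ggplot(data_filtered, aes(x = Visit, y = CHI_MLU, group = ID, color = Diagnosis)) +
  geom_line(alpha = 0.4) +
  facet_wrap(~ Diagnosis) +
  ggtitle("Individual MLU Trajectories by Group") +
  xlab("Visit Number") +
  ylab("Mean Length of Utterance (MLU)")
```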
## Formula Definition
```{r define_formulas}
# Simple model, fitted with brms default priors so they can be inspected below
simple_model <- brms::brm(
brms::bf(CHI_MLU ~ 1 + MOT_MLU),
data = data,
chains = 4,
cores = 4,
seed = 123
)
prior_summary(simple_model)
# Improved model with multiple predictors, an interaction, and random effects,
# again fitted with default priors for inspection
improved_model <- brm(
  bf(CHI_MLU ~ MOT_MLU + Diagnosis + Diagnosis:Visit +
       (MOT_MLU | Diagnosis) + (MOT_MLU + Diagnosis | Visit) +
       (MOT_MLU + Diagnosis + Visit | ID)),
data = data,
chains = 4,
cores = 4,
seed = 123
)
prior_summary(improved_model)
```
## Prior Definition
We know that CHI_MLU cannot be negative, so we bound the slope priors below with lb = 0. We also set an upper bound of 4.5, just above the maximum observed value of CHI_MLU.
We also know from the descriptive plots that there are two groups in the data (ASD and TD), which means the pooled outcome distribution may well be bimodal.
```{r define_priors}
simple_model_priors <- c(
prior(student_t(3, 1.9, 2.5), class = Intercept),
prior(uniform(0, 4.5), class = b, lb = 0, ub = 4.5),
prior(student_t(3, 0, 2.5), class = sigma, lb = 0)
)
# Same priors for the improved model; group-level sd and correlation parameters keep the brms defaults
improved_model_priors <- c(
prior(student_t(3, 1.9, 2.5), class = Intercept),
prior(uniform(0, 4.5), class = b, lb = 0, ub = 4.5),
prior(student_t(3, 0, 2.5), class = sigma, lb = 0)
)
```
## Prior Predictive Checking
By setting the *sample_prior* argument to "only" in the **brm** function, draws are taken solely from the priors, ignoring the likelihood. Among other things, this makes it possible to generate draws from the prior predictive distribution.
```{r }
simple_model_with_priors <-
update(simple_model,
prior = simple_model_priors,
sample_prior = 'only')
prior_summary(simple_model_with_priors)
pp_check(simple_model_with_priors, prefix='ppd') # plotting prior predictive distribution
improved_model_with_priors <-
update(improved_model,
prior = improved_model_priors,
sample_prior = 'only')
prior_summary(improved_model_with_priors)
pp_check(improved_model_with_priors, prefix='ppd') # plotting prior predictive distribution
```
## Model Fitting
```{r fit_model}
fitted_model <- brms::brm(
brms::bf(CHI_MLU ~ 1 + MOT_MLU),
data = data,
chains = 4,
cores = 4,
seed = 123,
prior = simple_model_priors
)
fitted_improved_model <- brms::brm(
brms::bf(CHI_MLU ~ MOT_MLU + Diagnosis + Diagnosis:Visit + (MOT_MLU|Diagnosis) + (MOT_MLU + Diagnosis|Visit) + (MOT_MLU + Diagnosis + Visit|ID)),
data = data,
chains = 4,
cores = 4,
seed = 123,
prior = improved_model_priors
)
```
## Model quality checks
```{r check_models}
summary(fitted_model)
bayes_R2(fitted_model)
summary(fitted_improved_model)
bayes_R2(fitted_improved_model)
```
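summary() and bayes_R2() only go so far; posterior predictive checks and convergence diagnostics would round out the quality assessment. A minimal sketch:
```{r extra_quality_checks}
# Posterior predictive checks: do simulated data sets resemble the observed CHI_MLU?
pp_check(fitted_model, ndraws = 100)
pp_check(fitted_improved_model, ndraws = 100)
# Trace and density plots for convergence inspection
plot(fitted_improved_model)
# Rhat values close to 1 indicate converged chains
rhat(fitted_improved_model)
```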
## Model Comparison
For model comparison we used Leave-One-Out Cross-Validation (LOO), in which a higher Expected Log Pointwise Predictive Density (ELPD) indicates better expected out-of-sample predictive performance.
The fitted_improved_model appears substantially better than fitted_model in terms of predictive accuracy, with the simple model scoring 54.5 ELPD lower (SE = 8.9). Since the ELPD difference is several times its standard error, the improvement appears reliable.
However, given the warnings we received during model fitting, we should be cautious: the model comparison is only as good as the models being compared, and we have issues to address in our models before we can fully trust these results.
```{r compare_models}
# Compute LOO for both models
loo_simple <- loo(fitted_model)
loo_improved <- loo(fitted_improved_model)
# Compare the models
loo_compare(loo_simple, loo_improved)
```
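To gauge how much the LOO estimates themselves can be trusted, the Pareto k diagnostics are worth a look; a short sketch:
```{r loo_diagnostics}
# Pareto k values above ~0.7 flag observations where the LOO approximation is unreliable
print(loo_improved)
plot(loo_improved, label_points = TRUE)
```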
```{r test_hypotheses}
# Sketch under assumed coefficient labels -- check fixef(fitted_improved_model) for the exact names
draws <- as_draws_df(fitted_improved_model)
# Posterior probability that TD children gain MLU faster per visit than ASD children
mean(draws$`b_DiagnosisTD:Visit` - draws$`b_DiagnosisASD:Visit` > 0)
```
# Prediction
N.B. There are several data sets for this exercise, so pay attention to which one you are using!
1. The (training) data set from last time
2. The (test) data set on which you can test the models from last time:
- [Demographic and clinical data](https://www.dropbox.com/s/ra99bdvm6fzay3g/demo_test.csv?dl=1)
- [Utterance Length data](https://www.dropbox.com/s/uxtqqzl18nwxowq/LU_test.csv?dl=1)
- [Word data](https://www.dropbox.com/s/1ces4hv8kh0stov/token_test.csv?dl=1)
Relying on the model(s) you trained in part 2 of the exercise, create predictions for the test set and assess how well they do compared to the actual data.
- Discuss the differences in performance of your model in training and testing data. Is the model any good?
- Let's assume you are a speech therapy clinic. You want to assess whether the kids in your test sample will have a typical development (like a TD child), or whether they will have a worse one, in which case they should get speech therapy support. What do your predictions tell you about that? Which kids would you provide therapy for? Is the model any good?
In the following, we will be using our simplest model, since the other data sets do not contain all the predictors present in the training data set.
### Importing LU_test and pre-processing
```{r import_data}
LU_test <- read_csv("LU_test.csv")
# Rename columns to match the training data
LU_test <- LU_test %>%
rename(
ID = SUBJ,
Visit = VISIT
)
# Remove all non-numeric characters from the 'Visit' column
LU_test <- LU_test %>%
mutate(
Visit = as.integer(gsub("[^0-9]", "", Visit))
)
```
## Remove missing data to ease merging with predictions
```{r remove_missing}
# Remove rows with any NA values
LU_test <- na.omit(LU_test)
```
Here we should ideally be using a model with some more interesting predictors (to make sure we have something to predict). Alternatively, we could retrain the model to include visit 1 for all the test kids (and thus have group-level estimates for them); see the sketch below.
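A sketch of that retraining alternative, assuming a hypothetical `test_data` frame in which the three test files have already been merged to match the training columns (not evaluated here, since that merge is not shown in this document):
```{r retrain_sketch, eval=FALSE}
# Hypothetical: test_data = the merged test files with the same columns as `data`
visit1_test <- test_data %>% filter(Visit == 1)
augmented_data <- bind_rows(data, visit1_test)
# Refit the improved model so it has group-level estimates for the new IDs
fitted_improved_aug <- update(fitted_improved_model, newdata = augmented_data)
```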
```{r}
# Predictions on the test data (LU_test)
predictions <- predict(simple_model, newdata = LU_test, allow_new_levels = TRUE)
# predict() returns a matrix (Estimate, Est.Error, Q2.5, Q97.5); keep the point estimates
comparison_df <- data.frame(
  Actual = LU_test$CHI_MLU,
  Predicted = predictions[, "Estimate"]
)
```
### Plot
```{r}
comparison_df$index <- seq_len(nrow(comparison_df))
comparison_df_long <- tidyr::pivot_longer(comparison_df, cols = -index, names_to = "Type", values_to = "Value")
ggplot(comparison_df_long, aes(x = index, y = Value, color = Type)) +
geom_line() +
ggtitle("Actual vs Predicted Over Observations") +
xlab("Observation Index") +
ylab("Value")
```
```{r train_model}
# The simple and improved models were already fitted in the Model Fitting section above
```
Assess the performance of the model on the training data: root mean square error is a good measure. (Tip: google the function rmse())
```{r assess_performance}
# RMSE on the training data (fitted values for the rows brm() actually used)
train_predictions <- predict(simple_model)
rmse_train <- rmse(simple_model$data$CHI_MLU, train_predictions[, "Estimate"])
rmse_train
# RMSE on the test data, for comparison with the training error
test_predictions <- predict(simple_model, newdata = LU_test, allow_new_levels = TRUE)
rmse_test <- rmse(LU_test$CHI_MLU, test_predictions[, "Estimate"])
rmse_test
```
Reported test-set RMSE: 1.456195
Show how each child fares in child MLU compared to the average TD child at each visit:
```{r child_performance_td_average}
```
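One way to fill in the chunk above; a minimal sketch, assuming Diagnosis is coded "TD" as in the plots earlier and using the cleaned LU_test from the prediction step:
```{r}
# Average TD trajectory per visit, computed from the training data
td_avg <- data %>%
  filter(Diagnosis == "TD") %>%
  group_by(Visit) %>%
  summarize(avg_TD_MLU = mean(CHI_MLU, na.rm = TRUE))
# Each test child's observed MLU (grey) against the average TD child (red)
LU_test %>%
  left_join(td_avg, by = "Visit") %>%
  ggplot(aes(x = Visit)) +
  geom_line(aes(y = CHI_MLU, group = ID), alpha = 0.5, color = "grey40") +
  geom_line(aes(y = avg_TD_MLU), color = "red", linewidth = 1.2) +
  ggtitle("Test Children vs. Average TD Child (red)") +
  ylab("Mean Length of Utterance (MLU)")
```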