# What do we know about the replication crisis?
Originated when my homeboy Daryl Bem published statistically significant evidence that psychic powers (precognition) exist. If p < .05 we know it's true... right?
We want data like French marriages: open.

**97% of the original studies produced statistically significant results; only 36% of the replication studies did so** (Open Science Collaboration, 2015)
Not getting the same results when reproducing an experiment
False or exaggerated results
Wanting to undertake an **original study** rather than merely copying someone else's study
Ongoing issue where people struggle to replicate experiments due to biased, skewed data
> People can't replicate experiments because of skewed or biased data, e.g. researchers not being truthful
People choosing their results based on specific parts of the data, so that as a whole the experiment can't be replicated
Replicability is important for science
The whole thing about open science - transparency
People feeling pressure to publish results, so they force them to be significant
Some studies suggest this may be a result of low statistical power in single replication studies rather than a failure to replicate results entirely
Need for multiple replication studies
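On the low-power point above, a minimal power sketch using statsmodels' `TTestIndPower` (the assumed true effect of Cohen's d = 0.4 and the sample sizes are my own illustrative numbers, not from any particular study):

```python
# Power of a two-sample t-test to detect an assumed true effect of d = 0.4.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (30, 100, 250):
    power = analysis.power(effect_size=0.4, nobs1=n, alpha=0.05)
    print(f"n = {n:>3} per group -> power = {power:.2f}")

# With n = 30 per group, power is only ~0.33: roughly two out of three single
# replications of a real d = 0.4 effect would come out non-significant anyway.
```

So a single non-significant replication is weak evidence on its own – hence the call for multiple, well-powered replications.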

People only publishing statistically significant results can skew the literature; failed replications could also be due to individual differences, poor research practices, and post-hoc analysis. Ultimately, factors such as these affect the credibility and accuracy of results.
Because experimenters play around with the data, they can't necessarily remember the exact process they went through and report it in the method section; therefore, when other experimenters go to replicate the study, they don't know the exact steps the original study undertook.
***Replication Crisis***
- Inability to get the same results
- Due to publication bias
- Cherry picking results
- Deviation from method – need to write down the method clearly (the original study) and replication studies need to specify when they deviate from the original method
- Pre-registered (registered) reports: writing up the intro, method, and planned analyses BEFOREHAND and getting them confirmed by a panel of reviewers; if approved, the paper will be published regardless of the results – acceptance is about the purpose of the study, not the significance of its results
  - Here, it's not about tampering with results to get something significant, but rather having an interesting/approved purpose for the study
- Replication is about finding the same result, but it can use a different method – not an authentic reconstruction
- Reproducibility is about using the same method on the same data; it's defined by the procedure rather than by the result
- Publication bias: failure to get a significant result may impact funding, pressuring researchers to force a significant result
  - e.g. by relaxing exclusion criteria
***The replication crisis: Inability to replicate results of prior findings:***
* Lack of transparency
* Cherry picking data
* Exploratory analyses termed as confirmatory
* Inflating alpha (type 1 error rate inflation)
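A worked example of that inflation, assuming m independent tests each run at α = .05 (my numbers):

$$P(\text{at least one Type 1 error}) = 1 - (1 - \alpha)^m = 1 - 0.95^{10} \approx 0.40$$

Ten uncorrected tests carry roughly a 40% family-wise false-positive rate, not 5%.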
***Reproducibility:*** methods and procedures need to be clearly stated so that the study can be reproduced
***Things to help:***
* Open data
* Preregistering experiments
## The Down-Low on the Replication Crisis
- The replication crisis = lots of psych experiments/data not being able to be replicated by future researchers
- Replication = replicating a previous experiment (following same method etc) and collecting new data
- Reproducing = reperforming same analysis on same data by different analysts
- Not publishing data when unsupportive of hypotheses
- Altering/deleting/fabricating data to selectively support hypotheses
- Data snooping
- Demand for ‘new’ and ‘innovative’ findings (especially when it influences careers/holding jobs/salaries/reputation)
- Unclear methods and data analysis sections
- Materials sometimes behind a paywall (need to email the authors or pay to access them)
- Accessibility and freedom of information an issue
- Privatised papers (meaning less open scrutiny/peer review)
- Underpowered/biased sample studies and over-generalisation of results
- Selectively choosing statistics/graphs to make findings seem more practically important than they really are (e.g. graphs with truncated axes – see the sketch below)
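A toy illustration of that last point, with made-up numbers: the same two means plotted with a full y-axis and with a truncated one.

```python
# Same data, two axes: a full y-axis vs a truncated one that inflates the gap.
# The means and labels are invented for illustration.
import matplotlib.pyplot as plt

means = [50.0, 51.5]
labels = ["control", "treatment"]

fig, (ax_full, ax_cut) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(labels, means)
ax_full.set_ylim(0, 60)
ax_full.set_title("Full axis: tiny difference")

ax_cut.bar(labels, means)
ax_cut.set_ylim(49.5, 52)  # truncated axis makes the same gap look dramatic
ax_cut.set_title("Truncated axis: 'huge' effect")

plt.tight_layout()
plt.savefig("axis_truncation.png")
```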
### What is the "crisis"?
- Past studies being recreated and the replications not finding significant results
### What has caused it?
- Researchers fail to control the family-wise error rate (FWER) – the problem of multiple comparisons
- Publication bias
- Biased samples - overgeneralisations
- Only significant results are published - may incentivise researchers to employ invalid research methods
- Ability to pay to publish - undermines peer review
### What does it mean for research to be "replicable"?
- Recreate findings using the same methodology
- Using new data
### What does it mean for research to be "reproducible"?
- Re-running the same analysis on the same data (e.g. by different analysts) and getting the same result
### Are they the same thing?
- No, they are not the same: replication collects new data; reproducibility re-analyses the same data
## What is the replication crisis?
Only a small percentage of psychology studies get the same result when rerun using the same methodology
Replication = run same method in same way to (hopefully) get the same result
Recent investigations reveal that many major findings can’t be replicated and the majority of papers published do not have open/available data.
### What caused the replication crisis?
Small sample sizes (underpowered studies): a "significant" effect may have arisen by chance
Use of multiple, incompatible analyses to get significant results
Analytic flexibility → repeating data analyses until p < .05
Researcher degrees of freedom, and failing to note these deviations from the original procedure → desire for a significant effect: redo the analysis over and over until a significant result appears (every extra analysis conducted increases the chance of a Type 1 error)
Data exclusion: dropping participants as "outliers" rests on many assumptions and can shade into data manipulation; also inappropriate post-hoc analyses
Individual differences and variability challenges replication particularly for babies due to their mood, developmental stage and short attention span
Variability: babies might be engaged in one task then tired for the next
Incentives for individuals to publish (promotions etc.). Significant results are more likely to be published, pushing researchers in the wrong direction (e.g., p-hacking, HARKing)
P-hacking: continuing to collect and/or modify data until a significant result appears (see the simulation below)
HARKing (Hypothesising After the Results are Known): making a hypothesis after seeing the results of a study, e.g. adding variables that were not part of the original research question and justifying them post hoc
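A minimal simulation of one p-hacking tactic, optional stopping (all parameters are my own assumptions): both groups come from the same distribution, so every "significant" result is a false positive.

```python
# Optional stopping: test after every batch of new participants and stop as
# soon as p < .05. Both groups are sampled from the SAME distribution, so the
# nominal false positive rate should be 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def phack_once(start_n=10, max_n=100, step=5):
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) <= max_n:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # "significant" result found by peeking
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False

runs = 2000
hits = sum(phack_once() for _ in range(runs))
print(f"False positive rate with optional stopping: {hits / runs:.1%}")
# Expect well above the nominal 5% (around 20% with this many peeks).
```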
## Replication Crisis
- Classical experiments couldn’t be replicated in modern research
- Statistics were invalid
- Post-hoc vs planned analysis – caused research to be hard to replicate
- Statistical analysis methods were invalid – e.g. misusing Scheffé vs Bonferroni corrections (see the sketch after this list)
- Manipulation (intentional or unintentional) of data to get statistical significance
- Lack of trust of psychology research
- Poorly written methods
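On the multiple-comparisons point (and the FWER item earlier), a minimal sketch of a Bonferroni correction; the p-values are made up:

```python
# Bonferroni correction: test each p-value against alpha / m instead of alpha.
# These five p-values are invented for illustration.
p_values = [0.004, 0.020, 0.031, 0.120, 0.450]
alpha = 0.05
threshold = alpha / len(p_values)  # 0.05 / 5 = 0.01

for p in p_values:
    verdict = "significant" if p < threshold else "not significant"
    print(f"p = {p:.3f} -> {verdict} (corrected threshold {threshold:.3f})")

# Uncorrected, three of the five tests would pass p < .05;
# after correction, only one does.
```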
## To prevent the replication crisis: Open Science!
What is open science?
- Open science is a practice of science that allows others to collaborate and contribute: research data, lab notes, and other research processes are freely available, enabling reuse, redistribution, and reproduction of the research, its underlying data, and its methods. Open science involves the materials and data being openly available to the public. Many articles apply for pre-registration and upload this information to an independent archive or repository, allowing other researchers (or anyone interested in the topic) to see exactly how the methodology and results align.
**Pillars of open science:**
1. Open data: sharing the data collected (e.g. raw data, videos) helps make things more transparent
2. Open design (pre-registering design/intentions at outset)
- Preregistration: detailing hypotheses, method, and data analysis before data collection, allows for a clearer distinction between exploratory and confirmatory data analysis!
- Registered reports: peer review, revision, and acceptance occur before data collection -> the paper will be published regardless of the results; restricts researcher degrees of freedom and publication bias, and makes intentions clear (prevents undisclosed data exploration)
3. Open analysis (give away the code used to analyse results, i.e. what was done to get from raw data to the statistical tests on paper – see the sketch after this list)
4. Open access (accessible to the public)
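A minimal sketch of what an "open analysis" script might look like (the file name, column names, and choice of test are all hypothetical): the full path from raw data to reported statistic is scripted, so anyone can rerun it end to end.

```python
# Hypothetical shared analysis script: raw open data in, reported statistic out.
# "raw_data.csv", the column names, and the t-test are illustrative assumptions.
import pandas as pd
from scipy import stats

data = pd.read_csv("raw_data.csv")        # shared openly alongside the paper
clean = data.dropna(subset=["score"])     # exclusions are visible, not hidden

control = clean.loc[clean["group"] == "control", "score"]
treatment = clean.loc[clean["group"] == "treatment", "score"]

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}")        # exactly the numbers the paper reports
```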