# Exploring Reproducibility
This is a community-building and information-gathering session to begin to understand where the community stands on various topics related to reproducibility.
:::info
**Please add your thoughts on the topics, with a focus on the topic assigned to your table.
I've added some prompts to stimulate discussion, but don't feel constrained by these or pressured to answer them all :grin:**
:::
You're also very welcome to add your thoughts on any of the other topics if you like!
The document will remain live after the session so feel free to add further thoughts/ideas as you have them.
## Topics
### Registered reports
Registered reports, AKA preregistered reports, are published early in a scientific project and serve to outline it.
One can think of this as kinda like publishing the intro and methods of your paper before you gather your data.
The idea here is to offer an opportunity to get a peer review before experiments are done to hopefully reduce avoidable mistakes, or point out if a study is obviously underpowered from the outset.
The COS has a good explanation [here](https://www.cos.io/initiatives/registered-reports).
---
- What would the incentives be for researchers to publish more negative results, or studies and analyses that don't work?
- Perhaps registered reports go some way to tackling this
- However, they may be problematic with respect to data-driven or exploratory science
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Do you feel the time/effort required would be worth it and why?
- Any concerns/critiques of this?
##### Barriers to use
- Lack of awareness/training?
- Insufficient support?
- Not given time?
- PI not interested/actively opposed?
- Lack of acknowledgment/reward for engagement?
##### What would help you engage with this more?
- Training? What kind?
- Support from HQ?
##### Tools/tips/suggestions
### Version Control (e.g. [Git](https://git-scm.com/), [GitHub](https://github.com/))
Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later.
You can think of it kind of like a "tracked changes" feature in Microsoft Word, but much more powerful and reliable.
It's ubiquitous in software development for managing code, as it allows teams of any size to work asynchronously on the same code base.
If you get into the habit of committing frequently, it's also a great help when fixing bugs, as it lets you easily roll back to a known working version of your code.
Further reading [here](https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control).
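To make this concrete, here's a minimal sketch of day-to-day Git use from the command line (the folder and file names below are made up for illustration):

```bash
# Turn an existing analysis folder into a Git repository
cd my-analysis
git init

# Snapshot the current state of your scripts with a descriptive message
git add clean_data.R fit_model.R
git commit -m "Add data cleaning and model fitting scripts"

# Later: see what changed and when
git log --oneline
git diff HEAD~1        # compare against the previous commit

# Restore a single file to its last committed (known working) version
git checkout -- fit_model.R
```

Hosting the repository on a platform like GitHub then adds an off-machine backup and gives collaborators a place to see and contribute changes.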
---
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Do you feel the time/effort required would be worth it and why?
- It is worth the effort, thanks to multithreading, caching of already-processed data, and configuration.
- Any concerns/critiques of this?
##### Barriers to use
- Lack of awareness/training?
- Insufficient support?
- Not given time?
- PI not interested/actively opposed?
- Lack of acknowledgment/reward for engagement?
##### What would help you engage with this more?
- Training? What kind?
- Support from HQ?
##### Tools/tips/suggestions
### Code Review
Code review is, as the name suggests, where peers review each other's code, potentially testing it while looking for bugs and offering other feedback.
Whilst it is a ubiquitous practice in software development, to the point that many projects require code to be reviewed before it is merged, it's rarely done in scientific research.
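If your code already lives in Git/GitHub, a lightweight way to invite review is to put changes on a branch and open a pull request instead of committing straight to the main branch. A rough sketch (branch, file and message names are invented for illustration; the `gh` GitHub CLI is optional, and the same pull request can be opened through the web interface):

```bash
# Create a branch for the change you want reviewed
git checkout -b add-qc-filtering

# Commit the work on that branch
git add qc_filtering.py
git commit -m "Add QC filtering step for low-quality cells"

# Push the branch to GitHub and open a pull request for a colleague to review
git push -u origin add-qc-filtering
gh pr create --title "Add QC filtering step" --body "Could someone check the thresholds in qc_filtering.py?"
```

The reviewer can then comment line by line on the proposed changes before they are merged.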
---
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Do you feel the time/effort required would be worth it and why?
- Any concerns/critiques of this?
- There is a consensus that it is useful: it can prevent errors and makes code more efficient.
- It would be useful to do it at the end of a larger project, rather than step-by-step.
- Authorship for the code reviewer is an issue. Sharing your code within the DRI might be an issue too, in terms of authorship, so rules of authorship should be agreed and enforced up front.
- Check within the community how many of us write code, and how many build on prior code/pipelines.
- The use of AI in code review might come in handy, but make sure you don't make confidential data available to others (e.g. ChatGPT).
##### Barriers to use
- Lack of awareness/training?
- Insufficient support?
- Not given time?
- PI not interested/actively opposed?
- Lack of acknowledgment/reward for engagement?
##### What would help you engage with this more?
- Training? What kind?
- Support from HQ?
##### Tools/tips/suggestions
---
### Virtual Environment (e.g. [Conda](https://docs.conda.io/en/latest/), [renv](https://rstudio.github.io/renv/articles/renv.html))
A virtual environment is a self-contained collection of software dependencies.
The idea is that you can have multiple versions of, say, a Python or R package (or of R/Python itself) installed on your system simultaneously, and the active virtual environment determines which versions are available.
This allows one to easily share their development environment with others so they can replicate it.
Some further reading and Python examples [here](https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/).
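As a minimal sketch with Conda (the environment name and pinned versions below are placeholders, not recommendations):

```bash
# Create an isolated environment with pinned Python and package versions
conda create -n my-analysis-env python=3.11 pandas numpy

# Activate it; anything you run now sees only the packages in this environment
conda activate my-analysis-env

# Export the exact environment so a collaborator (or future you) can rebuild it
conda env export > environment.yml

# On another machine, recreate the same environment from that file
conda env create -f environment.yml
```

renv plays a similar role for R projects, recording package versions in a lockfile that others can restore from.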
---
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Do you feel the time/effort required would be worth it and why?
- Any concerns/critiques of this?
##### Barriers to use
- Lack of awareness/training?
- Insufficient support?
- Not given time?
- PI not interested/actively opposed?
- Lack of acknowledgment/reward for engagement?
##### What would help you engage with this more?
- Training? What kind?
- Support from HQ?
##### Tools/tips/suggestions
---
### Workflow managers (e.g. [Nextflow](https://www.nextflow.io/))
Workflow managers, AKA Workflow Management Systems, aim to automate computational analyses by linking individual data processing tasks into a pipeline.
They abstract away task dependencies and the allocation of compute resources, allowing you to easily run the same pipeline on an HPC with SLURM, on a cloud provider like AWS, or on your own local hardware, with the pipeline dynamically adjusted to leverage the compute available to it.
Further reading, including a comparison of workflow managers, [here](https://www.nature.com/articles/s41598-021-99288-8).
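As a rough illustration of what this looks like in practice with Nextflow (the nf-core pipeline below is just an example; the profiles and config file are assumptions that would depend on your own setup):

```bash
# Run a community pipeline on its small built-in test dataset, using Docker
# to provide every tool the pipeline needs
nextflow run nf-core/rnaseq -profile test,docker --outdir results

# Re-run after an interruption or a tweak; -resume reuses cached results for
# any steps whose inputs haven't changed
nextflow run nf-core/rnaseq -profile test,docker --outdir results -resume

# Point Nextflow at a different executor (e.g. a SLURM cluster) via a config
# file; the pipeline itself stays exactly the same
nextflow run nf-core/rnaseq -profile test,singularity --outdir results -c slurm.config
```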
---
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Following the tutorials, learning the syntax and understanding the workflows you want to create.
- Do you feel the time/effort required would be worth it and why?
- Dependent on the size of the workflow you are creating. For large, multi-process workflows - definitely. Automation allows for resource efficiency and enables a lot of the job submission work to be performed in the background.
- Any concerns/critiques of this?
- Automation can sometimes be a drawback. Those who prefer to debug or interpret results on-the-fly can sometimes be put off by the background workflow, and often PIs or managers will prefer results to be produced immediately instead of waiting for a whole pipeline to run.
##### Barriers to use
- Lack of awareness/training?
- Definitely. Encouraging researchers to teach each other these techniques as part of their job will mean that standardising pipelines and promoting reproducibility become less of a chore.
- Insufficient support?
- Not given time?
- In some cases, yes. PIs who are more focused on results may not see the benefit in spending time learning to use a workflow manager.
- PI not interested/actively opposed?
- Yes, PIs who are not particularly experienced in coding may not appreciate the time spent to learn workflow management.
- Lack of acknowledgment/reward for engagement?
- This can be an issue; however, support and guidance from the UK DRI can help prevent it.
##### What would help you engage with this more?
- Training? What kind?
- Workshops, run by researchers who can teach others.
- Support from HQ?
- Yes - support groups who can help solve challenges in pipelines and debug will speed up work and enable more efficient working practices.
##### Tools/tips/suggestions
- Certainly worth encouraging bioinformaticians to use workflow managers such as Nextflow, as this will promote reproducibility, efficiency and automation. Sharing scripts and workflows will give people much-needed head starts in producing results.
---
### Containerisation (e.g. [Docker](https://www.docker.com/), [Singularity](https://apptainer.org/))
Have you ever had a hard time installing a piece of software?
Maybe some crusty old bioinformatics tool that sends you down into dependency hell?
Well, containers aim to fix this by encapsulating a software environment in an image file.
Then if I know my code runs in that container, I can send you the image and you will be able to run my code as well.
Further reading about containers in a bioinformatics context [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6738188/).
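A minimal sketch of that workflow with Docker and Singularity/Apptainer (the image, file and script names are invented for illustration):

```bash
# Build an image from a Dockerfile that describes the full software environment
docker build -t my-analysis:1.0 .

# Run the analysis inside the container, mounting a local data folder into it
docker run --rm -v "$PWD/data:/data" my-analysis:1.0 python run_analysis.py /data

# Save the image to a single file that can be shared with a collaborator
# (or push it to a registry such as Docker Hub)
docker save -o my-analysis_1.0.tar my-analysis:1.0

# On an HPC system without Docker, convert the archive to a Singularity image
# and run the same code there without needing root access
singularity build my-analysis.sif docker-archive://my-analysis_1.0.tar
singularity exec my-analysis.sif python run_analysis.py
```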
---
#### Prompts:
##### How do you think this relates to reproducibility in your work?
- What would it involve to incorporate this into your work?
- Do you feel the time/effort required would be worth it and why?
- Any concerns/critiques of this?
##### Barriers to use
- Lack of awareness/training?
- Insufficient support?
- Not given time?
- PI not interested/actively opposed?
- Lack of acknowledgment/reward for engagement?
##### What would help you engage with this more?
- Training? What kind?
- Support from HQ?
##### Tools/tips/suggestions
# UK DRI data science
## 1. Data management
* What are the main challenges you face in managing and working with dementia-related data?
* Are there specific data types or sources that pose unique challenges?
* Storing large amounts of data!
* Imaging
* Multi-ome
* Long-read
* Analysis of this data
* Compute power
* Software/OS versions etc. required for specific analyses
* Waiting for jobs if cluster/server oversubscribed
* Handling of sensitive patient data
* Problems associated with sharing of this
## 2. Data sharing
* Do you share your datasets and analysis code with other teams in UK DRI and/or the wider research community?
* Overall there is insufficient sharing, which means that resources are wasted (how many snRNA-seq datasets from identical types of samples have been generated?)
* Issues with data ownership vs internal data sharing
* How to properly reward data scientists
* Knowledge is important; it's hard to share when you don't know what's out there
* How do we encourage sharing? Sticks are not good enough, we need carrots
* Access requests need to come from the perspective of "why not share this?" as opposed to "why?"
* PIs require training / education. Need to show positive examples of the benefits of sharing
* Intra-DRI sharing should be mandatory; institutes should be working together, with mandatory registers of what people are working on so everyone is aware of ongoing projects
* Are there specific tools, practices, or guidelines you follow to enhance the reproducibility of your analyses?
* ADDI platform seems way too heavy (virtual machine through browser ...)
* UKDRI internal platform?
## 3. Training and skill development
* What skills do you believe are crucial for the informatics ECRs?
* Teaching
* Statistics
* Reproducibility
* Science communication
* Best practices
* How are you acquiring or developing these skills at the moment?
* Data Carpentries - UK DRI Specific
* Similar to the DASH/Data Carpentry scheme at Edinburgh
* Teaching and allowing new skill development as part of their job.
## 4. Building UK DRI informatics ECR community
* What are the three main benefits of having an informatics UK DRI ECR community?
1. Networking and getting to know what each of us actually does.
2.
* How can you contribute to building this community?
## 5. Looking forward
* How do you envision the future of dementia data science, and what role do you see for informatics ECRs in shaping this future?
* What are three top challenges that you believe can be resolved/improved by the informatics ECR event next year?