Title: Code Review Demo: Reproducibility Audits
Description:
This is the first in a series of code review demos hosted by the US-RSE code review working group. In this demo, we will discuss reproducibility audits, a broad (and shallow) type of code review focused on the following questions: Can I figure out how to get the code to run? Does the code produce the expected outputs? How readable and reusable is the code? This type of code review can be useful before submitting a manuscript with associated analysis code, or in the context of a "ReproHack".
Outline:
- Why reproducibility?
  - Reliability/correctness of code
  - Reusability of code (by you or other researchers)
- What is a reproducibility audit?
  - Broad & shallow review of code, often at the end of a project (before submission of a manuscript)
  - Focus on:
    - Is there sufficient documentation?
    - Does the code run (on a different machine)?
    - Is the code readable?
    - Does the code produce the expected outputs? (see the sketch after this outline)
  - Reviewers don't need specific domain knowledge
  - *Ideally*, reviewers shouldn't need knowledge of specific computational tools (R, Python, Docker, etc.)
- ReproHack.org
  - Provides resources for organizing reproducibility hackathons (ReproHacks) in a variety of formats
  - Authors submit code from (ideally pre-print) manuscripts
  - Reviewers provide feedback and reproducibility scores
  - Good [participant](https://www.reprohack.org/participant_guidelines) and [author](https://www.reprohack.org/author_guidelines) guidelines
  - Has a database of manuscript code for practicing reproducibility audits
    - Reduces anxiety for the real thing!
- Lab-level reproducibility audits
  - I'd love to contribute materials to help establish a practice of code review in academic labs
  - Scaffolding similar to reprohack.org's guidelines and code of conduct is essential!
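
To make the "does the code produce the expected outputs?" question concrete, here is a minimal sketch of an output check a reviewer might run during an audit. It assumes, hypothetically, that the project archives its expected outputs under an `expected_outputs/` directory and that re-running the analysis writes fresh copies to `results/`; both directory names are illustrative, not taken from any particular project.

```python
"""Minimal sketch of an output check for a reproducibility audit.

Assumes (hypothetically) that archived expected outputs live under
expected_outputs/ and regenerated files under results/. Adjust the
paths to match the project being audited.
"""
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def compare_outputs(expected_dir: Path, actual_dir: Path) -> None:
    """For each archived expected output, report whether the
    regenerated file exists and matches byte-for-byte."""
    for expected in sorted(expected_dir.rglob("*")):
        if not expected.is_file():
            continue
        actual = actual_dir / expected.relative_to(expected_dir)
        if not actual.exists():
            print(f"MISSING  {actual}")
        elif file_digest(expected) == file_digest(actual):
            print(f"MATCH    {actual}")
        else:
            print(f"DIFFERS  {actual}")


if __name__ == "__main__":
    compare_outputs(Path("expected_outputs"), Path("results"))
```

Byte-for-byte comparison is deliberately strict; for figures or floating-point results a reviewer may need a looser, format-aware comparison (e.g., numeric tolerances) instead.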
Demo: Auditing https://www.reprohack.org/paper/91/