---
tags: Roundup
---

# Bi-monthly Roundup 2025/09/17

:::info
- **Date:** Sep 17, 2025, 16:00 (UTC)
- **Participants:**
    - Chris Markiewicz
    - Oscar Esteban
    - Jon Haitz Legarreta
    - Celine Provins
    - Mathias Goncalves
    - Felix Hoffstaedter
    - Hao-Ting Wang
    - Erin W. Dickie
    - Yohan Chatelain
    - Eilidh MacNicol
    - McKenzie Hagen
    - Martin Noergaard
    - Melanie Garcia
    - Steven Meisler
- **Contact:**
- **Host:**
- **Reference:**
:::

## Agenda

* Summer catch-up
* NiPreps Steering Committee
    * https://www.nipreps.org/news/#nsc-election-result-aug-5-2025
* Grant submissions:
    * TOSI Award
    * RSMF
    * EU's OSCARS (E/MEGPrep)
* PETPrep sprint (18-19 of August)
    * PETPrep officially released (https://github.com/nipreps/petprep)
* Project updates
    * NiFreeze and dMRIPrep
    * MRIQC / McKenzie's paper
        * [Neurohackademy lecture](https://neurohackademy.org/course/quality-assessment-and-control-of-unprocessed-anatomical-functional-and-diffusion-mri-of-the-human-brain-using-mriqc/)
    * PETQC - still a work in progress
    * fMRIPrep 25.2 imminent (hopefully?)
    * NiBabies / fmriprep-infants - new version incoming; looking to combine with fMRIPrep eventually
    * load_confounds / wonkyconn - resuming the project
    * NiTransforms
* Conversation with Tom Nichols
* Numerical variability of MRIQC's output
    * Felix's MRIQC plan
    * https://cerebra.fz-juelich.de/f.hoffstaedter/bootstrap_MRIQC

## Notes

### Conversation with Tom Nichols

#### 1 Governance structure

- **Leadership model:** We have a mixture of meritocracy with a certain space for liberal contribution. The bylaws are at https://github.com/nipreps/GOVERNANCE, and the text was forked from a GitHub initiative called Minimally Viable Governance (MVG, https://github.com/github/MVG).
- **Decision processes:** There are three recurrent meetings:
    - Bi-monthly Roundup: anyone can attend; it is meant for people external to the development but interested/engaged/collaborating.
      It can decide on things (sort of an assembly).
    - Bi-weekly Technical Monitoring: TechMons are for active contributors and the day-to-day drive of the project. Some of them are made special to look more long-term and define roadmaps, maintainers' teams, etc.
    - NiPreps Steering Committee: five members, meets about monthly; core decision-making.
- **Conflict resolution:** We have a (basic) code of conduct, and we wanted to set up an Ombudsperson, with training, as a point of contact. That latter part fell off the tree and is now in limbo. The bylaws define some decision-making situations (when consensus/majority is needed, etc.).

#### 2 Software Development

- **Core developers** (active maintainers, with commit access):
    - How many core developers are there? Around 10-15.
    - Who has responsibility for critical bugs/fixes? No one has a specific/formal responsibility; we have maintainers' teams who ultimately share that responsibility.
- **Non-core developers** (contributors, without commit access): Anyone can submit their contributions, and they are invited to join NiPreps. Every project keeps track of current and past contributors, and we have a crediting system for that (on the website).
    - Roughly how many contributing/non-core developers are there? Some 40; I defer to other NSC members if this is wrong.
    - How are non-core contributions vetted? Does some code have extra protections? In general, all the code has basic protections: we try to code-review every PR, we have automated tests that should be green, etc. We sometimes add ad-hoc protections for feature branches with substantial changes. In the future, as a result of a collaboration with Lune Bellec, we will test for regressions caused by numerical variability (paper). We also have plans for a more rigorous assessment of "what is a -prep and what is not", but so far we haven't really had enough time to build a pattern.

#### 3 Support

- Is support mostly through GitHub issues or a mailing list/discussion site?
  I would say the main mode of support for fMRIPrep continues to be https://neurostars.org (i.e., a discussion site). We have not had a mailing list for fMRIPrep, but we did for MRIQC, and it worked for a while; however, GitHub issues and discussions replaced it. For fMRIPrep, GitHub has been a very strong source of support. This is a usually underestimated aspect of OSS (open scientific software): around 2018-19, we were five active people on the team, and each had one day a week to monitor Neurostars and GitHub for problems and drive them to a solution quickly. Containers marked a before and after in terms of support. At some point in 2018, we decided not to resolve "bare installation" issues (except in very specific settings, such as a cluster without Singularity), and from one day to the next we noticed a remarkable change in the direction of questions, which became much more "scientific". Some of those questions led to interesting efforts, such as a documentation sprint that yielded fMRIPrep's confounds section, which I believe is one gem of its thorough documentation. We have also started to connect with users by organizing workshops.
- Does support mostly come from core developers or from the community?
  I would say the community takes a very decent share of the support today, but core developers are often involved (e.g., someone responds to a question, and a developer "refines" the response).

#### 4 Scientific Validation and Reproducibility

- **Validation strategies:** Systematic testing against known ground truth? Other validation exercises?
  We do have automated testing, mostly "smoke" tests (i.e., integration tests that fail if you switch the tool on and smoke comes out), and we are trying to implement unit tests for clerical tasks (where the "test oracle" is clearer; sometimes you can call it a ground truth). We also use a lot of visual validation.
  IMHO, our fMRIPrep paper was very innovative on that front, proposing a framework for validation using lots of data from OpenNeuro and four experts rating the visual reports. The visual reports have also been part of the project's identity: they should help the user (i) assess the quality of the subproduct (i.e., did this run of, say, fMRIPrep work well?) and (ii) understand the processing (meaning the reports follow an order that tries to shed light on why the workflow is implemented in this particular way).
- **Handling algorithmic changes:** How do you manage and communicate algorithm changes? How do you balance improving methods with the need for longitudinal consistency and backward compatibility?
  Tough; can I use the "other NSC members" wildcard? We have been trying to tackle these issues and so far have failed to find a magic formula, so (greatly influenced by Chris M's vision) we have implemented helpers. With support from Lune Bellec's team, we started fMRIPrep's LTS (long-term support) program. An LTS is a series (in the case of fMRIPrep, the versions called 20.2.x) with a team behind it providing specific support, and the LTS is guaranteed to remain backward compatible (i.e., we do not change APIs and, thanks to Yohan's numerical variability tests, we aim not to modify the tools' behavior accidentally when including bugfixes and patches). For code, versions, etc., we try to stick with Scientific Python's SPEC 0 (https://scientific-python.org/specs/spec-0000/). Our stance on releases is here: https://www.nipreps.org/devs/releases/. Nonetheless, code drift across the -prep adaptations (e.g., fMRIPrep-rodents and fMRIPrep-infants) is a real issue. We are trying to adopt a model where all "-Preps" are generated with a "software factory" so that we don't have duplicated code and inconsistencies. Eilidh is preparing a grant for this ;)
  Finally, the big question is how to adopt algorithmic innovations. One example is using SynthStrip for brain extraction.
  So far we have only trialed it in MRIQC and have not managed to propagate it to fMRIPrep, for various reasons. There is a surprising lack of evidence about which methods work and under what conditions, and the advent of AI-for-everything will bite us badly (and beyond NiPreps). Much of this comes down to visual assessment akin to our fMRIPrep paper's, which is not ideal. We have also thought of "workflow metrics" for comparison (because, sure, you can try to find the best algorithm for a given task, but the decision has lots of externalities, and you need to see how it works with the rest of the software around it). We have an interesting discussion about what is within scope here: https://www.nipreps.org/community/features/, and we have very strong principles around modularity.
- **Numerical stability:** Do you try to ensure results are consistent across different operating systems, hardware architectures, and compiler versions?
  Not really, but the paper by Yohan (again here: paper) is a first approach. We have also built into our tools controls over the seeds of several algorithms so that runs can be replicated. Finally, we have accepted that some things cannot be made bit-for-bit replicable unless you pay the great price of forgoing optimizations (e.g., running ANTs on a single thread).

#### 5 Funding and Sustainability

- **Funding model:** So far, the projects are mainly funded by grants, although it is tough to maintain the funding, especially once the software egresses the walls of a particular lab (Russ's lab in this case) and becomes a community project. The UK is currently trying to transform the landscape a bit, and the USA had a very good trajectory too (until it didn't). At this point, we have considered other ways of getting funding, but haven't settled on any.
- **Personnel structure:** We used to be postdocs and PhD students, but over time, we have become RSEs and faculty members.
  There are some new contributors at the PhD and postdoc levels (e.g., ASLPrep and Ted Satterthwaite's lab contribute substantially, fMRIPrep-rodents is led by Eilidh, a postdoc, and NiFreeze by Jon, another postdoc). However, the reality is that we have not managed to replace ourselves (yet).
- **If grant-funded, other plans for sustainability?** Yes, we have thought of charging for support and offering consultancy hours. We have also considered training (workshops, talks) as a way to keep the projects alive. However, there are no solid plans income-wise. Along a different line, we have also tried to make the code more sustainable and maintainable (the idea of the "software factory" to generate workflows goes in this direction).

#### 6 Licensing

We are quite firm on employing the Apache License, for the reasons given here: https://www.nipreps.org/community/licensing/. For data, we recommend CC0 or CC-BY, and for some edge cases, MIT or BSD.