fMRIPrep-next: Preprocessing as a fit-transform model
The value of publicly shared neuroimaging data depends on the level of processing applied to it. While raw data provide the greatest opportunity for asking novel questions, each processing step left to secondary researchers is a potential source of analytical variation that can lead to conflicting results from the same source data [NARPS]. A researcher wishing to share the data they have collected can reduce sources of variability in downstream analyses by providing a canonical set of preprocessed data for reuse [HCPPipelines].
Publishing data that have been resampled into several spaces can enable different analyses while limiting analytical variability, but doing so requires significantly more storage and bandwidth. Generating many derivatives may also be inefficient on shared high-performance computing systems, which are suited to computationally intensive tasks rather than heavy storage use. It would thus be beneficial to calculate and distribute a compact set of preprocessing derivatives that permit the remaining derivatives to be generated cheaply and deterministically at the time of analysis.
Here we present recent changes to the architecture of fMRIPrep [fMRIPrep], a preprocessing pipeline for functional MRI. These changes separate the typical processing pipeline into two discrete, user-accessible workflows. First, a computationally expensive "fit" stage performs steps such as segmentation, registration, and surface reconstruction, whose derivatives are small and therefore easy to distribute. A second "transform" workflow then uses the raw input data and the derivatives from the "fit" workflow to deterministically generate dense preprocessed fMRI data in any desired target space at minimal additional computational cost. We discuss the practical consequences of these software changes for fMRI data processing and distribution.
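As an illustration, the two stages can be invoked separately. The sketch below (Python, via subprocess) is schematic only: the flag names `--level` (selecting the fit-only stage) and `--derivatives` (reusing precomputed fit results) reflect the workflow split described here and should be verified against the fMRIPrep 23.2 documentation; paths are placeholders.

```python
"""Schematic two-stage fMRIPrep invocation (fit, then transform)."""
import subprocess

bids_dir = "/data/bids"       # raw BIDS dataset (placeholder path)
out_dir = "/data/fmriprep"    # derivatives destination (placeholder path)

# Stage 1: the computationally expensive "fit" workflow, producing the
# compact derivatives (segmentations, registrations, surfaces).
subprocess.run(
    ["fmriprep", bids_dir, out_dir, "participant", "--level", "minimal"],
    check=True,
)

# Stage 2: the cheap, deterministic "transform" workflow, which combines
# the raw data with the fit derivatives (possibly downloaded rather than
# computed locally) to generate BOLD series in the requested spaces.
subprocess.run(
    ["fmriprep", bids_dir, out_dir, "participant",
     "--derivatives", out_dir,
     "--output-spaces", "MNI152NLin2009cAsym",
     "--cifti-output", "91k"],
    check=True,
)
```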
The fit-transform architecture has been published in version 23.2.0a2. To test the impact of the described changes, fMRIPrep 23.2.0a2 and the prior release, 23.1.4, were run on two subjects from two different datasets.
- Dataset A: 6 T1-weighted and 3 T2-weighted structural scans, 2 phase-difference fieldmaps, and 4 single-echo BOLD runs with 195 volumes (784 volumes in total), with 1 single-band reference volume per BOLD series.
- Dataset B: 2 T1-weighted structural scans, 6 spin-echo fieldmaps, and 8 single-echo BOLD runs of varying lengths, totaling 4274 volumes.
These results were gathered on a single Intel i9-10900 processor (2.8 GHz, 10 cores, 20 threads). The host system was Ubuntu Linux 22.04 LTS, running Docker version 24.0.7.
The commands tested requested outputs registered to the MNI152NLin2009cAsym volumetric template and the fsLR "grayordinate" space.
Table 1 compares runtimes and storage usage across the two versions. Running fit-only workflows resulted in 34-66%, 94-98%, and 84-92% reductions in runtime, combined (scratch plus output) data size, and combined file counts, respectively, compared to the previous version. These reductions reflect the compute and storage utilization of the transform processes in the prior release.
Running the fit and transform workflows together resulted in 25-52%, 43-54%, and 72-87% reductions in runtime, combined data size, and combined file counts, respectively.
Some of these efficiency gains derive from optimizations made incidentally to the architectural changes described here, rather than from the workflow split itself. The increase in output size relative to 23.1.4 reflects additional outputs needed for resampling, which are already present in the fit-only derivatives, as well as some unintended outputs that will be removed in future revisions.
Here we describe changes that significantly decrease computational time and storage utilization for users of fMRIPrep. These changes particularly benefit researchers and data stewards interested in ensuring access to large-scale data repositories.
We also anticipate that these changes will simplify the process of resolving preprocessing errors, as errors of fit and transformation can be addressed separately, and researchers will have the option of providing alternative fit results to be used in transformation. At the same time, the combined workflow continues to provide the full range of derivatives that make fMRIPrep an attractive option.
Dataset | Version/Mode | Runtime | Scratch Size | Scratch Files | Output Size | Output Files |
---|---|---|---|---|---|---|
A | 23.1.4 | 2h24m | 54.8GB | 36.8K | 2.30GB | 176 |
A | 23.2.0a2 / fit | 1h35m | 2.91GB | 5.89K | 602MB | 128 |
A | 23.2.0a2 / fit+transform | 1h47m | 19.8GB | 10.0K | 6.37GB | 206 |
B | 23.1.4 | 4h25m | 121GB | 157K | 5.10GB | 286 |
B | 23.2.0a2 / fit | 1h29m | 1.88GB | 12.0K | 543MB | 206 |
B | 23.2.0a2 / fit+transform | 2h7m | 56.5GB | 19.8K | 14.7GB | 348 |
Table 1: Comparison of runtimes and disk usage between fMRIPrep versions 23.1.4 and 23.2.0a2. "fit" mode includes only the fit workflows and their outputs; "fit+transform" includes all outputs, with rough parity to the 23.1.4 outputs. All processes were run independently, with no shared CPU, memory, or files.
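For concreteness, the combined (scratch plus output) reduction figures quoted above can be reproduced directly from the dataset A rows of Table 1; the following illustrative snippet is not part of the benchmark code:

```python
# Combined (scratch + output) size reductions for dataset A, from Table 1.
# All sizes in GB.
baseline = 54.8 + 2.30     # 23.1.4: scratch + output
fit_only = 2.91 + 0.602    # 23.2.0a2 fit
fit_tx = 19.8 + 6.37       # 23.2.0a2 fit+transform

print(f"fit-only reduction:      {1 - fit_only / baseline:.0%}")  # ~94%
print(f"fit+transform reduction: {1 - fit_tx / baseline:.0%}")    # ~54%
```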
Figure 1: Schematic diagram of data dependencies in the fMRIPrep-next workflow model. Dashes indicate optional data or processes. The "fit" derivatives include a collection of individual volumes and transform files. The "transform" section shows the process used to generate resampled BOLD series. The available inputs, such as fieldmaps and slice-timing metadata, and the target space, such as an MNI template, determine the final result. The raw data and fit derivatives may be distributed as an archive, and the transform process may be performed by the analyzing researcher. Not shown here are confound time series, which are products of both fit and transform workflows.
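The single-interpolation resampling at the heart of the transform stage can be sketched in a few lines: map the target grid's voxel coordinates through the composed transform, then interpolate the source volume once. The self-contained example below uses only an affine for brevity, whereas fMRIPrep additionally composes head-motion, fieldmap, and nonlinear template warps; it is illustrative, not fMRIPrep's actual resampler.

```python
# Minimal sketch of one-shot resampling with a composed (affine-only)
# transform, using numpy and scipy only.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
source = rng.random((64, 64, 40))           # one BOLD volume (voxel data)

# Affine taking target voxel indices to source voxel indices; in practice
# this would be composed from motion correction, coregistration, and
# template registration.
target_to_source = np.eye(4)
target_to_source[:3, 3] = [1.5, -2.0, 0.5]  # e.g., a small translation

# Build the target grid and push it through the composed transform...
shape = (64, 64, 40)
ii, jj, kk = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
grid = np.stack([ii, jj, kk, np.ones(shape)]).reshape(4, -1)
coords = (target_to_source @ grid)[:3]

# ...and interpolate the source data exactly once, avoiding the blurring
# that repeated intermediate resampling steps would introduce.
resampled = map_coordinates(source, coords, order=3).reshape(shape)
```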
Lestropie "Confound time series" is not a derivative of "Resampled BOLD". Maybe like the "Fit derivatives" subgraph you need a "Transform derivatives" sub-graph where "Resampled BOLD" and "Confound time series" are generated from their respective inputs.