---
title: Nilearn developers meeting
---
# Nilearn developers meeting
## Important links
- Jitsi link: https://meet.jit.si/nilearn-dev-team-meeting
- dev on-boarding doc: https://hackmd.io/PPAjvZ0SSzeJeeqRIhWmNA?both=
```markdown
<!-- TEMPLATE TO COPY PASTE -->
## Day Month Year
### News
### Issues
#### number - title
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### number - title
### Set time next meeting
```
## For next meeting
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### 5772 - move some tests to root tests folder
:::info
**TL;DR**: are we OK with nilearn moving to adopt a `src` layout?
:::
https://github.com/nilearn/nilearn/pull/5772
Relates to [[MAINT] switch to a src layout to organize the tests](https://github.com/nilearn/nilearn/issues/3660)
##### current flat layout
```
pyproject.toml
src/
  nilearn/
    __init__.py
    _utils/
      niimg.py
      tests/
        test_niimg.py
      ...
    signal.py
    tests/
      test_signal.py
      ...
    ...
```
##### src layout
```
pyproject.toml
src/
  nilearn/
    __init__.py
    _utils/
      niimg.py
    signal.py
    ...
tests/
  test_signal.py
  _utils/
    test_niimg.py
  ...
```
:::info
Links explaining what the `src` layout is and its benefits:
- scientific python recommendation: https://docs.pytest.org/en/stable/explanation/goodpractices.html#tests-outside-application-code
- https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/
- https://www.youtube.com/watch?v=sW1qUZ_nSXk
- https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure
:::
Extra benefit: test files and their data are not shipped in the package, which makes the whole thing lighter https://github.com/nilearn/nilearn/issues/3660#issuecomment-2999317439
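For illustration, a hypothetical `pyproject.toml` fragment showing how a `src` layout keeps the root `tests/` directory out of the wheel (setuptools shown for concreteness; nilearn's actual build backend may differ):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[tool.setuptools.packages.find]
# only packages under src/ are shipped; the top-level tests/
# directory (and its data) never ends up in the wheel
where = ["src"]

[tool.pytest.ini_options]
# pytest still finds the tests from the repository root
testpaths = ["tests"]
```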
#### 5714 - add SignalWarning
:::info
**TL;DR**: Do we want exceptions / warnings that are more nilearn specific?
:::
https://github.com/nilearn/nilearn/pull/5714
Relates to [Create nilearn specific warnings / exceptions](https://github.com/nilearn/nilearn/issues/5495)
### Demo (if we have time)
sklearn and nilearn tag system:
- 2 marimo notebooks: https://github.com/nilearn/nilearn_sandbox/pull/17
## 20 Jan 2026
### News
- OHBM award submission for nilearn
- Feedback: improve the discussion of nilearn's impact on the community
- How nilearn is used in multiple major projects
- Add in information from OSCARS as appropriate
- [Draft](https://docs.google.com/document/d/1JEB96JPd8bBHsQYWkDnF3FTmKyWfcDZNeo2-u4w6_9A/edit?tab=t.0)
- [Sprint organization](https://hackmd.io/Y9s1hxMdSiWc8CS9d6OLhg)
- Logistics
- BT - covered by the workshop
- Other nilearn team members will book through the INRIA account and see if we can book the hotel recommended by Francine
- Work before the sprint
- Bug fix release
- Benchmark
- **Core devs: Comment on the issues we can lead before the sprint**
- HALF-Pipe reporting / metrics for upstream contribution: HTW follow up with Lea about the list
- Hande: remote participation on report
- Code quality: not for new contributors but we might want an issue to track these discussions; use the sprint to find a time as a coffee topic to discuss
- surface smoothing: identify some potential new people (MSL, IGH; HTW reach out)
- Balance between new people vs core dev turnout
- BT: people can come sporadically; recruit new people
- EdP: balance between outreach and getting things done
- Advertise at Brainhack MTL and UNIQUE mailing list
### Issues
#### number - title
### Set time next meeting
- Unofficial meeting for the sprint: Feb 4 2026
## 16 December 2025
### News
- Submitted
1. OHBM [abstract]( https://docs.google.com/document/d/1vpaZSg4TbnotFyLtKhVPdhjHE61YP0haxVfMFvqFKg8/edit?usp=sharing) on Nilearn developments
2. OHBM [educational course](https://hackmd.io/n8tyl5yaQKyDxvoOARTM2w) on open-source machine learning
- Reviewing release 0.13.0 [milestones](https://github.com/nilearn/nilearn/milestone/30)
- Sprint planning document : https://hackmd.io/Y9s1hxMdSiWc8CS9d6OLhg
- Confirmed for February 17-19, Montréal
- Finalizing external invitations
### Set time for next meeting(s)
Jan 6th - Sprint planning meeting
Jan 20th - Core Dev meeting
## 21 October 2025
**Attending:**
- Elizabeth
- Bertrand
- ~~Hao-Ting~~
- Remi
- ~~Himanshu~~
- Michelle
- Mohammad
- Jérôme
- Hande
- Pierre-Louis
### News
- OHBM 2026:
- who is going?
- people based in France (or nearby) will most likely go
- abstract?
- sustainable software development
- as a course
- release 0.13.0
- release date: end of November
- issue: https://github.com/nilearn/nilearn/issues/5619
- milestone:
- see https://github.com/nilearn/nilearn/milestone/30
- FYI not all of those things may make it into 0.13
- dev changelog: https://nilearn.github.io/dev/changes/whats_new.html#highlights
- drop python 3.9 and support 3.14
- bump some dependencies and a whole bunch of deprecations
- 2026 sprint planning:
- Coordinating document : https://hackmd.io/@emdupre/SJsbga7Cgg/edit
### Issues
#### 5696 - make check_niimg* functions part of the user facing API
https://github.com/nilearn/nilearn/issues/5696
- 0.12.1 removed some imports from `nilearn._utils.__init__.py`: this broke some downstream packages that made use of those private functions
- those functions, although private, are used in quite a few packages / projects that use nilearn: https://github.com/nilearn/nilearn/issues/5696#issuecomment-3317360324
- those validation functions should be part of the user-facing API
#### 5712 - slow test suite
:::info
**TL;DR**: are we OK potentially running only a subset of tests on PRs?
:::
https://github.com/nilearn/nilearn/issues/5712

This is starting to add a lot of friction on the dev side.
Causes:
- a lot more tests have been added by trying to make sure all our estimators are sklearn compliant
- more github CI workers needed per PR
One of the latest runs had:
5460 tests passed, 22 skipped, 878 xfailed
##### number of defined tests
```bash
nilearn git:(0.9.0) grep -rni 'def test_' nilearn/**/*.py | wc -l
928
nilearn git:(0.12.1) grep -rni 'def test_' nilearn/**/*.py | wc -l
1849
```
:::warning
Increase in number of tests in part reflects that long tests have been split into smaller ones.
:::
##### estimator checks
```bash
nilearn git:(0.12.1) grep -rni 'def check_' nilearn/_utils/**/es*.py | wc -l
63
```
That runs on about 30 different estimators.
:::warning
Does not include all the sklearn checks.
:::
##### possible solutions
we want:
- a successful run of the test suite to be as fast as possible
- reduce the number of CI workers used per PR or on main: allows more PRs to run in parallel
- a failing run of the test suite should fail as fast as possible to free CI workers for other things
not mutually exclusive
- **pay** to get some "proper" github action time to use more workers
- **speed up slow tests**
- we have a whole bunch of tests marked as `pytest.mark.timeout(0)` that can maybe be made faster
- we can gain a little here but not enough
- **fail faster**: mark tests as slow / fast, run fast tests first and only run slow tests if the fast ones pass
- con: failing fast may hide other test failures that will only 'appear' once the fast tests are fixed
- **change testing strategy**
- change how many tests we run per PR:
- only run tests with [minimum dependency](https://nilearn.github.io/dev/ci.html#testing-minimum-yml) and [nightly build](https://nilearn.github.io/dev/ci.html#nightly-dependencies-yml) on schedule (once per day / week?) instead of after every merge in main
- change the testing matrix:
- fail faster: have one branch of the testing matrix with only the tests that check plotting against baseline figures --> fewer tests and would at least fail early if one plotting test is affected
- **do not test all pythons**: run oldest supported python on all OS, run latest supported python on a single OS
- only run tests that are affected by a PR:
- several pytest extensions 'claim' to be able to help with that
- rely on the architecture we are starting to enforce with import-linter to sub-select the tests to run, for example:
- `nilearn.signal` only imports from `nilearn.typing` and `nilearn.exceptions`, so if `nilearn.signal` is modified we only need to run the tests for `nilearn.signal`
- `nilearn.glm` does not import from `nilearn.decoding`, so no need to run the tests for decoding when working on GLM.
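As a sketch of how such architecture-based selection could work, here is a hypothetical helper (not an existing nilearn or import-linter API) that walks a reverse dependency map — module to the modules importing it — to decide which test suites to run; the toy map just mirrors the two examples above:

```python
def select_tests(changed, dependents):
    """Return modules whose test suites must run when `changed` modules change.

    `dependents` maps a module to the modules that import it (reverse of
    the import graph that import-linter contracts describe).
    """
    to_run, stack = set(changed), list(changed)
    while stack:
        mod = stack.pop()
        # anything importing `mod` may be affected, transitively
        for dep in dependents.get(mod, ()):
            if dep not in to_run:
                to_run.add(dep)
                stack.append(dep)
    return to_run

# toy reverse dependency map mirroring the bullet points above
dependents = {
    "nilearn.typing": ["nilearn.signal"],
    "nilearn.signal": ["nilearn.glm", "nilearn.decoding"],
    "nilearn.glm": [],
    "nilearn.decoding": [],
}

# a change in glm triggers only the glm tests; a change in signal
# triggers signal, glm, and decoding tests
print(select_tests({"nilearn.glm"}, dependents))
print(select_tests({"nilearn.signal"}, dependents))
```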
### PRs
#### [review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### 5410 - Tedana confounds
https://github.com/nilearn/nilearn/pull/5410
Ready to merge: @haotingwang ?
ACTION:
- HTW Carved out 30th Oct to have a final check.
#### 5770 - move `save_glm_to_bids` from `nilearn.interfaces` to `nilearn.glm` subpackage
:::info
**TL;DR**: are we OK moving this function?
:::
https://github.com/nilearn/nilearn/pull/5770
- architectural issue: circular import between `nilearn.interfaces` and `nilearn.glm`
- this PR would solve this but changes in public API: deprecation cycle needed
#### 5739 - replace NaN with 0 in GLM confounds
https://github.com/nilearn/nilearn/pull/5739
@haotingwang : should something be done about potential NaN values in confounds in the load confounds code?
HTW: It's usually the first few rows, due to non-steady-state detection. When we created load confounds and integrated it with the maskers, those volumes with NaN would be excluded through `sample_masks`. For GLM we need to either enforce people passing `sample_masks` or have a more straightforward solution. Happy to have a more focused meeting on this issue.
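A minimal numpy illustration of the `sample_masks` mechanism on made-up confounds (the real load confounds output is a DataFrame; the array below is purely illustrative):

```python
import numpy as np

# hypothetical confounds: the first rows are NaN because of
# non-steady-state volume detection
confounds = np.array([
    [np.nan, 0.10],
    [np.nan, 0.20],
    [0.00, 0.30],
    [0.05, 0.40],
])

# sample_mask keeps only the volumes whose confound rows are NaN-free
sample_mask = np.flatnonzero(~np.isnan(confounds).any(axis=1))
print(sample_mask)  # [2 3]
```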
### Set time next meeting
- Tuesday November 4th
## 09 September 2025
**Attending:**
- ~~Elizabeth~~
- Bertrand
- Hao-Ting
- Remi
- Himanshu
- Michelle
- Mohammad
- Jérôme
- Hande
- Pierre-Louis
### News
- release 0.12.1
- many deprecations for 0.13.0
### Issues
#### 5615 - do feature screening based on mask size
https://github.com/nilearn/nilearn/issues/5615
original discussion on neurostars: https://neurostars.org/t/feature-selection-does-not-appear-to-work/33715/11
feature selection in decoders is only performed if the size of the mask (implicit or explicit) is at least 'screening_percentile' of the reference brain size:
the user would like an easy way to say: "keep only 10% of the voxels of this mask I am giving you"

issues:
- your mask can easily be wrong (think of data-driven masks), e.g. keeping only a small portion of the brain due to inhomogeneities in the image contrast (strong bias field). People often don’t check these masks.
- If you loop over subjects with different subject-specific masks, you may end up selecting different numbers of features without noticing.
ACTION:
- start deprecation
- name it "legacy" behavior
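For context, a toy-number sketch of how a reference-brain percentile translates into an effective within-mask percentile (illustrative arithmetic only, not nilearn's exact implementation):

```python
# toy numbers, purely illustrative (not nilearn's actual code)
ref_brain_size = 50_000      # voxels in the reference brain
screening_percentile = 20    # user-facing: percent of the *reference* brain
mask_size = 20_000           # voxels in the user's (implicit or explicit) mask

# target number of voxels to keep, defined against the reference brain
n_keep = ref_brain_size * screening_percentile / 100

# percentile actually applied within the mask; once it reaches 100,
# no screening happens at all (the small-mask case from the issue)
effective_percentile = min(100.0, 100.0 * n_keep / mask_size)
print(effective_percentile)  # 50.0
```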
#### 5599 - templating engine
https://github.com/nilearn/nilearn/issues/5599
Options:
1. keep vendoring tempita (PRO: nothing changes; CON: new devs have to learn an uncommon templating language with some funky patterns, e.g. iterating over dicts)
2. use the tempita package (PRO: more up-to-date code (e.g. dropped support for Python 2); CON: see 1, plus the package is not very actively maintained and it seems it's not a drop-in replacement for the version we used to vendor - see failing PR: https://github.com/nilearn/nilearn/pull/5604)
3. switch to jinja (PRO: actively maintained, the standard template engine for a large part of the Python community (django, flask, mkdocs...), so it is easy to find help on "how to do X"; CON: will need to adapt code and templates a bit)
Remi: in favor of 3
ACTION:
- estimate how much time 3 would take
- open issue for testing reports
- switch to jinja when it makes sense
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### 5629 - change default standardize signal
change `nilearn.signal.standardize_signal` to use 'zscore_sample' as the default instead of 'zscore' (compute the std from the sample and not the population)
should be a simple deprecation
BUT this will affect `nilearn.signal.clean`, which is used by all our maskers, so it may change the behavior of most of our high-level classes (glm, decoders...)
AND we have been using a DeprecationWarning for this and not a FutureWarning (see https://github.com/nilearn/nilearn/issues/5651), so users who rely on scripts (and notebooks?) may not have been aware this change is coming.
ALSO the allowed values for "standardize" in the decoders, decomposition... should be adjusted to align with the values supported by 'clean' (see https://github.com/nilearn/nilearn/issues/5648), but this is more of a documentation issue.
Question:
- do we just proceed with deprecation anyway even if the warning about a change (that may affect the output of a lot of our classes) may have been missed by users?
ACTION: delay the deprecation till 0.14
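The behavioral difference behind the deprecation is just which std estimator is used; a minimal numpy sketch on toy data (not nilearn code):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# 'zscore': population std (ddof=0); 'zscore_sample': sample std (ddof=1)
z_pop = (x - x.mean()) / x.std(ddof=0)
z_sample = (x - x.mean()) / x.std(ddof=1)

# for n samples the two differ by a constant factor sqrt((n - 1) / n),
# so downstream results change slightly everywhere standardization is used
```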
### Set time next meeting
October 21st - 4PM Paris time
### Mini-tuto
TBD
- sklearn / nilearn tags for classes
- used to organize 'estimators' in broad categories
- sklearn
- have been part of the public API since 1.6.0
- https://scikit-learn.org/stable/developers/develop.html#estimator-tags
- https://scikit-learn.org/stable/modules/generated/sklearn.utils.Tags.html
- nilearn:
- private(ish) for now but exposed via `__sklearn_tags__()` along with the sklearn tags
- https://github.com/nilearn/nilearn/blob/main/nilearn/_utils/tags.py
- maskers, glm, multimaskers, accept nifti img and/or surface img...
- massively used to organize the 'estimator checks' of nilearn: https://github.com/nilearn/nilearn/blob/main/nilearn/_utils/estimator_checks.py
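To make the tag mechanism concrete, here is a stripped-down, plain-Python mimic of the `__sklearn_tags__` pattern (the real `sklearn.utils.Tags` dataclass has many more fields; `accepts_niimg` is a made-up nilearn-style tag for illustration):

```python
from dataclasses import dataclass

# minimal stand-in for sklearn's Tags dataclass (the real one has many more fields)
@dataclass
class Tags:
    requires_fit: bool = True
    accepts_niimg: bool = False  # hypothetical nilearn-style extra tag

class Base:
    def __sklearn_tags__(self):
        # the base class hands out default tags
        return Tags()

class ToyNiftiEstimator(Base):
    def __sklearn_tags__(self):
        # subclasses refine the inherited tags, as in sklearn >= 1.6
        tags = super().__sklearn_tags__()
        tags.accepts_niimg = True
        return tags

print(ToyNiftiEstimator().__sklearn_tags__().accepts_niimg)  # True
```

Estimator checks can then branch on these tags, e.g. only feeding Nifti inputs to estimators whose tags say they accept them.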
## future Nilearn development cycles
- on boarding dev docs
- hackathon: winter-spring
- where / when?
- who do we invite?
- lune / jb
- more reviews before merging
- slow-down PR merge cycle
- dev meeting: have some mini tutorial section
- get more people to contribute:
- people who work on package that depends on nilearn
- people in specific labs who may have more time
### Nilearn Coding sprint
- In North America (Montreal) or Paris ? When ? Whom should we invite ?
## July 8 2025
**Attending:**
- Elizabeth
- Bertrand
- Hao-Ting
- Remi
- Himanshu
- Michelle
- Mohammad
- ~~Jérôme~~
- Hande
- Pierre-Louis
### News
- OHBM hackathon:
- [#5477](https://github.com/nilearn/nilearn/issues/5477) Creating surface meshes from volumetric masks.
- write an example first to assess amount of work needed? reach out to Sina [himanshu]
- use these meshes more extensively instead of just for visualization? not super robust
- [#5476](https://github.com/nilearn/nilearn/issues/5476) Standardized plots...
- kind of vague: what exact plots we need?
- move to discussion, ask about specific plots/metrics needed
- One new contributor ([#5473](https://github.com/nilearn/nilearn/issues/5473))
- OHBM:
- network/atlas correspondence tool: https://github.com/rubykong/cbig_network_correspondence
- Why plotting is the most popular feature: applications vs. methods users
- more interaction with "application" users: other events? eg. EBRAINS summit in Brussels
- release 0.12.0: https://nilearn.github.io/dev/changes/whats_new.html#id166
- do more releases per year? don't need to stick to the calendar for the sake of sticking to the calendar, if we have more content ready
- pre-releases: reach out to big downstream dependents to see if they have a testing infra that could leverage pre-releases
### Issues
#### 5513 - Improving Nilearn’s Codebase Understanding with Diagram-First Documentation
https://github.com/nilearn/nilearn/discussions/5513
- not convinced by the idea
- but it does raise the issue of having better on-boarding documentation for developers
- dev doc may be about tiny details (SOLUTION: add comments in the code where needed if they are missing) or big picture (this would be better in specific documents to explain how things are structured)
#### Support joblib shelving for all maskers?
Currently only NiftiMasker does it
https://github.com/nilearn/nilearn/pull/5509#discussion_r2191557785
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
- 5517 - explain how map masker extract data
https://github.com/nilearn/nilearn/pull/5517
#### 5511 - ensure image estimator can fit and preserve several dtype
https://github.com/nilearn/nilearn/pull/5511
- Should our estimators support dtype = integer when they can return the mean of voxels of a region?
- Should we enforce consistent dtype across the different parts of a surface image?
### other
* https://www.mcgill.ca/neuro/open-science/open-science-awards-and-prizes
* Eliz NeuroHackademy tutorial : suggestions ?
### Set time next meeting
September 9th?
## June 10 2025
**Attending:**
- ~~Elizabeth~~
- Bertrand
- Hao-Ting
- Remi
- Himanshu
- Michelle
- ~~Mohammad~~
- ~~Jérôme~~
- Hande
- Pierre-Louis
### News
- OHBM poster: himanshu
- OHBM hackathon:
- ideas: work on open issues?
- work on an atlas comparator? https://github.com/Remi-Gau/atlas_comparator.git, an updated version of https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0007200
- update list of core dev?
- https://nilearn.github.io/dev/authors.html#core-developers
- see PR: https://github.com/nilearn/nilearn/pull/5429
- release 0.12: coming soon
- milestone: https://github.com/nilearn/nilearn/milestone/28
- do we want to delay till after the OHBM hackathon to maybe include a couple more fixes or contributions? Or actually try to finish it before, to get some early feedback at the hackathon?
- save to disk: https://github.com/nilearn/nilearn/pull/5309
### Issues
#### 5408 - Misleading first level parameters logic
https://github.com/nilearn/nilearn/issues/5408
Some first level model parameters are more data specific (TR, slice_time_ref...) and will be ignored if the user passes design matrices at fit time.
##### problems
- sklearn 'recommends' that data specific info should be passed at fit time
from the sklearn doc: https://scikit-learn.org/stable/developers/develop.html#fitting
> Depending on the nature of the algorithm, fit can sometimes also accept additional keywords arguments. However, any parameter that can have a value assigned prior to having access to the data should be an __init__ keyword argument. Ideally, fit parameters should be restricted to directly data dependent variables.
- not enough warnings / docs that those parameters are ignored if design matrices are passed at fit time
Question: do we want to change the API, so that data specific parameters are passed at fit time ?
ACTION: no change in API, update doc and add warnings
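A toy sketch of that outcome (not nilearn's actual `FirstLevelModel`): keep the sklearn `__init__`/`fit` split, but warn instead of silently ignoring the init parameters when design matrices are passed at fit time:

```python
import warnings

class ToyFirstLevel:
    """Toy model: pre-data configuration in __init__, data at fit time."""

    def __init__(self, t_r=2.0, slice_time_ref=0.0):
        self.t_r = t_r
        self.slice_time_ref = slice_time_ref

    def fit(self, imgs, design_matrices=None):
        if design_matrices is not None:
            # the agreed fix: warn instead of silently ignoring t_r etc.
            warnings.warn(
                "design_matrices provided: t_r and slice_time_ref are ignored"
            )
        self.design_matrices_ = design_matrices
        return self
```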
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### 5410 - [ENH] add tedana support for load_confounds
https://github.com/nilearn/nilearn/pull/5410
- should we aim to have it ready for 0.12?
#### 5421 - [FIX] less warning when using symmetric_cmap with matplotlib engine and warn to use the proper 'hemi' when plotting surfaces
https://github.com/nilearn/nilearn/pull/5421
See related issue: https://github.com/nilearn/nilearn/issues/5414
The PR would throw a warning about "hemi" when users want to plot surfaces by passing arrays for mesh and data: we cannot know which hemisphere is being passed in this case, so we warn users to be cautious and double-check which hemi they passed.
##### problems
- we have never warned about this before even when we did not have any SurfaceImage
- warning fatigue leads to warnings being ignored
- this is about plotting functions so users should literally see when something is not right (or at least one would hope)
##### more general question
- should we start thinking about deprecating the possibility to plot numpy arrays directly?
- Remi: not before we have an easy way to load files to SurfaceImage
- Bertrand: would require deprecation
### Set time next meeting
* July 8th, 4PM CET
## May 20th 2025
**Attending:**
- Elizabeth
- ~~Bertrand~~
- Hao-Ting
- ~~Remi~~
- Himanshu
- Michelle
- Mohammad
- ~~Jérôme~~
- Hande
### News
- OHBM in a month: suggestions for brainhack & poster?
- option 1: write the benchmark suite during brainhack
- option 2: plotting focused improvements, [#5216 diffusion plotting](https://github.com/nilearn/nilearn/pull/5216)
- for poster: surface API out, sklearn compliance, atlas object, benchmarks, [nilearn usage stats](https://github.com/nilearn/poia)
- on track for the release end of month
- Is anything needed from the other devs ? @Remi
- Thinking ahead to next release : how can we get other devs involved ?
- About surface/image objects interfacing with nibabel:
- nilearn.image module: why separate functions and not as object methods?
- implement image operations under NiftiImage object and SurfaceImage?
- nibabel's "surface image" implementation not moving forward due to lack of resources
### Issues
#### number - title
### PRs
[review required](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)
#### number - title
### Set time next meeting
## April 8th 2025
**Attending:**
- Elizabeth
- ~~Bertrand~~
- Hao-Ting
- ~~Remi~~
- Himanshu
- Michelle
- ~~Mohammad~~
- ~~Jérôme~~
- Hande
- ...
### News
- Performance benchmarking with asv ([#5280](https://github.com/nilearn/nilearn/pull/5280))
- AAL atlas issue fixed itself
### Issues
- [#5300](https://github.com/nilearn/nilearn/issues/5300): Use methods in SurfaceImage objects for checking equality, mean etc. ([relevant discussion](https://github.com/nilearn/nilearn/pull/5301#discussion_r2024375219))
- two different kinds of functions: "checks between two images" vs. "image operations"
- checks are only used by devs vs. image operations are used by public
- add methods in the objects for devs and still keep user-facing image module
- maintaining both would be difficult and would make structure complex
- new situation now that we have developed SurfaceImage, to be added to nibabel
- we still want to be "approachable"
- re-connect with nibabel devs about SurfaceImage
- [Elizabeth] bring up old discussions with nibabel devs
- Hao-Ting's ancient unfinished [easter egg in nibabel](https://github.com/nipy/nibabel/pull/1014)
- [#5262](https://github.com/nilearn/nilearn/pull/5262): use `imgs` instead of `img` / `X` for `fit`, `transform`, `fit_transform`
- Remi will look into it
- [#5128](https://github.com/nilearn/nilearn/pull/5128): User-guide page to introduce factors affecting performance
- Elizabeth noted that the order of calls affects peakmem readings by `%memit` ([comment](https://github.com/nilearn/nilearn/pull/5128#discussion_r2021769564))
- probably due to numpy mem mapping
- better to run each command in separate ipython instances
- revisit [the example](https://nilearn.github.io/dev/auto_examples/07_advanced/plot_mask_large_fmri.html) comparing NiftiMasker's performance
- trying to implement the comparison via ASV, but parallelization failing ([PR on an external repo](https://github.com/man-shu/nilearn_benchmarks/pull/1))
- LATEST: works only with `threading` joblib backend:
```
Time:
================ =========== ===============
       --                  loader
---------------- ---------------------------
 implementation    nilearn    nibabel (ref)
================ =========== ===============
    nilearn       11.0±0.2s     19.6±0.1s
  numpy (ref)     3.95±0.5s    15.0±0.1ms
================ =========== ===============

Peak Memory:
================ ========= ===============
       --                loader
---------------- -------------------------
 implementation   nilearn   nibabel (ref)
================ ========= ===============
    nilearn        4.58G       5.96G
  numpy (ref)      4.31G        167M
================ ========= ===============
```
### PRs
#### Review required ([list of PRs](https://github.com/nilearn/nilearn/pulls?q=is%3Apr+is%3Aopen+label%3A%22Review+required%22)):
- [#5280](https://github.com/nilearn/nilearn/pull/5280): Performance benchmarking with asv
### Other questions
* Organize a hackathon ?
* funding opportunities ? https://oscars-project.eu/
* under life sciences
* we don't exactly match the target, fall into the "also funded" category
* 10 page proposal: needs a lot of time
* check with Bertrand, get some clarity this week
* [Himanshu] send an email to everyone to get the said clarity.
### Set time next meeting
May 6th, 4PM Paris time