# Multiple Sclerosis Study
## Cross-Sectional Lesion Segmentation
- [List of papers](https://hackmd.io/I7ruFyn3R_65eNmq1lLLxw?both) (Hackmd)
- [Comparison table](https://coda.io/d/MS-Lesion-Segmentation-Methodologies_dG58X_F68Ki/MS-Lesion-Segmentation-Methodologies_suGef#Comparison_tuUxI/r15) (Coda)
### Methods Evaluation
On the EMISEP dataset:
- **Anima Music v3.2**: installed (Docker) and running; waiting for the ground truth
- **Tiramisu + 2.5D Stacked Slices**: installed (virtualenv), not running yet
- **nnUNet**: installed and running; waiting for the training split to be ready
## Longitudinal Lesion Segmentation
- [List of papers](https://hackmd.io/fnD2o6KnTl-RU67jdJ9XNw)
### Ground Truth
How will we use the EMISEP and the OFSEP ground truth together?
### EMISEP Preprocessing
#### Brandon
- crop baseline, follow-up, and label images
- bias correction on baseline & follow-up
- histogram normalisation of the follow-up with respect to the baseline
- non-rigid registration of the baseline onto the follow-up to avoid atrophy problems
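The cropping and non-rigid registration steps are not sketched here, but below is a minimal illustration of the bias correction and baseline-referenced histogram normalisation with SimpleITK. File names and filter parameters are assumptions, not Brandon's actual settings:

```python
import SimpleITK as sitk

# Illustrative file names; the real EMISEP file layout is not specified in these notes.
baseline = sitk.ReadImage("baseline_flair.nii.gz", sitk.sitkFloat32)
followup = sitk.ReadImage("followup_flair.nii.gz", sitk.sitkFloat32)

# Bias correction on baseline & follow-up (N4, with a rough Otsu foreground mask).
corrector = sitk.N4BiasFieldCorrectionImageFilter()
baseline_n4 = corrector.Execute(baseline, sitk.OtsuThreshold(baseline, 0, 1, 200))
followup_n4 = corrector.Execute(followup, sitk.OtsuThreshold(followup, 0, 1, 200))

# Histogram normalisation of the follow-up with respect to the baseline.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(256)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
followup_matched = matcher.Execute(followup_n4, baseline_n4)

sitk.WriteImage(followup_matched, "followup_flair_matched.nii.gz")
```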
#### Music
##### Preprocessing 1
On the Flair:
- brain extraction (animaAtlasBasedBrainExtraction.py: icc_atlas)
On all images (Flair, T1, T2):
- linear registration of T1, T2 onto the Flair (animaPyramidalBMRegistration)
- bias correction (animaN4BiasCorrection -B 0.3: bias field Full Width at Half Maximum, default = 0.15)
- denoising (animaNLMeans -n 3: patch half neighborhood size, default = 5)
- mask image (extract brain)
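A rough sketch of this block as a Python wrapper around the Anima tools. The `-B 0.3` and `-n 3` values come from the notes above; every other option name (`-i`, `-o`, `-r`, `-m`) and the brain-mask file name are assumptions to be checked against each tool's `--help`:

```python
import subprocess

def run(cmd):
    """Print and run one Anima command, stopping on the first failure."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

flair, t1, t2 = "flair.nii.gz", "t1.nii.gz", "t2.nii.gz"  # illustrative names

# Brain extraction on the Flair against the icc_atlas (the script's exact arguments
# and the name of the produced brain mask are assumptions).
run(["animaAtlasBasedBrainExtraction.py", "-i", flair])
brain_mask = "flair_brainMask.nrrd"

# Linear registration of T1 and T2 onto the Flair.
registered = []
for moving, out in [(t1, "t1_on_flair.nii.gz"), (t2, "t2_on_flair.nii.gz")]:
    run(["animaPyramidalBMRegistration", "-r", flair, "-m", moving, "-o", out])
    registered.append(out)

# Bias correction, denoising, and brain masking on every image.
for img in [flair] + registered:
    run(["animaN4BiasCorrection", "-i", img, "-o", img, "-B", "0.3"])
    run(["animaNLMeans", "-i", img, "-o", img, "-n", "3"])
    run(["animaMaskImage", "-i", img, "-m", brain_mask, "-o", img])
```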
##### Preprocessing 2
Three main steps:
- (re)extract the brain with the Olivier atlas,
- erode the result,
- and normalize with the uspio atlas control images (Nyúl standardization).
Then resample images to 1×1×1 mm³ and convert to NIfTI.
Detailed process:
- register T1 Olivier atlas to T1 image (animaPyramidalBMRegistration, animaDenseSVFBMRegistration, animaTransformSerieXmlGenerator)
- apply the transform to the Olivier atlas mask & mask the given mask image (the user provides a mask image different from the one obtained from the atlas) (animaApplyTransformSerie, animaMaskImage)
- apply the transform to the WM, GM, CSF maps (animaApplyTransformSerie)
- register T1 control to T1 image - to normalize within the same mask (animaPyramidalBMRegistration, animaDenseSVFBMRegistration, animaTransformSerieXmlGenerator)
- apply transform to control images (animaApplyTransformSerie)
- intersect control mask and patient mask & erode brain mask (animaMaskImage, animaMorphologicalOperations)
- apply mask (animaMaskImage)
- convert to NIfTI (animaConvertImage)
- Nyúl normalization (animaNyulStandardization)
- resample images to 1×1×1 mm³ (animaImageResolutionChanger, animaTransformSerieXmlGenerator, animaApplyTransformSerie)
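Most of these steps are instances of the same register-then-apply pattern; here is a hedged sketch of one instance (Olivier atlas T1 registered onto the patient T1, then the atlas brain mask propagated through the resulting transform series). All option names and file names are assumptions:

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

atlas_t1, patient_t1 = "olivier_atlas_T1.nii.gz", "patient_T1.nii.gz"  # illustrative

# Affine (block matching) then non-rigid (dense SVF) registration of the atlas onto the patient.
run(["animaPyramidalBMRegistration", "-r", patient_t1, "-m", atlas_t1,
     "-o", "atlas_aff.nii.gz", "-O", "aff_tr.txt"])
run(["animaDenseSVFBMRegistration", "-r", patient_t1, "-m", "atlas_aff.nii.gz",
     "-o", "atlas_nl.nii.gz", "-O", "nl_field.nii.gz"])

# Chain both transforms into one XML series, then apply it to the atlas brain mask
# (nearest-neighbour interpolation so the mask stays binary).
run(["animaTransformSerieXmlGenerator", "-i", "aff_tr.txt", "-i", "nl_field.nii.gz",
     "-o", "atlas_to_patient.xml"])
run(["animaApplyTransformSerie", "-i", "olivier_atlas_mask.nii.gz",
     "-t", "atlas_to_patient.xml", "-g", patient_t1,
     "-o", "brain_mask_on_patient.nii.gz", "-n", "nearest"])
```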
### Final preprocessing
#### Cross-sectional preprocessing
On the Flair (Francesca : "for the brain extraction, we could otherwise use icc_atlas/Reference_T1.nrrd - current implementation - or the MNI T1 - it seems that there is no MNI FLAIR available. Also, we already have atlas and MNI template in the same space - if I understand well ? "):
1. brain / regions extraction: use the Colin 27 atlas registered in the MNI152 space (which is the space to use for normalisation with the MNI152 template):
- non-rigid registration of the MNI template onto the Flair (in three steps: first rigid, then affine, and finally non-rigid; see animaAtlasBasedBrainExtraction.py. Note: the non-linear transformation may not work well for subjects with small or large heads without computing an affine transformation beforehand.)
- apply the obtained transformation to the brain mask
- mask the Flair
- apply the obtained transformation to each region of the brain (from the Colin 27 atlas); we will need them to localise lesions during the evaluation (see the sketch after this block)
- optionally (probably not necessary): redo the whole process to obtain a finer registration and mask (see animaAtlasBasedBrainExtraction.py)
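Following the bullet above about the atlas regions: a short hedged loop that applies the MNI-to-Flair transform to each Colin 27 region label image. The directory layout, the transform series name, and the option names are all assumptions:

```python
import glob
import os
import subprocess

# Assumed inputs: the MNI->Flair transform series computed for brain extraction,
# and one label image per Colin 27 region; all names here are illustrative.
transform_xml = "mni_to_flair.xml"
os.makedirs("regions_on_flair", exist_ok=True)

for region in sorted(glob.glob("colin27_regions/*.nii.gz")):
    out = os.path.join("regions_on_flair", os.path.basename(region))
    # Nearest-neighbour interpolation keeps the region labels discrete.
    subprocess.run(["animaApplyTransformSerie", "-i", region, "-t", transform_xml,
                    "-g", "flair.nii.gz", "-o", out, "-n", "nearest"], check=True)
```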
On all images (Flair, T1, T2):
1. rigid registration of T1, T2 onto the Flair (animaPyramidalBMRegistration)
2. bias correction (animaN4BiasCorrection -B 0.3: bias field Full Width at Half Maximum, default = 0.15)
3. mask image (extract brain) using the mask computed in the first step (Francesca: "the basic pre-processing stops here, next steps are meant for intensity normalization across patients. Unfortunately, the MNI template does not include all the 3 modalities and thus not usable as an external reference for intensity normalization. Uspio includes the 3 modalities. Do we want to do normalization across-patients to start with ? Should we leave it on a side for now ?")
4. transform the control image of the MNI template onto the patient image (with the transformation computed at the first step)
5. ~~erode brain mask (animaMorphologicalOperations)~~ (the risk is to lose some important information) <- **Note: Francesca initially emphasized the importance of this point. Did she change her mind?** We have no feedback from Francesca yet. Let's wait to see her position on this. Francesca: "In the music pipeline I did perform an erosion of the brain boundaries (of a few voxels) because I observed some left over hyperintensities due to sub-optimal skull stripping. No important information was removed. Anyhow, it is not a fundamental step and the overall pipeline is changing. So, for the time being, we may keep the pipeline fundamental and leave this step on a side and if we will make a similar observation we can always re-think about this step."
6. apply the mask to the registered control images (animaMaskImage)
7. Nyúl-normalize the patient image with the control image (animaNyulStandardization) <- **Note: I read the [Nyúl paper](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.204.102&rep=rep1&type=pdf) and it seems that there is a learning step. How do we handle it?** I'm sorry I'm not aware of this ^^ what is this step? (a sketch of the learning step follows this list)
8. crop all volumes with the Flair brain mask
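On the Nyúl learning-step question above: in the original method, a standard intensity scale is first learned by averaging histogram landmarks (percentiles) over a set of reference images; each new image is then mapped onto that scale by piecewise-linear interpolation between its own landmarks and the learned ones. A minimal numpy sketch of the idea (the landmark percentiles and the control-image setup are illustrative, not necessarily what animaNyulStandardization does internally):

```python
import numpy as np

PCTS = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99]  # decile landmarks (illustrative choice)

def learn_standard_landmarks(reference_images):
    """Learning step: average the percentile landmarks over the reference/control images."""
    landmarks = [np.percentile(img[img > 0], PCTS) for img in reference_images]
    return np.mean(landmarks, axis=0)

def standardize(image, standard_landmarks):
    """Map the image's own landmarks onto the learned scale (piecewise-linear interpolation)."""
    own_landmarks = np.percentile(image[image > 0], PCTS)
    return np.interp(image, own_landmarks, standard_landmarks)

# Usage: learn once from the control images, then standardize each patient image.
# controls = [np.asarray(...), ...]; patient = np.asarray(...)
# scale = learn_standard_landmarks(controls)
# patient_std = standardize(patient, scale)
```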
#### Longitudinal preprocessing
1. Same steps as cross-sectional preprocessing except for the cropping (Francesca : "Should we keep it basic (and similar to Salem et al.) at least for a start and say we do not normalise across patients ? This means we arrive at step 3. in the above CS pipeline. ")
2. rigid-registration of both time points (all modalities) in an intermediate space
4. smooth** non-rigid registration of baseline on followup to avoid atrophy problems (note: also keep the input images)
5. Crop all volumes based on the flair brain mask. (also crop the input images of the previous step with the union of the flair masks of both time points)
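A hedged nibabel sketch of this last step: cropping both time points to the bounding box of the union of their Flair brain masks. File names are placeholders, and both masks are assumed to already live in the same intermediate space:

```python
import nibabel as nib
import numpy as np

mask_bl = nib.load("baseline_flair_brainMask.nii.gz").get_fdata() > 0
mask_fu = nib.load("followup_flair_brainMask.nii.gz").get_fdata() > 0
union = mask_bl | mask_fu  # assumes both time points are already in the same space

# Bounding box of the union mask.
coords = np.argwhere(union)
(x0, y0, z0), (x1, y1, z1) = coords.min(axis=0), coords.max(axis=0) + 1

def crop(path, out_path):
    img = nib.load(path)
    data = img.get_fdata()[x0:x1, y0:y1, z0:z1]
    # Shift the affine origin so the cropped volume stays aligned in world space.
    affine = img.affine.copy()
    affine[:3, 3] += affine[:3, :3] @ np.array([x0, y0, z0])
    nib.save(nib.Nifti1Image(data, affine), out_path)

for name in ["baseline_flair", "followup_flair", "baseline_t1", "followup_t1"]:
    crop(f"{name}.nii.gz", f"{name}_cropped.nii.gz")
```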