# Human Brain Atlas Processing Tutorial [without an input template - OLD METHOD]
## Table of Contents
[TOC]
## About
This guide will go through the steps used to generate the original 0.25mm templates from the Human Brain Atlas project. We have since optimized our method (see [this](https://hackmd.io/qKZtbfpSTnGzSebiQW3KfQ) guide), but we wanted to document how we originally did things so we don't forget...
This guide also goes over how to PD correct your scans (based on [Van de Moortele et al 2009](https://doi.org/10.1016/j.neuroimage.2009.02.009)). However, we have since abandoned PD correction as it makes our images a little too blurry.
This guide assumes:
- you **DON'T** have an input template that you want to align everything to. See [this](https://hackmd.io/qKZtbfpSTnGzSebiQW3KfQ) guide if you want to preprocess data **WITH** an input template.
- you have installed all the necessary software/programs
- you're using Linux or OSX as your operating system. Most of the software packages used here are not compatible with Windows.
The data from the original project is huge, so here we will use sample data and a 0.4mm template instead of 0.25mm. You can download this dataset here (download the `demo-without-input-template` folder as a zip file):
| Link to sample dataset used in this guide | https://osf.io/2mnsg/files/?view_only=2d48452b19cf4fb68d892072be41e575 |
| ----------------- |:--------------------------------------------------------------------- |
Any queries can be sent to Zoey Isherwood (zoey.isherwood@gmail.com) or Mark Schira (mark.schira@gmail.com)
## List of software packages needed
| Software/Programs | Website |
| ----------------- |:--------------------------------------------------------------------- |
| ITKSNAP |http://www.itksnap.org/pmwiki/pmwiki.php?n=Downloads.SNAP3 |
| MATLAB |https://au.mathworks.com/products/matlab.html |
| Python |https://www.python.org/ |
| HDBET |https://github.com/MIC-DKFZ/HD-BET |
| FSL |https://fsl.fmrib.ox.ac.uk/fsl/fslwiki |
| Freesurfer |https://surfer.nmr.mgh.harvard.edu/ |
| ANTS |http://stnava.github.io/ANTs/ |
| pydeface |https://github.com/poldracklab/pydeface |
## List of scripts used to process data
**NOTE:** if you download the dataset, all the necessary code is included in the zip folder.
| Scripts | Website |
| ----------------- |:----------------------------------------------------------------------------------------|
| `dicm2nii.m` |https://au.mathworks.com/matlabcentral/fileexchange/42997-xiangruili-dicm2nii |
| `preproc-hba-project.sh` | https://osf.io/aq3xy/?view_only=2d48452b19cf4fb68d892072be41e575 |
| `make-masks-hba-project.sh` | https://osf.io/vzygb/?view_only=2d48452b19cf4fb68d892072be41e575 |
| `n4bias-corr-hba-project.sh` | https://osf.io/gx3w6/?view_only=2d48452b19cf4fb68d892072be41e575 |
| `batch_pd_corr_code.m` | https://osf.io/t96e3/?view_only=2d48452b19cf4fb68d892072be41e575 |
## Data summary
>:memo: Internal data summary:
>| Sequence Name | File used for template | File used for brainmask |
>| ----------------- |:--------| :--------|
>| MP2RAGE | UNI_DEN | **INV2** |
>| DUTCH | INV1_ND |INV1_ND |
>| FLAWS | INV2_ND |INV2_ND |
>| T2 | T2 | T2 |
>| DTI | `mean` for alignment. Apply non-linear & linear transform to other files (`FAC`) | `mean` |
## Converting RAW files to NIFTI
### Step 1: DICOM -> NIFTI
MATLAB script used to convert the data:
| Original Name | Website |
| ----------------- |:----------------------- |
| dicm2nii.m |https://au.mathworks.com/matlabcentral/fileexchange/42997-xiangruili-dicm2nii |
>:memo: Internal lab info: dicoms are located here:
>```
>/mnt/lab-nas/raw/raw2018/human-brain-atlas-march-2019/source
>```
1. Open MATLAB and go to the directory where the unzipped data is located
```matlab=1
cd [insert data path here]
%e.g. cd ~/Desktop/hba-sample-dataset-template-without-input
```
2. Make a directory called ```raw```
```matlab=1
mkdir raw
```
3. Now run the ```dicm2nii.m``` script. For it to run, it has to be added to the MATLAB path. When you run it, a GUI like this should open:
![](https://i.imgur.com/gbvVry0.png)
4. Click on ```DICOM folder/files``` and select the ```source``` folder in the current directory. Next, click the ```Result folder``` button and select the ```raw``` folder in the current directory.
Now ensure all the correct options are selected (see image above; click to select ```Output format .nii```, ```Compress```, ```Left-hand storage```, ```Use parfor if needed```, ```Save json file```, and ```Use SeriesInstanceUID if it exists```).
Once all the correct options are selected, click ```Start conversion```.
Note: this step can take a while...
5. Once the processing has finished, you've successfully converted the files from DICOM to NIFTI.
### Step 2: Rename files according to BIDS formatting
Rename the files output in `raw` with BIDS formatting. For this, use `sub-01` as the subject name, and `ses-02` as the session number. In the sample dataset, the run numbers should be `run-01`, `run-02`, `run-03`, and `run-04`. The acquisition name should be: `acq-dutch`.
There will be multiple files associated with each `run`. These files include: `INV1`,`INV1_ND`, `T1_Images`, `UNI_DEN`, `UNI_Images`, `INV2`, and `INV2_ND`.
See below for some examples of how to rename each file:
| Original Name | BIDS Name |
| ----------------- |:------------------------------------------------------------------------------------|
| Dutch_wip944_mp2rage_0_5iso_7T_INV1_ND_s013.nii.gz | sub-01_ses-02_run-01_acq-dutch-INV1_ND.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_INV1_s014.nii.gz | sub-01_ses-02_run-01_acq-dutch-INV1.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_INV2_ND_s015.nii.gz | sub-01_ses-02_run-01_acq-dutch-INV2_ND.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_INV2_s016.nii.gz | sub-01_ses-02_run-01_acq-dutch-INV2.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_T1_Images_s017.nii.gz | sub-01_ses-02_run-01_acq-dutch-T1_Images.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_UNI_DEN_s018.nii.gz | sub-01_ses-02_run-01_acq-dutch-UNI_DEN.nii.gz |
| Dutch_wip944_mp2rage_0_5iso_7T_UNI_Images_s019.nii.gz | sub-01_ses-02_run-01_acq-dutch-UNI_Images.nii.gz |
We were a bit old school in our approach and manually renamed each file to follow BIDS formatting. There are more intuitive ways of doing this (e.g. using the BIDS feature in dicm2nii, or renaming files automatically with a script), but we ended up naming them manually.
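If you'd rather script the renaming, the mapping in the table above can be captured in a few lines of Python. This is just a sketch: the helper name `bids_name` is ours, and the `Dutch_..._7T_` prefix and `_sNNN` series suffix in the pattern are assumptions based on the example filenames, so adjust them for your own acquisitions.

```python
import re

# Hypothetical pattern mirroring the example filenames in the table above:
# everything after the "..._7T_" prefix and before the "_sNNN" series number
# is the file type (INV1_ND, UNI_DEN, T1_Images, etc).
PATTERN = re.compile(r"^Dutch_.*?_7T_(?P<suffix>.+)_s\d+\.nii\.gz$")

def bids_name(original, sub="01", ses="02", run=1, acq="dutch"):
    """Map a raw dicm2nii filename onto the BIDS-style name used here."""
    m = PATTERN.match(original)
    if m is None:
        raise ValueError(f"unrecognised filename: {original}")
    return f"sub-{sub}_ses-{ses}_run-{run:02d}_acq-{acq}-{m.group('suffix')}.nii.gz"
```

From there, looping over the files in `raw` with `os.rename` (keyed on the series number) would reproduce the table automatically.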
### Step 3: Anonymise the data by defacing each scan.
Python script used to deface data:
| script name | Website |
| ----------------- |:----------------------- |
| pydeface |https://github.com/poldracklab/pydeface |
Run the section of code below to anonymise the data by removing the subject's face. It's a little redundant in the case of our data since 1) the identities of both subjects are not anonymised in the upcoming publication and 2) we end up skull stripping later on. So you can skip this step if you like, but it's best practice in case you have to share anonymised data with the skull intact.
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
INPUT_PATH=${DATA_PATH}/raw
OUTPUT_PATH=${DATA_PATH}/defaced
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
SCANS=(sub-01_ses-02_run-01_acq-dutch-
sub-01_ses-02_run-02_acq-dutch-
sub-01_ses-02_run-03_acq-dutch-)
TYPES=(INV1_ND.nii.gz
INV1.nii.gz
INV2_ND.nii.gz
INV2.nii.gz
T1_Images.nii.gz
UNI_DEN.nii.gz
UNI_Images.nii.gz)
cd $INPUT_PATH
for f in ${SCANS[@]}; do
echo "Processing scan: $f..."
#deface the INV1_ND image then apply the same mask to everything else...
pydeface ${INPUT_PATH}/${f}${TYPES[0]} \
--applyto \
${INPUT_PATH}/${f}${TYPES[1]} \
${INPUT_PATH}/${f}${TYPES[2]} \
${INPUT_PATH}/${f}${TYPES[3]} \
${INPUT_PATH}/${f}${TYPES[4]} \
${INPUT_PATH}/${f}${TYPES[5]} \
${INPUT_PATH}/${f}${TYPES[6]}
done
# now move all the defaced files to ${OUTPUT_PATH}
mv ${INPUT_PATH}/*_defaced.nii.gz ${OUTPUT_PATH}/
```
## Preprocessing the data
### Step 1: Preprocess raw files using `preproc-hba-project.sh`
```bash=1
### preprocess images.
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
OUTPUT_PATH=${DATA_PATH}/preproc
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
#input vars:
# for preproc-hba-project.sh
NORMALISE_LUM=0 # set to 0 because we want to pd corr later.
UPSAMPLE_VOXEL_SIZE=0.4 #set to 0.25 for the HBA project, 0.4 for the demo script
EQDIMS_SIZE=512 # set to 1024 for the HBA project, 512 for the demo script
#list all the scans you want to process in the IMAGES variable below. Running find "$(pwd)" makes it easier...
# for the HBA project, we preprocessed ALL the files we had (e.g. T1_Images, UNI_DEN, etc) just in case we wanted to analyse these images later down the track...
# but for the purposes of this demo, we'll just process the most relevant file: INV1_ND.
IMAGES=(${DATA_PATH}/defaced/sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/defaced/sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/defaced/sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/preproc-hba-project.sh \
-i $image \
-o $OUTPUT_PATH \
-n $NORMALISE_LUM \
-u $UPSAMPLE_VOXEL_SIZE \
-e $EQDIMS_SIZE
done
```
### Step 2: Generate automated masks for each raw file
In order to skull strip each file, we have to first generate a brainmask. To do this we use the `make-masks-hba-project.sh` script, which utilises HD-BET and ANTs' N4 bias correction. The script is pretty automated, requiring only a few input parameters, so I won't delve into exactly what it's doing here.
1. To generate brainmasks of the files generated in the last few steps, run the following section of code:
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
# for make-masks-hba-project.sh
OUTPUT_PATH=${DATA_PATH}/brainmasks
INFLATE_MM=0
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
# list all the scans you want to process in the IMAGES variable below. #find "$(pwd)" makes it easier...
IMAGES=(${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/make-masks-hba-project.sh \
-i $image \
-o $OUTPUT_PATH
done
#clean up
rm $OUTPUT_PATH/ss-*.nii.gz
rm $OUTPUT_PATH/hd-bet-*.nii.gz
```
### Step 3: Skull strip raw files with masks generated in the last step
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
# get scans and masks and put into the following variables... need fullpath for code to work. use line below...
# find $(pwd)/*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
# find $(pwd)/brain*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
OUTPUT_PATH=${DATA_PATH}/brainmasks
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
IMAGES=(${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/preproc/preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
MASKS=(${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
echo "skull stripping with indicated masks..."
counter=0
for image in "${IMAGES[@]}"; do
FILEPATH=$(dirname $image)
FILENAME=$(basename $image)
FILENAMENOEXT=${FILENAME%%.*} # %%.* strips the longest ".*" suffix, removing both .nii and .gz
echo "ss'ing: ${image}"
echo "brainmask: ${MASKS[$counter]}"
ImageMath 3 ${OUTPUT_PATH}/ss-${FILENAMENOEXT}.nii.gz m $image ${MASKS[$counter]}
# copy mask used here to final directory...
# cp ${MASKS[$counter]} ${OUTPUT_PATH}/brainmask-${FILENAMENOEXT}.nii.gz
((counter=counter+1))
done
```
### Step 4. Prepare Proton Density (PD) scan to align to each T1 scan
| *NOTE*: If you don't want to proton density correct your images, you can skip Steps 4 through 6. If you're using the demo dataset, you'll have to tweak the filepaths/filenames in each Step's code blocks accordingly... |
| ----------------- |
To account for the large differences in luminance across the brain in our scans (e.g. the occipital pole being brighter than the frontal lobe), we perform Proton Density (PD) correction. For this, you need to collect a proton density scan in the same scanning session as your T1 images, align this PD scan to each T1 image, then divide the T1 image by the PD scan. This process makes the grey/white contrast much better, and also makes each scan more homogeneous in terms of luminance.
If you're processing data collected across 2 different sessions, you will need 2 different PD scans in order to perform PD correction. The PD scan from Session 1 needs to be aligned to each T1 scan from Session 1, then each T1 scan is divided by the PD scan from its corresponding session. The same goes for Session 2 - the PD scan from Session 2 needs to be aligned with all the T1 scans from Session 2.
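The division itself (done in Step 6 by `batch_pd_corr_code.m`) boils down to simple voxelwise arithmetic. The numpy-free toy below, on flat intensity lists, is a sketch of the idea only (`pd_correct` is an illustrative name, not part of the pipeline): rescale both images to [0, 1], divide T1 by PD, and zero out NaN/Inf/negative voxels, mirroring the clean-up in the MATLAB script.

```python
import math

def pd_correct(t1, pd):
    """Toy voxelwise PD correction on flat (1-D) intensity lists."""
    def rescale(img):
        # equivalent of MATLAB's mat2gray: rescale to [0, 1]
        lo, hi = min(img), max(img)
        return [(v - lo) / (hi - lo) for v in img]

    out = []
    for t, p in zip(rescale(t1), rescale(pd)):
        v = t / p if p != 0 else 0.0
        # zero out NaN/Inf and negative voxels, as batch_pd_corr_code.m does
        if math.isnan(v) or math.isinf(v) or v < 0:
            v = 0.0
        out.append(v)
    return out
```

Where the PD image is uniformly bright (value 1 after rescaling) the T1 passes through unchanged; where the PD is dimmer, the T1 is boosted, which is exactly the luminance homogenisation described above.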
For the purposes of this guide, the defaced raw PD scan from the session in which each T1 was collected (`ses-02`) is already provided in the example dataset as a NIFTI file in `${DATA_PATH}/proton-density`, with the file name `sub-01_ses-02_acq-PD_defaced.nii.gz`.
In order to align it to our preprocessed T1 data, we also need to run the preprocessing code to upsample the scan to the same resolution and dimensions as our T1 data, as well as skull strip it to improve registration to the T1 images.
1. Run the preprocessing script `preproc-hba-project.sh` on the PD scan
```bash=1
### preprocess images.
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
OUTPUT_PATH=${DATA_PATH}/proton-density
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
#input vars:
# for preproc-hba-project.sh
NORMALISE_LUM=0 # set to 0 because we want to pd corr later.
UPSAMPLE_VOXEL_SIZE=0.4 #set to 0.25 for the HBA project, 0.4 for the demo script
EQDIMS_SIZE=512 # set to 1024 for the HBA project, 512 for the demo script
IMAGES=(${DATA_PATH}/proton-density/sub-01_ses-02_acq-PD_defaced.nii.gz)
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/preproc-hba-project.sh \
-i $image \
-o $OUTPUT_PATH \
-n $NORMALISE_LUM \
-u $UPSAMPLE_VOXEL_SIZE \
-e $EQDIMS_SIZE
done
```
2. Generate a brainmask of the PD scan using `make-masks-hba-project.sh`
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
# for make-masks-hba-project.sh
OUTPUT_PATH=${DATA_PATH}/proton-density
INFLATE_MM=0
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
# list all the scans you want to process in the IMAGES variable below. #find "$(pwd)" makes it easier...
IMAGES=(${DATA_PATH}/proton-density/preproc-sub-01_ses-02_acq-PD_defaced.nii.gz)
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/make-masks-hba-project.sh \
-i $image \
-o $OUTPUT_PATH
done
#clean up
rm $OUTPUT_PATH/ss-*.nii.gz
rm $OUTPUT_PATH/hd-bet-*.nii.gz
```
3. Skull strip the PD scan based on the mask generated in the previous step
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
# get scans and masks and put into the following variables... need fullpath for code to work. use line below...
# find $(pwd)/*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
# find $(pwd)/brain*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
OUTPUT_PATH=${DATA_PATH}/proton-density
# make OUTPUT_PATH if it doesn't already exist
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
IMAGES=(${DATA_PATH}/proton-density/preproc-sub-01_ses-02_acq-PD_defaced.nii.gz)
MASKS=(${DATA_PATH}/proton-density/brainmask-preproc-sub-01_ses-02_acq-PD_defaced.nii.gz)
echo "skull stripping with indicated masks..."
counter=0
for image in "${IMAGES[@]}"; do
FILEPATH=$(dirname $image)
FILENAME=$(basename $image)
FILENAMENOEXT=${FILENAME%%.*} # %%.* strips the longest ".*" suffix, removing both .nii and .gz
echo "ss'ing: ${image}"
echo "brainmask: ${MASKS[$counter]}"
ImageMath 3 ${OUTPUT_PATH}/ss-${FILENAMENOEXT}.nii.gz m $image ${MASKS[$counter]}
# copy mask used here to final directory...
# cp ${MASKS[$counter]} ${OUTPUT_PATH}/brainmask-${FILENAMENOEXT}.nii.gz
((counter=counter+1))
done
```
### Step 5. Align the skull stripped Proton Density (PD) scan `${DATA_PATH}/proton-density/ss-preproc-sub-01_ses-02_acq-PD_defaced.nii.gz` to each `${DATA_PATH}/brainmasks/ss-*.nii.gz` scan using ITK-SNAP
In order to run PD correction, the PD scan has to be aligned to each T1 scan. We do this alignment using ITK-SNAP.
1. Open each T1 file together with the PD scan. You can do this manually using ITKSNAP's GUI. Alternatively, you can open ITKSNAP from the command line - examples of this are listed below for each of the 3 images.
SCAN 1
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
#SCAN 1
PREPROC_SCAN=${DATA_PATH}/brainmasks/ss-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
PD_IMAGE=${DATA_PATH}/proton-density/ss-preproc-sub-01_ses-02_acq-PD_defaced.nii.gz
itksnap -g $PREPROC_SCAN -o $PD_IMAGE
```
SCAN 2
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
#SCAN 2
PREPROC_SCAN=${DATA_PATH}/brainmasks/ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
PD_IMAGE=${DATA_PATH}/proton-density/ss-preproc-sub-01_ses-02_acq-PD_defaced.nii.gz
itksnap -g $PREPROC_SCAN -o $PD_IMAGE
```
SCAN 3
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
#SCAN 3
PREPROC_SCAN=${DATA_PATH}/brainmasks/ss-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz
PD_IMAGE=${DATA_PATH}/proton-density/ss-preproc-sub-01_ses-02_acq-PD_defaced.nii.gz
itksnap -g $PREPROC_SCAN -o $PD_IMAGE
```
2. Once you've opened the necessary files, click the `Tools` menu option, then `Registration`.
![](https://i.imgur.com/Bw3N2fd.png)
3. First get a good manual alignment of the PD scan to the T1 scan using the `Manual` registration tab.
Use your mouse to rotate and move the brain in any of the 3 viewing windows. The main thing is to align the ACPC line and ensure roughly the same positioning. Since the PD scan was collected during the same session, the two should already be roughly aligned.
![](https://i.imgur.com/qgXbUkF.png)
4. Now run automatic registration using the following parameters: Transformation Model `Rigid`, Image similarity metric `Mutual Information`, Coarse Level `8x`, Finest Level `1x`. Once all the parameters are set, hit `Run Registration`.
![](https://i.imgur.com/asuTlcm.png)
5. When the automatic registration is complete, visually inspect it to make sure it did a good job. If it's all good, save the transformation matrix. To do this, click the floppy disk icon in the Registration panel. Name the file according to the run number, `pd-ses-02-to-sub-01_ses-02_run-0X_acq-dutch-INV1_ND_defaced.txt`, and save it in `${DATA_PATH}/proton-density`.
![](https://i.imgur.com/G4tmz3T.png)
6. Now we have to reslice the PD scan. For this, click the tile icon in the Registration panel. Then choose `Linear` Interpolation, then hit `OK`.
![](https://i.imgur.com/NTZd87V.png)
A 3rd scan should now appear in the ITK-SNAP window - this is the resliced PD scan. Click on the dropdown button (circled below) and click `Save image`. Save the file in the `${DATA_PATH}/proton-density` folder, named according to the run number: `pd-ses-02-to-sub-01_ses-02_run-0X_acq-dutch-INV1_ND_defaced.nii.gz`
![](https://i.imgur.com/FLFxtAW.png)
Repeat this process for the remaining scans before you move onto `Step 6`.
### Step 6. Divide each T1 scan with its corresponding aligned PD scan using `batch_pd_corr_code.m`
Now we're ready to divide each T1 scan by its corresponding PD aligned scan. For this, we'll use a custom written MATLAB code, `batch_pd_corr_code.m`.
At least at the time of writing this guide, MATLAB isn't compatible with reading in large `.nii.gz` files. So before we do the division, we have to `gunzip` each T1 and PD aligned scan. To do this, use the command line/terminal and run the following block of code:
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
gunzip ${DATA_PATH}/brainmasks/ss-*.nii.gz
gunzip ${DATA_PATH}/brainmasks/brainmask-*.nii.gz
gunzip ${DATA_PATH}/proton-density/pd-*.nii.gz
```
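If you're driving the MATLAB hand-off from Python instead of the shell, the same (de)compression can be done with the standard library. A sketch, with hypothetical helper names:

```python
import gzip
import shutil

# Stdlib alternative to the gunzip/gzip shell commands. The function
# names are ours, not part of the pipeline.
def gunzip_to(src, dst):
    """Decompress src (e.g. a .nii.gz) to dst (a plain .nii)."""
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)

def gzip_to(src, dst):
    """Compress src (a plain .nii) to dst (a .nii.gz)."""
    with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```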
Now you can run `batch_pd_corr_code.m`. Be sure to change the variables `data_path`, `t1w_ss_fn_all`, `brainmask_fn_all`, and `reg_pd_ss_fn_all` depending on your filepath(s)/file(s). The `batch_pd_corr_code.m` script is also contained in the code block below.
```matlab=1
%% batch_pd_corr_code.m
% before running this code, you must have done the following steps
%
% 1. upsample PD image to match resolution of T1 of interest
% 2. align upsampled PD to T1 image
% 3. skull strip PD image + T1 image. output brainmask during this process
% nss = 0;
% indicate where you want your output
data_path = ['insert data_path here'];
output_dir = [data_path '/proton-density'];
% list the skull stripped T1 files you want to PD correct (full path):
t1w_ss_fn_all = {[data_path '/brainmasks/ss-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii']
[data_path '/brainmasks/ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii']
[data_path '/brainmasks/ss-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii']};
% list the corresponding brainmask file for each T1 file (full path).
% we do this since sometimes the PD image doesn't completely align with the
% T1, or the PD scan has less drop out than the T1 scan... We really just
% want to correct the brain contained in the T1 file.
brainmask_fn_all = {[data_path '/brainmasks/brainmask-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii']
[data_path '/brainmasks/brainmask-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii']
[data_path '/brainmasks/brainmask-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii']};
% list the corresponding aligned PD scan for each T1 file (full path).
reg_pd_ss_fn_all = {[data_path '/proton-density/pd-ses-02-to-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii']
[data_path '/proton-density/pd-ses-02-to-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii']
[data_path '/proton-density/pd-ses-02-to-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii']};
% t1w_ss_fn = [fileDir '/' 'e-ss-raw-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii'];
%
% if nss == 1
% t1w_nss_fn = [fileDir '/' 'insert file name here'];
% end
%
% brainmask_fn = [fileDir '/' 'template-brainmask-sub-01-ses-02_run-01_acq-dutch.nii'];
% reg_pd_ss_fn = [fileDir '/' 'PD_sub-01_ses-02_run-01_acq-dutch.nii'];
smooth_pd = 1;
smooth_kernel = 3;
for ii = 1:numel(t1w_ss_fn_all)
t1w_ss_fn = [t1w_ss_fn_all{ii}];
% if nss == 1
% t1w_nss_fn = ['insert file name here'];
% end
brainmask_fn = [brainmask_fn_all{ii}];
reg_pd_ss_fn = [reg_pd_ss_fn_all{ii}];
%% read files as niftis...
disp('reading nifti files...')
t1w_ss = readFileNifti(t1w_ss_fn);
% if nss == 1
% t1w_nss = readFileNifti(t1w_nss_fn);
% end
brainmask = readFileNifti(brainmask_fn);
reg_pd_ss = readFileNifti(reg_pd_ss_fn);
%% smooth pd if smooth_pd == 1
if smooth_pd == 1
disp('smoothing pd...')
reg_pd_ss.data = imgaussfilt3(reg_pd_ss.data,smooth_kernel);
end
%% normalise scans between 0 and 1
disp('normalising scans between 0 and 1')
norm_t1w_ss = single(mat2gray(t1w_ss.data));
norm_reg_pd_ss = single(mat2gray(reg_pd_ss.data));
%
% if nss == 1
% norm_t1w_nss = single(mat2gray(t1w_nss.data));
% end
%% divide normalised t1w with normalised pd image
disp('dividing normalised t1w with normalised pd image')
pd_corr_t1w_ss = norm_t1w_ss./norm_reg_pd_ss;
% clean up
pd_corr_t1w_ss(isnan(pd_corr_t1w_ss)) = 0;
pd_corr_t1w_ss(pd_corr_t1w_ss<0) = 0;
pd_corr_t1w_ss(isinf(pd_corr_t1w_ss)) = 0;
%% remove outliers by clipping all voxels > the 99.99th percentile
disp('removing outliers by clipping all voxels > the 99.99th percentile')
pd_corr_t1w_ss_rmoutliers = pd_corr_t1w_ss;
toclip = prctile(pd_corr_t1w_ss_rmoutliers(:),99.99);
pd_corr_t1w_ss_rmoutliers(pd_corr_t1w_ss_rmoutliers > toclip) = 0;
%% equalise histogram
disp('equalising histogram')
upperLim = 0.999; %1; %0.999; % anything lower results in clipping...
equal_pd_corr_t1w_ss_rmoutliers = single(mat2gray(pd_corr_t1w_ss_rmoutliers));
equal_pd_corr_t1w_ss_rmoutliers(brainmask.data == 0) = NaN;
equal_pd_corr_t1w_ss_rmoutliers = imadjustn(equal_pd_corr_t1w_ss_rmoutliers,stretchlim(equal_pd_corr_t1w_ss_rmoutliers(:),[0 upperLim]));
equal_pd_corr_t1w_ss_rmoutliers(brainmask.data == 0) = 0;
%% match to T1w range... 0 to max(T1)
disp('match to T1w range... 0 to max(T1)')
rescaled_pd_corr_t1w_ss_rmoutliers = equal_pd_corr_t1w_ss_rmoutliers.*single(max(t1w_ss.data(:)));
% if nss == 1
% rescaled_pd_corr_t1w_nss_rmoutliers = equal_pd_corr_t1w_ss_rmoutliers.*single(max(t1w_nss.data(:)));
% end
%% copy pd corr to t1 images and write
disp('writing output...')
% ss
workingVar = norm_t1w_ss;
workingVar = workingVar.*single(max(t1w_ss.data(:)));
workingVar(brainmask.data == 1) = rescaled_pd_corr_t1w_ss_rmoutliers(brainmask.data == 1);
pdcorr_t1w_ss = t1w_ss;
pdcorr_t1w_ss.data = workingVar;
[path,fname,ext] = fileparts(pdcorr_t1w_ss.fname);
if smooth_pd == 1
pdcorr_t1w_ss.fname = [output_dir '/' 'pdcorr-smoothed-' num2str(smooth_kernel) 'mm-' fname ext];
else
pdcorr_t1w_ss.fname = [output_dir '/' 'pdcorr-' fname ext];
end
writeFileNifti(pdcorr_t1w_ss)
%non ss
% if nss == 1
% workingVar = norm_t1w_nss;
% workingVar = workingVar.*single(max(t1w_nss.data(:)));
% workingVar(brainmask.data == 1) = rescaled_pd_corr_t1w_nss_rmoutliers(brainmask.data == 1);
%
% pdcorr_t1w_nss = t1w_nss;
% pdcorr_t1w_nss.data = workingVar;
%
% [path,fname,ext] = fileparts(pdcorr_t1w_nss.fname);
%
% if smooth_pd == 1
%
% pdcorr_t1w_nss.fname = [output_dir '/' 'pdcorr-smoothed-' num2str(smooth_kernel) 'mm-' fname ext];
%
% else
%
% pdcorr_t1w_nss.fname = [output_dir '/' 'pdcorr-' fname ext];
%
% end
%
% writeFileNifti(pdcorr_t1w_nss)
%
% end
disp('donezo')
end
%% clean up
clear all
```
Once you've finished running `batch_pd_corr_code.m`, you can now `gzip` each `.nii` file to convert it to `.nii.gz`. See the code block below to do this using the command line/terminal:
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
gzip ${DATA_PATH}/brainmasks/ss-*.nii
gzip ${DATA_PATH}/brainmasks/brainmask-*.nii
gzip ${DATA_PATH}/proton-density/pdcorr*.nii
gzip ${DATA_PATH}/proton-density/pd-*.nii
```
You've now finished PD correction of your images. Here is a side-by-side comparison between a PD corrected scan and the original scan:
![](https://i.imgur.com/djH1WFC.png)
### Step 7. N4 bias correct files
While PD correction does a good job at homogenizing the T1 scans, it's not perfect. As such, to further correct for inhomogeneities in our images (e.g. the occipital pole being brighter than the rest of the brain), here we run N4 bias correction. Use the block of code below to do this:
```bash=1
## now n4 bias correct the new skull stripped files...
# get scans and masks and put into the following variables... need fullpath for code to work. use line below...
# find $(pwd)/e-ss*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
# find $(pwd)/template-brain*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
OUTPUT_PATH=${DATA_PATH}/n4bias-corr
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
IMAGES=(${DATA_PATH}/proton-density/pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/proton-density/pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/proton-density/pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
MASKS=(${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
echo "N4Bias correction..."
counter=0
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/n4bias-corr-hba-project.sh \
-i $image \
-x ${MASKS[$counter]} \
-o $OUTPUT_PATH
((counter=counter+1))
done
```
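Conceptually, N4 models the inhomogeneity as a smooth multiplicative bias field and divides it out. The real algorithm fits a B-spline field iteratively; the numpy-free caricature below just divides a 1-D "image" by a heavily smoothed copy of itself, which is enough to see why a slowly varying brightness gradient flattens out. The function names are illustrative, not part of ANTs.

```python
def moving_average(img, half=2):
    """Crude smoother: mean over a window (clamped at the edges) around each voxel."""
    n = len(img)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def toy_bias_correct(img):
    """Divide by the smoothed image: a cartoon of N4's multiplicative bias model."""
    field = moving_average(img)
    return [v / f for v, f in zip(img, field)]

# a constant "tissue" intensity multiplied by a linear bias ramp (1x to 2x)
observed = [1.0 + i / 14 for i in range(15)]
corrected = toy_bias_correct(observed)
```

Away from the edges the smoothed copy tracks the ramp exactly, so the corrected values sit near 1.0 and the intensity spread collapses - the same effect N4 has on the bright occipital pole.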
If the block of code above ran without fail, you're ready to move onto the next step.
**NOTE:** If the N4 bias correction fails for any of the images, the most likely culprit is the brainmask not being in the same space, or not having the same origin, as the scan. This sometimes happens randomly for some reason... To fix this problem, open up the T1 scan that's giving you the issue, and also open up its corresponding brainmask as a `segmentation`. See the block of code below to open these two files in ITK-SNAP:
EXAMPLE SCAN 2
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
#SCAN 2
DEFACED_SCAN=${DATA_PATH}/proton-density/pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
BRAINMASK=${DATA_PATH}/brainmasks/brainmask-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
itksnap -g $DEFACED_SCAN -s $BRAINMASK
```
After opening the scan and its corresponding brainmask, click Segmentation -> Save [brainmask] as…
![](https://i.imgur.com/8D20XMZ.png)
Then change the filename to include the prefix `pdcorr-`
![](https://i.imgur.com/NQKn8gS.png)
Then click finish, and now the brainmask is in the same space as the T1 scan.
So now you just have to re-run N4 bias correction on the error prone file using the brainmask you just saved:
```bash=1
## now n4 bias correct the new skull stripped files...
# get scans and masks and put into the following variables... need fullpath for code to work. use line below...
# find $(pwd)/e-ss*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
# find $(pwd)/template-brain*.nii.gz -maxdepth 1 -type f -not -path '*/\.*' | sort
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
OUTPUT_PATH=${DATA_PATH}/n4bias-corr
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
IMAGES=(${DATA_PATH}/proton-density/pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz)
MASKS=(${DATA_PATH}/brainmasks/pdcorr-brainmask-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz)
echo "N4Bias correction..."
counter=0
for image in "${IMAGES[@]}"; do
bash ${CODE_DIR}/n4bias-corr-hba-project.sh \
-i $image \
-x ${MASKS[$counter]} \
-o $OUTPUT_PATH
((counter=counter+1))
done
```
### Step 8: Creating an unbiased starting template using `create-template-hba-project.sh` before running the ANTs Multivariate Template script
Since we don't have a starting template to use, we need to make our own first. To do this, we'll use `FSL` to align and average each T1 scan in an unbiased way (i.e. rather than aligning everything to the first scan, everything is aligned to a rough average of the scans, then linearly averaged). The script we'll use to run the registration and averaging in `FSL` is called `create-template-hba-project.sh`. If you want more detail on the exact commands used, please refer to that script.
In order to run it, see the block of code below:
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
OUTPUT_PATH=${DATA_PATH}/alignment
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
#output name must NOT have extension...
OUTPUT_NAME=sub-01-fsl-template-t1w-dutch-INV1_ND-pdcorr
IMAGES=(${DATA_PATH}/n4bias-corr/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/n4bias-corr/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${DATA_PATH}/n4bias-corr/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
# NOTE: do NOT flatten IMAGES into a single string here -- each scan must reach the
# script as its own argument via the array expansion below.
# The -i input flag needs to be LAST, otherwise the script won't work!
bash ${CODE_DIR}/create-template-hba-project.sh \
-d $OUTPUT_PATH \
-n $OUTPUT_NAME \
-i "${IMAGES[@]}"
```
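To see why `IMAGES` should stay an array: when expanded as `"${IMAGES[@]}"`, each scan arrives at the script as its own argument, whereas a flattened string would arrive as a single argument. A minimal sketch, with a hypothetical `count_inputs` function standing in for the real script's argument parsing:
```bash=1
# hypothetical stand-in for create-template-hba-project.sh's argument parsing
count_inputs() {
    # skip flags until -i is found, then count everything after it
    while [ "$1" != "-i" ]; do shift; done
    shift
    echo "$#"
}
IMAGES=(run-01.nii.gz run-02.nii.gz run-03.nii.gz)
n=$(count_inputs -d /tmp/out -n demo -i "${IMAGES[@]}")
echo "inputs seen: $n" # prints "inputs seen: 3"
```
Had `IMAGES` been collapsed to one quoted string, the function would have reported a single input.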
### Step 9. Copy files needed for the ANTs Multivariate Template Code
ANTs requires all the files used for the template to be in the same directory. It's not the most space-efficient approach, but it's easiest to copy everything we'll be using into a new directory just for ANTs. So now we'll copy over the relevant T1 files, as well as the input template we built in Step 8.
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
OUTPUT_PATH=${DATA_PATH}/ants-mvt
if [ ! -d ${OUTPUT_PATH} ]; then
mkdir -p ${OUTPUT_PATH};
fi
cp ${DATA_PATH}/n4bias-corr/n4corr*.nii.gz $OUTPUT_PATH
cp ${DATA_PATH}/alignment/sub-01-fsl-template-t1w-dutch-INV1_ND-pdcorr.nii.gz $OUTPUT_PATH
```
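After copying, it's worth sanity-checking that all four files (three scans plus the template) made it across. A quick sketch of that check, using hypothetical dummy files in a temporary directory rather than the real dataset:
```bash=1
# hypothetical dummy files -- illustrates the copy-and-count check only
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "${SRC}/n4bias-corr" "${SRC}/alignment"
touch "${SRC}/n4bias-corr/n4corr-run-01.nii.gz" \
      "${SRC}/n4bias-corr/n4corr-run-02.nii.gz" \
      "${SRC}/n4bias-corr/n4corr-run-03.nii.gz"
touch "${SRC}/alignment/template.nii.gz"
cp "${SRC}"/n4bias-corr/n4corr*.nii.gz "$DEST"
cp "${SRC}/alignment/template.nii.gz" "$DEST"
# expect 3 scans + 1 template = 4 files
n_files=$(ls "$DEST"/*.nii.gz | wc -l | tr -d ' ')
echo "copied ${n_files} files" # prints "copied 4 files"
```
On the real dataset, the same `ls ${OUTPUT_PATH}/*.nii.gz | wc -l` count should match the number of scans plus one.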
### Step 10. Run the ANTs Multivariate Template Code
Now we're finally ready to run the ANTs Multivariate Template Code!
Run the block of code below, being sure to change the `NUM_CORES` variable to match your computer's specs.
Depending on your hardware, this can take a few days to run: 3 iterations on a 24-core computer (ramonx, UOW) takes ~3 days.
```bash=1
DATA_PATH=[base input path to the unzipped downloaded dataset]
CODE_DIR=${DATA_PATH}/code
FILEDIR=${DATA_PATH}/ants-mvt
if [ ! -d ${FILEDIR} ]; then
mkdir -p ${FILEDIR};
fi
# relevant files were already copied into FILEDIR in Step 9. ANTs only works when everything is in the same folder.
TEMPLATE=${FILEDIR}/sub-01-fsl-template-t1w-dutch-INV1_ND-pdcorr.nii.gz
IMAGES=(${FILEDIR}/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-01_acq-dutch-INV1_ND_defaced.nii.gz
${FILEDIR}/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-02_acq-dutch-INV1_ND_defaced.nii.gz
${FILEDIR}/n4corr-pdcorr-smoothed-3mm-ss-preproc-sub-01_ses-02_run-03_acq-dutch-INV1_ND_defaced.nii.gz)
DIMS=3
GRADIENT=0.1
NUM_CORES=12 #change the number of cores based on your computer's specs
NUM_MODS=1
N4BIASCORRECT=0 # we've already done this step.
STEPSIZES=20x15x5 #from Lüsebrink, F., Sciarra, A., Mattern, H., Yakupov, R. & Speck, O. T1-weighted in vivo human whole brain MRI dataset with an ultrahigh isotropic resolution of 250 μm. Scientific Data 4, 170032, https://doi.org/10.1038/sdata.2017.32 (2017).
ITERATIONS=2 #4 from ants paper, 2 for hba project (prev 0.25 template). may change to 3 iterations later down the track... 2022/02/14. zji.
###############################
#
# Set number of threads
#
###############################
ORIGINALNUMBEROFTHREADS=${ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS}
ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=$NUM_CORES
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS
########### ants
cd $FILEDIR
outputPath=${FILEDIR}/TemplateMultivariateBSplineSyN_${STEPSIZES}
mkdir -p $outputPath
antsMultivariateTemplateConstruction.sh \
-d $DIMS \
-r 1 \
-c 0 \
-m $STEPSIZES \
-n $N4BIASCORRECT \
-s CC \
-t GR \
-i $ITERATIONS \
-g $GRADIENT \
-b 1 \
-o ${outputPath}/T_ \
-y 0 \
-z $TEMPLATE \
"${IMAGES[@]}"
###############################
#
# Restore original number of threads
#
###############################
ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=$ORIGINALNUMBEROFTHREADS
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS
```
### Step 11. Looking at/navigating the data from ANTs.
Once the script has completed, you can inspect your data using ITKSNAP. Inside the `${DATA_PATH}/ants-mvt` folder you'll notice a new folder called `TemplateMultivariateBSplineSyN_${STEPSIZES}`, named after the step sizes you used. Within this folder, you'll see some other folders with the prefix `GR_`. These folders are output after each iteration. If you want to compare the output template across iterations, you can open the `T_template0.nii.gz` file within each `GR_` folder and compare it to the final template `${DATA_PATH}/ants-mvt/TemplateMultivariateBSplineSyN_${STEPSIZES}/T_template0.nii.gz`.
The other files you'll see have the suffixes `Warp.nii.gz` and `InverseWarp.nii.gz`. These are the non-linear alignment files output by ANTs, which were used to warp each scan to the template. The number preceding each suffix indicates which input file it corresponds to; the numbering is zero-based, so scans 1 to 4 correspond to 0 to 3.
Files with the suffix `_WarpedToTemplate.nii.gz` are the output of each scan being warped to the template. Again, the number preceding the suffix indicates the file number, so if you open each one they should all be aligned.
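The zero-based numbering is easy to trip over, so here is a purely illustrative sketch of how scan numbers map to ANTs' per-scan output indices (filenames elided):
```bash=1
# purely illustrative: ANTs numbers its per-scan outputs from 0
NUM_SCANS=4
for ((i = 0; i < NUM_SCANS; i++)); do
    echo "scan $((i + 1)) -> output index ${i}"
done
```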
As noted above, the output template we're most interested in has the file name `T_template0.nii.gz`. See below for an example screenshot of this file:
![](https://i.imgur.com/c0JaxFz.png)