Human Brain Atlas Processing Tutorial [input template]
Table of Contents
About
This guide will go through the steps used to generate the templates from the Human Brain Atlas project. This guide assumes:
- you have an input template that you want to align everything to. See this guide (link coming soon) if you want to preprocess data without an input template.
- you have installed all the necessary software/programs
- you're using Linux or macOS as your operating system. Most of the software packages used here are not compatible with Windows.

The data from the original project is huge, so here we will use sample data and a 0.4mm template instead of 0.25mm. You can download this dataset here (download the `demo-input-template` folder as a zip file).

Any queries can be sent to Zoey Isherwood (zoey.isherwood@gmail.com) or Mark Schira (mark.schira@gmail.com).
List of software packages needed
List of scripts used to process data
NOTE: if you download the dataset, all the necessary code is included in the zip folder.
- `dicm2nii.m`
- `make-masks-hba-project.sh`
- `n4bias-corr-hba-project.sh`
- `hba-sample-data-set-preproc-input-template.sh` (Note: this script contains all the code blocks in this guide)

Data summary
Converting RAW files to NIFTI
Step 1: DICOM -> NIFTI
MATLAB script used to convert data: `dicm2nii.m`

Make sure there is a `raw` folder in the current directory; this will be used as the result folder for the converted files.
Now run the `dicm2nii.m` script. For it to run, it has to be added to the MATLAB path. When you run it, a GUI like this should open:

Click on `DICOM folder/files` and select the `source` folder in the current directory. Next, click the `Result folder` button and select the `raw` folder in the current directory.

Now ensure all the correct options are selected (see image above): click to select `Output format .nii`, `Compress`, `Left-hand storage`, `Use parfor if needed`, `Save json file`, and `Use SeriesInstanceUID if it exists`.

Once all the correct options are selected, click Start conversion.
Note: this step can take a while…
Once the processing has finished, you've successfully converted the files from DICOM to NIFTI.
Step 2: Rename files according to BIDS formatting
Rename the files output in `raw` using BIDS formatting. For this, use `sub-01` as the subject name and `ses-01` as the session number. In the sample dataset, the run numbers should be `run-01`, `run-02`, `run-03`, and `run-04`. The acquisition name should be `acq-mp2rage-wip944`.

There will be multiple files associated with each run. These files include: `INV1`, `T1_Images`, `UNI_DEN`, `UNI_Images`, and `INV2`.

See below for some examples of how to rename each file:
We were a bit old school in our approach and manually renamed each file to follow BIDS formatting. There are many more intuitive ways of doing this (e.g. using the BIDS feature in dicm2nii, or renaming files automatically with a script), but we ended up naming them manually.
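If you'd prefer to script the renaming, a minimal bash sketch is shown below. The source filename pattern is a hypothetical placeholder (dicm2nii output names depend on your protocol names), so adjust the glob to whatever actually appears in `raw`:

```bash
#!/bin/bash
# Hypothetical renaming sketch: map dicm2nii output to the BIDS-style names used in this guide.
# The SRC glob is a placeholder -- adjust it to match the files dicm2nii actually produced.
cd raw

SUB=sub-01
SES=ses-01
ACQ=acq-mp2rage-wip944

for RUN in 01 02 03 04; do
    for IMG in INV1 INV2 T1_Images UNI_DEN UNI_Images; do
        SRC=$(ls *run${RUN}*${IMG}*.nii.gz 2>/dev/null | head -n 1)
        [ -z "$SRC" ] && continue
        mv "$SRC" "${SUB}_${SES}_run-${RUN}_${ACQ}_${IMG}.nii.gz"
    done
done
```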
Step 3: Anonymise the data by defacing each scan.
Python script used to deface the data:
Run the section of code below to anonymise the data by removing the subject's face. It's a little redundant in the case of our data since 1) the identities of both subjects are not anonymised in the upcoming publication and 2) we end up skull stripping later on. So you can skip this step if you like, but it's best practice to do this in case you ever have to share anonymised data with the skull intact.
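As an illustrative sketch (not necessarily the exact script used in the project), defacing could be done with pydeface, assuming it is installed and the files follow the BIDS-style names from Step 2:

```bash
#!/bin/bash
# Sketch: deface each renamed image with pydeface (assumes pydeface is installed).
cd raw

for FILE in sub-01_ses-01_run-*_acq-mp2rage-wip944_*.nii.gz; do
    # Writes e.g. sub-01_..._UNI_DEN_defaced.nii.gz alongside the original.
    pydeface "$FILE" --outfile "${FILE%.nii.gz}_defaced.nii.gz"
done
```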
Preprocessing the data
Step 1: Generate automated masks for each raw file
In order to skull strip each file, we first have to generate a brainmask. To do this we use the `make-masks-hba-project.sh` script, which utilises HD-BET and ANTs' N4 bias correction. The script is pretty automated, requiring only a few input parameters, so I won't delve into exactly what it's doing here.

When the mask is generated for the INV2 image, you have to open the corresponding UNI_DEN MP2RAGE image in ITK-SNAP along with the corresponding INV2 generated mask. You then have to save the mask with the phrase "UNI_DEN" in the filename instead of INV2.
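For reference, the kind of commands the script wraps look roughly like the sketch below (this is not the actual `make-masks-hba-project.sh`; filenames and options are illustrative, assuming HD-BET and ANTs are installed):

```bash
#!/bin/bash
# Rough sketch of the masking steps, not the actual make-masks-hba-project.sh.
IN=sub-01_ses-01_run-01_acq-mp2rage-wip944_INV2_defaced.nii.gz

# N4 bias field correction (ANTs) before brain extraction.
N4BiasFieldCorrection -d 3 -i "$IN" -o n4-"$IN"

# HD-BET brain extraction; alongside the stripped image it writes a corresponding _mask file (the brainmask).
hd-bet -i n4-"$IN" -o n4-bet-"$IN"
```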
You can manually open each file and the corresponding brainmask using ITK-SNAP's GUI. Alternatively, you can open ITK-SNAP from the command line. Examples of this are listed below for each of the 4 images.
SCAN 1
SCAN 2
SCAN 3
SCAN 4
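For instance, opening a scan together with its brainmask from the command line looks like this (filenames are illustrative; repeat with the corresponding files for each of the four scans):

```bash
# Open the UNI_DEN image as the main image (-g) and the INV2-derived brainmask as the segmentation (-s).
itksnap -g sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz \
        -s sub-01_ses-01_run-01_acq-mp2rage-wip944_INV2_defaced_mask.nii.gz
```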
After opening the scan and its corresponding brainmask, click Segmentation -> Save [brainmask] as…
Then change the filename such that `INV2` changes to `UNI_DEN`, then click Finish. Now the brainmask is in the same space as the `UNI_DEN` images.

Step 2: Skull strip raw files with masks generated in the last step
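All the code for this step is in the main script noted above; conceptually it is a masking operation along the lines of the FSL sketch below (filenames are illustrative):

```bash
#!/bin/bash
# Sketch: skull strip one image by zeroing everything outside its brainmask (FSL).
IMG=sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz
MASK=sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced_mask.nii.gz

# -mas keeps only the voxels inside MASK, producing a brain-only image.
fslmaths "$IMG" -mas "$MASK" ss-"$IMG"
```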
Step 3: Align the 0.4mm template to each raw file and save the corresponding manual brainmask
Summary:
We've already manually created a brainmask of a template file. So rather than relying on the automated brainmask, we'd like to align the manual one to each raw file.
This step has to be done manually using ITK-SNAP. Use the masks generated in Step 1 as an ROI to help guide the linear alignment, and only use RIGID alignment in ITK-SNAP. It is important to use nearest neighbour interpolation when reslicing the brainmasks.
Save each brainmask file in `${DATA_PATH}/sub-01/0p20/brainmasks` with the prefix `template-brainmask-` and the name of the corresponding raw file, e.g. `template-brainmask-sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz`
More detailed instructions are below:
Click the `Tools` menu option, then `Registration`.

First get a good manual alignment of the template scan to the raw scan using the `Manual` registration tab. Use your mouse to rotate and move the brain in any of the 3 viewing windows. The main thing is to align the ACPC line and ensure roughly the same positioning.
Then, in the automatic registration settings, set the transformation model to `Rigid`, the image similarity metric to `Mutual Information`, the Coarse Level to `8x`, and the Finest Level to `1x`. Also make sure `Use segmentation as a mask` is selected. Once all the parameters are set, hit "Run Registration".

When the registration has finished, save the transformation file as `template-to-sub-01_ses-01_run-0X_acq-mp2rage-wip944_UNI_DEN_defaced.txt` and save it in `${DATA_PATH}/template`.
Next, open `${DATA_DIR}/template/brainmask-0p40-sub-01_t1_ACPC.nii.gz` in ITK-SNAP by either using the File -> Open option in the toolbar, or by dragging and dropping the file from a file viewer. When opening the file, be sure to select `Load as Additional Image`.

Make sure the `Moving image layer` selected is the brainmask file. Click okay, and now the brainmask should be in the same space as the raw scan.

Make sure the Interpolation option is set to `Nearest Neighbor`, then click okay.

A 4th scan should now appear in the ITK-SNAP window - this is the resliced brainmask. Click on the dropdown button (circled below) and click `Save image`. Save the file in the `${DATA_PATH}/brainmasks` folder, named as follows depending on the run number: `template-brainmask-sub-01_ses-01_run-0X_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz`

Repeat this process for the remaining scans before you move on to Step 4.

Step 4: Expand the brainmasks generated in the previous step using FSL
Since the manual brainmask is a bit tight, we're going to inflate it slightly before skull stripping the original files. Run the section of code below to do this:
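As a sketch of what this inflation looks like with FSL (the loop and output naming here are illustrative; the project script handles the real filenames):

```bash
#!/bin/bash
# Sketch: slightly inflate each aligned template brainmask with FSL's mean dilation.
cd ${DATA_PATH}/brainmasks

for MASK in template-brainmask-*.nii.gz; do
    # -dilM performs one mean-dilation pass; repeat the flag for more inflation.
    fslmaths "$MASK" -dilM "e-${MASK}"
done
```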
Step 5. Skull strip raw files using the expanded template masks that have been aligned to each raw file…
Now that we've inflated the brainmasks slightly, we're ready to skull strip the original files. Run the section of code below to do this.
NOTE: If you're running into any errors in this step or the last, it is possible that you didn't save the resliced template brainmask properly for one of your raw scans. Check the errors being spit out by the last two steps, and if there is a `Segmentation fault` for one of the files, double check that file, then reslice the template brainmask again and save it, just in case it wasn't done correctly or a step was missed.

It's also important throughout the process, particularly at this stage, to quality check (QC) the output. Open ITK-SNAP and check the `e-ss*.nii.gz` files to make sure they were skull stripped correctly. You can also run the section of code below to automatically open up these files in ITK-SNAP.
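Something along these lines will open all of the skull-stripped outputs in a single ITK-SNAP session for a quick QC pass (the `e-ss` pattern follows the naming used above; adjust the path if yours differs):

```bash
# Load the first skull-stripped file as the main image and the rest as additional images.
FILES=(e-ss*.nii.gz)
itksnap -g "${FILES[0]}" -o "${FILES[@]:1}"
```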
In the figure above, it looks like the skull stripping has been done correctly.
Step 6. N4 bias correct files
In order to correct for inhomogeneities in our images (e.g. the occipital pole being brighter than the rest of the brain), here we run N4 bias correction. Use the block of code below to do this:
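For a single file, the N4 call from ANTs looks roughly like this (the main script loops it over every skull-stripped image; the filename is illustrative):

```bash
#!/bin/bash
# Sketch: N4 bias field correction (ANTs) on one skull-stripped image.
IN=e-ss-sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz

# -d 3 = 3D image, -i = input, -o = output (default convergence/B-spline settings).
N4BiasFieldCorrection -d 3 -i "$IN" -o n4-"$IN"
```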
Step 7. Upsample first input file
Before putting all our files into the ANTs multivariate template code, we have to upsample the first input image to the desired voxel resolution of our template. We have to do this because the ANTs script doesn't upsample everything based on the input template - it does so based on the first input image. So here we'll upsample the first scan to the desired voxel resolution (for this demo, it's 0.4mm isotropic)
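A sketch of the upsampling using ANTs' ResampleImage tool (input/output names are illustrative; 0.4 mm isotropic as described above):

```bash
#!/bin/bash
# Sketch: resample the first input scan to 0.4 mm isotropic voxels (ANTs).
IN=n4-e-ss-sub-01_ses-01_run-01_acq-mp2rage-wip944_UNI_DEN_defaced.nii.gz

# ResampleImage <dim> <input> <output> <MxNxO> <0 = treat MxNxO as spacing> <4 = B-spline interpolation>
ResampleImage 3 "$IN" upsampled-"$IN" 0.4x0.4x0.4 0 4
```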
Step 8. Copy files needed for the ANTs Multivariate Template Code
ANTs requires all the files used for the template to be in the same directory. It's not the most practical thing to do space-wise, but I like to copy all the files we'll be using into a new directory just for ANTs. We've already saved the upsampled version of the first scan to `${DATA_PATH}/ants-mvt`, so now we'll transfer over the remaining files, as well as the input template.

Step 9. Run the ANTs Multivariate Template Code
Now we're finally ready to run the ANTs Multivariate Template Code!
Run the block of code below to run ANTs. Be sure to change the variable `NUM_CORES` based on your computer specs.

Depending on your computer specs this can take a few days to run. To run 3 iterations on a 24 core computer (ramonx, UOW) it'll take ~3 days.
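For orientation, a call to ANTs' `antsMultivariateTemplateConstruction2.sh` typically looks something like the sketch below. This is not the project's exact command; the output prefix, iteration count, transform step sizes, input pattern, and template filename are assumptions used to illustrate the shape of the call:

```bash
#!/bin/bash
# Sketch of an antsMultivariateTemplateConstruction2.sh call (not the project's exact command).
NUM_CORES=8   # adjust to your machine
cd ${DATA_PATH}/ants-mvt

# -d 3  : 3D images
# -o    : output prefix (assumed)
# -i 3  : number of template-building iterations
# -c 2  : run jobs locally; -j = number of parallel jobs
# -n 0  : skip N4 within the script (already done in Step 6)
# -m CC : cross-correlation similarity metric
# -t    : B-spline SyN transform (step sizes assumed)
# -z    : starting/input template (assumed filename)
# trailing arguments: the input images (pattern assumed)
antsMultivariateTemplateConstruction2.sh -d 3 \
    -o TemplateMultivariateBSplineSyN_ \
    -i 3 -c 2 -j ${NUM_CORES} -n 0 \
    -m CC -t "BSplineSyN[0.1,75,0]" \
    -z input-template.nii.gz \
    n4-*.nii.gz
```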
Step 10. Looking at/navigating the data from ANTs.
Once the script has completed, you can inspect your data using ITK-SNAP. You'll notice in the `${DATA_PATH}/ants-mvt` folder a new folder called `${DATA_PATH}/ants-mvt/TemplateMultivariateBSplineSyN_${STEPSIZES}`, depending on the step sizes you used. Within this folder, you'll see some other folders with `GR_` as the prefix. These folders are output after each iteration. If you want to compare the output template after each iteration, you can open the `T_template0.nii.gz` file within each `GR_` folder and compare it to the final template, `${DATA_PATH}/ants-mvt/TemplateMultivariateBSplineSyN_${STEPSIZES}/T_template0.nii.gz`.

The other files you may see have the suffixes `_Warp.nii.gz`, `InverseWarp.nii.gz`, and `Warp.nii.gz`. These are the non-linear alignment files output from ANTs which were used to warp the files to the input template. The number preceding each suffix indicates the file number it corresponds to (e.g. scans 1 to 4 will correspond to 0 to 3).

Files with the suffix `_WarpedToTemplate.nii.gz` are the output of each scan being warped to the template. Again, the number preceding the suffix indicates the file number, so if you open each one they should all be aligned.

As noted above, the output template we're most interested in has the file name `T_template0.nii.gz`. See below for an example screenshot of this file: