# nnUNetv2 Notice
## Install nnUNetv2
Just follow the official steps: [official installation instructions](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/installation_instructions.md)
## Prepare Custom Dataset
1. Declare these environment variables: `nnUNet_raw`, `nnUNet_preprocessed`, `nnUNet_results`
- `nnUNet_raw`: where the original datasets are stored
- `nnUNet_preprocessed`: where the preprocessed datasets are stored
- `nnUNet_results`: where downloaded pretrained weights and training checkpoints are stored
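For example, these can be set in your shell profile; the paths below are hypothetical placeholders, replace them with your own storage locations:

```shell
# Hypothetical paths -- point these at your own storage locations
export nnUNet_raw="/data/nnUNet_raw"
export nnUNet_preprocessed="/data/nnUNet_preprocessed"
export nnUNet_results="/data/nnUNet_results"
```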
2. Check your directory structure
- The structure should look like this:
```
nnUNet_raw/Dataset001_BrainTumour/
├── dataset.json
├── imagesTr
│   ├── BRATS_001_0000.nii.gz
│   ├── BRATS_001_0001.nii.gz
│   ├── BRATS_001_0002.nii.gz
│   ├── BRATS_001_0003.nii.gz
│   ├── BRATS_002_0000.nii.gz
│   ├── BRATS_002_0001.nii.gz
│   ├── BRATS_002_0002.nii.gz
│   ├── BRATS_002_0003.nii.gz
│   ├── ...
├── imagesTs
│   ├── BRATS_485_0000.nii.gz
│   ├── BRATS_485_0001.nii.gz
│   ├── BRATS_485_0002.nii.gz
│   ├── BRATS_485_0003.nii.gz
│   ├── BRATS_486_0000.nii.gz
│   ├── BRATS_486_0001.nii.gz
│   ├── BRATS_486_0002.nii.gz
│   ├── BRATS_486_0003.nii.gz
│   ├── ...
└── labelsTr
    ├── BRATS_001.nii.gz
    ├── BRATS_002.nii.gz
    ├── ...
```
- Dataset\<XXX>_\<dataset name>:
  - \<XXX>: the dataset's three-digit ID; any unused number works (the example above uses 001)
  - \<dataset name>: a name of your choice for the dataset
- imagesTr: the input images used to train the model
  - The image file name format is \<Identity Name>\_\<series id>\_\<i-th channel>.<file_ending>
  - \<Identity Name>: any name of your choice
  - \<series id>: must equal the corresponding label's \<series id>, zero-padded at the front so all IDs have the same length
  - \<i-th channel>: indicates the modality of this image; it maps to the `channel_names` key in dataset.json
- labelsTr: the corresponding labels used to train the model
  - The label file name format is \<Identity Name>\_\<series id>.<file_ending>
  - See [imagesTr] above for the naming conventions
- imagesTs [optional]: test images; they do not appear to be used during training
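The zero-padded naming above can be generated with `printf`; the case name `BRATS` and the padding widths here simply mirror the example tree:

```shell
# Build file names following <Identity Name>_<series id>_<i-th channel>.<file_ending>
# series id padded to 3 digits, channel index padded to 4 digits (as in the tree above)
image=$(printf 'BRATS_%03d_%04d.nii.gz' 1 0)
label=$(printf 'BRATS_%03d.nii.gz' 1)
echo "$image"   # BRATS_001_0000.nii.gz
echo "$label"   # BRATS_001.nii.gz
```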
- dataset.json: this file **must** contain the following keys

|JSON key|Description|
|-----|-----|
|channel_names|maps channel index → modality name|
|labels|maps label name → integer value in the mask|
|numTraining|number of training cases in imagesTr|
|file_ending|image file extension, e.g. `.nii`, `.nrrd`, `.dcm`, `.png`|
Example:
```json
{
"channel_names": { # formerly modalities
"0": "T2",
"1": "ADC"
},
"labels": { # THIS IS DIFFERENT NOW!
"background": 0,
"PZ": 1,
"TZ": 2
},
"numTraining": 32,
"file_ending": ".nii.gz",
"overwrite_image_reader_writer": "SimpleITKIO" # optional! If not provided nnU-Net will automatically determine the ReaderWriter
}
```
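As a quick sanity check, a minimal dataset.json (with the illustrative `#` comments removed, since real JSON forbids comments) can be written and validated like this; the `/tmp` path is just for demonstration:

```shell
# Write a minimal dataset.json (the '#' comments from the example must be removed)
cat > /tmp/dataset.json <<'EOF'
{
  "channel_names": { "0": "T2", "1": "ADC" },
  "labels": { "background": 0, "PZ": 1, "TZ": 2 },
  "numTraining": 32,
  "file_ending": ".nii.gz"
}
EOF
# json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/dataset.json > /dev/null && echo "dataset.json is valid"
```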
3. Run the commands
- Preprocess
```shell
nnUNetv2_plan_and_preprocess -d <your dataset id> --verify_dataset_integrity
```
- Training
```shell
nnUNetv2_train dataset_name_or_id configuration fold
```
- nnUNetv2_train doc:
```shell
usage: nnUNetv2_train [-h] [-tr TR] [-p P] [-pretrained_weights PRETRAINED_WEIGHTS] [-num_gpus NUM_GPUS] [--use_compressed] [--npz] [--c] [--val] [--val_best] [--disable_checkpointing] [-device DEVICE]
dataset_name_or_id configuration fold
positional arguments:
dataset_name_or_id Dataset name or ID to train with
configuration Configuration that should be trained
fold Fold of the 5-fold cross-validation. Should be an int between 0 and 4.
options:
-h, --help show this help message and exit
-tr TR [OPTIONAL] Use this flag to specify a custom trainer. Default: nnUNetTrainer
-p P [OPTIONAL] Use this flag to specify a custom plans identifier. Default: nnUNetPlans
-pretrained_weights PRETRAINED_WEIGHTS
[OPTIONAL] path to nnU-Net checkpoint file to be used as pretrained model. Will only be used when actually training. Beta. Use with caution.
-num_gpus NUM_GPUS Specify the number of GPUs to use for training
--use_compressed [OPTIONAL] If you set this flag the training cases will not be decompressed. Reading compressed data is much more CPU and (potentially) RAM intensive and should only be used if you
know what you are doing
--npz [OPTIONAL] Save softmax predictions from final validation as npz files (in addition to predicted segmentations). Needed for finding the best ensemble.
--c [OPTIONAL] Continue training from latest checkpoint
--val [OPTIONAL] Set this flag to only run the validation. Requires training to have finished.
--val_best [OPTIONAL] If set, the validation will be performed with the checkpoint_best instead of checkpoint_final. NOT COMPATIBLE with --disable_checkpointing! WARNING: This will use the same
'validation' folder as the regular validation with no way of distinguishing the two!
--disable_checkpointing
[OPTIONAL] Set this flag to disable checkpointing. Ideal for testing things out and you dont want to flood your hard drive with checkpoints.
-device DEVICE Use this to set the device the training should run with. Available options are 'cuda' (GPU), 'cpu' (CPU) and 'mps' (Apple M1/M2). Do NOT use this to set which GPU ID! Use
```
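Putting this together, a concrete (hypothetical) invocation for dataset id 1, the `3d_fullres` configuration, and fold 0 is sketched below; the command is echoed rather than executed so you can inspect it first:

```shell
# Hypothetical values: dataset id 1, 3d_fullres configuration, fold 0
CMD="nnUNetv2_train 1 3d_fullres 0"
echo "$CMD"   # drop the echo to actually run it
# Repeat for folds 1-4 if you want the full 5-fold cross-validation
```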
## Prediction with the nnUNetv2 built-in command

|Argument|Function|
|-----|------|
|-c|specifies the model configuration: `2d, 3d_fullres, cascade...`|
|-f|specifies which fold the checkpoint comes from|
|-i|points directly at the folder containing the images to predict; `nnUNet_raw, nnUNet_preprocessed` are not involved|
|-o|points directly at the folder where predictions are saved; `nnUNet_raw, nnUNet_preprocessed, nnUNet_results` are not involved|
|-d|the dataset the model was trained on; accepts either a dataset id or a dataset name|
|-chk|specifies whether to use `checkpoint_best.pth` or `checkpoint_last.pth`|
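A (hypothetical) prediction call combining these arguments might look like the following; the folders and dataset id are placeholders, and the command is echoed rather than executed so you can inspect it first:

```shell
# Hypothetical folders and dataset id -- adjust to your setup
CMD="nnUNetv2_predict -i /data/to_predict -o /data/predictions -d 1 -c 3d_fullres -f 0 -chk checkpoint_best.pth"
echo "$CMD"   # drop the echo to actually run it
```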
... continue updating