# Week 3
You can access the Zoom meeting [here.](https://ukri.zoom.us/j/99401085708)
Click [here](https://hackmd.io/@SIRF-CIL-Fully3D/r1AxJKNou) to return to the main note.
**Mon 12th of July 14:00 – ~16:00 GMT + 01:00**
[Local time and calendar invite](https://dateful.com/eventlink/1606089402)
Joint session
1. [Introduction to the week](https://www.ccpsynerbi.ac.uk/sites/www.ccppetmr.ac.uk/files/Week3_intro.pdf)
2. [Synergistic reconstruction](https://www.ccpsynerbi.ac.uk/sites/www.ccppetmr.ac.uk/files/Week3_Monday_PET_Kris.pdf)
3. [PET/MR](https://www.ccpsynerbi.ac.uk/sites/www.ccppetmr.ac.uk/files/20210713_PETMR.pdf)
4. Post-reconstruction [Deep Learning](https://www.ccpsynerbi.ac.uk/sites/www.ccppetmr.ac.uk/files/20210713_DL.pdf)
5. Move to the Gather.Town site to try and self-organise into groups, see also https://hackmd.io/@SIRF-CIL-Fully3D/SyZHD-sh_.
**To access the [recording](https://ukri.zoom.us/rec/share/oqf8zIQSKaFEguzH-0o8e8mM7Wh7fKDbv-9uuBz7LnuehYB7OR-0jHOHWQ-pzh00.F-089ZIW9b1CxLaD) of the Zoom meeting use passcode - L@eR1bHi**
---
**Tue 13th of July 14:00 - 15:00 GMT + 01:00**
Drop-in technical support session.
https://ukri.zoom.us/j/99401085708 (Passcode: 1957351)
---
**Wed 14th of July 14:00 – 16:30 GMT + 01:00**
[Local time and calendar invite](https://dateful.com/eventlink/1860359607)
(Partially overlapping) sessions for support
1. 14:00 - 15:30 Synergistic Reconstruction
2. 15:00 - 16:30 Deep Learning
**To access the [recording](https://ukri.zoom.us/rec/share/nBp_DLOeIUN7RioQEdmsUsOfpU18EUtzlhOXzL36n7LhlbBdltUGrXBQeZ_D9sTm.kskqndyOI-Ad0pDw) of the Zoom meeting use passcode - H%9J^Eex**
---
**Fri 16th of July 14:00 – 16:30 GMT + 01:00**
[Local time and calendar invite](https://dateful.com/eventlink/6427708974) **updated - this link was previously incorrectly adding a calendar entry for July 23**
Joint session
1. Summary of main learning objectives
2. Example solutions
3. Wrap up
**To access the [recording](https://ukri.zoom.us/rec/share/3li6X4yRNejrmC3-1WYTjHGiM1TwC5wiV4xwWBkQyspVfUlyxmnJAico-S-hr1Oc.q0qOXbh6tYrSybKI) of the Zoom meeting use passcode - Wzx$4g#E**
---
**Fri 16th of July 16:30-17:30 GMT + 01:00**
Optional social event (using [Gather.Town](https://hackmd.io/@SIRF-CIL-Fully3D/SyZHD-sh_)).
## Supporting Material
### Synergistic
* [Synergistic reconstruction for PET/MR](https://youtu.be/eFzqcznmlms)
**Kris Thielemans** lecture at CCP PETMR training school PSMR 2018.
* [Methods for joint image reconstruction from multiple modalities](https://youtu.be/AIbrt1t-ue8)
**Simon Arridge** seminar at the CCP PETMR/CCPi Synergistic Image Reconstruction symposium (Chester, UK) 2019
[PDF](https://www.ccppetmr.ac.uk/sites/www.ccppetmr.ac.uk/files/Arridge_SIR2019.pdf)
* [Image reconstruction methods for energy-resolved CT](https://youtu.be/utxj7JGslmA)
**Emil Sidky** seminar at the CCP PETMR/CCPi Synergistic Image Reconstruction symposium (Chester, UK) 2019
[PDF](https://www.ccppetmr.ac.uk/sites/www.ccppetmr.ac.uk/files/EmilSidky.pdf)
* [(An Overview of) Synergistic Reconstruction for Multimodality/Multichannel Imaging Methods](https://doi.org/10.1098/rsta.2020.0205)
**Arridge, Simon R., Matthias J. Ehrhardt, and Kris Thielemans**, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2200 (28 June 2021): 20200205.
review paper
* [Multi-modality Imaging with Structure-Promoting Regularizers](https://doi.org/10.1007/978-3-030-03009-4_58-1)
**Matthias J. Ehrhardt**, In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, Springer, 2021. [preprint](https://arxiv.org/abs/2007.11689)
* [Synergistic Tomographic Image Reconstruction: Part 1](https://doi.org/10.1098/rsta.2020.0189)
**Tsoumpas, Charalampos, Jakob Sauer Jørgensen, Christoph Kolbitsch, and Kris Thielemans**, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2200 (28 June 2021): 20200189.
Editorial of a Thematic Issue (part 1)
* [Synergistic Tomographic Image Reconstruction: Part 2](http://rsta.royalsocietypublishing.org/content/379/2204)
**Tsoumpas, Charalampos, Jakob Sauer Jørgensen, Christoph Kolbitsch, and Kris Thielemans**, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2204 (23 August 2021).
TOC of the Thematic Issue (part 2)
### Deep Learning
* [Basics of machine learning for image reconstruction: CNNs](https://youtu.be/6Ulu7JfHVlI)
**Andrew Reader**
* [Using AI in the PET pipeline](https://youtu.be/nnS0YPWuwLY?t=312)
**Andrew Reader**
* [pyapetnet paper (NeuroImage open access)](https://www.sciencedirect.com/science/article/pii/S1053811920308843?via%3Dihub)
**Georg Schramm**
* [Stanford lecture series on CNN (for visual recognition)](https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv)
**Fei-Fei Li**
* [20min Berkeley lecture on popular optimizers for Deep Learning (SGD, Adagrad, RMSProp, Adam)](https://www.youtube.com/watch?v=gmwxUy7NYpA)
**Alex Smola**
## Notebooks
### Synergistic
See the [SIRF README](https://github.com/SyneRBI/SIRF-Exercises/tree/master/notebooks/Synergistic#synergistic-notebooks) for more info on the SIRF notebooks, as well as the [CIL folder on GitHub](https://github.com/TomographicImaging/CIL-Demos/tree/main/training/2021_Fully3D/Week3).
#### MR (using CIL)
* [cil_joint_tv_mr.ipynb](https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/Synergistic/cil_joint_tv_mr.ipynb)
#### PET (SIRF only)
* [HKEM_reconstruction.ipynb](https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/Synergistic/HKEM_reconstruction.ipynb)
* [MAPEM_Bowsher.ipynb](https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/Synergistic/MAPEM_Bowsher.ipynb)
#### CIL
* [01_Color_Processing.ipynb](https://github.com/TomographicImaging/CIL-Demos/blob/main/training/2021_Fully3D/Week3/01_Color_Processing.ipynb)
* [02_Dynamic_CT.ipynb](https://github.com/TomographicImaging/CIL-Demos/blob/main/training/2021_Fully3D/Week3/02_Dynamic_CT.ipynb)
* [03_Hyperspectral_reconstruction.ipynb](https://github.com/TomographicImaging/CIL-Demos/blob/main/training/2021_Fully3D/Week3/03_Hyperspectral_reconstruction.ipynb)
If **not** running on the STFC cloud, you will need to download the data; see the [CIL README](https://github.com/TomographicImaging/CIL-Demos/blob/c2bae4649b51ecc3f3d0bb77eacc9faeeb5f7503/training/2021_Fully3D/Week3/README.MD)
### Deep Learning
**We strongly recommend running these notebooks on a GPU server in the STFC cloud (Jupyter hub), since an NVIDIA GPU is available there and we can install tensorflow and the matching CUDA version.** If you want to run the notebooks on your own machine, we can discuss on Friday how to correctly set up tensorflow and CUDA.
On the STFC cloud, all notebooks are available in ```pyapetnet/notebooks/```.
Moreover, the simulated PET and MR images we need are pre-downloaded and
available at ```/mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/```.
**<span style="color:green">The GPU server image has been upgraded such that now tensorflow and all required CUDA libs are installed.</span>**
**<span style="color:red">To run the notebooks, you have to restart your GPU server!</span> To do so, use the control panel in the top right corner and select "Stop my server" followed by "Start my server" and "Launch server"**
**<span style="color:blue">When using tensorflow with GPUs, you can only have one kernel active at a time, which means that running two notebooks at the same time will give errors.</span>**
#### If your notebooks fail with an error about "numpy"
As a consequence of the image update above, the version of numpy has changed, leading to binary incompatibility with software that had been built against the previous version. You will see an error like:
```
ValueError: numpy.ndarray size changed, may indicate binary incompatibility.
Expected 88 from C header, got 80 from PyObject
```
A quick fix is to run on a terminal on the STFC cloud:
```bash=
sudo /opt/pyvenv/bin/pip install numpy==1.21.0
```
### Deep Learning notebooks
Notebooks 00, 01 and 02 explain how to set up a network and an efficient data pipeline for training with on-the-fly data augmentation in tensorflow.
* [00 Introduction & Overview, data download](https://github.com/gschramm/pyapetnet/blob/master/notebooks/00_introduction.ipynb)
* [01 Data handling and Tensorflow Dataset pipelines](https://github.com/gschramm/pyapetnet/blob/master/notebooks/01_tf_data.ipynb)
* [02 CNN setup and training](https://github.com/gschramm/pyapetnet/blob/master/notebooks/02_tf_models.ipynb)
The solution to the challenge raised at the end of notebook 02 is linked below.
As always, trying it yourself before looking at the solution is highly recommended.
* [03 Solution (training pyapetnet on simulated data)](https://github.com/gschramm/pyapetnet/blob/master/notebooks/03_training.ipynb)
<!---
Finally, you should check whether tensorflow can see and use your GPU by executing
```
python -c 'import tensorflow as tf; a = tf.config.list_physical_devices(); print("\n\n",a)'
```
If the last line of the printed output does only contain ```CPU``` and not ```CPU``` and ```GPU```, please create a post in the FAQ below.
--->
## Frequently Asked Questions
Here we will collect questions that are common to many participants. Please only edit this section if you are part of the organisational team.
### Q: MAPEM_Bowsher notebook sets uniform weighting equal to Bowsher weighting
After calculating BowsherWeights, I find
```
print(BowsherWeights[500,:])
>>>[1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0.]
```
These values are reassigned after executing
```
uniformWeights=BowsherWeights #### Should this be BowsherWeights.copy()?
uniformWeights[:,:]=1
# set "self-weight" of the voxel to zero
uniformWeights[:,27//2]=0
```
which returns the following
```
print(BowsherWeights[500,:])
>>>[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
```
Should `uniformWeights` be initialized with `BowsherWeights.copy()`?
**A**: this was indeed a bug.
[PR #147](https://github.com/SyneRBI/SIRF-Exercises/pull/147) is now merged but you can just modify your notebook according to the lines above.
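The underlying pitfall is standard NumPy aliasing: plain assignment binds a second name to the same array, so in-place writes through either name affect both. A minimal sketch (hypothetical array shape, not the notebook's actual data):

```python
import numpy as np

weights = np.zeros((4, 27))

alias = weights               # no copy: both names point at the same buffer
alias[:, :] = 1               # this also overwrites `weights`
print(weights[0, 0])          # 1.0

weights = np.zeros((4, 27))
independent = weights.copy()  # separate buffer
independent[:, :] = 1
print(weights[0, 0])          # 0.0 -- the original is untouched
```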
## User questions
In this section anybody can ask a question. **Please mention the relevant notebook** in your question to make it easier for us to answer. It will also help others to check previous questions/answers for the specific notebook they are working on. If you think your question does not fit a specific notebook, please ask it under **General questions**.
## General questions.
### XNAT
Does anyone use XNAT? I know it handles images. How is it used, and what for?
Thank you
**A** (Kris) At UCL, the Dementia Research Centre and many others use XNAT. It is the standard solution for DPUK. I haven't used it myself yet, but it's on our list...
XNAT itself is essentially a database with a web front-end allowing permissions, workflows etc. It can be used to either automatically or manually set data-processing going, where output will be incorporated into the study.
There's training available at https://xnat.org/about/xnat-academy.php and there's going to be a course co-organised with UCL in September, I believe.
#### XNAT revisited
Could it be used for any imaging modality, or is it limited to dementia/Alzheimer's images of the brain?
**A** (Kris +): it is a generic platform which is commonly used for DICOM images and can also be used for non-imaging data. It is the intention of CCP SyneRBI to integrate SIRF as one of the potential processing pipelines in XNAT.
### Q: TNV CIL implementation
Can't seem to find the CIL implementation of TNV. There doesn't seem to be a nuclear norm operator: https://tomographicimaging.github.io/CIL/nightly/optimisation.html#function
Perhaps I am looking in the wrong place...
**A** (Vaggelis) No, you are right: in CIL we have only `TotalVariation`, but we have access to other regularisers via a [**plugin**](https://github.com/TomographicImaging/CIL/blob/master/Wrappers/Python/cil/plugins/ccpi_regularisation/functions/regularisers.py) called [**ccpi_regularisation**](https://github.com/TomographicImaging/CIL/tree/master/Wrappers/Python/cil/plugins/ccpi_regularisation). This wraps functions that are included in the [CCPi-RegularisationToolkit](https://github.com/vais-ral/CCPi-Regularisation-Toolkit). That repository contains
parallelised implementations, with different algorithms, of regularisers such as
1. TV (Total Variation)
2. dTV (directional TV)
3. TGV (Total Generalised Variation)
4. TNV (Total Nuclear Variation)
5. Non local TV
These regularisers are designed to solve the following problem:
$$\underset{u}{\operatorname{argmin}} \frac{1}{2}\|u - g\|^{2} + \mbox{Regulariser} $$
using different algorithms, e.g., FISTA, PDHG, GD, Split-Bregman. In a sense, the above minimisation problem is the **proximal operator** of **a regulariser**. Note that it does not include any operator related to PET/MR/CT applications, so the resulting solution lives in image space. To use it in practice for PET/MR/CT applications, we need a different setup in our algorithms.
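As a NumPy-only sketch of this "regulariser = proximal operator" view: for a toy quadratic regulariser $R(u)=\alpha\|u\|^{2}$ the proximal problem has a closed-form solution, and a generic iterative solver (here plain gradient descent, standing in for the toolkit's FISTA/PDHG/etc.) recovers it. All values are illustrative:

```python
import numpy as np

# Toy regulariser R(u) = alpha * ||u||^2. Evaluating
#     argmin_u 0.5*||u - g||^2 + alpha*||u||^2
# is exactly the proximal operator of R at g.
alpha = 0.5
g = np.array([2.0, -4.0, 1.0])

# Closed-form solution for this quadratic: u* = g / (1 + 2*alpha).
u_closed = g / (1 + 2 * alpha)

# Generic route: gradient descent on the same objective, mimicking
# how the plugin solves the prox problem for non-trivial regularisers.
u = np.zeros_like(g)
step = 0.3
for _ in range(200):
    grad = (u - g) + 2 * alpha * u
    u -= step * grad

print(np.allclose(u, u_closed))  # True
```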
**Q cont'd** Thank you! I was wondering if we could be privy to the implementation? Is the SVD computed analytically, since for 2D images there are only two singular values per pixel? Or is, for example, `numpy.linalg.svd` used?
**A** (Vaggelis) The proximal operator of the **TNV** regulariser is located [here](https://github.com/vais-ral/CCPi-Regularisation-Toolkit/blob/master/src/Core/regularisers_CPU/TNV_core.c#L272). The code is a bit difficult to read, but from what I understand it computes the singular values analytically. **Note** that this implementation is designed only for 2D and 2D+channels, e.g., colour image processing (2D+RGB). I believe that doing the same in 3D is possible but computationally expensive. The implementation is described better [here](http://www.ipol.im/pub/art/2016/141/article_lr.pdf#page=30) and in [Proposition 1](https://arxiv.org/pdf/1508.01308.pdf#page=17).
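To see why an analytic SVD is feasible in 2D: the per-pixel Jacobian of a C-channel 2D image is a C×2 matrix, so its two squared singular values are the eigenvalues of the 2×2 matrix JᵀJ, which follow in closed form from the trace and determinant. A NumPy sanity check on a toy 3-channel Jacobian (illustrative values, not the toolkit's code):

```python
import numpy as np

# Per-pixel Jacobian of a 3-channel 2D image: shape (channels, 2).
J = np.array([[1.0,  2.0],
              [0.5, -1.0],
              [3.0,  0.0]])

# J^T J is 2x2, so its eigenvalues (the squared singular values of J)
# have a closed form: tr/2 +- sqrt(tr^2/4 - det).
M = J.T @ J
tr, det = np.trace(M), np.linalg.det(M)
disc = np.sqrt(max(tr**2 / 4 - det, 0.0))
sv_analytic = np.sqrt([tr / 2 + disc, tr / 2 - disc])

# Compare with the generic numerical SVD.
sv_numpy = np.linalg.svd(J, compute_uv=False)
print(np.allclose(np.sort(sv_analytic), np.sort(sv_numpy)))  # True
```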
### Q: I have attempted to set up TNV for image denoising (using PDHG), but the objectives are not being calculated. The error reads "convex_conjugate method is not implemented".
My implementation is as follows:
`from cil.plugins.ccpi_regularisation.functions import TNV`
`F = 0.5*L2NormSquared(b=im_data)`
`G = TNV(alpha,100,0.00000001)`
`K = IdentityOperator(ig)`
`pdhg_tnv = PDHG(f=F,g=G, operator=K, etc...)`
`pdhg_tnv.run(verbose=2)`
From the GitHub code it seems that the gradients are calculated within `TNV`. Surely, using the PDHG notation, K = the Jacobian operator, F = the nuclear norm and G = the data-fitting term for the case of just denoising? I guess that would be very difficult to implement in an object-oriented way, hence TNV is just a function that takes care of the gradients?
**A** (Vaggelis) Unfortunately you are right: at the moment there is no implementation of the `convex_conjugate` of `TNV`. But I was able to run the lines above, just without any objectives output. The above setup is not ideal for denoising, but it can be generalised to tomography by changing the **IdentityOperator** and the **im_data**. For denoising you can also do:
`tnv_sol = G.proximal(im_data, tau=1.0)`
**Please note** that the **TNV** implementation assumes that the data are ordered as `(channels, dim_y, dim_x)`, so you can comment out the `reorder` line. However, you need to use it when you want to "show" your reconstruction.
```python=1
data = dataexample.RAINBOW.get(size=(500,500), scale=(0,1))
# data.reorder(['horizontal_y', 'horizontal_x','channel'])
pdhg_tnv.solution.reorder(['horizontal_y', 'horizontal_x','channel'])
plt.figure(figsize=(10,10))
plt.imshow(pdhg_tnv.solution.array)
plt.title("TNV")
plt.show()
```
**Q cont'd** Thanks Vaggelis. I am quite concerned that (when using PDHG) my solution has not converged. How would I check this if the objectives are not output?
Also, when just using `tnv_sol = G.proximal(im_data, tau=1.0)`, is the regularisation parameter the alpha set in `G = TNV(alpha, num_iter, tol)`? I have changed the range of tau and it makes no difference to the solution.
I would have thought changing tau would make a difference, from the definition of the prox operator. Unless I am misunderstanding?
\begin{equation}
\mathrm{prox}_{\tau F}(x) = \underset{z}{\mathrm{argmin}} \frac{1}{2}\|z - x\|^{2} + \tau F(z)
\end{equation}
At the moment I am justifying this as the regularisation parameter is effectively playing the role of tau?
**A** (Edo) You understand correctly: the $\tau$ in that definition should be $\tau = \text{tau} \cdot \alpha$, i.e. `tau` should be internally multiplied by the regularisation parameter $\alpha$ inside the `TNV` function. However, you have unveiled a [bug](https://github.com/TomographicImaging/CIL/issues/931): this does not happen, and $\tau$ is pretty much ignored.
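For intuition about what the fixed behaviour should look like, here is the analogous closed-form prox for a simpler regulariser (the L1 norm, not TNV, which has no such closed form): the step size `tau` rescales the effective regularisation strength `tau*alpha`, so changing it should visibly change the result:

```python
import numpy as np

def prox_l1(x, tau, alpha):
    """Soft-thresholding: the prox of tau * alpha * ||.||_1 at x."""
    t = tau * alpha  # effective threshold combines step size and strength
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([3.0, -0.5, 1.2])

# Doubling tau doubles the threshold, so the output changes:
print(prox_l1(x, tau=1.0, alpha=1.0))  # threshold 1.0
print(prox_l1(x, tau=2.0, alpha=1.0))  # threshold 2.0
```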
### How do I check if TNV is converged without being able to output the objectives?
### What is the current state of the art reconstruction method for PET/MR?
**A** (Georg) The short answer: OSEM. All clinical scanners have implemented early-stopped OS-MLEM with post-smoothing. GE systems also come with a MAP recon where the user can use the "relative difference prior" (Q.Clear). More advanced methods are not available on commercial systems and not widely used in clinical practice. This is partly due to the fact that properly evaluating the benefit of a new reconstruction method on a well-defined clinical task (e.g. lesion detection, tumor staging, "neuro" staging) is non-trivial and very time consuming, since you need expert readers. One thing that people tend to ignore is that image quality is highly task-dependent.
Not only do you need expert readers, but there is often no ground truth and you can't just keep repeating PET due to dose considerations.
### Q: Can't find the notebook "Sequence MRI" on the cloud
**A** (Vaggelis) The "_Sequence MRI_" notebook is located at "SIRF-Exercises/notebooks/Synergistic". The notebook is called **cil_joint_tv_mr.ipynb**.
### Question about pyapetnet
I noticed that I cannot access pyapetnet from Docker (Mac). I saw the instructions to install pyapetnet on the cloud.
I have no problem running the listed instructions in a terminal on the Mac. To avoid spoiling the Docker installation, I wonder where (in which folder) I am supposed to install pyapetnet. Thank you
**A** (Georg) You can indeed install pyapetnet simply from PyPI via ```pip install pyapetnet``` (best in a separate virtual environment). This will install all the dependencies as well (e.g. tensorflow, ...). Then you can simply download the notebooks from GitHub and the data from Zenodo, and you are ready to run them locally.
The pyapetnet local install instructions using pip can be found [here](https://github.com/gschramm/pyapetnet#installation).
Note, however, that training CNNs without an Nvidia GPU and properly configured CUDA libs is in general slow. This is not a problem for the first three notebooks (00, 01, 02), but it becomes a problem if you want to run long trainings with many epochs.
On Friday, I can give a few tips on how to properly install tensorflow locally.
### Q: I want to try denoising using the CIL optimisation framework. How would one specify an acquisition which is just a noisy image? I understand in this case the forward operator is just the identity matrix.
[This](https://github.com/TomographicImaging/CIL-Demos/blob/main/binder/TotalVariationDenoising.ipynb) notebook does use the proximal method of `TotalVariation` to denoise with the TV function.
$$
\text{prox}_{\tau f}(x) = \underset{z}{\text{argmin}} \frac{1}{2}\| x - z \|^2 + \tau \mathrm{TV}(z)
$$
You can solve the same problem with PDHG. To set up PDHG we need to find expressions for two functions $F$ and $G$ and an operator $K$, and write our minimisation problem in the following form:
$$\underset{u}{\mathrm{argmin}}\, F(Ku) + G(u)$$
Therefore for a TV denoising problem:
\begin{equation}
\underset{u}{\mathrm{argmin}} \frac{1}{2}\|u-g\|^{2} + \alpha\|\nabla u\|_{2,1}
\tag{tv_denoising}
\end{equation}
we let $F(z)=\alpha\|z\|_{2,1}$, $G(u)=\frac{1}{2}\|g-u\|^{2}$ and $K=\nabla$. In CIL, we write:
`F = alpha*MixedL21Norm()`
`G = 0.5*L2NormSquared(b=g)`
`K = GradientOperator(ig)`
Another option to setup PDHG is to use the **BlockFramework**, i.e., F will be a **BlockFunction** and K a **BlockOperator**:
$$F(z_{1}, z_{2}) = f_{1}(z_{1})+f_{2}(z_{2}) = \alpha\|z_{1}\|_{2,1} + \frac{1}{2}\|g-z_{2}\|^{2}$$
$$K =
\begin{bmatrix}
\nabla \\
\mathbb{I}
\end{bmatrix},$$
where $\mathbb{I}$ is the identity operator going from an `ImageGeometry` to an `ImageGeometry`. Then $F(Ku)$ will form the objective in (tv_denoising). We are left with the function $G$, which is defined as the zero function. In CIL, all of this is:
```python=1
F = BlockFunction(alpha*MixedL21Norm(), 0.5*L2NormSquared(b=g))
K = BlockOperator(GradientOperator(ig), IdentityOperator(ig))
G = ZeroFunction()
```
In case you want to apply TV regularisation for PET/MR/CT, you need to change two things:
a) `IdentityOperator` **to** your PET/MR/CT operator.
b) `g` **to** your PET/MR/CT **AcquisitionData**
Note that in some applications your fidelity term (`L2NormSquared`) also has to change.
More information can be found in the [Week 2 PDHG notebook](https://github.com/TomographicImaging/CIL-Demos/blob/main/training/2021_Fully3D/Week2/03_PDHG.ipynb).
## Synergistic notebook questions
### Q: I have posted a followup question in week 2 for the DIY_OSEM notebook regarding the OSEM function and the thatched artifact.
I just wanted to make sure it doesn't get missed if all questions are supposed to be directed to the current HackMD week.
**A**: I've attempted to answer, [link](https://hackmd.io/N-372kgKS7ypR3KT3_BqDA?view#Question-about-DIY_OSEM-subset-sensitivity).
### Q: Data not downloaded on cloud?
When I try to run the MAPEM_Bowsher nb, I get the error message:
`error: ??? "'ProjData::read_from_file: error opening file /home/jovyan/SIRF-Exercises/data/working_folder/Synergistic/BrainWeb/FDG_sino_noisy.hs' exception caught at line 320 of /opt/SIRF-SuperBuild/sources/SIRF/src/xSTIR/cSTIR/cstir.cpp; the reconstruction engine output may provide more information"`
Could you kindly advise on this? Thanks!
**A** (Kris): did you run the `BrainWeb` notebook from the folder? It generates data so needs to be run first.
**Q**: I did, yes. The files were written to "...SIRF-Exercises/working_dir/Synergistic/BrainWeb"
From the following lines:
```
brainweb_sim_data_path = exercises_data_path('working_folder', 'Synergistic', 'BrainWeb')
print(brainweb_sim_data_path)
/home/jovyan/SIRF-Exercises/data/working_folder/Synergistic/BrainWeb
```
**A** (Matthew): I ended up hardcoding in order to execute the notebook. The VM requires `brainweb_sim_data_path = /home/sirfuser/devel/SIRF-Exercises/data/working_folder/Synergistic/BrainWeb` but the cloud requires `brainweb_sim_data_path = /home/jovyan/SIRF-Exercises/working_dir/Synergistic/BrainWeb`.
**A**: (Kris) It seems you've found another bug! Apologies. Checking the `brainWeb` notebook, it contains
```
cd_to_working_dir('Synergistic', 'BrainWeb')
```
while the `MAPEM_Bowsher` notebook uses
```
cd_to_working_dir('Synergistic', 'MAPEM_Bowsher')
brainweb_sim_data_path = exercises_data_path('working_folder', 'Synergistic', 'BrainWeb')
```
I think we'll need something like
```
cd_to_working_dir('Synergistic', 'MAPEM_Bowsher')
...
brainweb_sim_data_path = os.path.join('..','BrainWeb')
```
as at present we don't have a `working_dir_path` yet (see also https://github.com/SyneRBI/SIRF-Exercises/issues/121).
**A**: (Nicole): I found a solution:
```
wrk_dir = os.getcwd()
brainweb_sim_data_path = os.path.join(os.path.dirname(wrk_dir), 'BrainWeb')
```
(The error comes up in the Dual_PET notebook as well ("brainweb_sim_data_path" is defined in cell 0b))
**A**: (Kris) To confuse matters more, I had a typo above on the `MAPEM_Bowsher` location, corrected now. (sorry).
I can't really understand Nicole's line above. From the notebook, `wrk_dir` would be `working_folder/Synergistic/MAPEM_Bowsher`, which would then lead to `BrainWeb` at yet another level? (This is why I had the `'..'`.)
**Q**: (Nicole) The line "brainweb_sim_data_path = os.path.join('..','BrainWeb')" didn't work for me. I can't reproduce the error anymore so there must have been something else wrong. Sorry!
**A**: (Ash) Nicole's solution should work - `os.path.dirname()` is equivalent to `..`
**A**: (Kris) aha. ok. I've created an [issue for this](https://github.com/SyneRBI/SIRF-Exercises/issues/148). PRs welcome.
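For reference, the `os.path.dirname` vs `'..'` equivalence is easy to check (hypothetical path, same layout as the working folder):

```python
import os

wrk_dir = "/home/jovyan/working_folder/Synergistic/MAPEM_Bowsher"

# dirname drops the final path component, giving the parent directory...
parent = os.path.dirname(wrk_dir)
print(parent)  # /home/jovyan/working_folder/Synergistic

# ...which is where joining '..' ends up too, after normalisation.
via_dots = os.path.normpath(os.path.join(wrk_dir, ".."))
print(parent == via_dots)  # True
```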
### notebook `gradient_descent_mr_pet_ct`
1. Why does the notebook use another package (tqdm) rather than an array?
2. MATLAB has a data type "cell array" that can contain data of different types. Does Python have a corresponding feature?
3. I get an error when running this notebook: `'NoneType' object has no attribute 'as_array'`
**A**
1. [`tqdm`](https://github.com/tqdm/tqdm) is a progress bar for Python loops that works in the terminal. I suppose you refer to these lines:
```python=
for f in tqdm([fname], desc="mMR ground truths", unit="subject"):
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
```
this is functionally but not visually equivalent to
```python=
for f in [fname]:
vol = brainweb.get_mmr_fromfile(f, petNoise=1, t1Noise=0.75, t2Noise=0.75, petSigma=1, t1Sigma=1, t2Sigma=1)
```
2. Python has the `list` type, which has similar characteristics; however, it is not specifically designed for computation and it can contain anything.
3. At which line do you get the error? The error means that you are calling the method `as_array()` on an object which is `None`.
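To illustrate point 2, a Python `list` (hypothetical contents) mixes types freely, much like a MATLAB cell array:

```python
import numpy as np

# A list can hold items of arbitrary, mixed types.
cell = [42, "a string", np.zeros(3), {"key": "value"}]

for item in cell:
    print(type(item).__name__)  # int, str, ndarray, dict
```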
### Q: In dual-PET notebook can we register the mu-maps rather than the reconstructed images?
**A** (Kris): in the simulation, if the mu-maps are in the same place, then indeed you can register those. The SIRF registration tools will work independent of sampling distance etc. Of course, in clinical practice your mu-maps might not be aligned with the emission data.
### Q: Is there a way of performing registration on the measured data? Although I do recognise that the transform is quite inexpensive to apply on the fly within each iteration.
Kris will answer this on Wednesday.
### Q: It seems that in MRI you can arbitrarily choose spatial sampling with gradients and that in-turn specifies the image space coordinates. Can this be somehow leveraged to help with registration between PET and MR?
**A** (David) Yes, especially if you are happy for the MR component of PET/MR to just act as a spatial measuring tool. If you want clinical MR as well as PET, there is less flexibility.
Q cont'd: Why is this the case for clinical MRI?
**A** (Ash): Yes, if I understand what you are suggesting, then this corresponds to the spatial calibration that any PET/MR system will undergo.
Yes, you can choose the sampling of a reconstructed MRI by making sure the gradients are parallel to the PET reconstruction, and you could make the voxel sampling the same. This isn't strictly necessary, as you can quite easily resample from one sampling scheme to another without registration, although matching the sampling does avoid the loss in resolution from resampling if you don't need registration.
As far as I'm aware, no PET/MR scanners perform registration before they use the MRAC for attenuation correction. However, this will only be as accurate as the system calibration, and it assumes there is no motion between the acquisitions, which is why in practice you often use some registration.
Actually, it's also true that in PET your choice of reconstruction sampling for your voxel coordinates is somewhat arbitrary. These are chosen to maximise symmetries, but you technically could reconstruct a PET image on an arbitrary grid, and there are many grids possible that still maximise symmetries (and [some people don't even use Cartesian](https://ieeexplore.ieee.org/document/5874407)). (Note added by Kris: SIRF/STIR currently only allows an arbitrary voxel size in the transaxial planes. The axial spacing is fixed to half the ring distance of the PET scanner.)
**A** (Kris): I'm assuming that David meant that PET voxel sizes are generally too large for clinical MR (at least "in-slice").
A few other things to think about:
- spatial alignment between PET and MR shouldn't be influenced by voxel size (ignoring MR spatial deformation anyway), so I don't think choosing MR and PET voxel sizes the same helps with "registration". It does of course help with "sampling", and hence with use in synergistic reconstruction.
- choosing smaller PET voxel sizes will increase noise-per-voxel. You'd generally have to compensate for that with extra regularisation.
**Q cont'd cont'd** Thank you all, yes this seems more complicated than just setting the readouts and slice selection to directly mimic the PET geometry and acquisition (although calibration seems to somewhat attempt this?). I have a few more questions:
- The axial voxel-size constraint: is the reason for this a physical constraint of the PET scanner? Am I right that this has to do with the distance between rings of detectors (my assumption being that a pair of photons can be detected across rings and not just on the same ring)? Is this related to the "half the ring-distance of the PET scanner"?
- What is meant by MRAC? Is this a high-resolution density scan? I guess the PET detector is not used to acquire this data, so it is not co-registered and requires registration, or is registration just used for motion correction?
- I am a bit confused by the PET reconstruction. I thought that the scanner geometry prescribed the grid for reconstruction? Would we be resampling to another grid?
**Kris** suggests we discuss this a bit more on Wednesday.
**A** (Ash) I didn't attend yesterday so I assume you covered this.
Remember that a PET forward (and backward) projection is just related to the probability that a
PET Axial voxel constraints in STIR and hence SIRF are just to do with how STIR efficiently calculates the intersection volume of a voxel and
### Q: In cil_joint_tv_mr.ipynb, the shift (10 pixel) of T2w image doesn't make clear difference. Is the joint_tv reconstruction not sensitive to small motion or the difference is too small to be visible? Would you briefly talk about it?
**A** (Vaggelis) In the joint TV regularisation using the alternating minimisation strategy, we have 4 parameters to tune: $\alpha_{1}, \alpha_{2}$, $\lambda$ and $\eta$. Some of them are easy to tune, and you can start with the easy choice $\alpha_{1}=\alpha_{2}=1, \lambda=0.5$. In addition, we need to tune the number of inner and outer iterations. Since we are interested in a joint reconstruction, it's better to have fewer inner iterations than outer ones, so we can see how the information from one modality affects the other and vice versa.
**Q cont'd** I used the default values for those parameters. I expected some changes due to the shift. I even tried a shift of 20 pixels. I don't see any degradation caused by the shift, except the one caused by the sensitivity map.
**A**: (Kris) Maybe you can post a code snippet and some images?
**Q cont'd cont'd** I have added one line to shift the T2w image highlighted below.
![](https://i.imgur.com/4e95GCP.png)
Here are the images:
![](https://i.imgur.com/0tqoDG4.png)
### Q: In cil_joint_tv_mr.ipynb, decoupling the joint TV by setting lambda_par=0 or 1 doesn't make any difference to the images, though from the plots it does seem to change the objective values somehow. If this is the case, would you explain a little more about the benefit of joint vs. separate reconstruction?
**A**(Vaggelis): If you use $\lambda=0$ or $\lambda=1$ then the
$$JTV_{\eta, \lambda}(u, v) = \sum \sqrt{ \lambda|\nabla u|^{2} + (1-\lambda)|\nabla v|^{2} + \eta^{2}}$$
becomes either
$$JTV_{\eta, 0}(u, v) = \sum \sqrt{ |\nabla v|^{2} + \eta^{2}}$$
or
$$JTV_{\eta, 1}(u, v) = \sum \sqrt{ |\nabla u|^{2} + \eta^{2}}$$
Now, when you minimise alternatingly, the first minimisation problem is with respect to $u$, so the $JTV$ term is treated as a constant and vanishes in the gradient descent. Similarly for the other minimisation problem. In a sense, you are solving two **distinct** least-squares problems without any synergy involved.
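As an aside (not from the notebook, just an illustrative NumPy sketch), the smoothed joint TV above can be evaluated like this for two 2D images:

```python
import numpy as np

def joint_tv(u, v, lam=0.5, eta=1e-2):
    """Smoothed joint TV: sum over pixels of
    sqrt(lam*|grad u|^2 + (1-lam)*|grad v|^2 + eta^2)."""
    gu = np.stack(np.gradient(u))  # per-axis finite differences of u
    gv = np.stack(np.gradient(v))  # per-axis finite differences of v
    return np.sum(np.sqrt(lam * np.sum(gu**2, axis=0)
                          + (1 - lam) * np.sum(gv**2, axis=0)
                          + eta**2))
```

For `lam=1` the value no longer depends on `v` at all (and vice versa for `lam=0`), which is exactly the decoupling described above.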
**Q cont'd** I want to see the advantage of JTV vs. TV. In this example, the advantage of JTV over TV is not clear. Would you show an example which clearly demonstrates the advantage of JTV?
**A** (Vaggelis): To run TV for both T1/T2 the code is:
```python=1
from cil.optimisation.algorithms import FISTA
from cil.plugins.ccpi_regularisation.functions import FGP_TV
from cil.optimisation.functions import LeastSquares
from cil.utilities.display import show2D

# A1, g1 and A2, g2 are the acquisition models and data defined in the notebook
G = FGP_TV(alpha=0.1, max_iteration=100)
F1 = LeastSquares(A1, g1)
F2 = LeastSquares(A2, g2)

fista1 = FISTA(initial=x0[0], f=F1, g=G, update_objective_interval=10, max_iteration=100)
fista1.run(verbose=1)
# note: second component of the initial guess for the second contrast
fista2 = FISTA(initial=x0[1], f=F2, g=G, update_objective_interval=10, max_iteration=100)
fista2.run(verbose=1)
```
But we have to consider the following:
1) We need to optimise the parameters for both regularisers and compare them based on some criterion. First, we will trust our eyes, but then we can create a loop and compute the PSNR/SSIM since we have the ground truth.
2) Doing gradient descent with a fixed step size is not ideal. It does the job but is slow. That was a last-minute addition to demonstrate a joint reconstruction. We can certainly use other algorithms to do the reconstruction; I was planning to, but in the end I did not have time to finish it before the training.
3) Also, we use a smooth version of the JTV in order to compute the gradient. For the TV reconstructions, we use the FISTA algorithm, so no "smooth TV" is needed there.
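For point 1, a minimal PSNR helper could look like this (my own sketch, assuming the images are NumPy arrays and taking the ground-truth maximum as the peak value):

```python
import numpy as np

def psnr(ground_truth, reconstruction):
    """Peak signal-to-noise ratio in dB, with the ground-truth maximum as peak."""
    mse = np.mean((ground_truth - reconstruction) ** 2)
    return 10 * np.log10(ground_truth.max() ** 2 / mse)
```

Looping this (and an SSIM measure) over a grid of regularisation parameters gives an objective comparison between the TV and JTV reconstructions.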
If you are interested, we can continue our discussion with an **issue** or **PR** in [CIL](https://github.com/TomographicImaging/CIL). Actually, it's one of my future plans to create some routines for synergistic reconstruction, and we always welcome new contributions :smile:.
### Q: In BrainWeb.ipynb, I checked brainweb.utils.LINKS, which contains dozens of subjects. Looking at the Python code in the site_packages folder, it can only download T1, T2 and PET images. BrainWeb also provides a proton density image. If I want to download proton density images, would you explain which kind of values should be used? Or the meaning of the current values for the T1 or T2 images: are these values signal intensities or something else? Also, can we download more subjects by just adding different subject numbers? If there are more details, where can we find that information?
### Q: In Dual_PET.ipynb without motion, the solution initializes the FDG weights with the (uniform and initial) amyloid image and then calculates the first MAPEM iteration on the FDG image. Would it also be acceptable to use uniform weighting to initialize the FDG weights for the first MAPEM update of the FDG image?
i.e.
```python=
BowsherWeights = update_bowsher_weights(myPrior_FDG, FDG_uMap, num_bowsher_neighbours)
uniformWeights = BowsherWeights.copy()
uniformWeights[:, :] = 1
uniformWeights[:, 27//2] = 0
current_image_FDG = MAPEM_iteration(OSEM_reconstructor_FDG, current_image_FDG, uniformWeights, nhoodIndVec_FDG, beta)
for it in trange(1, num_subiters+1):
    BowsherWeights_amyl = update_bowsher_weights(myPrior_amyl, current_image_FDG, num_bowsher_neighbours)
    current_image_amyl = MAPEM_iteration(OSEM_reconstructor_amyl, current_image_amyl, BowsherWeights_amyl, nhoodIndVec_amyl, beta)
    BowsherWeights_FDG = update_bowsher_weights(myPrior_FDG, current_image_amyl, num_bowsher_neighbours)
    # ensure the extra iteration is not applied to the FDG image
    if it < num_subiters:
        current_image_FDG = MAPEM_iteration(OSEM_reconstructor_FDG, current_image_FDG, BowsherWeights_FDG, nhoodIndVec_FDG, beta)
```
I understand this might not be the most efficient approach, but my question mainly concerns the use of the uniform weighting when performing the first update on our FDG image rather than the weights defined from the initial amyloid image.
**A** (Kris) This is a good question, and I don't have a good answer, i.e. I don't know which initialisation will give you the best results.
Most alternating schemes are sensitive to initialisation and number of updates etc. (Example: you could run a lot of OSEM iterations on the other tracer first, or just a few, or none at all. Clearly, these choices are going to influence what the FDG updates are going to do). This is therefore part of the tuning that you have to do for a particular application.
This is the reason that the "joint optimisation" scheme is preferred by some, as I discussed on Monday. If you optimise a common objective function with fixed parameters (and that objective function doesn't have local minima), then initialisation doesn't matter as long as you run enough iterations. You could verify that in the JointTV example. There the objective function is convex, and whatever (small enough) step-sizes and number of inner iterations you use, you should be able to get to the same optimum.
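A tiny toy example (purely illustrative, not from the notebooks) of why initialisation does not matter for a convex objective: gradient descent on a small least-squares problem reaches the same minimiser from very different starting points.

```python
import numpy as np

# Toy convex objective 0.5*||Ax - b||^2 with a fixed, small enough step size.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])

def gradient_descent(x, n_iter=500, tau=0.1):
    """Plain gradient descent on 0.5*||Ax - b||^2 starting from x."""
    for _ in range(n_iter):
        x = x - tau * A.T @ (A @ x - b)
    return x
```

Starting from the zero image or from an arbitrary other point, both runs converge to the unique minimiser, which is the behaviour you should also observe in the JointTV example.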
**Followup** Thank you for the explanation. I will have to go back and review Monday's lecture. In this case, I tried both methods of weight initialization and found minimal differences between the results while keeping all other parameters constant. The greatest difference is observed in the FDG image with differences around 10-15%, likely due to the influence on the FDG updates, as you mentioned. This in turn does not have a great effect on the resulting amyloid image which is around 1% different after updating the first FDG image with the two different methods.
![](https://i.imgur.com/bXZIc1W.png)
### Q: I believe there is a bug in the Dual_PET solution:
```python
nhoodIndVec_fdg=compute_nhoodIndVec(fdg_init_image,weights_fdg)
nhoodIndVec_amyl=compute_nhoodIndVec(amyl_init_image,weights_amyl)
```
should be
```python
nhoodIndVec_fdg=compute_nhoodIndVec(fdg_init_image.shape,weights_fdg.shape)
nhoodIndVec_amyl=compute_nhoodIndVec(amyl_init_image.shape,weights_amyl.shape)
```
**A**: (Kris) This looks quite likely, and it is probably due to a last-minute update from me. Apologies. Would you mind creating an Issue on GitHub? Or even a PR...
**A**: (Matthew) No problem. I have submitted a PR.
### Q: In the Dual_PET example, we use PET images from two different tracers that appear to be mixed within the brain to guide the synergistic reconstruction. Can this approach be used if the tracer distribution is completely separated/not mixed within a given ROI or would artifacts and undesired effects become apparent?
**A** (Kris) If the 2 images are not at all related, then it would be detrimental to use synergistic reconstruction. Benefits and dangers are going to be application specific (as well as method specific of course). Lots of research needed on this topic!
### Q: In the reconstruct_measured_data notebook, there is a bug when adding detector sensitivity modelling. It fails to read the geo factors from the file. Here is the screenshot.
![](https://i.imgur.com/kMOGD6F.png)
**A**: (Kris) Did you follow the instructions to download data? https://hackmd.io/@SIRF-CIL-Fully3D/r1AxJKNou#Jupyterhub-and-SIRF-Exercises
However, I see a `$` in front of the filename. This might have been resolved by an update in the SIRF-Exercises. You could hardwire the filename for now.
### Q: Is it possible to upload all the notebooks with outputs to SIRF-Exercises on GitHub? Some notebooks took a very long time to run on a laptop, and some can't finish. We can run most of the notebooks on the cloud account provided, but not all of them, e.g. the reconstruct_measured_data notebook. With the outputs, we could easily review the notebooks after this training school.
**A**: (Kris) This is a good idea. We will think where the best place to put this is.
## Questions about Deep Learning notebooks
### Can you please indicate the notebooks that use/demonstrate Deep Learning? Thank you
**A** (Georg) The links to the notebooks are above in the [Deep Learning](https://hackmd.io/rOcPoB2VR6u1fayJBSB19w?both#Deep-Learning1) section. Moreover, they are also available on the STFC cloud under ```pyapetnet/notebooks```.
### I get an OOM (out of memory error) when fitting (training) deep learning model with tensorflow on a GPU.
**A** (Georg): Memory on GPUs is limited. On the GPUs in the STFC cloud, you have ca. 8 GB available. When you get the OOM error, you have to either reduce the training batch size or the training patch size, so that less memory is needed when feeding a batch of data through the network. A batch size of 10 and a patch size of (10,10,10) when training a model with 6 hidden layers and 30 features should work on a GPU with 8 GB memory.
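As a back-of-the-envelope illustration (my own rough sketch, not TensorFlow's actual memory accounting), the activation memory per batch scales linearly with batch size, patch volume, and the number of features and layers:

```python
import math

def activation_memory_gb(batch_size, patch_shape, n_features, n_layers,
                         bytes_per_value=4):
    """Very rough estimate of float32 activation memory for a CNN whose
    layers all preserve the spatial shape (ignores weights and overheads)."""
    voxels = math.prod(patch_shape)
    return batch_size * voxels * n_features * n_layers * bytes_per_value / 1e9
```

Halving the batch size or shrinking the patch shape reduces this estimate linearly, which is why those are the first knobs to turn when you hit an OOM error.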
### Question about notebook "01_tf_data"
The lines where the script should visualize the first data set actually output 9 white squares. It also prints the following error: "TypeError: Invalid shape (196, 176, 0) for image data"
**A** (Georg) Can you post the output of:
```python
print(x.shape)
print(y.shape)
print(x[...,0].squeeze(), x[...,1].squeeze(), y[...,0].squeeze())
print(subject_paths)
```
Here you are:
```
(0, 176, 196, 178, 2)
(0, 176, 196, 178, 1)
```
**A** (Georg) Where are you running the notebook? If you run it locally, have you downloaded and unzipped the data? If you run it on the STFC cloud,
have you adapted the ```data_dir``` to ```pathlib.Path('/mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/')``` in **all** the notebooks? To me it seems that the ```subject_paths``` list is empty and the script is not loading any data. The first dimension of the arrays ```x``` and ```y``` is ```3*len(subject_paths)```.
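To check this quickly, here is a hypothetical helper mirroring what the notebook's data loading presumably does when collecting subject folders (the function name is my own):

```python
import pathlib

def list_subjects(data_dir):
    """Collect the BrainWeb subject folders in sorted order.
    An empty result means data_dir is wrong or the data is missing."""
    return sorted(p for p in pathlib.Path(data_dir).glob('subject*')
                  if p.is_dir())
```

If this returns an empty list for your `data_dir`, the arrays `x` and `y` will have a leading dimension of 0, which matches the error above.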
#### Follow-up
I am running on the cloud as recommended.
I have edited the notebook_0.
I changed the content of data_path. I get the following printout:
```
/mnt/materials/SIRF/Fully3D/DL/brainweb_petmr
01 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject04
02 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject05
03 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject06
04 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject18
05 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject20
06 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject38
07 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject41
08 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject42
09 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject43
10 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject44
11 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject45
12 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject46
13 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject47
14 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject48
15 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject49
16 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject50
17 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject51
18 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject52
19 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject53
20 /mnt/materials/SIRF/Fully3D/DL/brainweb_petmr/subject54
```
However, when I run "01_tf_data" I still get 9 empty figures followed by the error message "Invalid shape (196, 176, 0) for image data"
**A** (Georg) There is a ```data_dir``` variable in **all** the deep learning notebooks. You need to adapt it in all of them (the notebooks are independent).
### Notebook "02_tf_models"
**Q** I am at a loss about the following statement; I do not understand its meaning:
"The batch and spatial dimensions of all layers are "None", since all layers preserve those dimensions. This in turn means the model can be applied to all batch sizes and spatial dimensions"
**A** By using ```None``` for those dimensions we do not fix them when defining the model. This is possible since, in each layer, the spatial shape of the layer output is the same as the spatial shape of the input. In other model architectures (e.g. ones that use several downsampling layers), people usually fix the spatial dimensions of the input tensor, since otherwise you can run into trouble with the downsampling (e.g. if you downsample by a factor of 2 and you have odd spatial dimensions).
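A trivial illustration of the odd-dimension problem (my own example): downsampling by 2 and then upsampling by 2 only round-trips for even sizes.

```python
def down_up(n):
    """Spatial size after downsampling by a factor of 2 (floor division)
    followed by upsampling by a factor of 2."""
    return (n // 2) * 2
```

Since `down_up(7)` gives 6, the output shape no longer matches the input; in architectures with skip connections (e.g. a U-Net) the tensors would then fail to align, which is why such models fix the input shape.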
**Q** Can you please explain the layers printout?
**A** (Georg) I plan to do that in the session on Wednesday
**Q** We are apparently asked to install an executable following the instructions on the following website: https://graphviz.gitlab.io/download/
This website lists instructions that are platform dependent. Which instructions should be applied by those who use the STFC cloud?
**A** (Georg)
The installation instruction for all major operating systems are [here](https://graphviz.org/download/).
Note 1: If you run everything on the STFC cloud after **restarting your GPU server** (as mentioned [above](https://hackmd.io/rOcPoB2VR6u1fayJBSB19w?both#Deep-Learning1)), the graphviz binaries and python wrapper packages will be there.
Note 2: The graphviz executables and the Python packages ```graphviz``` and ```pydot``` are only needed to visualize the model: they generate a nice SVG of the architecture. That means that if you cannot get graphviz running locally, you can simply skip those cells.
### How to install pyapetnet
I am looking for instructions to install pyapetnet on Mac. Where (which directory)? Inside the docker/devel folder or outside the docker folder? I was told I will not be able to run it because my Mac has an Intel GPU. Shall I give up?
**A** `pyapetnet` has some dependencies to satisfy.
The commands below, issued from a terminal (in the docker container), should get you started. **Notice that this will download a couple of GB of stuff**, not including data.
```bash=
# quote version specifiers so the shell does not treat '>' as a redirection
sudo /opt/pyvenv/bin/conda install -c conda-forge cudatoolkit=11.2 \
    cudnn=8.1 \
    "ipympl>=0.7" \
    pydot \
    graphviz \
    "pymirc>=0.22" \
    "pydicom>=2.0" \
    "nibabel>=3.0" \
    "matplotlib>=3.1"
sudo apt-get update -qq
sudo apt-get install -yq --no-install-recommends graphviz
sudo apt-get clean
sudo python -m pip install "pyapetnet>=1.1"
```
**Q** (Kris): why does `graphviz` occur twice in the above? Does this not create scope for conflicts?
**A** (Georg): To visualize models with Keras, we need the pygraphviz package. This package is just a wrapper to the graphviz binaries, which do not get installed with the Python package.
### Question about TensorFlow and Keras
The above instructions will install pyapetnet, but I suspect I have to install TensorFlow and Keras, as pyapetnet uses them.
Can you suggest an installer (Docker, HomeBrew, Conda, ...)?
I installed Python 3.8 and made it the default Python version on my Mac. My question for you is:
"Will TensorFlow, Keras, pyapetnet and all your software run under the default Python version, or still call the Python 2.7 that I did not have the courage to uninstall? I read that uninstalling Python 2.7 will disrupt the computer configuration."
**A** (Georg) Detailed installation instructions can be found [here](https://github.com/gschramm/pyapetnet#installation). Note that pyapetnet (and tensorflow) are independent of SIRF, so you can, but don't need to, install them in the same docker environment. Of course, if you want to combine them in a project, you have to have them in the same "environment".
My personally preferred way:
- create a separate virtual Python environment to keep things separated (e.g. different Python versions). You can do this e.g. by using ```conda``` (see link above)
- in that virtual environment, you can install pyapetnet **and all its dependencies** (tensorflow, ...) from PyPI by simply running ```pip install pyapetnet```. All dependencies of pyapetnet are listed in its ```setup.py```, such that ```pip``` will install them if they are not there.
- if you have an Nvidia GPU and you want tensorflow to use it, you have to make sure that the correct versions of the CUDA toolkit and cuDNN are installed on your system as well. I will discuss this in the Friday session. An overview of the versions is [here](https://www.tensorflow.org/install/source#gpu)
**A** (Georg) Tensorflow also runs on machines without Nvidia GPUs; there, all the computations are done on the CPU. Using the CPU for CNNs is ca. 50 times slower than running on an Nvidia GPU (depending on your CPU). That means that running serious trainings of deep networks with e.g. 1000 epochs gets **very** slow (e.g. 25 days vs half a day). However, when doing inference (applying a trained model to a single data set), using the CPU only is fine (but still slower than an Nvidia GPU).
### Training on small patches and predicting bigger volumes
**Q** Yesterday I understood (maybe misunderstood) that the entire images are subdivided into a number of tiles. You pass one set of tiles (one tile from each image) to the CNN at a time.
The CNN extracts relevant features from the image tiles rather than from whole images.
So the CNN can only predict a tile of the target image.
I cannot see how the CNN can splice together the features extracted from image subsets (tiles) to form features in the whole images. How can the CNN learn the topology (spatial position)?
(Follow-up)
I discussed this doubt with another 3 attendees on Gather.Town. They think that by training a CNN on a sufficient number of patches that cover the whole real image, we generate a model capable of predicting a whole image, not just a subset of it. To me that sounds like magic, like a wizard's deed.
**A**: (Kris) This was discussed on Wednesday. Clearly, with the current architecture, the CNN only learns about things the size of the patches. It will also assume that all patches are similar (i.e. it assumes translation invariance). If it needs to "see" larger patches, the architecture needs to be modified (either allowing physically larger patches at coarser sampling to avoid running out of memory, or passing "patch location" information to the network). Designing CNNs for large 3D volumes is still an area of research.
Note that as Georg said, if translation invariance is acceptable, the CNN can be used like this for training. For prediction, you need to then "combine" patches somehow.
(Georg) As Kris said, the key is "locality" and "translation invariance". And there is definitely no magic involved. All important features that the network needs to learn to perform anatomy-guided deblurring and denoising of a region of the input OSEM image are contained in a local neighborhood of that region (e.g. due to spill-over and spill-in between neighboring regions). Our hope is that the network learns how to deblur and denoise the input OSEM by using features from the OSEM (e.g. spatial resolution) and the MR input (location of edges). In the end, the relevant features that we need to solve our problem are local and invariant under translation (the OSEM PSF in brain imaging and edge detection in MR images).
### Q: Number of parameters used for 3D convolution in example code
**Q** In the model summary given in `02_tf_models`, a 3D convolution in the hidden layers uses 24330 parameters. Shouldn't there be 27 (one 3D kernel) x 30 (from all previous features) x 30 (across all the new features) = 24300 parameters? Where are the 30 extra parameters coming from? Some bias or something?
**A** The input to the layer is a tensor of shape (30,n1,n2,n3), so every "3D" kernel has the shape 30x3x3x3 and we want 30 kernels in the layer. So as you say that gives only 30x30x3x3x3 = 24300 parameters. The missing 30 parameters are indeed the "bias" terms that are applied and learned after every convolution. Slightly abusing notation, we can write the combination of a single convolution and a non-linear function (e.g. ReLU, PreLU, sigmoid) as
$$y = f\left(\left(\sum_i w_i x_i\right) + b\right)$$
where:
- $\sum_i w_i x_i$ is the convolution of input vector $x$ with kernel $w$ where the dimension of $w$ is typically much smaller than the one of $x$, such that the sum over $i$ only runs over a few elements
- $b$ is an additive and trainable bias term
- $f$ is the non-linear function
In general, the number of parameters of a hidden 3D convolutional layer is given by
$$\text{nfeat}_\text{out} (\text{nfeat}_\text{in}\,n_1\,n_2\,n_3 + 1)$$
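The count is easy to check with a one-line helper (illustrative, matching the formula above):

```python
def conv3d_n_params(nfeat_in, nfeat_out, kernel_shape=(3, 3, 3)):
    """Trainable parameters of a hidden 3D convolutional layer:
    one (nfeat_in x n1 x n2 x n3) kernel plus one bias per output feature."""
    n1, n2, n3 = kernel_shape
    return nfeat_out * (nfeat_in * n1 * n2 * n3 + 1)
```

With 30 input and 30 output features and a 3x3x3 kernel this reproduces the 24330 parameters reported in the model summary.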
## Tech support questions
Please ask any remaining questions here on cloud, VM, docker or self-build. Please provide as much information as you can.
### Question about Keras
I am reading the Keras CT example and would like to try it out myself. If I install Keras, shall I expect a sort of MATLAB workspace where I can execute groups of instructions, place breakpoints, and check the contents of variables? Or shall I install Visual Studio and Python and then Keras? I am at a loss about where to start.
Thank you
**A**: (Kris) Keras is "just" a bit of extra Python to make TensorFlow easier to use. It is not an IDE so doesn't give you extra ways to check your code etc.
(Georg) We will discuss how to install tensorflow (which now includes Keras) in detail on Friday.
### Question about IDE or the like
I have discussed my problem with the people I found in Gather.Town today. I would like to find a development environment that can handle all the packages. I installed Docker, which gave me access to SIRF. I was told to install Conda to be able to install TensorFlow. Can Docker handle TensorFlow? Someone from the Apple Developers Community suggested that I install HomeBrew and then install TensorFlow.
Can Visual Studio (which I need for C#) handle TensorFlow? I feel overwhelmed and confused by all these installations.
**A**: (Kris) we certainly agree that this is all confusing. A few things that might help
- Docker provides an "isolated" container where we install a Linux environment, python with various packages, jupyter server etc. Inside that container, you can do `pip install` (or `conda install`) to install extra Python packages that will then be used by your Python (running in the jupyter notebook). You can get at a "terminal" inside your Docker container via the jupyter interface, just like we do on the cloud system.
I do not know the exact commands to install Tensorflow inside the docker container. I will leave that to Edo/Georg.
(Georg): Inside the docker image that runs on the cloud, tensorflow is indeed simply installed via ```pip install tensorflow```
- HomeBrew is a package manager (like conda and pip) which is popular on MacOS. It can be used to install TensorFlow, but pyapetnet does not have a HomeBrew package. I again leave it to others to confirm (or guess) if a HomeBrew-Tensorflow will work with a `pip` pyapetnet. It is probably best to stick to one package manager though (if you can).
Note that CIL cannot be installed via HomeBrew nor pip, and SIRF cannot be installed by any package manager at the moment.
- Visual Studio is an IDE and is therefore a bit similar to jupyter (i.e. it allows you to edit your Python code, run it, display results etc). VS has no interaction (as far as I know) with TensorFlow etc.; it only understands Python (and C# and other languages). Therefore, if you can get TensorFlow/pyapetnet and other packages installed in your Python version, VS should be able to use them. I have personally not used Visual Studio to open jupyter notebooks, but as long as you have its Python extension installed, I believe that should work.
We are of course aware that this is complicated, but it is quite hard to come up with a cross-platform solution that "just works". We are sometimes just as bewildered by the many options as you are...
In this school, we have therefore chosen to use the cloud/docker/VM to avoid having to think about the many different options for installation on Linux/Mac/Windows.
## Install tensorflow locally with GPU support
Before going further, read tensorflow's official install manual
https://www.tensorflow.org/install
You have two options that run across platforms:
### Installation from pypi using pip
Tensorflow can simply be installed via pip, even on machines without a GPU:
```bash
pip install tensorflow
```
The "complicated" step to make tensorflow use your Nvidia GPU is the setup / installation of the correct CUDA libraries (CUDA toolkit and cuDNN).
**Different versions of tensorflow require different versions of the CUDA toolkit and cuDNN.** You have to make sure that the matching versions are installed on your machine. The correct versions are mentioned [here](https://www.tensorflow.org/install/source#gpu).
The CUDA toolkit is available [here](https://developer.nvidia.com/cuda-toolkit-archive) and cuDNN is available [here](https://developer.nvidia.com/cudnn).
Moreover, the cudatoolkit and cudnn can also be installed with conda from the conda-forge channel.
### Installation using tensorflow's provided docker image
```bash
docker pull tensorflow/tensorflow:latest # Download latest stable image
docker run -it -p 8888:8888 tensorflow/tensorflow:latest-jupyter # Start Jupyter server
```
This has the advantage that all the CUDA libraries are correctly installed in the docker container. The disadvantage, obviously, is that you get another docker image.
### Things you should avoid
- Installing tensorflow from any other source (e.g. homebrew or conda-forge). This can work, but you usually don't get proper GPU support.
### Q. I cannot run any cell today. When a cell has completed running, a number is assigned to the cell. Today I keep seeing an "*" in place of the cell number. How come?
Thank you
**A**: The number appears when a cell has finished running. The "*" usually means that the cell is still running. It could be a cell that is computationally expensive, but if it isn't finishing, try restarting the kernel.
