The April 2024 CIL online training will take place on Monday 29th and Tuesday 30th April from 1-5pm UK time.
This document is for participants to find information about the course, ask questions and share answers with each other.
CIL on GitHub incl. installation instructions: https://github.com/tomographicImaging/cil
Documentation: https://tomographicimaging.github.io/CIL/v23.1.0/index.html
CIL User Showcase notebook collection: https://github.com/TomographicImaging/CIL-User-Showcase/
Join Discord CIL support platform: https://discord.com/invite/9NTWu9MEGq
Publications on CIL: https://doi.org/10.1098/rsta.2020.0192 and
https://doi.org/10.1098/rsta.2020.0193
Recent paper from the CIL team on a directional reconstruction method (using the PDHG optimization algorithm) for limited-angle CT, which won third prize in the Helsinki Tomography Challenge 2022: https://www.aimsciences.org/article/doi/10.3934/ammc.2023011
Slides from today: Emailed to participants.
Day 1 (Monday 29 April):

Time | Topic | Presenter |
---|---|---|
1:00 - 2:30 | Geometry, data reading and FBP/FDK reconstruction (CIL-Demos/examples/1_Introduction/01_intro_walnut_conebeam.ipynb) | Jakob Sauer Jørgensen and Franck Vidal |
2:30 - 2:45 | Break | |
2:45 - 3:45 | Pre-processing (CIL-Demos/demos/1_Introduction/02_intro_sandstone_parallel_roi.ipynb) | Jakob Sauer Jørgensen and Franck Vidal |
3:45 - 4:00 | Break | |
4:00 - 4:50 | Time to explore and discuss: CIL-Demos/demos/1_introduction/03_preprocessing.ipynb, CIL-Demos/demos/1_introduction/exercises/02_preprocessing_seeds_conebeam.ipynb, CIL-Demos/demos/1_introduction/exercises/03_where_is_my_reader.ipynb | Jakob Sauer Jørgensen and Franck Vidal |
4:50 - 5:00 | Wrap up and where to go from here | Jakob Sauer Jørgensen and Franck Vidal |
Day 2 (Tuesday 30 April):

Time | Topic | Presenter |
---|---|---|
1:00 - 2:15 | Introduction to iterative reconstruction (CIL-Demos/demos/1_Introduction/04_FBP_CGLS_SIRT.ipynb) | Edoardo Pasca and Margaret Duff |
2:15 - 2:30 | Break | |
2:30 - 3:45 | Regularised iterative reconstruction (CIL-Demos/demos/2_Iterative/01_optimisation_gd_fista.ipynb) | Edoardo Pasca and Margaret Duff |
3:45 - 4:00 | Break | |
4:00 - 4:45 | Time to explore and discuss: CIL-Demos/demos/1_Introduction/05_usb_limited_angle_fbp_sirt.ipynb, CIL-Demos/demos/2_Iterative/05_Laminography_with_TV.ipynb, CIL-Demos/demos/3_Multichannel/03_Hyperspectral_reconstruction.ipynb | Edoardo Pasca and Margaret Duff |
4:45 - 5:00 | Next steps and questions | Edoardo Pasca and Margaret Duff |
In your breakout rooms, please:
- Introduce yourselves and your background
- Explain what you want to get out of the training
- Tell the group about the strangest dataset you have ever come across
- Trainers will be in the main room for support
1. What is the strangest dataset you have ever come across? Please write your answer below:
2. What are you hoping to get out of this course on CIL? Write your answer below:
[]
[]
Can you guess what these objects are?
Don't write your answers here, give other people a chance to guess!
What questions do you have?
The objective function is the quantity that you want to minimise. It usually contains a data-fitting term, which measures how well your solution matches the measured data. It sometimes also contains a regularisation term, which can provide extra prior information.
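As a toy illustration in plain Python (this is not CIL's API; the tiny operator and penalty here are made up for the example), an objective with a least-squares data-fitting term and a TV-style regularisation term can be sketched as:

```python
# Toy sketch: objective(u) = ||A u - b||_2^2 + alpha * R(u).
# A is a small stand-in "forward operator" matrix, b the measured data.

def matvec(A, u):
    return [sum(a * x for a, x in zip(row, u)) for row in A]

def data_fit(A, u, b):
    # Least-squares term: how well u explains the measured data b.
    return sum((r - d) ** 2 for r, d in zip(matvec(A, u), b))

def tv_like(u):
    # A 1D total-variation-style penalty: sum of |u[i+1] - u[i]|.
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def objective(A, u, b, alpha):
    # Full objective: data fitting plus alpha-weighted regularisation.
    return data_fit(A, u, b) + alpha * tv_like(u)

A = [[1.0, 0.0], [0.0, 1.0]]   # identity "scanner" for the toy example
b = [1.0, 2.0]
print(objective(A, [1.0, 2.0], b, alpha=0.1))  # data fit is 0, only the TV term remains
```

The parameter alpha trades off fidelity to the data against the prior information encoded in the regulariser.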
That's a good question. If the operator is linear (like a tomographic operator) and the objective function is convex, then any local minimum must also be a global minimum. We assume we are at a minimum once our optimisation algorithm has converged, i.e. when very little changes from one iteration to the next.
In the case of a non-linear operator or a non-convex objective function, local minima are possible and optimisation is difficult. We could try different initialisations, or algorithms with momentum, to avoid getting stuck in local minima.
step_size=None in the initialization of the gradient descent?
Excellent question. CIL's gradient descent automatically performs a backtracking line search to find a suitable step size if one is not provided; this guarantees convergence. If instead you pass a fixed step size, convergence is not always guaranteed: for example, if you pick a step size that is too large you can keep jumping over the minimum.
A step size of 1/(2 × the Lipschitz constant of the gradient of the objective function) is usually a good choice if you want a constant step size.
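A minimal sketch of the idea in plain Python (not CIL's implementation): gradient descent on a 1D quadratic, falling back to an Armijo backtracking line search when no step size is supplied. For f(x) = (x − 3)², the gradient's Lipschitz constant is 2, so the conservative constant step 1/(2·2) = 0.25 also converges.

```python
# Sketch: gradient descent with optional backtracking line search.
def f(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def gradient_descent(x, step_size=None, iters=100):
    for _ in range(iters):
        g = grad(x)
        if step_size is None:
            # Backtracking: shrink t until the Armijo sufficient-decrease
            # condition holds, so every step is guaranteed to reduce f.
            t = 1.0
            while f(x - t * g) > f(x) - 0.5 * t * g * g:
                t *= 0.5
        else:
            # Fixed step: if chosen too large, the iterates can jump back
            # and forth over the minimum and never settle.
            t = step_size
        x = x - t * g
    return x

print(gradient_descent(0.0))                  # backtracking: converges to ~3.0
print(gradient_descent(0.0, step_size=0.25))  # 1/(2L) constant step: also ~3.0
```

With step_size=1.0 on this same function, the iterates oscillate between 0 and 6 forever, which is exactly the "jumping over the minimum" failure mode described above.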
An interesting question. With noise in the data, ||Au - b||_2^2 is not zero even when u is the ground truth, so the algorithms may move away from the initial ground-truth point. CGLS generally converges faster than SIRT, so the results from SIRT may look better simply because SIRT has not yet converged. I wonder also if you are running SIRT with a non-negativity constraint, which would also improve your SIRT solution.
Yes! Take a look at our new callbacks https://tomographicimaging.github.io/CIL/nightly/optimisation/?highlight=callbacks#utilities where you can pass a custom stopping criterion
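A sketch of the general callback pattern (the toy solver and its method names here are made up for illustration; check the linked docs for CIL's exact API): the solver calls each callback once per iteration, and a callback can raise StopIteration to implement a custom stopping criterion.

```python
# Pattern sketch: a callback that stops iteration when the objective
# value changes by less than a tolerance between iterations.
class EarlyStopping:
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.previous = None

    def __call__(self, algorithm):
        current = algorithm.objective_value()
        if self.previous is not None and abs(self.previous - current) < self.tolerance:
            raise StopIteration  # custom stopping criterion met
        self.previous = current

class ToySolver:
    # Stand-in for an iterative algorithm: halves the objective each iteration.
    def __init__(self):
        self.value = 1.0
        self.iterations = 0

    def objective_value(self):
        return self.value

    def run(self, max_iterations, callbacks=()):
        for _ in range(max_iterations):
            self.value *= 0.5
            self.iterations += 1
            try:
                for cb in callbacks:
                    cb(self)   # each callback sees the solver state
            except StopIteration:
                break

solver = ToySolver()
solver.run(1000, callbacks=[EarlyStopping(tolerance=1e-6)])
print(solver.iterations)  # stops long before the 1000-iteration budget
```

The same object-per-iteration hook also covers logging, plotting, or checkpointing without touching the algorithm itself.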
We suggest using the appropriate methods to correct for this type of artifact: the centre-of-rotation processor and ring remover in CIL, or methods found in other packages such as tomopy or algotom.
After appropriate pre-processing has been done, iterative methods may be used to further reduce the artifacts.
If there are similar characteristics between the datasets then you can probably use the same parameters. We don't have a mechanism to batch process in CIL at the moment but you can use CIL if you parallelise the code yourself.
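A minimal stdlib-only sketch of the do-it-yourself batching; reconstruct_one is a hypothetical stand-in for your own load/preprocess/reconstruct pipeline, and for CPU-bound reconstruction you would typically swap the thread pool for a process pool.

```python
# Sketch of DIY batch processing: map a per-dataset pipeline over many scans.
from concurrent.futures import ThreadPoolExecutor

def reconstruct_one(path):
    # Placeholder: load the dataset at `path`, apply the shared
    # pre-processing parameters, reconstruct, and save the result.
    # Here we just echo the path to keep the sketch runnable.
    return f"reconstructed:{path}"

paths = [f"scan_{i:03d}" for i in range(4)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(reconstruct_one, paths))
print(results)  # one result per dataset, in the same order as `paths`
```

Because the datasets are independent, this pattern scales trivially: each worker runs the same parameters on its own scan.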
We are currently working with Diamond, DIAD and I12 beamlines and with the ID15 beamline at the ESRF. We are happy to discuss further and support you in this work.
Swarm-type algorithms are useful when the objective function is not convex. For our XCT problems the objective functions are convex, so we do not provide swarm algorithms.
This depends on the application. As a rule of thumb, start with analytical methods, and only investigate iterative methods if the analytical results are unsatisfactory.
Unfortunately, it is hard to set limits a priori. For a least-squares data-fitting term and TV regularisation, I have observed that a ballpark figure for the regularisation parameter is 10% of the ratio of the norm of the projection operator to the norm of the GradientOperator.
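A plain-Python sketch of this rule of thumb (alpha ≈ 0.1 × ‖A‖ / ‖D‖), with tiny stand-in matrices for the projection and gradient operators and a power iteration to estimate the operator norms; in CIL itself you would typically use the operators' own norm() method.

```python
# Estimate largest singular values by power iteration on M^T M, then
# apply the heuristic alpha = 0.1 * ||A|| / ||D||.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def operator_norm(M, iters=200):
    # Start from a vector with unequal entries so it is unlikely to lie
    # in the nullspace of a difference operator.
    v = [float(i + 1) for i in range(len(M[0]))]
    MT = transpose(M)
    for _ in range(iters):
        w = matvec(MT, matvec(M, v))
        scale = max(abs(x) for x in w)
        v = [x / scale for x in w]
    # Rayleigh-quotient-style estimate of ||M||_2.
    w = matvec(M, v)
    return (sum(x * x for x in w) / sum(x * x for x in v)) ** 0.5

A = [[2.0, 0.0], [0.0, 1.0]]   # stand-in projection operator, ||A|| = 2
D = [[-1.0, 1.0]]              # 1D finite-difference (gradient) operator
alpha = 0.1 * operator_norm(A) / operator_norm(D)
print(alpha)
```

Here ‖A‖ = 2 and ‖D‖ = √2, so the heuristic gives alpha ≈ 0.141; on real data this is only a starting point to refine by inspecting reconstructions.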
PDHG is a useful algorithm for optimisation objectives where both the data discrepancy and the regularisation term are not differentiable. You can also build up quite complex objectives, such as seen in this recent CIL paper: https://www.aimsciences.org/article/doi/10.3934/ammc.2023011/. In the case of just TV reconstruction of a least squares problem, you could use either FISTA or PDHG as optimisation methods with similar results.
Make sure you run the algorithm for the right number of iterations with the cell myFISTATV.run(200, verbose=2); if alpha is set to 0.02 the test should pass.
Your reconstruction should look like this:
If it looks right just move on to the next cell and don't worry about the check.
A typical laminography cone beam configuration might look like this:
If your object has very different dimensions you might have something like this:
At low tilt angles you will still get a range of different path lengths; as you tilt more, they will approach your z-dimension thickness, but you will also lose information over a wider range of angles. It is therefore worth simulating your sample and seeing what works, balancing path lengths (and contrast range) against missing data.
Absolutely. As the notebook progresses, more complex regularisation will be added, ending up with TV regularisation in space and TGV regularisation along the channel dimension.
Thanks for your question! Later today you can have a look at the "Where's my reader" notebook which discusses loading your own data and sorting out your geometries. Then tomorrow we will look at iterative algorithms like CGLS.
Thanks, we are currently actively working on implementing a phase retrieval method - not currently for edge-illumination methods, but we would be interested to discuss!
Thanks, we are currently actively working on implementing a phase retrieval method.
There is generally a trade off between more blurring and more noise reduction. It is often trial and error for each dataset based on what you need the data for.
You can have a go with the walnut notebook, here in 2D for ease of computation.
I produced the following plot:
islicer(recon, direction='vertical', minmax=(-0.01, 0.06))
minmax sets the initial range of the colorbar; you can check any other parameters in our documentation https://tomographicimaging.github.io/CIL/nightly/utilities/#islicer-interactive-display-of-2d-slices. Unfortunately, this is not consistent with show2D, where the same parameter is called fix_range.
Unfortunately, each manufacturer chooses their own data format. We can work with you to create the appropriate reader for your machine. What needs to be done is: 1) find the geometry information and 2) find the projection data. Later this afternoon you will have the option to look at the "Where's my reader" notebook, which covers setting up geometries for custom data. Similarly, the pre-processing sandstone notebook creates a geometry for a dataset stored as a .mat file.
Thanks, this will depend on the actual scanner. We mainly support NIKON and ZEISS: NIKON stores normalised data, ZEISS stores the flat fields and our reader will apply the normalisation. The other readers we have are NEXUS data reader, which is a reader for data stored in HDF5. Raw and TIFF readers do not have any information about flats and darks. Check here for a discussion on how to store this info
We do not have any way to do this natively, although I can think of methods to achieve it. We've had examples of creating a mask between the source and sample in an algorithm by implementing a MaskOperator (https://tomographicimaging.github.io/CIL/nightly/optimisation/?highlight=maskoperator#cil.optimisation.operators.MaskOperator)
Not at the moment - do you have an examples of other software or other places where it is implemented? Algotom has a function for specific types of stitching (https://algotom.github.io/toc/section1/section1_4.html#sinogram-stitching-for-a-half-acquisition-scan) - is this the kind of thing you mean? We have a description of a similar method in our user showcase notebooks (https://github.com/TomographicImaging/CIL-User-Showcase/blob/main/009_offset_CT_apple/Apple_offset_reconstruction.ipynb)
Not out of the box in CIL but this is a very interesting question. We would be interested in talking more about this
Non standard trajectories that can be described with individual positions of source, panel and object are currently being worked on.
They have different licenses: ASTRA is GPL and TIGRE is BSD, so this might be one consideration for you. TIGRE has papers that reference the projectors it has implemented, whereas ASTRA does not, as far as I know – this may matter if you want to understand the projectors. Otherwise both should work with most set-ups/geometries. ASTRA may have a slightly smaller peak RAM use via the CIL wrappers, but this is something we are working to address with TIGRE.
Thanks for your question, we answered this very quickly in the slides. A lot of this is experience but there are some resources online, that you can google. We can also share the slides after the training
We can support multi-channel images e.g. hyperspectral or dynamic CT, with any number of channels. There will be an opportunity to see this at the end of the day tomorrow
CIL Data containers have a maximum of 4 dimensions, 3 spatial + 1 channel dimension which could be energy or time, for instance.
If we crop the acquisition data we will be throwing away data, which is undesirable.
We consider the former case, where the FOV is smaller than the whole object, and pad to compensate for the missing data. This is because the data is region-of-interest, i.e. the sample was larger than the field of view, so the projections are truncated (they do not have air on both sides, as can also be seen in the sinograms). A simple way to compensate for this is to extend or pad the data on both sides of the projections.
The following plot depicts the scan configuration for the walnut dataset. If we move the object closer to the source (increasing the magnification) but keep the size of the reconstruction volume constant, the object becomes larger than the detector. The X-rays then pass through the material at the corners of the object only at certain rotation angles, which causes problems for reconstruction algorithms. The last plot shows that by padding the data, effectively elongating the detector, the object remains in view.
Padding the projections will help if the sample outside the FOV is a low/constant density. Highly attenuating features outside the FOV will cause issues with the reconstruction as the padded data is not a good approximation.
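The padding idea itself is just edge replication: extend each truncated projection row by repeating its edge values, which approximates a low/constant density just outside the field of view. A minimal plain-Python sketch (not the CIL API) on a single row:

```python
# Sketch: edge-replicate padding of one truncated projection row.
def pad_row(row, pad):
    # Repeat the first and last samples `pad` times on each side,
    # effectively elongating the detector.
    return [row[0]] * pad + row + [row[-1]] * pad

projection = [5.0, 4.0, 3.0]   # truncated row: no air at the edges
print(pad_row(projection, 2))  # [5.0, 5.0, 5.0, 4.0, 3.0, 3.0, 3.0]
```

As noted above, this approximation breaks down when highly attenuating features sit just outside the field of view, since a constant extension no longer resembles the real sample.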
With 360 degrees of data, if the geometry doesn't describe the data well, the features in the reconstructed image look blurred. By changing the centre-of-rotation offset and measuring the sharpness of the image, we can find the geometry that best describes the data.
The image-sharpness algorithm backprojects the Sobel-filtered projections with different offsets. This means we end up with strong edges when the geometry is aligned and blurring otherwise. We maximise the autocorrelation of the reconstruction.
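A toy sketch of the search in plain Python (the real method works on Sobel-filtered projections and actual reconstructions; the blur model and the sharpness metric below are made up for illustration): sweep candidate offsets and keep the one whose "reconstruction" has the strongest edges.

```python
import math

def toy_reconstruction(offset, true_offset=1.5):
    # Stand-in for a reconstruction: a sigmoid edge that gets smeared
    # out as the offset error |offset - true_offset| grows.
    blur = 1.0 + abs(offset - true_offset)
    return [1.0 / (1.0 + math.exp(-(x * 0.1 - 5.0) / blur)) for x in range(101)]

def sharpness(image):
    # Sum of squared neighbour differences: large for crisp edges.
    return sum((b - a) ** 2 for a, b in zip(image, image[1:]))

candidates = [i * 0.5 for i in range(-4, 8)]   # offsets -2.0 ... 3.5
best = max(candidates, key=lambda c: sharpness(toy_reconstruction(c)))
print(best)  # the offset giving the sharpest edge, 1.5 in this toy model
```

The real algorithm replaces the toy blur model with backprojection of the filtered data, but the outer loop is the same: score each candidate offset and pick the maximiser.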
The centre of rotation correctors only finds the offset in one slice (so a simple offset). However, you could run it twice and find the offset in two slices, then manually update the geometry using:
my_acquisition_geometry.set_centre_of_rotation(offset, angle)
my_acquisition_geometry.set_centre_of_rotation_by_slice(offset1, slice_index1, None, slice_index2)
The current centre of rotation corrector algorithms can only be applied to un-tilted geometry. This is something we hope to remove the restriction on in the future.
I notice this is missing from our documentation, so we’ll update it to be clearer.
Yes, you can use CIL for light tomography and indeed Franck has a project with a PhD student in which they had a play with a light tomography set up. We have also used CIL with neutron tomography data.
If you can define an adjoint for the non-linear operator then the CIL algorithms will run. However, with the non-linearity you lose the convergence guarantees. Basically, you can give it a go, but all bets are off!
Thank you for letting us know!
Make sure you have chosen a user name from the online form circulated at the beginning of the training. The authorised user names on the platform are cil24-online-xx, where xx is a number between 1 and 75, i.e. cil24-online-1, …, cil24-online-75. If you used a different user name you won't get access to JupyterHub.
We do not have any way to recover the user password, so please remember it/write it down. Should you forget, please get in touch with one of the trainers.
Please try refreshing your page; if you are still getting the error, please get back in touch.
Please fill in this quick Microsoft form:
https://forms.office.com/e/WtKHKyKhZY
Please share one thing you liked about the course today
Liked the step-by-step breakdown; easy to understand.
Please share one thing that could be improved about the course today
Maybe some bibliography, although there is already a lot of material.