SyneRBI Hackathon

https://ukri.zoom.us/j/96748085345
Passcode 328111

SyneRBI challenge repo https://github.com/TomographicImaging/SyneRBI-Challenge

A fairly recent Docker image with SIRF+CIL (both built from master):
docker pull harbor.stfc.ac.uk/imaging-tomography/sirfcil:dev-jupyterhub-gpu
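A typical run command might look like this (the flags are an assumption; the JupyterHub image's ports and entrypoint may differ):
docker run --rm -it --gpus all -p 8888:8888 harbor.stfc.ac.uk/imaging-tomography/sirfcil:dev-jupyterhub-gpu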

Aims

  • Optimising the use of STIR GPU capabilities via parallelproj to reduce computation time (sketch below this list).
  • Evaluation metrics for submissions to the challenge; see Hackathon-000-Stochastic-QualityMetrics.
  • Working with scanners other than the currently supported Siemens Biograph mMR and GE Signa.
  • Preparation of data (PET and mu-maps) as well as ROIs for evaluation.
  • Contributing to the CCP SyneRBI website (experience with WordPress will be acquired).
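
A minimal sketch for the parallelproj aim, assuming acq_data and image already exist (class names per sirf.STIR):

  import sirf.STIR as pet

  # GPU projector via parallelproj, in place of the CPU ray-tracing matrix
  acq_model = pet.AcquisitionModelUsingParallelproj()
  # acq_model = pet.AcquisitionModelUsingRayTracingMatrix()  # CPU alternative
  acq_model.set_up(acq_data, image)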

Groups

  • Edo: Quality metrics (https://github.com/TomographicImaging/SyneRBI-Challenge/tree/recon_with_metrics/metrics); Website (Training matrix)
  • Edo & Margaret & Vaggelis?: Stochastic framework + quality-metric callback + demonstration of the CIL framework with SIRF on PET data
  • Casper & Margaret: CIL callbacks CIL#1659
  • Kris: SIRF release finalisation and STIR Array/CuVec
  • Evgueni: Repackage the NEMA preprocessing from the SIRF exercise (https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/PET/reconstruct_measured_data.ipynb)
  • Daniel: Script to generate the ROIs for the NEMA phantom
  • Vaggelis: ??
  • Imraj: ??

Framework

Main issue/epic: SyneRBI-Challenge#1

  • CIL callbacks CIL#1659 @Casper (interface sketch below)
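    A minimal sketch of the callback interface along the lines of CIL#1659 (names per the current CIL API; the branch may differ, and SaveIterates is a hypothetical example):

      from cil.optimisation.utilities import callbacks

      class SaveIterates(callbacks.Callback):
          # hypothetical example: stash a copy of the iterate every `interval` iterations
          def __init__(self, interval=10):
              super().__init__()
              self.interval = interval
              self.iterates = []

          def __call__(self, algorithm):
              if algorithm.iteration % self.interval == 0:
                  self.iterates.append(algorithm.solution.copy())

      # usage: algo.run(200, callbacks=[SaveIterates(interval=20)])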

  • Update main.py @Casper
  • Stochastic-QualityMetrics & example notebook @Edo/Margaret/Vaggelis
  • Examples of adapting the CIL Algorithm class, e.g. OSEM (sketch below)
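    A minimal sketch (untested), assuming matched lists of subset acquisition models and subset data; only the CIL Algorithm base class is taken as given:

      from cil.optimisation.algorithms import Algorithm

      class OSEM(Algorithm):
          # hypothetical example: one OSEM sub-iteration per update()
          def __init__(self, initial, data, acq_models, **kwargs):
              super().__init__(**kwargs)
              self.x = initial.copy()
              self.data = data
              self.acq_models = acq_models
              # subset sensitivity images: A_i^T 1 (assumed positive in the FOV)
              self.sens = [am.backward(d.get_uniform_copy(1))
                           for am, d in zip(acq_models, data)]
              self.configured = True

          def update(self):
              i = self.iteration % len(self.data)
              # multiplicative EM update on the current subset
              ratio = self.data[i] / (self.acq_models[i].forward(self.x) + 1e-6)
              self.x *= self.acq_models[i].backward(ratio) / self.sens[i]

          def update_objective(self):
              # placeholder; could evaluate the Poisson log-likelihood here
              self.loss.append(0)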
  • Partitioner for SIRF PET data (sketch below)
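    A hypothetical sketch of the index logic, assuming staggered views and an AcquisitionData.get_subset that takes a list of view indices (as used in the snippets below):

      def partition_indices(num_views, num_subsets):
          # staggered assignment: subset i gets views i, i + num_subsets, ...
          return [list(range(i, num_views, num_subsets)) for i in range(num_subsets)]

      # e.g. partition_indices(64, 32) -> [[0, 32], [1, 33], ..., [31, 63]]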
  • Running with Docker or devcontainers SuperBuild#718 @Casper
  • GitHub Action to evaluate metrics & update the leaderboard @Casper
  • Self-hosted runner @Casper/Edo
  • SIRF/CIL compatibility, e.g. https://github.com/TomographicImaging/CIL/issues/1522
    • SIRF objective function to be compatible with CIL Functions:
      # Solve with FISTA (Nesterov-accelerated; Thorax case).
      # Assumes image, noisy_data, background_term and acq_model are already set up.
      from cil.optimisation.algorithms import FISTA
      from cil.optimisation.functions import (IndicatorBox, KullbackLeibler,
                                              OperatorCompositionFunction)
      from sirf.STIR import RelativeDifferencePrior

      initial = image.get_uniform_copy(0)

      # data fidelity: KL divergence composed with the acquisition model
      alpha = 0.25
      tmp_f = KullbackLeibler(b=noisy_data, eta=background_term, backend="numba")
      f1 = OperatorCompositionFunction(tmp_f, acq_model)

      # smooth SIRF prior, scaled by alpha
      f2 = RelativeDifferencePrior()
      f2.set_epsilon(1e-5)
      f2.set_penalisation_factor(alpha)
      f2.set_up(image)

      objective = f1 + f2

      step_size = 0.5
      fista = FISTA(initial=initial, f=objective, g=IndicatorBox(lower=0),
                    step_size=step_size, update_objective_interval=50,
                    max_iteration=500)
      fista.run(verbose=1)

      
    • Same as above for SIRF priors; these are smooth priors (they expose a gradient method). Wrapper sketch below.
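      A hedged sketch of such a wrapper, assuming the SIRF prior exposes get_value/get_gradient:

        from cil.optimisation.functions import Function

        class SIRFPriorFunction(Function):
            # present a smooth SIRF prior through the CIL Function interface
            def __init__(self, prior):
                super().__init__()
                self.prior = prior

            def __call__(self, x):
                return self.prior.get_value(x)

            def gradient(self, x, out=None):
                grad = self.prior.get_gradient(x)
                if out is None:
                    return grad
                out.fill(grad)
                return out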
    • Sampler: a new class to be tested with SIRF AcquisitionData
      • Herman-Meyer ordering is important (usage sketch below)
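      A usage sketch against the current CIL Sampler API (the new class under test may differ):

        from cil.optimisation.utilities import Sampler

        sampler = Sampler.herman_meyer(32)  # Herman-Meyer subset ordering
        first_epoch = [sampler.next() for _ in range(32)]
        print(first_epoch)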
    • Preconditioner and StepSize are important for the paper; for the competition, participants can implement their own classes for PET recon (sketch below).
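      A hedged sketch of a custom rule, assuming the cil.optimisation.utilities.StepSizeRule interface:

        from cil.optimisation.utilities import StepSizeRule

        class DecayingStepSize(StepSizeRule):
            # hypothetical 1/(1 + decay*k) schedule
            def __init__(self, initial=1.0, decay=0.01):
                super().__init__()
                self.initial = initial
                self.decay = decay

            def get_step_size(self, algorithm):
                return self.initial / (1 + self.decay * algorithm.iteration)

        # usage (hypothetical): ISTA(..., step_size=DecayingStepSize(initial=0.1))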
    • Test the stochastic framework on SIRF data:
      # Assumes the prototype stochastic-framework classes (SequentialSampling,
      # RandomSampling, SGFunction, MetricsDiagnostics, RSE) plus image, initial,
      # noisy_data, background_term, attn_factors and the long FISTA run above
      # (used as the reference solution).
      import matplotlib.pyplot as plt
      import sirf.STIR as pet
      from cil.optimisation.algorithms import ISTA
      from cil.optimisation.functions import (IndicatorBox, KullbackLeibler,
                                              OperatorCompositionFunction)

      # create one KL term (composed with its acquisition model) per subset
      def list_of_functions(data, data_background, attn_factors, image):
          list_funcs = []
          list_ops = []
          for i in range(len(data)):
              fi = KullbackLeibler(b=data[i], eta=data_background[i], backend="numba")
              acq_model = pet.AcquisitionModelUsingRayTracingMatrix()
              acq_model.set_num_tangential_LORs(10)
              asm_attn = pet.AcquisitionSensitivityModel(attn_factors[i])
              acq_model.set_acquisition_sensitivity(asm_attn)
              acq_model.set_up(data[i], image)
              list_ops.append(acq_model)
              list_funcs.append(OperatorCompositionFunction(fi, acq_model))
          return list_funcs, list_ops

      num_subsets = 32
      method = SequentialSampling(64, num_subsets)

      # split data, background and attenuation factors into subsets
      data_split = [noisy_data.get_subset(i) for i in method.partition_list]
      data_background_split = [background_term.get_subset(i) for i in method.partition_list]
      attn_factors_split = [attn_factors.get_subset(i) for i in method.partition_list]

      # create list of functions
      list_func, list_ops = list_of_functions(data_split, data_background_split,
                                              attn_factors_split, image)

      # sampling method
      selection = RandomSampling(len(list_func), num_subsets, seed=40)  # shuffle=True

      # callback: RSE against the reference solution from the long FISTA run
      cb = MetricsDiagnostics(fista.solution, metrics_dict={"rse": RSE}, verbose=1)

      num_epochs = 50

      sg_func = SGFunction(list_func, selection=selection)
      step_size_ista = 0.1
      # sg_func.dask = True
      sgd = ISTA(initial=initial, f=sg_func, step_size=step_size_ista,
                 g=IndicatorBox(lower=0),  # non-negativity constraint
                 update_objective_interval=num_subsets,
                 max_iteration=num_epochs * num_subsets)
      sgd.run(verbose=1, callback=[cb])

      plt.figure()
      plt.semilogy(sg_func.data_passes, sgd.rse, label='SGD')
      plt.xlabel('data passes')
      plt.ylabel(r'$\|x^{k}-x^{*}\|$')
      plt.legend()
      plt.grid()

  • Hydra scripts (can show you what I did for the stochastic experiments); sketch below
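    A hypothetical Hydra entry point (the config fields are invented for illustration):

      import hydra
      from omegaconf import DictConfig

      @hydra.main(config_path="conf", config_name="config", version_base=None)
      def main(cfg: DictConfig) -> None:
          # fields like cfg.num_subsets / cfg.step_size come from conf/config.yaml
          print(cfg.num_subsets, cfg.step_size)

      if __name__ == "__main__":
          main()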

Data