Meeting Minutes
===
###### tags: `Templates` `Meeting` `AI` `Salting` `CDMS` `AI/ML` `ML` `ML/AI` `salting`

:::info
- **Location:** https://ucdenver.zoom.us/j/97994358165
- **Date:** July 13, 2022, 10 AM Mountain
- **Participants:**
    - Amy Roberts (AR)
    - 
- **Meeting Goal(s)**
:::

## Summary

<!-- Please fill me in after the meeting -->

## Notes

<!-- Other important details discussed during the meeting can be entered here. -->

- Sukee: On slide 10, what is the P vector? Aditi: It's a vector of amplitudes, one for each template and channel. Sukee: What do we do with all the amplitudes? Aditi: We'll feed them into our machine learning network to determine, e.g., location and energy (an illustrative sketch appears after these notes).
- Sukee: What is the covariance matrix V? Aditi, Scott: The diagonal is the PSD (this is what the simple chi-squared fit uses); the full matrix allows for cross-talk between channels (the fit is sketched after these notes).
- Aditi: The fact that the BDT is such an improvement really says that the position correction to the energy is not a linear effect.
- Sukee: How do you calculate the uncertainty for your BDT (on slide 23)? Aditi: I used the statistical uncertainty and asked for the uncertainty of the fit.
- Farnoush: You've done this study with four templates; have you looked at the sensitivity of the result to the number of templates used? Aditi: Not on this data, but I've looked at this with other datasets. It's a delicate balance, because including more templates makes you sensitive to noise effects.
- Farnoush: Other augmentation methods? Maybe a GAN? The GANs that work with a limited amount of input data may be helpful here. GAN for the small-data regime (NeurIPS 2020): https://arxiv.org/abs/2006.06676
- Amy: UMN has taken data with collimated radioactive sources; maybe this would be useful? Farnoush: You can train models with fuzzy labels. @Amy: Learning from Noisy Labels: https://arxiv.org/abs/2007.08199
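
A hedged sketch of the fit discussed above, assuming the standard multi-template optimal-filter form (the notation here is an assumption, not taken from the slides): with frequency-domain data $\tilde{d}$, a matrix of templates $\tilde{S}$, the amplitude vector $P$, and noise covariance $V$,

$$
\chi^2 = \left(\tilde{d} - \tilde{S}\,P\right)^{\dagger} V^{-1} \left(\tilde{d} - \tilde{S}\,P\right),
$$

taking $V$ diagonal with the PSD along it recovers the simple chi-squared fit, while the off-diagonal terms allow for cross-talk between channels.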
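
Below is a minimal, hypothetical sketch of feeding the fitted amplitudes into a BDT regressor, in the spirit of the discussion above; the file names, array shapes, and the scikit-learn `GradientBoostingRegressor` choice are assumptions for illustration, not the actual analysis code.

```python
# Minimal sketch (not the analysis code): feed per-template, per-channel
# optimal-filter amplitudes into a BDT to regress the event energy.
# Array names, shapes, and file names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# P: (n_events, n_templates * n_channels) amplitude vectors from the fit
# y: (n_events,) reference energies from simulation or calibration
P = np.load("amplitudes.npy")  # hypothetical file
y = np.load("energies.npy")    # hypothetical file

P_train, P_test, y_train, y_test = train_test_split(
    P, y, test_size=0.2, random_state=0
)

bdt = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
bdt.fit(P_train, y_train)

print("test R^2:", bdt.score(P_test, y_test))
```

The same features could in principle be regressed onto position, with labels coming from simulation or from collimated-source data as mentioned in the notes.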