# Mask R-CNN Memo
## To-Do's
* Investigate the "small sample training" (see below)
* Push the code to GitHub (Felix)
* First, push to Felix's repo so that Kazu/Laura can pull
* Second, Felix merges the remote (DeepLearnPhysics) develop branch into his and checks whether the conflicts are resolvable
* Reproduce the "small sample training" (Kazu/Laura)
## Small sample training (1 event)
With a single event the network is asked to output the exact same targets (i.e. the same box locations and sizes) every time, so the input data doesn't matter. It should therefore memorize the event almost immediately (on the order of ~10 iterations).
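A minimal overfit-one-event check could look like the sketch below. It uses torchvision's Mask R-CNN purely as a stand-in (the memo's actual training code isn't reproduced here), and the image/target tensors are placeholders; the point is just that feeding the identical event every iteration should drive all loss terms down within a few tens of iterations.

```python
# Single-event overfit sanity check (sketch, assuming a torchvision-style
# Mask R-CNN whose forward(images, targets) returns a dict of losses).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=2)  # hypothetical class count
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# one fixed event: image + its ground-truth box/label/mask (placeholders)
image = torch.rand(3, 512, 512)
target = {
    "boxes":  torch.tensor([[100., 120., 300., 340.]]),
    "labels": torch.tensor([1]),
    "masks":  torch.zeros(1, 512, 512, dtype=torch.uint8),
}
target["masks"][0, 120:340, 100:300] = 1

for it in range(200):
    loss_dict = model([image], [target])   # same event every iteration
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if it % 10 == 0:
        print(it, {k: float(v) for k, v in loss_dict.items()})
```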
* Why is there a box that is still not proposed, even after the RPN loss flattens out?
* Check the box proposals within the pixels where the subject trajectory exists, and monitor the proposals from this region during training (see the first sketch after this list). They must start out random; do they then converge? (They have to, since the loss flattens.) How and where do they converge, i.e. why do they converge to a different location, or is the class prediction simply zero?
* Can we make a sparse RPN? (A rough sketch of the idea follows the list as well.)
* It would be interesting to see a performance comparison against the dense RPN.
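For the proposal-monitoring idea in the second bullet, a rough sketch: given the RPN proposals at some iteration and a binary mask of the trajectory pixels, keep only the proposals whose box overlaps that mask and log their count/scores over training. `proposals`, `trajectory_mask`, `scores`, and `iteration` are hypothetical names for whatever the training loop actually exposes.

```python
# Sketch: select the RPN proposals that overlap the subject-trajectory pixels.
# proposals: [N, 4] boxes as (x1, y1, x2, y2); trajectory_mask: [H, W] bool.
import torch

def proposals_on_trajectory(proposals: torch.Tensor,
                            trajectory_mask: torch.Tensor) -> torch.Tensor:
    """Boolean index of proposals whose box contains >= 1 trajectory pixel."""
    H, W = trajectory_mask.shape
    keep = torch.zeros(len(proposals), dtype=torch.bool)
    for i, (x1, y1, x2, y2) in enumerate(proposals.round().long()):
        x1, x2 = x1.clamp(0, W - 1), x2.clamp(0, W - 1)
        y1, y2 = y1.clamp(0, H - 1), y2.clamp(0, H - 1)
        keep[i] = trajectory_mask[y1:y2 + 1, x1:x2 + 1].any()
    return keep

# inside the training loop, e.g. every N iterations:
# sel = proposals_on_trajectory(proposals, trajectory_mask)
# print(iteration, int(sel.sum()),
#       scores[sel].max().item() if sel.any() else None)
```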
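On the sparse-RPN question, a very rough sketch of what it could mean: evaluate the RPN head only at the active (non-zero) pixel locations instead of sliding it over the whole dense feature map. Plain PyTorch indexing is used below as a stand-in; a real version would more likely be built on a sparse-convolution library. All names here are hypothetical.

```python
# Sketch: an RPN-like head evaluated only at active pixel coordinates.
import torch
import torch.nn as nn

class SparseRPNHead(nn.Module):
    def __init__(self, in_channels: int, num_anchors: int):
        super().__init__()
        self.shared = nn.Linear(in_channels, in_channels)
        self.objectness = nn.Linear(in_channels, num_anchors)      # score per anchor
        self.box_deltas = nn.Linear(in_channels, num_anchors * 4)  # (dx, dy, dw, dh)

    def forward(self, feature_map: torch.Tensor, active_yx: torch.Tensor):
        # feature_map: [C, H, W]; active_yx: [M, 2] integer (y, x) of non-zero pixels
        feats = feature_map[:, active_yx[:, 0], active_yx[:, 1]].t()  # [M, C]
        h = torch.relu(self.shared(feats))
        return self.objectness(h), self.box_deltas(h).view(len(feats), -1, 4)

# toy usage: 256-channel feature map, 50 active pixels, 9 anchors per location
fmap = torch.randn(256, 64, 64)
active = torch.randint(0, 64, (50, 2))
scores, deltas = SparseRPNHead(256, 9)(fmap, active)
print(scores.shape, deltas.shape)  # torch.Size([50, 9]) torch.Size([50, 9, 4])
```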