---
tags: projects, DyReg++
---
# Weakly supervised set prediction from video
:::info
:bulb: Goal: Improve entity prediction from videos with only video-level supervision
:::
> Possible issues:
> * consistency in time
> * occlusions / variable number of nodes
> * hard to learn the shape
## :memo: Dataset
### Synthetic Moving Digits
* video: randomly moving digits, different sizes, allowing for duplicates
* task: find the subset of digits that move synchronously.
* Classification task: predict the pair formed by the smallest and the largest sync digits.
* Detection task: detect sync digits.
Example clips: Sync Digits (2,9) · Sync Digits (0,6,9) · Sync Digits (9,9)
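For concreteness, a minimal sketch of how such a clip could be generated (canvas size, digit sizes, velocity ranges and the `digit_crops` source are all assumptions about the setup, not the actual data pipeline):
```python
import numpy as np

def make_sync_digits_clip(digit_crops, n_digits=4, n_sync=2, T=16, canvas=64, rng=None):
    """Hypothetical generator: paste `n_digits` crops (duplicates allowed) on a canvas
    for T frames; a random subset of size `n_sync` shares one velocity (moves
    synchronously), the rest move independently."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(digit_crops), size=n_digits, replace=True)
    sync_ids = rng.choice(n_digits, size=n_sync, replace=False)
    pos = rng.uniform(0, canvas - 28, size=(n_digits, 2))
    vel = rng.uniform(-3, 3, size=(n_digits, 2))
    vel[sync_ids] = rng.uniform(-3, 3, size=2)           # the synchronous subset shares one velocity
    frames, boxes = [], []
    for _ in range(T):
        frame, boxes_t = np.zeros((canvas, canvas), dtype=np.float32), []
        for i, d in enumerate(idx):
            crop = digit_crops[d]                        # e.g. a resized MNIST digit
            h, w = crop.shape
            y, x = np.clip(pos[i], 0, canvas - max(h, w)).astype(int)
            frame[y:y + h, x:x + w] = np.maximum(frame[y:y + h, x:x + w], crop)
            boxes_t.append((x, y, x + w, y + h))
        frames.append(frame)
        boxes.append(boxes_t)
        pos += vel                                       # bouncing off the borders omitted
    return np.stack(frames), np.array(boxes), sync_ids
```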
### Other options:
- [ ] Cater-hard: debiased version of Cater (https://github.com/necla-ml/cater-h)
- [ ] VQA: CLEVRER
## :memo: [March 23] Experiments
We only run experiments starting from our current model, DyReG:
* Step 0: Resnet13 backbone
* Step 1: predict a set of 9 nodes (as localised regions)
* Step 2: use a GNN to capture relations
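A rough PyTorch skeleton of these three steps (a sketch only: the tiny stand-in backbone, the dimensions and the single message-passing round are assumptions, not DyReG's actual implementation):
```python
import torch
import torch.nn as nn

class NodePipelineSketch(nn.Module):
    """Sketch of Steps 0-2: backbone features -> 9 localised nodes -> GNN over the nodes."""
    def __init__(self, feat_dim=128, n_nodes=9):
        super().__init__()
        self.backbone = nn.Sequential(                   # stand-in for the ResNet backbone
            nn.Conv2d(3, feat_dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        self.node_queries = nn.Parameter(torch.randn(n_nodes, feat_dim))
        self.region_head = nn.Linear(feat_dim, 4)        # per-node (cx, cy, w, h) in [0, 1]
        self.gnn_msg = nn.Linear(2 * feat_dim, feat_dim) # one message-passing round

    def forward(self, frame):                            # frame: (B, 3, H, W), one time step
        ctx = self.backbone(frame).mean(dim=(2, 3))      # (B, C) global context
        nodes = self.node_queries.unsqueeze(0) + ctx.unsqueeze(1)   # (B, N, C)
        boxes_step1 = self.region_head(nodes).sigmoid()  # Step 1: localised regions
        N = nodes.shape[1]                               # Step 2: fully-connected messages
        pairs = torch.cat([nodes.unsqueeze(2).expand(-1, -1, N, -1),
                           nodes.unsqueeze(1).expand(-1, N, -1, -1)], dim=-1)
        nodes = nodes + self.gnn_msg(pairs).mean(dim=2)
        boxes_step2 = self.region_head(nodes).sigmoid()  # regions after relational processing
        return boxes_step1, boxes_step2, nodes
```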
### ++Detection task (supervised)++
> GOAL: Investigate if the relational processing is hurting / improving the detection performance.
- Supervise each node with an object (digit) bounding box to investigate whether the model is able to detect the entities at least in the supervised scenario.
#### a. Supervise the nodes after Step 1 (before relational processing)
- Supervise the predictions with a double greedy matching loss (Chamfer loss). For the matching, use the score $-\text{IoU} + L_1$.
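A minimal sketch of this loss (assuming boxes in (x1, y1, x2, y2) format and torchvision's `box_iou`; weighting the two cost terms and the two matching directions equally is an assumption):
```python
import torch
from torchvision.ops import box_iou

def chamfer_box_loss(pred, gt):
    """Double greedy (Chamfer-style) matching loss between predicted and GT boxes.
    pred: (P, 4), gt: (G, 4), both (x1, y1, x2, y2).
    Per-pair matching score: -IoU + L1 distance between box coordinates."""
    cost = -box_iou(pred, gt) + torch.cdist(pred, gt, p=1)   # (P, G)
    pred_to_gt = cost.min(dim=1).values.mean()               # each prediction -> closest GT
    gt_to_pred = cost.min(dim=0).values.mean()               # each GT -> closest prediction
    return pred_to_gt + gt_to_pred
```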
#### b. Supervise the nodes after Step 2 (after relational processing)
- Same as above, but the supervision comes after the GNN processing.
==Observations:==
1. In this setting the model is able to detect boxes to a good degree (IoU approx. 70%).
2. Looking at the IoU metric for the predictions at different levels, we observe that the scores (over all nodes) for the deeper, relational layer are worse. This could be due to:
* relational processing actually hurting the localisation
* the loss forcing predictions from all nodes, even if their corresponding region does not contain an object (note that the number of nodes > the number of digits); this pushes such nodes to predict boxes from the information received from their neighbours.
3. Even in this supervised setting the model has the same bias of assigning each node to a specific area (grid bias). This makes a node temporally inconsistent: it jumps from one object to the other.
### ++Detection + sync classification (supervised)++
- Supervise the detection task after the GNN processing, but also ask the model to predict, for each node, whether it is part of the synchronous set or not (so that the prediction also requires relational processing).
- For this task we replace the Chamfer loss with the loss used in DETR (allowing nodes that are not associated with any GT detection; besides the bounding boxes, each node also predicts one of 3 classes: real box but asynchronous, real box and synchronous, background).
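A compact sketch of this per-frame loss (assuming scipy's Hungarian solver, the same $-\text{IoU} + L_1$ box cost as above, an assumed class convention 0 = background, 1 = real box but asynchronous, 2 = real box and synchronous, and equal loss weights):
```python
import torch
import torch.nn.functional as F
from torchvision.ops import box_iou
from scipy.optimize import linear_sum_assignment

def detr_style_loss(pred_boxes, pred_logits, gt_boxes, gt_classes, bg_class=0):
    """DETR-style set loss: Hungarian matching between the N node predictions and the
    G GT boxes; unmatched nodes are supervised as background.
    pred_boxes: (N, 4), pred_logits: (N, 3), gt_boxes: (G, 4), gt_classes: (G,) in {1, 2}."""
    cost = (-box_iou(pred_boxes, gt_boxes)
            + torch.cdist(pred_boxes, gt_boxes, p=1)
            - pred_logits.softmax(-1)[:, gt_classes])        # (N, G) matching cost
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    target_cls = torch.full((pred_boxes.shape[0],), bg_class, dtype=torch.long)
    target_cls[rows] = gt_classes[cols]                      # matched nodes get the GT class
    cls_loss = F.cross_entropy(pred_logits, target_cls)
    box_loss = F.l1_loss(pred_boxes[rows], gt_boxes[cols])   # boxes only for matched nodes
    return cls_loss + box_loss
```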
### ++Consistency in time (number of jumps)++
- For all the experiments we measure the number of times the predicted nodes "jump" from one entity to another as a measure of consistency.
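One possible implementation of this metric (a sketch: it assumes each node is assigned, per frame, to the GT entity its predicted box overlaps most, and counts a jump whenever that assignment changes between consecutive frames):
```python
import torch
from torchvision.ops import box_iou

def count_jumps(node_boxes, gt_boxes):
    """node_boxes: (T, N, 4) predicted boxes per frame; gt_boxes: (T, G, 4) tracked GT
    boxes (entity g keeps index g over time). Returns the total number of identity switches."""
    assign = []
    for t in range(node_boxes.shape[0]):
        iou = box_iou(node_boxes[t], gt_boxes[t])        # (N, G)
        best = iou.argmax(dim=1)                         # entity each node overlaps most
        best[iou.max(dim=1).values == 0] = -1            # node not covering any entity
        assign.append(best)
    assign = torch.stack(assign)                         # (T, N) entity id per node per frame
    return (assign[1:] != assign[:-1]).sum().item()
```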
### ++GT-based experiments (investigate if temporal consistency matters)++
> To understand whether the "bias" that we observed in the predictions is harmful (nodes tend to have a preferred region and will predict the object closest to that region, ignoring temporal consistency), we run experiments that use GT boxes instead of the model's predictions, and establish the temporal matching such that it either preserves or breaks the temporal consistency.
- Since we use the GT boxes, these experiments are supervised at the video level (video classification).
- perfect tracking: each node follows a certain box over the whole video
- greedy matching: each node receives the closest box at each frame
- Hungarian matching: matching between the (grid) node positions and the GT boxes
==Observations:==
Perfect tracking > Hungarian > Greedy
The models with fewer jumps have better performance.
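For reference, the three assignment schemes in code form (a sketch; fixed grid node positions and GT box centres tracked so that entity $g$ keeps index $g$ are assumptions about the exact setup):
```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_gt_to_nodes(node_pos, gt_centres, mode):
    """node_pos: (N, 2) fixed (grid) node positions; gt_centres: (T, G, 2) centres of the
    tracked GT boxes. Returns (T, N): the index of the box fed to each node at each frame
    (-1 = no box)."""
    T, G, _ = gt_centres.shape
    out = torch.full((T, node_pos.shape[0]), -1, dtype=torch.long)
    for t in range(T):
        dist = torch.cdist(node_pos, gt_centres[t])      # (N, G) distances
        if mode == "perfect":      # node g follows entity g over the whole video
            out[t, :G] = torch.arange(G)
        elif mode == "greedy":     # each node takes whichever box is closest right now
            out[t] = dist.argmin(dim=1)
        elif mode == "hungarian":  # per-frame one-to-one matching positions <-> boxes
            rows, cols = linear_sum_assignment(dist.numpy())
            out[t, torch.as_tensor(rows)] = torch.as_tensor(cols)
    return out
```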

## :memo: [April 13] Experiments
> Incorporate the perturbed version of the Hungarian matching (https://arxiv.org/pdf/2002.08676.pdf) inside our model, such that the region predictions are aware, during training, of the temporal matching.
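The forward pass we are plugging in, roughly (a sketch following the cited paper: the perturbed matching is the average of M hard Hungarian solutions computed on noise-perturbed costs; the paper's custom backward pass is omitted here):
```python
import torch
from scipy.optimize import linear_sum_assignment

def perturbed_matching(cost, eps=0.5, M=10):
    """Monte Carlo forward pass of perturbed Hungarian matching (Berthet et al., 2020).
    cost: (N, N) matching cost between the nodes at t and t+1. Returns a soft (N, N)
    assignment: the average of M hard permutation matrices obtained by solving the
    matching on noise-perturbed costs."""
    N = cost.shape[0]
    soft = torch.zeros(N, N)
    for _ in range(M):
        noise = torch.randn(N, N)
        rows, cols = linear_sum_assignment((cost + eps * noise).detach().numpy())
        hard = torch.zeros(N, N)
        hard[rows, cols] = 1.0
        soft += hard / M
    return soft        # with M=1 this is a single hard (but still random) permutation
```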
:x: We tried different scenarios, but the common behaviour is that for large perturbations (big epsilon, ~10), which give essentially random assignments, the model works to a certain degree. On the other hand, small perturbations (eps ~ 0.1-0.5), where the differentiable matching should actually work, make the model diverge.
Some problems that we have identified:
### 1. The gradients are inversely proportional to the value of epsilon
When the range of the cost matrix is bounded, it puts strong constraints on the value of epsilon. In our case the cost matrix is in $[-1, 1]$, so epsilon should be small, otherwise the matching after perturbation is random. But a small epsilon leads to large gradients and the learning process diverges (visually, the regions are predicted in the corner of the image).
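To see where the inverse dependence comes from: for Gaussian noise $Z$, the Monte Carlo Jacobian estimator from the perturbed-optimizers paper reads (here $\theta$ plays the role of our cost/score matrix and $y^*$ is the hard Hungarian solution)
$$
J_\theta\, y_\varepsilon(\theta) \;=\; \frac{1}{\varepsilon}\,\mathbb{E}_Z\!\left[\, y^*(\theta + \varepsilon Z)\, Z^\top \right],
$$
so halving $\varepsilon$ roughly doubles the gradient magnitude.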
*Solution: make the gradients independent of epsilon and scale it accordingly.*
:question: Is it a problem that we bounded the cost matrix? Why does the unnormalized one diverge?
### 2. Soft assignment could be harmful when used inside the model
We place the differentiable matching inside the model, not in the last layer as in that paper. This could lead to two problems:
a. the meaning of a node changes: it becomes an average of the nodes matched under the perturbations.
b. there will be differences in distribution between train and test that seem to affect the performance.
*Solutions:*
*1. Always use M=1. It preserves the differentiable behaviour thanks to the randomness, but yields a hard assignment. In the original paper M doesn't seem to affect the performance that much.
2. Vary the way we apply the matching.
option A: rearrange the nodes from the current time step (the nodes lose their meaning and the soft assignment will get worse over time)
option B: the matching doesn't rearrange the nodes from the current time step, but guides the amount of information received from the previous time step, i.e. replace the recurrence with a message-passing step conditioned on the assignment matrix (see the sketch after this list).*
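Option B in a few lines (a sketch; `assign` is the soft matching matrix produced by the perturbed Hungarian step, and the fixed mixing coefficient `gate` is an assumption, it could just as well be learned or predicted):
```python
import torch

def option_b_update(nodes_cur, nodes_prev, assign, gate=0.5):
    """Option B: do not reorder the current nodes; instead, let the (soft) assignment
    decide how much each current node reads from each previous node.
    nodes_cur, nodes_prev: (N, C); assign: (N, N), assign[i, j] ~ how strongly current
    node i is matched to previous node j."""
    message = assign @ nodes_prev              # information routed from t-1 by the matching
    return (1 - gate) * nodes_cur + gate * message
```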
### 3. The matching is applied sequentially
We apply the matching between every two consecutive time steps, so both the errors and the perturbations (for option A) accumulate along the temporal dimension.
*Solutions:*
*1. Match all the steps t against the first step t=0. This way the order at t=0 can be the default permutation, but it only works when the objects are present from the beginning (sketched below).
2. Use two matchings between each pair of time steps: one perturbed, hence differentiable, used for the current time step, and the other one standard, used only to guide the following matching. (This reminds me of teacher forcing.)*
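Solution 1 in code form (a sketch with a hard, non-perturbed matching; `torch.cdist` on the node features stands in for whatever pairwise cost we actually use between step t and step 0):
```python
import torch
from scipy.optimize import linear_sum_assignment

def reorder_against_first_frame(node_feats):
    """Solution 1: match every time step directly to t=0 instead of chaining t -> t+1,
    so matching errors do not accumulate. node_feats: (T, N, C); the order at t=0 is kept
    as the default permutation."""
    out = [node_feats[0]]
    for t in range(1, node_feats.shape[0]):
        cost = torch.cdist(node_feats[0], node_feats[t])     # (N, N) cost to frame 0
        rows, cols = linear_sum_assignment(cost.detach().numpy())
        out.append(node_feats[t][torch.as_tensor(cols)])     # slot i <- best match of node i at t=0
    return torch.stack(out)
```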
==:bulb: Other idea:== Instead of reordering the nodes at each time step, we can try to force them to be predicted in the right order from the beginning. A contrastive auxiliary loss would be the most natural way, trying to keep $node_i^t$ close to $node_i^{t+1}$ and far from $node_j^{t+1}$. But we can obtain a similar behaviour, possibly different in terms of optimisation and constraints (perhaps enforcing the ordering a bit more), by directly supervising the Hungarian matching between two time steps to be the identity one (Fenchel loss, the classical setup from the paper).
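The contrastive variant could look like this (an InfoNCE-style sketch; the cosine similarity and the temperature value are assumptions):
```python
import torch
import torch.nn.functional as F

def temporal_node_contrastive_loss(nodes_t, nodes_t1, temperature=0.1):
    """Auxiliary loss encouraging node i at time t to stay node i at time t+1.
    nodes_t, nodes_t1: (N, C). Positive pair: (node_i^t, node_i^{t+1});
    negatives: (node_i^t, node_j^{t+1}) for j != i."""
    a = F.normalize(nodes_t, dim=-1)
    b = F.normalize(nodes_t1, dim=-1)
    logits = a @ b.t() / temperature              # (N, N) cosine similarities
    targets = torch.arange(a.shape[0], device=a.device)
    return F.cross_entropy(logits, targets)       # the identity matching is the "label"
```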
## :memo: [April 15] Experiments