# Meta R-CNN: Towards General Solver for Instance-Level Few-Shot Learning
- leading approaches derived from meta-learning mainly focus on recognizing a single visual object per image.
- propose **meta-learning over ROI features** instead of a full image feature.
- introduce PRN (Predictor-Head Remodeling Network) that shares its main backbone with Faster/Mask R-CNN.
- PRN takes as inputs the few-shot objects from the base and novel classes and outputs **class attentive vectors**. These vectors then apply channel-wise soft attention to the RoI features to help discriminate better in prediction.
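The channel-wise soft attention of a class attentive vector on an RoI feature can be sketched in a few lines. A toy illustration in plain Python (not the paper's code; `channel_wise_attention` is a hypothetical name), operating on an already-pooled feature vector:

```python
def channel_wise_attention(roi_feature, class_vector):
    """Channel-wise soft attention: element-wise product z ⊗ v, where v is a
    class-attentive vector in [0, 1]^C that re-weights each of the C channels
    of a pooled RoI feature z."""
    assert len(roi_feature) == len(class_vector)
    return [z * v for z, v in zip(roi_feature, class_vector)]

roi = [1.0, 2.0, 3.0, 4.0]    # a 4-channel pooled RoI feature (toy)
attn = [0.0, 0.5, 1.0, 0.25]  # class-attentive vector (sigmoid outputs)
attended = channel_wise_attention(roi, attn)  # -> [0.0, 1.0, 3.0, 1.0]
```

Channels with attention near 1 pass through; channels near 0 are suppressed, so the same RoI feature can be re-emphasized differently for each class.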

## Related work
Few-shot object recognition: Bayesian approaches, metric-learning and meta-learning.
- Bayesian approaches: design probabilistic model to discover the information among latent variables.
- **Metric-learning** (similarity-learning): focuses on distinguishing similar/dissimilar features among objects of different classes.
- **Meta-learning**: parameterize the optimization algorithm or predict the parameters of a classifier.
- the object detection problem can be divided into two main branches: one-stage and two-stage detectors.
- a one-stage detector predicts bounding boxes and detection confidences of object categories directly (e.g. YOLO, SSD)
- a **two-stage detector** classifies and regresses the locations of region proposals generated using convnets (e.g. R-CNN).
- Object Segmentation: image-based and proposal-based.
- image-based: produce a pixel-level segmentation map over the image.
- proposal-based: predict object masks based on the generated proposals.
## Tasks and Motivation
some problems with the **few-shot visual object recognition problem**
- training h(·;θ) on Dnovel alone to classify Cnovel leads to overfitting
- training on Dbase ∪ Dnovel leads to data imbalance
- therefore, a meta-learning approach is adopted for its **fast adaptation to novel tasks**
- a new problem setting can be constructed such that we have:
Cmeta ~ Cbase ∪ Cnovel
meta-learner: h(xi, Dmeta; θ), where xi ~ Dtrain is a query from a mini-batch, Dmeta is a reference (support) set containing a few-shot example set for each class in Cmeta, and h is trained to classify Dtrain into Cmeta (Cmeta sampled from Cbase ∪ Cnovel)
- **not sure if sampling from both base and novel class is the accepted approach in few-shot object detection practice.**
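To make the signature h(xi, Dmeta; θ) concrete, here is a toy meta-learner sketch: a nearest-prototype classifier conditioned on a support set. This is not the paper's h (which is an R-CNN head with channel-wise attention); it only illustrates that the prediction for a query is conditioned on Dmeta:

```python
def h(x, d_meta):
    """Toy meta-learner h(x, D_meta): classify a query feature x into C_meta
    by nearest class prototype, where a prototype is the mean of that class's
    few-shot support features in D_meta."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    prototypes = {
        c: [sum(col) / len(col) for col in zip(*feats)]
        for c, feats in d_meta.items()
    }
    return min(prototypes, key=lambda c: dist2(x, prototypes[c]))

# Toy support set with 2-D features: two "cat" shots, one "dog" shot.
d_meta = {"cat": [[0.0, 0.0], [0.2, 0.0]], "dog": [[1.0, 1.0]]}
pred = h([0.1, 0.1], d_meta)  # -> "cat"
```

Swapping in a different Dmeta changes the classes h can output without retraining θ, which is the point of the formulation.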
Some problems with **few-shot object detection/segmentation**
- an image contains multiple objects in diverse classes, positions and shapes
- thus, directly modelling h(xi;θ) is not suitable
- modelling h(xi, Dmeta;θ) is also unsuitable
- therefore, given the problem, the real goal is to model h(ẑi,j, Dmeta; θ)
- in the two-stage object detection/segmentation pipeline, the first stage disentangles the objects from the background into RoI features (produced by applying RoIAlign to the region proposals).
- the second stage feeds the RoI features into the predictor head for classification, localization and segmentation.
- the R-CNN predictor head is then modelled as h(ẑi,j, Dmeta; θ) to classify the object zi,j represented by the RoI feature ẑi,j.
## Meta R-CNN
- two components: (1) the Faster R-CNN framework and (2) the PRN
- the RPN generates region proposals, the PRN infers **class attentive vectors**. Each region proposal's RoI feature is combined with the class attentive vectors.
**Faster R-CNN review**
- Faster R-CNN is a two-stage pipeline: in the first stage, the RPN generates candidate object bounding boxes (so-called region proposals); the second stage shares the RPN backbone to extract RoI features from the n object proposals **after RoI alignment**
- sharing the RPN backbone means the same base convolutional features are used in both stages.
- Mask R-CNN adds a parallel mask branch in the **predictor head**.
**meta R-CNN**
- by adding a PRN, the predictor head is modified to a **meta-predictor head**
- extends the concept proposed by SNAIL, incorporating class-specific soft-attention vectors to achieve channel-wise feature selection on each RoI feature.
- the soft-attention vectors are inferred by the PRN from the few-shot examples in Dmeta.
- given each RoI feature: h(ẑi,j, Dmeta; θ′) = h(ẑi,j ⊗ vmeta; θ) = h(ẑi,j ⊗ f(Dmeta; φ); θ), where ⊗ denotes channel-wise multiplication
- i.e., the predictor heads are modified from h(·; θ) to h(·, Dmeta; θ)
**class-attentive vectors inference**
- PRN implemented as a channel-wise soft-attention layer to produce class-attention vectors. It receives 4-channel inputs (RGB+same-spatial-size foreground structure label) and outputs attention vectors (attention layer comes after the shared backbone).
- Dmeta contains **a total of m*k objects**, where m is the size of Cmeta and k is the number of instances per class.
- the per-shot attentive vectors of each class are then averaged (average pooling over the k shots) into one class attentive vector.
**remodelling R-CNN predictor heads**
- after obtaining the class attentive vectors, these attention vectors are used to attend to each RoI (channel-wise) feature zi,j (ith image and jth region).
- the result is then fed into the modified predictor heads in Faster/Mask R-CNN.
- prediction generates m binary outcomes for each RoI feature ẑi,j (one per class in Cmeta)
- hence, the class attentive vectors are used to locate/segment the object.
- if the highest confidence score is lower than the objectness threshold, this RoI would be treated as background and discarded.
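A toy sketch of this classification step (hypothetical names; box regression and mask prediction are omitted): each RoI feature is attended by every class vector in turn, scored by a one-vs-rest binary head, and falls back to background when no class clears the threshold.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def classify_roi(roi_feature, class_vectors, head_weights, objectness_threshold=0.5):
    """Score one RoI feature against each class in C_meta: attend the feature
    with that class's attentive vector, then apply a one-vs-rest binary head.
    If no class clears the objectness threshold, the RoI is background."""
    scores = {}
    for cls, v in class_vectors.items():
        attended = [z * a for z, a in zip(roi_feature, v)]        # channel-wise attention
        logit = sum(w * z for w, z in zip(head_weights[cls], attended))
        scores[cls] = sigmoid(logit)                              # one binary outcome per class
    best = max(scores, key=scores.get)
    if scores[best] < objectness_threshold:
        return "background", scores
    return best, scores

roi = [1.0, 0.5]
vectors = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}   # toy class-attentive vectors
weights = {"cat": [2.0, 2.0], "dog": [2.0, 2.0]}   # toy binary-head weights
label, scores = classify_roi(roi, vectors, weights)  # -> "cat"
```

The m outcomes are independent binary scores rather than one softmax, which is why an explicit background fallback is needed.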
## Implementation
- uses the same set of hyperparameters as Faster R-CNN.
**mini-batch construction**
- a mini-batch consists of two sets of data: (1) Dtrain and (2) Dmeta, where the sampled classes are consistent across the two sets (Cmeta ~ Cbase ∪ Cnovel).
- Dtrain feeds the R-CNN module and consists of full images with their object annotations, whereas Dmeta feeds the PRN module and consists of M-way*K-shot instances: since an input image x may contain several object classes and instances, Dmeta holds M*K standardized (resized) images with their structure label masks.
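A minimal episode sampler under these assumptions (hypothetical data layout: annotations reduced to `(image_id, instance_id)` pairs; the real pipeline samples images with masks):

```python
import random

def build_mini_batch(dataset, m, k, seed=0):
    """Sample one training episode. `dataset` maps class name -> list of
    (image_id, instance_id) annotations. Returns the m-way class set C_meta,
    the support set D_meta (k instances per sampled class, i.e. the M*K
    resized support images for the PRN), and the query image ids (D_train,
    fed to the R-CNN branch)."""
    rng = random.Random(seed)
    c_meta = rng.sample(sorted(dataset), m)
    d_meta = {c: rng.sample(dataset[c], k) for c in c_meta}
    d_train = sorted({img for c in c_meta for img, _ in dataset[c]})
    return c_meta, d_meta, d_train

ds = {
    "cat": [("img1", "a1"), ("img2", "a2")],
    "dog": [("img2", "a3"), ("img3", "a4")],
}
c_meta, d_meta, d_train = build_mini_batch(ds, m=2, k=1)
```

The key invariant is that the classes in Dmeta and the classes annotated in Dtrain agree, so every RoI in the query images has a matching class-attentive vector.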
**channel-wise soft-attention layer**
- the layer takes as input the features extracted from the shared backbone (base layers), performs spatial pooling to align the features and make their sizes identical, and then applies an element-wise sigmoid to produce the attention vectors (e.g. 2048 × 1 in the experiments)
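A plain-Python sketch of this layer under the stated assumptions (spatial average pooling, element-wise sigmoid, then averaging over the k shots of a class; real code would operate on backbone tensors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def class_attentive_vector(shot_features):
    """Infer one class-attentive vector from k support shots of a class.
    Each shot is a feature map given as C channels, each channel a flat list
    of spatial activations (H*W). Spatial average pooling aligns the sizes,
    element-wise sigmoid gives a per-shot attention vector, and the k
    per-shot vectors are averaged into the final class vector."""
    per_shot = []
    for fmap in shot_features:
        pooled = [sum(ch) / len(ch) for ch in fmap]     # spatial pooling -> (C,)
        per_shot.append([sigmoid(p) for p in pooled])   # element-wise sigmoid
    k, c = len(per_shot), len(per_shot[0])
    return [sum(v[i] for v in per_shot) / k for i in range(c)]

# Two identical shots, each with 2 channels of 2 spatial positions:
v = class_attentive_vector([[[0.0, 0.0], [2.0, 2.0]],
                            [[0.0, 0.0], [2.0, 2.0]]])
```

Because the spatial dimensions are pooled away, the resulting vector has a fixed length (the channel count, e.g. 2048) regardless of the support images' sizes.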
**meta-loss(auxiliary loss)**
- in order to encourage **diverse feature selection**, an auxiliary cross-entropy meta-loss is added that classifies each class-attentive vector into its own class, pushing the vectors of different classes to select different channels.
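A standalone toy version of such a meta-loss (hypothetical names; assuming it is implemented as softmax cross-entropy over the class-attentive vectors with a small linear classifier): vectors that collapse onto each other incur a higher loss than diverse ones.

```python
import math

def meta_loss(class_vectors, classifier_weights):
    """Cross-entropy that classifies each class-attentive vector into its own
    class, encouraging vectors of different classes to stay apart (diverse
    channel selection). `class_vectors` maps class -> attentive vector;
    `classifier_weights` maps class -> weight row of a linear classifier."""
    classes = sorted(class_vectors)
    total = 0.0
    for target in classes:
        v = class_vectors[target]
        logits = [sum(w * x for w, x in zip(classifier_weights[c], v)) for c in classes]
        m = max(logits)                                          # log-sum-exp, stabilized
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[classes.index(target)]           # -log p(target | v)
    return total / len(classes)

w = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
diverse = meta_loss({"a": [1.0, 0.0], "b": [0.0, 1.0]}, w)    # distinct vectors
collapsed = meta_loss({"a": [1.0, 0.0], "b": [1.0, 0.0]}, w)  # identical vectors
```

Here `diverse < collapsed`: when both classes produce the same attentive vector, the classifier cannot tell them apart and the loss grows, which is exactly the pressure toward diverse feature selection.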
**RoI meta-learning**
the meta-learning of Meta R-CNN is divided into two learning phases
- (1) meta-train: solely consider **base-class objects** to construct Dmeta and Dtrain
- (2) meta-test: consider **both base and novel classes** (this is because at testing time, the objects appearing in the image may span the entire possible class distribution)
- therefore, it can be noted that Cmeta and Dmeta will change adaptively according to the input image received.


**Inference**
- two inference processes, based on the Faster/Mask R-CNN module and the PRN
- in training (meta-training), **object attentive vectors** inferred from Dmeta replace the class attentive vectors to compute the object detection/segmentation losses.
- in testing, all classes of k-shot visual objects are used to infer **class attentive vectors** via the PRN to achieve few-shot object detection/segmentation.
## Experiments
- baseline: Faster R-CNN (backbone: ResNet101).
- generic object detection tracks of **Pascal VOC 2007, 2012, and MS-COCO benchmarks**.
- in the **first phase of meta-learning**, only base class objects are considered.
- in the **second phase of meta-learning**, there are K-shot annotated bounding boxes for novel classes and 3K bounding boxes for base classes.
- the method is evaluated on the COCO benchmark, within which 20 object categories also exist in PASCAL VOC. Hence, 60 categories are treated as base classes and the remaining 20 (those shared with PASCAL VOC) as novel classes.
- **cross-benchmark transfer setup** of few-shot object detection: trained on COCO novel classes and evaluated on the PASCAL VOC2007 test set.
## Does the method help improve the generalization ability of Faster R-CNN?
Three baselines for comparison (based on different training strategies)
- FRCN+joint: jointly trained with both base and novel classes (single-phase)
- FRCN+ft: two-phase training, use base classes to train FRCN, then use both base and novel classes to fine-tune.
- FRCN+ft-full: similar to FRCN+ft except the network is trained to full convergence (hence FRCN+ft-full)
- Meta R-CNN is also compared against a modified YOLO v2 (YOLO v2 does not use RoI features; it operates on the image-level feature). This checks whether meta-learning over RoI features, rather than over full-image features, is what drives Meta R-CNN's improvement.
### Evaluation using PASCAL VOC



### Evaluation using COCO
### Cross-benchmark setup: trained on COCO and evaluated on PASCAL (to evaluate cross-domain generalizability)
## Ablation studies
### Meta-learning + RoI method