# [Paper Reading] Chen et al. A Closer Look at Few-shot Classification. ICLR 2019.
## Introduction
1. a consistent comparative analysis of several representative **few-shot classification algorithms**, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences
2. a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the miniImageNet and the CUB **datasets**
3. a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification **algorithms**
## Problem Definition
The problem is few-shot classification: learning to generalize to novel classes that are unseen during training, given only a few labeled examples per novel class.
## Contributions
1. Providing a unified testbed for several different few-shot classification algorithms for a fair comparison.
2. Illustration that a baseline method with a distance-based classifier surprisingly achieves competitive performance with state-of-the-art meta-learning methods on both the mini-ImageNet and CUB datasets.
3. Investigation of a practical evaluation setting where base and novel classes are sampled from different domains.
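The distance-based classifier in contribution 2 (Baseline++ in the paper) replaces the usual linear layer with cosine similarity between the feature vector and per-class weight vectors. A minimal numpy sketch, where the `scale` temperature and the toy weights are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def cosine_classifier_logits(features, class_weights, scale=10.0):
    """Distance-based classifier (Baseline++ style): logits are scaled
    cosine similarities between each feature and each class weight vector.
    `scale` is a hypothetical temperature chosen for this sketch."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    return scale * f @ w.T

# toy check: a feature aligned with class 0's weight scores highest for class 0
W = np.array([[1.0, 0.0], [0.0, 1.0]])   # 2 classes, 2-dim features
x = np.array([[0.9, 0.1]])
logits = cosine_classifier_logits(x, W)
```

Because the similarity is normalized, this classifier explicitly reduces intra-class variation, which the paper argues is why it competes with meta-learning methods.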
## Method
### Baseline

#### **Training stage**
Train a feature extractor f~θ~ and a classifier C from scratch by minimizing a standard cross-entropy classification loss L~pred~ on the training examples x~i~ ∈ X~b~ from the base classes.
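A toy numpy sketch of this stage, standing in a linear map for f~θ~ and C and plain gradient descent for the optimizer (all sizes and the learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# toy stand-in for base-class data x_i in X_b: 32 examples, 8-dim, 4 classes
X = rng.normal(size=(32, 8))
y = rng.integers(0, 4, size=32)
W = np.zeros((8, 4))                  # stand-in for the classifier C

for _ in range(200):                  # gradient descent on L_pred
    p = softmax(X @ W)
    W -= 0.5 * X.T @ (p - np.eye(4)[y]) / len(y)

# L_pred: mean negative log-likelihood of the true base-class labels
loss = -np.log(softmax(X @ W)[np.arange(32), y]).mean()
```

Starting from uniform predictions (loss = ln 4), gradient descent on this convex loss drives L~pred~ down, mirroring the standard pre-training of the backbone on base classes.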
#### **Fine-tuning stage**
To adapt the model to recognize novel classes in the fine-tuning stage, we fix the pre-trained network parameters θ in the feature extractor f~θ~ and train a new classifier C by minimizing L~pred~ using the few labeled examples from the novel classes.
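The key point is that only the new classifier moves while θ stays frozen; a sketch assuming hypothetical pre-extracted features for a 5-way, 2-shot novel support set:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical pre-extracted features f_theta(x) for the few labeled novel
# examples; theta is frozen, so only the new classifier weights W_new change
support_feats = rng.normal(size=(10, 8))     # 5 novel classes x 2 shots
support_labels = np.repeat(np.arange(5), 2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W_new = np.zeros((8, 5))                     # new classifier C for novel classes
for _ in range(300):                         # minimize L_pred on the support set
    p = softmax(support_feats @ W_new)
    W_new -= 0.5 * support_feats.T @ (p - np.eye(5)[support_labels]) / 10

loss = -np.log(
    softmax(support_feats @ W_new)[np.arange(10), support_labels]
).mean()
```

Because the features are fixed, this stage is a small convex problem over W_new, which is why fine-tuning works even with very few labeled examples.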
### Meta-learning Algorithms

#### **Meta-training stage**
Randomly select N classes, then sample a small base support set S~b~ and a base query set Q~b~ from the data within these classes. The objective is to train a classification model M that minimizes the N-way prediction loss L~N-way~ on the samples in the query set Q~b~. Here, the classifier M is conditioned on the provided support set S~b~. By making predictions conditioned on the given support set, a meta-learning method learns how to learn from limited labeled data by training on a collection of such tasks.
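The episode construction described above can be sketched as follows; the function name and the N-way/K-shot/query counts are illustrative defaults, not values fixed by the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_episode(labels, n_way=5, k_shot=1, q_queries=3):
    """Sample one meta-training task: pick n_way classes, then k_shot
    support indices (S_b) and q_queries query indices (Q_b) per class."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + q_queries])
    return np.array(support), np.array(query), classes

labels = np.repeat(np.arange(20), 10)   # toy base set: 20 classes, 10 each
S_b, Q_b, episode_classes = sample_episode(labels)
```

Repeating this sampling over many episodes is what exposes M to a distribution of small tasks rather than one fixed classification problem.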
#### **Meta-testing stage**
All novel class data **X~n~** are considered as the support set for novel classes **S~n~**, and the classification model M can be adapted to predict novel classes with the new support set **S~n~**.
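As a concrete example of a model conditioned on a support set, a ProtoNet-style classifier (one of the meta-learning methods the paper compares) adapts to S~n~ simply by computing one centroid per novel class; a sketch assuming hypothetical pre-extracted features:

```python
import numpy as np

def prototype_predict(support_feats, support_labels, query_feats):
    """Nearest-centroid prediction: each class prototype is the mean of its
    support features; each query gets the label of the closest prototype."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# toy novel-class support set S_n with two well-separated classes
S = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
y = np.array([0, 0, 1, 1])
q = np.array([[0.1, 0.1], [5.1, 4.9]])
pred = prototype_predict(S, y, q)
```

No gradient steps are needed at meta-test time here; the support set alone re-parameterizes the classifier, which is the sense in which M is "conditioned on" S~n~.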
## Experimental results
### Evaluation of the baseline model under the standard setting

### Effect of increasing the network **depth**

A key observation is that **on the CUB dataset, the gap among existing methods shrinks when their intra-class variation is reduced by a deeper backbone**.
### Effect of domain differences between base and novel classes

This part shows that **as the domain difference grows larger, adaptation based on a few novel class instances becomes more important.**
### Effect of further Adaptation

Learning to learn adaptation in the meta-training stage would be an important direction for future meta-learning research in few-shot classification.