# Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference

###### tags: `Research`, `prompt`, `PTMs`

> [Paper link](https://arxiv.org/pdf/2001.07676.pdf) | [Note link](https://blog.csdn.net/qq_36426650/article/details/120788059) | [Code link](https://github.com/timoschick/pet) | EACL 2021

## Abstract

This paper introduces Pattern-Exploiting Training (PET), a **semi-supervised** training procedure that **reformulates input examples as cloze-style phrases** to help language models understand a given task. These phrases are then used to **assign soft labels to a large set of unlabeled examples**. Finally, standard supervised training is performed on the resulting training set. PET outperforms supervised training and strong semi-supervised approaches in **low-resource settings** by a large margin.

## Introduction

Standard supervised learning often performs poorly on small training sets. For instance, assume the model is given the following pieces of text:

* $T_1$: This was the best pizza I’ve ever had.
* $T_2$: You can get better sushi for half the price.

The labels of $T_1$ and $T_2$ are $l$ and $l'$, respectively, and the model is asked to infer the correct label for $T_3$:

* $T_3$: Pizza was average. Not worth the price.

From two labeled examples alone this is difficult, but it becomes much easier if the model knows that the underlying task is to identify whether the text says anything about prices.

---

In this work, the authors show that providing task descriptions can successfully be combined with standard supervised learning in few-shot settings. The resulting method, **P**attern-**E**xploiting **T**raining (PET), is a **semi-supervised training** procedure that uses natural language patterns to **reformulate input examples into cloze-style phrases**.

<center> <img src = "https://i.imgur.com/amWJXJx.png"> </center><br>

PET works in three steps:

1. For each pattern, a separate PLM is **finetuned on a small training set $\mathcal{T}$**.
2. The ensemble of all models is then used to annotate a large unlabeled dataset $\mathcal{D}$ **with soft labels**.
3. Finally, a standard **classifier** is trained on the soft-labeled dataset.

They also devise iPET, an **iterative variant of PET** in which this process is repeated with increasing training set sizes.

## Related Work

* Zero-shot learning of challenging tasks (question answering (QA) / reading comprehension), which requires a semantic parser
* Cloze-style phrases used to probe the knowledge that PLMs acquire during pretraining
* Few-shot learning in NLP (exploiting examples from related tasks / data augmentation)

The idea behind iPET – training multiple generations of models on data labeled by previous generations – is closely related to self-training and bootstrapping approaches.

## Pattern-Exploiting Training

* $M$: a masked language model with vocabulary $V$
* the mask token $\_\_\_\_ \in V$ (i.e. $[\rm{MASK}]$)
* $\mathcal{L}$: the set of labels for the target classification task $A$

An input for task $A$ is a sequence of phrases $\mathbf{x} = (s_1, \ldots, s_k)$ with $s_i \in V^*$. For example, $k = 2$ if $A$ is textual inference (two input sentences).

They define a *pattern* to be a function $P$ that takes $\mathbf{x}$ as input and outputs a phrase or sentence $P(\mathbf{x}) \in V^*$ that contains exactly **one mask token**; its output can be viewed as a **cloze question**. Furthermore, they define a *verbalizer* as an injective function $v : \mathcal{L} \rightarrow V$ that maps each label to a word from $M$'s vocabulary.
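To make the *pattern* and *verbalizer* definitions concrete, here is a minimal Python sketch for the sentiment example above (the pattern wording and label words are illustrative, not necessarily the exact PVPs used in the paper):

```python
# A pattern P maps an input x to a cloze question containing exactly one mask token;
# a verbalizer v maps each label in L to a single word from the model's vocabulary.
MASK = "[MASK]"

def pattern(text: str) -> str:
    # P(x): rephrase a review as a cloze question
    return f"{text} All in all, it was {MASK}."

# v : L -> V, e.g. positive -> "great", negative -> "terrible"
verbalizer = {"positive": "great", "negative": "terrible"}

print(pattern("Pizza was average. Not worth the price."))
# -> Pizza was average. Not worth the price. All in all, it was [MASK].
```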
$(P, v)$ is called a *pattern-verbalizer pair* (PVP):

* $P$ creates the cloze question containing $[\rm{MASK}]$ from $\mathbf{x}$
* $v$ provides the mapping $\mathcal{L} \rightarrow V$ (e.g. positive $\rightarrow$ great)

:::info
**How they solve task $A$ using a PVP $(P, v)$**

Given an input $\mathbf{x}$, apply $P$ to obtain the cloze-style representation $P(\mathbf{x})$. Then use $M$ to determine the label $y \in \mathcal{L}$ for which $v(y)$ is the most likely substitute for the mask.
:::

### PVP Training and Inference

Given some input $\mathbf{x}$, the score for label $l \in \mathcal{L}$ is defined as

$$
s_{\mathbf{p}}(l \mid \mathbf{x})=M(v(l) \mid P(\mathbf{x}))
$$

and a probability distribution over labels is obtained using a softmax:

$$
q_{\mathbf{p}}(l \mid \mathbf{x})=\frac{e^{s_{\mathbf{p}}(l \mid \mathbf{x})}}{\sum_{l^{\prime} \in \mathcal{L}} e^{s_{\mathbf{p}}\left(l^{\prime} \mid \mathbf{x}\right)}}
$$

### Auxiliary Language Modeling

Since only a few training examples are available, catastrophic forgetting can occur during finetuning. They therefore use language modeling as an auxiliary task. The final loss combines the cross-entropy loss $L_{\mathrm{CE}}$ and the language modeling loss $L_{\mathrm{MLM}}$, with $\alpha$ set to $10^{-4}$:

$$
L = (1 - \alpha) \cdot L_{\mathrm{CE}} + \alpha \cdot L_{\mathrm{MLM}}
$$

### Combining PVPs

Since it is hard to know in advance which PVPs perform well, they use a strategy similar to knowledge distillation:

1. Define a set $\mathcal{P}$ of PVPs that intuitively make sense for a given task $A$.
2. Finetune a separate language model $M_{\mathbf{p}}$ for each $\mathbf{p} \in \mathcal{P}$.
3. Use the ensemble $\mathcal{M} = \{ M_{\mathbf{p}} \mid \mathbf{p} \in \mathcal{P}\}$ of finetuned models to annotate examples from $\mathcal{D}$, combining the unnormalized class scores for each example $\mathbf{x} \in \mathcal{D}$ as
   $$
   s_{\mathcal{M}}(l \mid \mathbf{x})=\frac{1}{Z} \sum_{\mathbf{p} \in \mathcal{P}} w(\mathbf{p}) \cdot s_{\mathbf{p}}(l \mid \mathbf{x})
   $$
   where $Z = \sum_{\mathbf{p} \in \mathcal{P}} w(\mathbf{p})$ and the $w(\mathbf{p})$ are weighting terms for the PVPs. They experiment with two realizations of this weighting term:
   * $w(\mathbf{p}) = 1$ for all $\mathbf{p}$
   * $w(\mathbf{p})$ set to the accuracy obtained using $\mathbf{p}$ on the training set *before* training

   A softmax over $s_{\mathcal{M}}$ (with temperature $T = 2$, following knowledge distillation) turns these scores into the soft labels of the resulting dataset $\mathcal{T}_C$.
4. Finetune a PLM $C$ with a standard sequence classification head on $\mathcal{T}_C$.

![](https://i.imgur.com/qggwOYE.png)
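As a rough sketch of how step 3 could look in code (not the implementation from the paper's repository; the helper `combine_pvp_scores` and all numeric scores are made up for illustration), the weighted combination of per-PVP scores and the resulting soft labels for $\mathcal{T}_C$ might be computed like this:

```python
import math

def combine_pvp_scores(pvp_scores, weights=None, temperature=2.0):
    """Combine per-PVP unnormalized scores into one soft label.

    pvp_scores: list of dicts, one per PVP p, mapping label l -> s_p(l | x)
    weights:    optional list of w(p); defaults to w(p) = 1 for all p
    Returns a dict mapping each label to its soft probability.
    """
    if weights is None:
        weights = [1.0] * len(pvp_scores)   # uniform weighting variant
    Z = sum(weights)
    labels = pvp_scores[0].keys()

    # s_M(l | x) = (1 / Z) * sum_p w(p) * s_p(l | x)
    s_M = {l: sum(w * s[l] for w, s in zip(weights, pvp_scores)) / Z
           for l in labels}

    # soft label: softmax over labels (the paper uses temperature T = 2)
    exp_scores = {l: math.exp(s / temperature) for l, s in s_M.items()}
    total = sum(exp_scores.values())
    return {l: e / total for l, e in exp_scores.items()}

# Example with two PVPs and two labels (scores are made up):
soft = combine_pvp_scores(
    [{"positive": 3.1, "negative": 0.4},
     {"positive": 2.2, "negative": 1.0}],
    weights=[0.9, 0.6],  # e.g. accuracy of each PVP on T before training
)
print(soft)  # -> {'positive': 0.74..., 'negative': 0.25...}
```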
### Iterative PET (iPET)

Because some patterns perform (possibly much) worse than others, the training set $\mathcal{T}_C$ for the final model may contain many mislabeled examples. To mitigate this, iPET trains several *generations* of models on datasets of increasing size:

1. First, enlarge the original dataset $\mathcal{T}$ by labeling selected examples from $\mathcal{D}$ using a random subset of the trained PET models.
2. Then train a new generation of PET models on the enlarged dataset, and repeat.

Notation:

* $\mathcal{M}^0=\left\{M_1^0, \ldots, M_n^0\right\}$ is the initial set of PET models finetuned on $\mathcal{T}$.
* Each $M_i^j$ is trained for PVP $\mathbf{p}_i$ on its own training set $\mathcal{T}^j_i$.
* In each iteration, the training set size is multiplied by a fixed constant $d \in \mathbb{N}$.

To maintain the label ratio of the original dataset, the training set for each model of generation $j$ is built as follows:

1. Obtain $\mathcal{N} \subset \mathcal{M}^{j-1} \backslash\left\{M_i^{j-1}\right\}$ by randomly choosing $\lambda \cdot (n - 1)$ models from the previous generation, with $\lambda \in (0, 1]$.
2. Use this subset to create a labeled dataset
   $$
   \mathcal{T}_{\mathcal{N}}=\left\{\left(\mathbf{x}, \arg \max _{l \in \mathcal{L}} s_{\mathcal{N}}(l \mid \mathbf{x})\right) \mid \mathbf{x} \in \mathcal{D}\right\}
   $$
   To avoid training future generations on mislabeled data, examples for which the ensemble of models is confident in its prediction are preferred: when drawing from $\mathcal{T}_{\mathcal{N}}$, the probability of each $(\mathbf{x}, y)$ is set proportional to $s_{\mathcal{N}}(y \mid \mathbf{x})$.
3. Combine the drawn examples with the original data as $\mathcal{T}_i^j=\mathcal{T} \cup \bigcup_{l \in \mathcal{L}} \mathcal{T}_{\mathcal{N}}(l)$, where $\mathcal{T}_{\mathcal{N}}(l)$ denotes the examples drawn from $\mathcal{T}_{\mathcal{N}}$ with label $l$, drawn in numbers that preserve the original label ratio.
4. After training $k$ generations of PET models, use $\mathcal{M}^k$ to create $\mathcal{T}_C$ and train $C$ as in basic PET.

## Experiments

They evaluate on four English datasets: Yelp Reviews, AG’s News, Yahoo Questions and MNLI, and use x-stance to investigate how well PET works for other languages. RoBERTa-large is used as the language model; for x-stance, XLM-R is used instead.

### Sentiment Analysis Tasks

* Yelp

<table>
<tr>
<th> Patterns </th>
<th> Verbalizer </th>
</tr>
<tr>
<td> <img src="https://i.imgur.com/8hW3CuU.png"> </td>
<td> <img src="https://i.imgur.com/HAg1s5g.png"> </td>
</tr>
</table>

### Thematic Classification Tasks

* AG's News / Yahoo

<table>
<tr>
<th> Patterns </th>
<th> Verbalizer </th>
</tr>
<tr>
<td> <img src="https://i.imgur.com/EZtb6JL.png"> </td>
<td> <center> AG's News: maps categories 1–4 to <br> “World”, “Sports”, “Business” and “Tech” </center> </td>
</tr>
<tr>
<td> <img src="https://i.imgur.com/EZtb6JL.png"> </td>
<td> <center> Yahoo: maps categories 1–10 to <br> “Society”, “Science”, “Health”, <br> “Education”, “Computer”, “Sports”, <br> “Business”, “Entertainment”, “Relationship” and “Politics” </center> </td>
</tr>
</table>

### Sentence Pair Tasks

* MNLI

<table>
<tr>
<th> Patterns </th>
<th> Verbalizer </th>
</tr>
<tr>
<td> <img src="https://i.imgur.com/3ZDN3zW.png"> </td>
<td> <img src="https://i.imgur.com/wqTcHqZ.png"> </td>
</tr>
</table>

![](https://i.imgur.com/shbXgTA.png)

<center> <img src="https://i.imgur.com/D0hNgzd.png"> </center>

### X-Stance

A multilingual stance detection dataset with German, French and Italian examples.

<table>
<tr>
<th> Patterns </th>
<th> Verbalizer </th>
</tr>
<tr>
<td> <img src="https://i.imgur.com/RKZBcAg.png"> </td>
<td> maps 0 to “Yes” and 1 to “No” </td>
</tr>
</table>

<center> <img src="https://i.imgur.com/N4o1ahb.png"> </center>

## Analysis

### Combining PVPs

<center> <img src="https://i.imgur.com/T27j1Ft.png"> </center>

### Auxiliary Language Modeling

<center> <img src="https://i.imgur.com/UAyirki.png"> </center>

### Iterative PET

<center> <img src="https://i.imgur.com/cNxREEi.png"> </center>

### In-Domain Pretraining

<center> <img src="https://i.imgur.com/1r0eSaI.png"> </center>

## Conclusion

PET consists of defining pairs of **cloze question patterns** and **verbalizers** that help leverage the knowledge contained within pretrained language models for downstream tasks. When the initial amount of training data is limited, PET gives large improvements over standard supervised training and strong semi-supervised approaches.