<style>
img {
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
> [Paper link](https://arxiv.org/pdf/2203.15556.pdf) | [Note link](https://zhuanlan.zhihu.com/p/600759852) | NeurIPS 2022
## Abstract
They investigate the optimal model size and number of tokens for training a Transformer language model under a given compute budget. They find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant.
They find that model size and the number of training tokens should be scaled equally: for every doubling of model size, the number of training tokens should also be doubled.
They test this hypothesis by training a predicted compute-optimal model, *Chinchilla*, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. *Chinchilla* uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks.
## Introduction
A series of Large Language Models (LLMs) have recently been introduced, with the largest dense language models now having over 500 billion parameters. In practice, the allocated training compute budget is often known in advance: practitioners have access to a certain number of accelerators for a given period of time.

In this work, they revisit the question: *Given a fixed FLOPs budget, how should one trade-off model size and the number of training tokens?*
To answer the question, they define some terms:
- $N$: the number of model parameters
- $D$: the number of training tokens
- $L(N, D)$: the final pre-training loss
- $C$: computational budget
They are interested in minimizing $L$ under the constraint $\text{FLOPs}(N, D) = C:$
$$
\tag{1} N_{opt}(C), D_{opt}(C) = \arg \min_{N, D \text{ s.t. FLOPs }(N, D) = C} L(N, D)
$$
where $N_{opt}(C)$ and $D_{opt}(C)$ describe the optimal allocation of a computational budget $C$.
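In practice, the FLOPs constraint is usually handled with the standard approximation $C \approx 6ND$ for a dense Transformer (forward plus backward pass). A minimal sketch of that budget bookkeeping, with illustrative numbers:

```python
# Minimal sketch of the budget constraint, assuming the standard
# approximation FLOPs(N, D) ≈ 6·N·D for a dense Transformer
# (forward + backward pass). Names and numbers are illustrative.

def approx_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense Transformer."""
    return 6.0 * n_params * n_tokens

def tokens_for_budget(compute_budget: float, n_params: float) -> float:
    """Number of training tokens that exhausts a FLOPs budget at a given model size."""
    return compute_budget / (6.0 * n_params)

# Example: spend a 1e21 FLOP budget on a 1B-parameter model.
C, N = 1e21, 1e9
D = tokens_for_budget(C, N)
print(f"D ≈ {D:.2e} tokens, check: FLOPs ≈ {approx_flops(N, D):.2e}")
```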
## Related Work
- Large language models
- Modelling the scaling behavior
- Estimating hyperparameters for large models
- Improved model architectures
## Estimating the optimal parameter/training tokens allocation
They present three different approaches to answer the question. They start by training a range of models, varying both model size and the number of training tokens, and use the resulting training curves to fit an empirical estimator of how the two should scale with compute. **The resulting predictions are similar for all three methods and suggest that parameter count and number of training tokens should be increased equally with more compute.**
### Fix model sizes and vary number of training tokens
They train each model size for 4 different numbers of training sequences. They fit power laws to estimate the optimal model size and number of training tokens for any given amount of compute, obtaining the relationships $N_{opt} \propto C^a$ and $D_{opt} \propto C^b$. They find that $a = 0.50$ and $b = 0.50$.
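The power-law fit itself reduces to linear regression in log-log space. A small sketch with made-up $(C, N_{opt})$ pairs, just to show the mechanics:

```python
import numpy as np

# Made-up data: for each compute budget C (FLOPs), the model size N_opt
# (parameters) that reached the lowest loss at that budget in the sweep.
C_budgets = np.array([1e18, 1e19, 1e20, 1e21])
N_opt = np.array([3.0e7, 1.0e8, 3.2e8, 1.0e9])   # illustrative values only

# Fit N_opt ≈ k · C^a by linear regression in log-log space.
a, log_k = np.polyfit(np.log(C_budgets), np.log(N_opt), deg=1)
print(f"fitted exponent a ≈ {a:.2f}")  # the paper reports a ≈ 0.50 on its data
```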

### IsoFLOP profiles
In this section, they vary the model size for a fixed set of 9 different training FLOP counts (ranging from $6 \times 10^{18}$ to $3 \times 10^{21}$ FLOPs), and consider the final training loss for each point.

Again, they fit power laws of the form $N_{opt} \propto C^a$ and $D_{opt} \propto C^b$, and find that $a = 0.49$ and $b = 0.51$.
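Concretely, each IsoFLOP profile is summarised by fitting a parabola to the final loss as a function of log model size and reading off its minimum; the resulting $(C, N_{opt})$ pairs are then fit with the same power laws as above. A sketch of that per-budget step, on made-up numbers:

```python
import numpy as np

def isoflop_minimum(model_sizes, losses):
    """Fit a parabola to loss vs. log(model size) for one fixed FLOP budget
    and return the model size at the fitted minimum (vertex of the parabola)."""
    x = np.log(np.asarray(model_sizes, dtype=float))
    y = np.asarray(losses, dtype=float)
    a2, a1, _ = np.polyfit(x, y, deg=2)      # y ≈ a2·x² + a1·x + a0
    return float(np.exp(-a1 / (2.0 * a2)))   # argmin of the parabola, back in N-space

# Made-up IsoFLOP slice: final losses for several model sizes at one budget.
sizes = [1e8, 2e8, 4e8, 8e8, 1.6e9]
losses = [3.10, 2.95, 2.90, 2.93, 3.05]
print(f"estimated N_opt ≈ {isoflop_minimum(sizes, losses):.2e} parameters")
```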
### Fitting a parametric loss function
Lastly, they model all final losses from the experiments in Approaches 1 & 2 as a parametric function of model parameter count and the number of seen tokens. Following a classical risk decomposition, they propose the following functional form:
$$
\tag{2} \hat{L} (N, D) \overset{\Delta}{=} E + \frac{A}{N^\alpha} + \frac{B}{D^\beta}
$$

They show contours of the fitted function $\hat{L}$ in Figure 4 (left), and the closed-form efficient computational frontier in blue. From this approach, they find that $a = 0.46$ and $b = 0.54$.
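A sketch of that fit, roughly following the procedure described in the paper (the loss is fit in log space via a log-sum-exp parametrisation, with a Huber penalty minimised by L-BFGS). The data below is synthetic, generated from the paper's reported fitted values $E=1.69$, $A=406.4$, $B=410.7$, $\alpha=0.34$, $\beta=0.28$:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def predicted_log_loss(params, log_N, log_D):
    """log L̂(N, D) for Eq. (2), parametrised as A=exp(a), B=exp(b), E=exp(e)
    and evaluated stably as a log-sum-exp of the three terms."""
    a, b, e, alpha, beta = params
    terms = np.stack([e * np.ones_like(log_N),
                      a - alpha * log_N,
                      b - beta * log_D])
    return logsumexp(terms, axis=0)

def objective(params, log_N, log_D, log_L, delta=1e-3):
    """Huber penalty on the log-loss residuals, summed over all runs."""
    r = np.abs(predicted_log_loss(params, log_N, log_D) - log_L)
    quad = np.minimum(r, delta)
    return np.sum(0.5 * quad**2 + delta * (r - quad))

# Synthetic stand-in for the ~400 training runs, generated from the paper's
# reported fit (E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28).
rng = np.random.default_rng(0)
log_N = np.log(rng.uniform(7e7, 1.6e10, size=200))
log_D = np.log(rng.uniform(5e9, 5e11, size=200))
true_params = np.array([np.log(406.4), np.log(410.7), np.log(1.69), 0.34, 0.28])
log_L = predicted_log_loss(true_params, log_N, log_D) + 0.01 * rng.normal(size=200)

# The paper sweeps a grid of initialisations to avoid local minima; a single
# starting point is used here purely for illustration.
result = minimize(objective, x0=np.array([5.0, 5.0, 0.5, 0.5, 0.5]),
                  args=(log_N, log_D, log_L), method="L-BFGS-B")
a, b, e, alpha, beta = result.x
print(f"alpha ≈ {alpha:.2f}, beta ≈ {beta:.2f}, E ≈ {np.exp(e):.2f}")
```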
### Optimal model scaling
They find that the three approaches, despite using different fitting methodologies and different trained models, yield comparable predictions for the optimal scaling in parameters and tokens with FLOPs.
All three approaches suggest that as compute budget increases, model size and the amount of training data should be increased in approximately equal proportions.
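For the parametric form in Eq. (2), the compute-optimal allocation under the $C \approx 6ND$ constraint has a closed form (this is the efficient frontier referenced in the previous subsection); with the fitted $\alpha \approx 0.34$ and $\beta \approx 0.28$, it gives $a \approx 0.46$ and $b \approx 0.54$, i.e. roughly equal scaling:
$$
N_{opt}(C) = G\left(\frac{C}{6}\right)^{a}, \quad D_{opt}(C) = G^{-1}\left(\frac{C}{6}\right)^{b}, \quad \text{where } G = \left(\frac{\alpha A}{\beta B}\right)^{\frac{1}{\alpha+\beta}}, \; a = \frac{\beta}{\alpha+\beta}, \; b = \frac{\alpha}{\alpha+\beta}
$$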
## *Chinchilla*
Based on their analysis in [Section 3](#Estimating-the-optimal-parametertraining-tokens-allocation), the optimal model size for the *Gopher* compute budget is somewhere between 40 and 70 billion parameters.
They test this hypothesis by training a model at the larger end of this range, 70B parameters, on 1.4T tokens (chosen for both dataset and computational efficiency considerations), which they call *Chinchilla*.
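As a quick sanity check on those numbers, with the $C \approx 6ND$ approximation and Gopher's training budget of roughly $5.76 \times 10^{23}$ FLOPs, a 70B-parameter model exhausts the budget after about
$$
D \approx \frac{C}{6N} \approx \frac{5.76 \times 10^{23}}{6 \times 70 \times 10^{9}} \approx 1.4 \times 10^{12} \text{ tokens,}
$$
which matches the 1.4T tokens used for *Chinchilla*.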


### Results
**Language modelling**

Chinchilla significantly outperforms Gopher on every evaluation subset of The Pile (measured in bits-per-byte).

**MMLU**

On the MMLU benchmark, Chinchilla reaches a state-of-the-art average accuracy of 67.5%, a greater than 7% improvement over Gopher.

More detailed results are available in the original paper:
- Reading comprehension
- BIG-bench
- Common sense
- Closed-book question answering
- Gender bias and toxicity
## Discussion & Conclusion
**They propose three predictive approaches towards optimally setting model size and training duration, based on the outcome of over 400 training runs.**
All three approaches predict that Gopher is substantially oversized and estimate that, for the same compute budget, a smaller model trained on more data would perform better.
They directly test this hypothesis by training Chinchilla, a 70B parameter model, and show that it outperforms Gopher and even larger models on nearly every measured evaluation task.