# Week 2 Course Discussion (Problem C)
:::warning
English contents only, including the subject.
:::
## BF16 format
```
┌─────────┬───────────┬───────────┐
| Sign(1) |Exponent(8)|Mantissa(7)|
└─────────┴───────────┴───────────┘
Sign     : 0 -> +, 1 -> -
Exponent : power of 2, i.e. 2^(E-127)
Mantissa : fraction of the significand
```
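As a sketch of how these fields combine (the helper name is mine, and only normal values are handled), a BF16 bit pattern can be decoded in Python:

```python
def decode_bf16(bits: int) -> float:
    """Decode a 16-bit BF16 pattern into a Python float (normal values only)."""
    sign = (bits >> 15) & 0x1          # 1 sign bit
    exponent = (bits >> 7) & 0xFF      # 8 exponent bits, bias 127
    mantissa = bits & 0x7F             # 7 mantissa bits
    # Normal value: (-1)^S * 2^(E-127) * (1 + M/2^7)
    return (-1.0) ** sign * 2.0 ** (exponent - 127) * (1 + mantissa / 128)

# 0x3F80 = sign 0, exponent 127, mantissa 0 -> 1.0
print(decode_bf16(0x3F80))   # 1.0
print(decode_bf16(0x4049))   # 3.140625, the closest BF16 to pi
```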
### Compare to FP32
1. Similarity
* expressed exponent using 8 bits
* bias = 127
* BF16 is effectively the upper 16 bits of FP32
2. Difference
* 16 valid bits instead of 32
* fewer mantissa bits (FP32 has 23 mantissa bits) -> less precision
* does not support denormals
### Compare to FP16
1. Similarity
* 16 valid bits
2. Difference
* fewer mantissa bits (BF16 has 7, FP16 has 10)
* more exponent bits (BF16 has 8, FP16 has 5)
* FP16 supports denormals
* different bias (BF16 uses 127, FP16 uses 15)
### Findings
* BF16 keeps the same dynamic range as FP32 while using half the memory, by shortening the mantissa; the cost is lower precision
* Due to the additional exponent bits compared to FP16, BF16 is much less likely to encounter overflow or underflow
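Because BF16 shares FP32's sign, exponent, and top mantissa bits, converting between the two amounts to keeping or padding 16 bits. A minimal sketch using the standard library (the helper names are mine; this truncates rather than rounding to nearest even, which real hardware usually does):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to a BF16 pattern by keeping the upper 16 bits."""
    (fp32_bits,) = struct.unpack(">I", struct.pack(">f", x))
    return fp32_bits >> 16

def bf16_bits_to_fp32(bits: int) -> float:
    """A BF16 pattern padded with 16 zero bits is a valid FP32 value."""
    (x,) = struct.unpack(">f", struct.pack(">I", bits << 16))
    return x

bf = fp32_to_bf16_bits(3.14159)
print(hex(bf), bf16_bits_to_fp32(bf))   # 0x4049 3.140625
```

Note how only about 2-3 decimal digits survive the round trip, matching the 7-bit mantissa.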
## Normalization/Denormalization
> Normalization :
- The normal value in FP32 is calculated as :
$$
v = (-1)^S \times 2^{E-127} \times \left(1 + \frac{M}{2^{23}}\right)
$$
- $E$ is encoded in 8 bits, and the range of valid $E$ for normal values is :
$$
E \in [1, \ 254]
$$
- An important feature of normal values is the implicit leading 1 in the mantissa, which confines the representable numbers for a given $E$ to a specific range :
| $E$ | exponent value | Range of $v$ |
|:--------:|:--------------:|:------------------------:|
| 1 | $2^{-126}$ | $[2^{-126}, \ 2^{-125})$ |
| 2 | $2^{-125}$ | $[2^{-125}, \ 2^{-124})$ |
| $\vdots$ | | |
| 254 | $2^{127}$ | $[2^{127}, \ 2^{128})$ |
Normalization ensures that, within each such range, the representable values are evenly spaced and cover the interval without gaps.
- If we extended the normal value range to $[2^{-127}, \ 2^{128})$, i.e., $E \in [0, \ 254]$, then according to the normalization formula the minimum normal value would be $2^{-127}$. However, this leaves a large gap between $2^{-127}$ and $0$, which can lead to sudden underflow. Therefore the values with $E=0,\ M \neq 0$ are defined to be denormal values in order to shorten the gap.
> Denormalization :
* Denormal value for FP32 :
$$
v = (-1)^S \times 2^{1-127} \times \left(0 + \frac{M}{2^{23}}\right)
$$
In denormal values, the exponent is fixed at $-126$, and the implicit leading $1$ used in normal numbers is set to $0$.
* Why is denormalization important?
- The minimum normal value for FP32 will be
$$
2^{1-127} \times(1+0) = 2^{-126}
$$
- Without denormalization there would be a large gap between the smallest normal value and 0; as a result, denormal values are defined to fill the gap and avoid sudden underflow
- After introducing denormal value, the smallest denormal value would be :
$$
2^{-126} \times (2^{-23}) = 2^{-149}
$$
It is much closer to zero compared to the minimum normal number.
* Denormalization happens when $E=0$, $M \neq 0$, but why is the exponent set to $1-127$ instead of $0-127$ ?
- If the exponent is set to $0-127$, the maximum denormal value and minimum denormal value will become :
$$
\begin{align*}
& \max{v_{denormal}}=2^{-127}(1-2^{-23})=2^{-127}-2^{-150} \\
& \min{v_{denormal}}=2^{-127}(2^{-23})=2^{-150}
\end{align*}
$$
The maximum denormal value becomes less than $2^{-127}$, leaving the whole interval $[2^{-127}, \ 2^{-126})$ unrepresentable and creating a huge gap
- When the exponent is instead set to $1-127$, the maximum and minimum denormal values are :
$$
\begin{align*}
& \max{v_{denormal}}=2^{-126}(1-2^{-23})=2^{-126}-2^{-149} \\
& \min{v_{denormal}}=2^{-126}(2^{-23})=2^{-149}
\end{align*}
$$
Values in $[2^{-127}, \ 2^{-126})$ can be represented, with only a slightly increased spacing toward zero, which is much more tolerable than the previous case.
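The two boundary values derived above can be verified directly; a short sketch (the helper name is mine; it reinterprets a 32-bit pattern as FP32 via the standard library):

```python
import struct

def fp32_from_bits(bits: int) -> float:
    """Reinterpret a 32-bit integer pattern as an FP32 value."""
    (x,) = struct.unpack(">f", struct.pack(">I", bits))
    return x

smallest_normal = fp32_from_bits(0x00800000)    # E=1, M=0 -> 2^-126
smallest_denormal = fp32_from_bits(0x00000001)  # E=0, M=1 -> 2^-149
print(smallest_normal == 2.0 ** -126)    # True
print(smallest_denormal == 2.0 ** -149)  # True
```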
## Square Root Discussion
According to the format of BF16, we can separate it into two parts, exponent and mantissa (ignoring the sign bit).
Therefore, we can rewrite the square root operation as :
$$
\sqrt{a} = \sqrt{2^{e_a} \times m_a} = 2^{e_a/2} \times \sqrt{m_a}
$$
---
1. Exponent Operation
* The exponent bits represent the power of 2, so $2^{e_a/2}$ can be implemented with a right shift of the unbiased exponent
* If the exponent is odd, an extra step is needed :
$$
2^{\frac{e_a-1+1}{2}} \times \sqrt{m_a} = 2^{\frac{e_a-1}{2}}\times \sqrt{2\times m_a}
$$
Thus, we can obtain the new exponent and mantissa :
$$
e_r = \frac{e_a - 1}{2}, \quad m' = 2 \times m_a
$$
---
2. Special Case
* If exponent == 0xFF, the value is either NaN (mantissa is not zero) or +-inf (mantissa is zero).
The output will be NaN if the input value is either NaN or -inf; +inf maps to +inf.
* If both exponent and mantissa are zero, the output will be zero, same as the input.
* If sign bit is 1, meaning that the value $a$ is negative, the output will be NaN
* If the exponent is zero while the mantissa is nonzero, the value $a$ is a denormal number, which is not supported in bfloat16. Therefore it will be flushed to zero.
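Putting the exponent trick and the special cases together, here is a minimal software sketch (the function name `bf16_sqrt` and the use of `math.sqrt` for the mantissa part are my own choices for illustration; a hardware design would use an iterative integer square root instead):

```python
import math

def bf16_sqrt(bits: int) -> int:
    """Square root over BF16 bit patterns, following the cases above."""
    sign = (bits >> 15) & 0x1
    exp  = (bits >> 7) & 0xFF
    man  = bits & 0x7F

    if exp == 0xFF:                   # NaN or +-inf
        if man != 0 or sign == 1:     # NaN or -inf -> NaN
            return 0x7FC0
        return bits                   # +inf -> +inf
    if exp == 0 and man == 0:         # +-0 -> +-0
        return bits
    if sign == 1:                     # negative input -> NaN
        return 0x7FC0
    if exp == 0:                      # denormal input: flush to zero
        return 0x0000

    e = exp - 127                     # unbiased exponent
    m = 1.0 + man / 128.0             # mantissa with implicit leading 1
    if e % 2 != 0:                    # odd exponent: fold one factor of 2
        e -= 1                        # e_r = (e_a - 1) / 2
        m *= 2.0                      # m' = 2 * m_a
    root = math.sqrt(m)               # result significand, in [1, 2)
    new_exp = e // 2 + 127
    new_man = int((root - 1.0) * 128.0)   # truncate to 7 mantissa bits
    return (new_exp << 7) | new_man
```

For example, `bf16_sqrt(0x4080)` (the pattern for 4.0) returns `0x4000` (2.0), while a negative input such as `0xC080` yields the quiet-NaN pattern `0x7FC0`.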
<!-- ## Quantization (to integer)
In order to facilitate memory storage and accelerate hardware calculation, it is neccesary to reduce floating point storage by using quantization.
Quantization incluses :
* fp32 to fp16
* fp32 to bfloat16
* fp32 to int8
* fp32 to int4
> **int8**
```
┌─────────┬───────────┐
| Sign(1) |Exponent(7)|
└─────────┴───────────┘
``` -->
## Integer Calculation
- [Formula Derivation](#Formula-Derivation)
- [Precise integer in FP32, BF16](#Precise-integer-in-FP32,-BF16)
- [Rounding in FP32, BF16](#Rounding-in-FP32,-BF16)
- [Conclusion](#Conclusion)
### Formula Derivation
1. According to the calculation of floating points :
$$
v = (-1)^S \times 2^{E-bias} \times \left(1 + \frac{M}{2^{\verb|#|M}}\right),\ \verb|#|M = \text {number of mantissa bits}
$$
2. We can further derive the formula by applying Distributive Law :
$$
v = (-1)^S \times\left( 2^{E-bias} + 2^{E-bias-\verb|#|M} \times M\right)
$$
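The rewritten form can be checked numerically against the machine's own FP32 decoding; a small sketch (the helper name is mine; bias = 127 and #M = 23 for FP32):

```python
import struct

def decode_via_formula(bits: int) -> float:
    """v = (-1)^S * (2^(E-127) + 2^(E-127-23) * M) for FP32 normal values."""
    S = (bits >> 31) & 0x1
    E = (bits >> 23) & 0xFF
    M = bits & 0x7FFFFF
    return (-1.0) ** S * (2.0 ** (E - 127) + 2.0 ** (E - 127 - 23) * M)

# Compare against the hardware FP32 decoding for a few normal values.
for x in [1.0, -2.5, 3.140625, 1.0e10]:
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    assert decode_via_formula(bits) == struct.unpack(">f", struct.pack(">I", bits))[0]
print("distributive form matches FP32 decoding")
```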
---
### Precise integer in FP32, BF16
#### Exponent
- For normalized values, both FP32 and BF16 have 8 bits for the exponent, meaning that the range of valid exponent bits for both is :
$$
E \in [1, \ 254]
$$
The bias of both FP32 and BF16 is 127, thus the dynamic range for both (excluding the mantissa) is :
$$
2^{-126} \le x\le 2^{127}
$$
- The integer part of this dynamic range is :
$$
2^0 \le x\le 2^{127}
$$Therefore, the range of $E$ for representing integers is :
$$
E \in[127, 254]
$$
- The exponent bits can only express numbers that equal to the power of 2 :
| Range | Number of missing integers |
|:-----------------------:|:--------------------------:|
| $(2^0 , \ 2^1)$ | 0 |
| $(2^1 , \ 2^2)$ | $2^1-1$ |
| $\vdots$ | $\vdots$ |
| $(2^{126} , \ 2^{127})$ | $2^{126}-1$ |
As the value of $E$ increases, the gap between consecutive exponent-only integers becomes larger.
Thus, mantissa bits are used to fill in the integers between powers of 2.
---
#### Mantissa
Although the mantissa reduces the gap between two adjacent power-of-two values, some integers still cannot be expressed due to the limited number of mantissa bits.
- Reminder :
* In this section we consider $M$ as nonzero
* The exponent of the mantissa term $2^{E-bias-\verb|#|M+\log_2{M}}$ must be nonnegative so that the value remains an integer.
* $M$ is an unsigned binary field and therefore cannot go below zero.
- Precisely represented values :
>FP32 case
The bias of FP32 is 127 and $\verb|#|M=23$.
- The range of valid $M$ :
$$
M \in [1, 2^{23}-1] \rightarrow 0 \le \log_2{M} \lt 23
$$Since the exponent of $2^{E-127-23+\log_2{M}}$ must be nonnegative, and the step $2^{E-150}$ between adjacent representable values must not exceed $1$, the available $E$ is :
$$
127 \lt E \le 150
$$The above implies that the integers $v\in [1, \ 2^{24}]$ can be expressed without loss of precision.
> BF16 case
The bias of BF16 is 127 and $\verb|#|M=7$.
- The range of valid $M$ :
$$
M \in [1, 2^{7}-1] \rightarrow 0 \le \log_2{M} \lt 7
$$With the same conditions (nonnegative exponent, step $2^{E-134}$ at most $1$), the range of $E$ can be derived :
$$
127 \lt E \le 134
$$Thus, the integers $v \in [1, \ 2^{8}]$ can be expressed without loss of precision.
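These exactness limits (all integers up to $2^{24}$ for FP32, up to $2^{8}$ for BF16) can be checked by round-tripping integers through each format; a sketch using the standard library (the BF16 conversion here simply truncates to the upper 16 bits):

```python
import struct

def to_fp32(x: float) -> float:
    """Round a Python float to the nearest FP32 value."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

def to_bf16(x: float) -> float:
    """Round to FP32 first, then drop the lower 16 bits to get BF16."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# FP32 keeps every integer up to 2^24; 2^24 + 1 collapses to 2^24.
assert to_fp32(2.0 ** 24) == 2 ** 24
assert to_fp32(2.0 ** 24 + 1) == 2 ** 24

# BF16 keeps every integer up to 2^8; 2^8 + 1 collapses to 2^8.
assert to_bf16(2.0 ** 8) == 2 ** 8
assert to_bf16(2.0 ** 8 + 1) == 2 ** 8
print("exact-integer limits hold")
```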
---
### Rounding in FP32, BF16
Integers that cannot be precisely represented are rounded to the nearest integer, which leads to representation error.
> FP32
Integers $v \gt 2^{24}$ might be rounded.
- Step size of the two adjacent representable integers increases as the value of $E$ becomes bigger :
* $E=151$ :
$$2^{151-127-23+\log_2{M}} \rightarrow 2^{1+\log_2{M}}$$Step size between [$2^{24}, \ 2^{25}$] is 2
* $E=152$
$$2^{152-127-23+\log_2{M}} \rightarrow 2^{2+\log_2{M}}$$Step size between [$2^{25}, \ 2^{26}$] is 4
* $E=254$ :
$$
2^{254-127-23+\log_2{M}} \rightarrow 2^{104+\log_2{M}}
$$Step size between [$2^{127}, \ 2^{128}$] is $2^{104}$
>BF16
Integers $v \gt 2^{8}$ might be rounded.
- The range of exactly representable integers is much smaller than that of FP32, so rounding starts far earlier and the magnitude of error is larger for the same $E$ :
* $E=135$
$$2^{135-127-7+\log_2{M}} \rightarrow 2^{1+\log_2{M}}$$Step size between [$2^{8}, \ 2^{9}$] is 2
* $E=151$
$$2^{151-127-7+\log_2{M}} \rightarrow 2^{17+\log_2{M}}$$Step size between [$2^{24}, \ 2^{25}$] is $2^{17}$
* $E=254$ :
$$
2^{254-127-7+\log_2{M}} \rightarrow 2^{120+\log_2{M}}
$$Step size between [$2^{127}, \ 2^{128}$] is $2^{120}$
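The step sizes above can be observed by incrementing raw bit patterns; a short sketch (the helper name is mine):

```python
import struct

def fp32_value(bits: int) -> float:
    """Value of a 32-bit FP32 pattern."""
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# FP32 just above 2^24 (E = 151): adjacent representable integers differ by 2.
(b24,) = struct.unpack(">I", struct.pack(">f", float(2 ** 24)))
print(fp32_value(b24 + 1) - fp32_value(b24))      # 2.0

# BF16 just above 2^8 (E = 135): the 7-bit mantissa sits in the upper 16 bits
# of the FP32 pattern, so one BF16 step adds 2^16 to the pattern -> step is 2.
(b8,) = struct.unpack(">I", struct.pack(">f", 256.0))
print(fp32_value(b8 + 0x10000) - fp32_value(b8))  # 2.0
```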
---
> Error difference between FP32, BF16
Estimate the error rate by counting the exactly representable integers. Take $E=254$ as an example :
FP32
- Integers that can be exactly represented occur every $2^{104}$ steps, therefore the proportion of exactly represented integers can be calculated :
$$
\mathbb{P_{correct}} = \frac{K}{2^{255}-2^{254}+1}, \ K= \frac{2^{255}-2^{254}}{2^{104}}+1
$$
It can be further derived :
$$
\frac{2^{150}+1}{2^{254}+1} \approx 2^{-104}
$$
- The approximate error rate for FP32 is $1-2^{-104}$
---
BF16
- Integers that can be exactly represented occur every $2^{120}$ steps :
$$
\mathbb{P_{correct}}=\frac{K}{2^{255}-2^{254}+1}, \ K= \frac{2^{255}-2^{254}}{2^{120}}+1
$$
It can be further derived :
$$
\frac{2^{134}+1}{2^{254}+1} \approx 2^{-120}
$$
- The approximate error rate for BF16 is $1-2^{-120}$
---
### Conclusion
- Due to the difference in mantissa width between FP32 and BF16, the count of precisely representable positive integers differs by roughly $2^{24}-2^{8}$
- The error rate of BF16 can be far larger than that of FP32
- BF16 is less suitable for integer calculation than FP32 due to its smaller set of exactly represented integers and larger error rate
---