# Week 3 Course Discussion
## Question: How to multiply a BF16 value by 2
## Introduction
* A floating-point number format proposed by Google Brain, specifically designed for machine learning applications. It occupies 16 bits in memory and is a truncated version of FP32.
## Format
BF16 consists of three components:
* 1 bit: Sign bit
* 8 bits: Exponent (same as FP32)
* 7 bits: Mantissa (FP32 has 23 mantissa bits)
```
┌─────────┬───────────┬───────────┐
| Sign(1) |Exponent(8)|Mantissa(7)|
└─────────┴───────────┴───────────┘
S: Sign bit (0 = positive, 1 = negative)
E: Exponent bits (8 bits, bias = 127)
M: Mantissa/fraction bits (7 bits)
```
## Key Principles
1. BF16 maintains the same 8-bit exponent as FP32, which means it preserves the full dynamic range of 32-bit floating-point numbers.
2. Converting between FP32 and BF16 is straightforward, since the exponent bits are preserved directly; the mantissa can be reduced through truncation or other rounding modes.
3. BF16 offers sufficient accuracy for most machine learning training tasks.
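The conversion described in point 2 can be sketched in a few lines. This is a minimal illustration, not the quiz's code: `fp32_to_bf16_truncate` and `bf16_to_fp32` are hypothetical helper names, and truncation is shown only because it is the simplest rounding mode (production converters usually round to nearest even).

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: FP32 -> BF16 by dropping the low 16 bits.
 * The sign, the full 8-bit exponent, and the top 7 mantissa bits survive. */
static uint16_t fp32_to_bf16_truncate(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bit pattern */
    return (uint16_t)(bits >> 16);
}

/* Converting back only requires zero-padding the low 16 bits. */
static float bf16_to_fp32(uint16_t b) {
    uint32_t bits = (uint32_t)b << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

For example, `1.0f` (FP32 bits `0x3F800000`) truncates to BF16 `0x3F80`, and `0x4000` expands back to exactly `2.0f`.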
## Suitable Use Cases
1. Machine Learning Training
[BFloat16: The secret to high performance on Cloud TPUs](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus)
2. Hardware Acceleration
[Improve your model's performance with bfloat16](https://docs.cloud.google.com/tpu/docs/bfloat16)
3. Large-Scale Models
[Efficient Large-Scale Training with Pytorch FSDP and AWS](https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/)
## Limitations and Constraints
1. Less precision than FP16 or FP32, because the mantissa has only 7 bits.
2. Not all hardware platforms support BF16 natively.
3. Accuracy trade-off: BF16 may require fine-tuning or mixed precision to achieve FP32-level results.
## BF16 format
* The value $v$ of a normalized BFloat16 number (exponent $E$ neither all zeros nor all ones) is calculated as:
$$
v = (-1)^S \times 2^{E-127} \times \left(1 + \frac{M}{128}\right)
$$
* **Infinity**: positive and negative infinity are represented with their corresponding sign bits
```
val s_exponent_signcnd
+inf = 0_11111111_0000000
-inf = 1_11111111_0000000
```
* **Not a number**: exponent == 11111111 AND fraction ≠ 0
```
val s_exponent_signcnd
+NaN = 0_11111111_klmnopq
-NaN = 1_11111111_klmnopq
at least one of k, l, m, n, o, p, or q is 1
```
* **Zero**: both exponent and mantissa are zero
```
val s_exponent_signcnd
0 = 0_00000000_0000000
0 = 1_00000000_0000000
```
* **Denormal**: exponent = 0 and mantissa ≠ 0
$$
v = (-1)^S \times 2^{-126} \times \left(0 + \frac{M}{128}\right)
$$
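The two formulas above can be applied directly in a small decoder. This is a sketch for checking the encoding by hand; `bf16_decode` is a hypothetical name, and Inf/NaN handling (exponent all ones) is omitted for brevity.

```c
#include <stdint.h>
#include <math.h>

/* Sketch: decode a BF16 bit pattern using the formulas above.
 * normal:   v = (-1)^S * 2^(E-127) * (1 + M/128)
 * denormal: v = (-1)^S * 2^(-126)  * (0 + M/128)
 * Exponent-all-ones (Inf/NaN) is not handled here. */
static float bf16_decode(uint16_t bits) {
    int s = (bits >> 15) & 1;
    int e = (bits >> 7) & 0xFF;
    int m = bits & 0x7F;
    float sign = s ? -1.0f : 1.0f;
    if (e == 0)  /* zero or denormal */
        return sign * ldexpf((float)m / 128.0f, -126);
    return sign * ldexpf(1.0f + (float)m / 128.0f, e - 127);
}
```

For instance, `0x4000` has $S=0$, $E=128$, $M=0$, so $v = 2^{1} \times 1 = 2.0$.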
## Implementation
The following function implements multiplication of a bf16 value by 2. I run several test cases and compare my implementation's results against `bf16_mul` from [Quiz1](https://hackmd.io/@sysprog/arch2025-quiz1-sol#Problem-C), Problem C.
```
bf16_t bf16_multiply_by_2(bf16_t x) {
uint16_t sign = x.bits & BF16_SIGN_MASK;
uint16_t exponent = (x.bits & BF16_EXP_MASK) >> BF16_EXP_SHIFT;
uint16_t mantissa = x.bits & BF16_MANT_MASK;
// 1. Zero
if (bf16_iszero(x)) {
return x;
}
// 2. Infinity
if (bf16_isinf(x)) {
return x;
}
// 3. NaN
if (bf16_isnan(x)) {
return x;
}
// 4. Denormalized
if (bf16_isdenorm(x)) {
uint16_t new_mantissa = mantissa << 1;
if (new_mantissa & 0x80) {
// turn into exponent=1
return (bf16_t){ .bits = (sign | (1 << BF16_EXP_SHIFT) | (new_mantissa & BF16_MANT_MASK)) };
} else {
// still denorm
return (bf16_t){ .bits = (sign | new_mantissa) };
}
}
// 5. Normalized number: exponent + 1
uint16_t new_exponent = exponent + 1;
if (new_exponent >= BF16_MAX_EXP) {
// overflow → Inf
return (bf16_t){ .bits = sign | BF16_POS_INF };
}
return (bf16_t){ .bits = (sign | (new_exponent << BF16_EXP_SHIFT) | mantissa) };
}
```
## Test data
| # | Input Hex | Category | Description | My Mul2 Result | Quiz1 Mul2 Result | Match? |
| -- | --------- | ------------------- | ----------------- | ----------------- | ----------------- | ---------- |
| 1 | `0x0000` | Zero | +0 | `0x0000` (Zero) | `0x0000` (Zero) | ✔ Match |
| 2 | `0x8000` | Zero | -0 | `0x8000` (Zero) | `0x8000` (Zero) | ✔ Match |
| 3 | `0x7F80` | Infinity | +Inf | `0x7F80` (Inf) | `0x7F80` (Inf) | ✔ Match |
| 4 | `0xFF80` | Infinity | -Inf | `0xFF80` (Inf) | `0xFF80` (Inf) | ✔ Match |
| 5 | `0x7FC0` | NaN | Canonical NaN | `0x7FC0` (NaN) | `0x7FC0` (NaN) | ✔ Match |
| 6 | `0x7FFF` | NaN | All-ones fraction | `0x7FFF` (NaN) | `0x7FFF` (NaN) | ✔ Match |
| 7 | `0x7F81` | NaN | Fraction = 1 | `0x7F81` (NaN) | `0x7F81` (NaN) | ✔ Match |
| 8 | `0xFF81` | NaN | Negative NaN | `0xFF81` (NaN) | `0xFF81` (NaN) | ✔ Match |
| 9 | `0x0001` | Denormal | Min denorm | `0x0002` (Denorm) | `0x0000` (Zero) | ❌ Mismatch |
| 10 | `0x0002` | Denormal | Random denorm | `0x0004` (Denorm) | `0x0000` (Zero) | ❌ Mismatch |
| 11 | `0x007F` | Denormal | Max denorm | `0x00FE` (Normal) | `0x00FE` (Normal) | ✔ Match |
| 12 | `0x0080` | Normalized | Min normalized | `0x0100` (Normal) | `0x0100` (Normal) | ✔ Match |
| 13 | `0x4000` | Normalized | 2.0 | `0x4080` (Normal) | `0x4080` (Normal) | ✔ Match |
| 14 | `0x7F7F` | Normalized | Max finite | `0x7F80` (Inf) | `0x7F80` (Inf) | ✔ Match |
| 15 | `0xC000` | Negative Normalized | -2.0 | `0xC080` (Normal) | `0xC080` (Normal) | ✔ Match |
```
int main(void) {
// ====== test data (include edge case) ======
bf16_t test_vals[] = {
// ---- Zero ----
{0x0000}, // +0
{0x8000}, // -0
// ---- Infinity ----
{0x7F80}, // +Inf
{0xFF80}, // -Inf
// ---- NaN ----
{0x7FC0}, // Canonical NaN
{0x7FFF}, // NaN (all ones fraction)
{0x7F81}, // NaN (fraction=1)
{0xFF81}, // negative NaN
// ---- Denorm (subnormal) ----
{0x0001}, // min denorm
{0x0002}, // random denorm
{0x007F}, // max denorm
// ---- Normalized values ----
{0x0080}, // min normalized
{0x4000}, // 2.0
{0x7F7F}, // max finite
// ---- Negative normalized ----
{0xC000}, // -2.0
};
const char *labels[] = {
"+0", "-0",
"+Inf", "-Inf",
"NaN_canonical", "NaN_allones", "NaN_frac1", "NaN_neg",
"min_denorm", "den2", "max_denorm",
"min_norm","2.0","max_finite",
"-2.0",
};
// bf16 constant:2.0 = 0x4000
bf16_t bf16_two = {.bits = 0x4000};
int N = sizeof(test_vals) / sizeof(test_vals[0]);
for (int i = 0; i < N; i++) {
bf16_t x = test_vals[i];
bf16_t y1 = bf16_multiply_by_2(x); // my multiply 2 function
bf16_t y2 = bf16_mul(x, bf16_two); // quiz1 bf16_mul function
printf("Case %-15s: input = 0x%04X \n", labels[i], x.bits);
print_bf16(" my Mul 2 ", y1);
print_bf16(" Quiz1 mul2 ", y2);
if (y1.bits == y2.bits)
printf(" Match\n\n");
else
printf(" Mismatch (my=0x%04X, Quiz1=0x%04X)\n\n",
y1.bits, y2.bits);
}
return 0;
}
```
## Results


* Mismatches occur only for the denormal inputs (cases 9 and 10). The reason is that Quiz1's `bf16_mul` does not support denormals and instead flushes them to zero.
* For denormalized values, my implementation doubles the mantissa by shifting it left by one. If the shifted mantissa's highest bit (bit 7) becomes 1, the number crosses into the normalized range, so it is converted into a normalized number with exponent = 1. Otherwise the number remains denormal: only the mantissa is updated and the exponent stays 0.
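The denormal branch can be traced in isolation. The sketch below extracts just that logic with hard-coded masks (`denorm_times_2` is a hypothetical name, not part of the quiz code), so the two mismatching test cases can be checked by hand.

```c
#include <stdint.h>

/* Standalone sketch of the denormal branch: shift the 7-bit mantissa left;
 * if the carry lands in bit 7 (the implicit-1 position), the value has
 * crossed into the normal range and bit 7 becomes exponent = 1. */
static uint16_t denorm_times_2(uint16_t bits) {
    uint16_t sign = bits & 0x8000;
    uint16_t m2 = (uint16_t)((bits & 0x007F) << 1);  /* mantissa * 2 */
    if (m2 & 0x0080)                  /* crossed into the normalized range */
        return sign | (1u << 7) | (m2 & 0x007F);
    return sign | m2;                 /* still denormal, exponent stays 0 */
}
```

Tracing the table's cases: `0x0001 -> 0x0002` (still denormal), and `0x007F -> 0x00FE` (mantissa `0x7F` shifts to `0xFE`, the carry becomes exponent = 1, giving a normalized result).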
## Extended Discussion: multiplying x by 3
We will implement two approaches:
1. Use the existing `multiply_by_2` function and then add x.
2. Directly compute x + x + x.
```
bf16_t bf16_multiply_by_3(bf16_t x)
{
// if x is 0, Inf, or NaN, return x
if (bf16_iszero(x) || bf16_isinf(x) || bf16_isnan(x)) {
return x;
}
// y = x * 2
bf16_t y = bf16_multiply_by_2(x);
// result = y + x = 3x
return bf16_add(y, x);
}
```
```
bf16_t bf16_multiply_by_3_add(bf16_t x)
{
// if x is 0、INF、NaN, return x
if (bf16_iszero(x) || bf16_isinf(x) || bf16_isnan(x)) {
return x;
}
bf16_t result = bf16_add(x, x);
return bf16_add(result, x);
}
```
The two approaches produce identical numerical results.
However, the second method, which performs x + x + x with two `bf16_add` calls, runs faster than the first, which uses the multiply-by-2-then-add strategy.
I conducted the runtime comparison in VS Code, using the PowerShell command `Measure-Command { .\mul3.exe }` to measure execution time.
| Method | Description | Total Seconds | Total Milliseconds | Ticks | Notes |
| -------------- | ------------------------------------ | --------------- | ------------------ | ------- | ------ |
| **Approach 1** | `multiply_by_2(x) + x` | **0.0176441 s** | **17.6441 ms** | 176,441 | Slower |
| **Approach 2** | `x + x + x` using **bf16_add twice** | **0.0047588 s** | **4.7588 ms** | 47,588 | Faster |
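Note that `Measure-Command` times the whole process, so start-up cost is included in both numbers. An alternative is an in-process harness like the sketch below, which times each function over many iterations with `clock()`. The `mul3_a`/`mul3_b` integer placeholders stand in for the two real bf16 implementations, which are not repeated here.

```c
#include <stdint.h>
#include <time.h>

/* Placeholders standing in for the two real bf16 multiply-by-3 variants. */
static uint16_t mul3_a(uint16_t x) { return (uint16_t)(x * 3); }
static uint16_t mul3_b(uint16_t x) { return (uint16_t)(x + x + x); }

/* Time `iters` calls of f and return elapsed milliseconds.
 * The volatile sink keeps the compiler from optimizing the loop away. */
static double time_ms(uint16_t (*f)(uint16_t), int iters) {
    volatile uint16_t sink = 0;
    clock_t t0 = clock();
    for (int i = 0; i < iters; i++)
        sink ^= f((uint16_t)i);
    (void)sink;
    return 1000.0 * (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

A caller would compare `time_ms(mul3_a, 1000000)` against `time_ms(mul3_b, 1000000)`, which isolates the per-call cost of the two approaches from process launch overhead.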
## Results


## Reference
* [Quiz1 of Computer Architecture (2025 Fall)](https://hackmd.io/@sysprog/arch2025-quiz1-sol)
* [bfloat16 floating-point format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)