# What to Add for the `log(0)` Case in R?
A few months ago, a labmate asked in our lab’s group chat what we usually do when we encounter `log(0)` during Seurat’s log normalization. Since `log(0)` is mathematically undefined, R will quietly return `-Inf` unless you add some small value first.
Back then, everyone’s answer was either to use `log1p` (i.e. `log(1 + x)`, effectively Laplace smoothing with a pseudocount of 1) or to add an extremely small value, something like `0.00000001`. But how small should that number really be? And will such an arbitrary choice satisfy even the most critical code reviewers?
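For context, this is the behavior in plain R: `log(0)` quietly evaluates to `-Inf`, and `log1p` sidesteps it by shifting the input by 1.

```{r}
log(0)      # log(0) evaluates to -Inf, with no error or warning
# [1] -Inf
log1p(0)    # log1p(x) computes log(1 + x), so zero counts map to 0
# [1] 0
```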
I ran into this exact issue while writing a function for my R package, so I did some research and discovered a built-in constant in R called `.Machine$double.eps`.
## What is .Machine$double.eps (Machine epsilon)?
> Machine epsilon = the smallest positive floating-point number that, when added to 1, gives a result distinguishable from 1.
On most systems:
```{r}
.Machine$double.eps
# [1] 2.220446e-16
```
So it’s about `2.2 × 10^(-16)`. This number is tiny enough not to change your data, but large enough to stop `log(0)` from breaking your code.
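You can verify the defining property directly: adding epsilon to 1 is detectable, while adding anything smaller gets rounded away.

```{r}
1 + .Machine$double.eps == 1      # epsilon added to 1 is still detectable
# [1] FALSE
1 + .Machine$double.eps / 2 == 1  # anything smaller is rounded away
# [1] TRUE
```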
## How to Use It
Clamp your values away from zero:
```{r}
safe_vals <- pmax(vals, .Machine$double.eps)
```
If a value is `0` (or anything below epsilon), it gets bumped up to `2.2e-16`; values already above epsilon are left untouched.
Now `log(safe_vals)` is finite instead of `-Inf`.
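If you’d rather not repeat the `pmax()` call everywhere, you can wrap it once. This is just a sketch; `safe_log` is a name I made up, not a function from Seurat or base R.

```{r}
# Hypothetical helper: floors the input at machine epsilon before taking the log
safe_log <- function(x, floor = .Machine$double.eps) {
  log(pmax(x, floor))
}

safe_log(0)
# [1] -36.04365
```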
## Quick Demo
```{r}
vals <- c(0, 1e-10, 1e-5, 1)
safe_vals <- pmax(vals, .Machine$double.eps)
log(vals)
# [1] -Inf -23.02585 -11.51293 0.00000
log(safe_vals)
# [1] -36.04365 -23.02585 -11.51293 0.00000
```
Notice how the first value is now finite (`log(.Machine$double.eps)` ≈ -36.04) instead of blowing up, while the other three are unchanged.
## Takeaway
If you ever worry about `log(0)` in R: clamp your values at `.Machine$double.eps` with `pmax()`.
It’s a principled safety floor that keeps `-Inf` and `NaN` out of your results.