We aim to estimate some quantity $\mu$ for which we have a prior $\mu \sim N(0, 1)$. Assume that $\mu$ has true value 1. We take a series of noisy measurements of $\mu$, $Y_i \sim N(\mu, 1) = N(1, 1)$; assume we know the measurement noise, i.e., that the variance of these measurements is 1.
At any point $t$, when we have observed $Y_1, Y_2, \dots, Y_t$, our posterior mean estimate of $\mu$ is $E(t) = \frac{1}{1+t}\sum_{i=1}^t Y_i$ (this is [a conjugate update of our prior](https://en.m.wikipedia.org/wiki/Conjugate_prior#When_likelihood_function_is_a_continuous_distribution)). Notice that $E(t+1)-E(t)=\frac{Y_{t+1}}{t+2} - \frac{\sum_{i=1}^t Y_i}{(t+1)(t+2)}$, which is Gaussian with a positive expectation, $\frac{\mu}{(t+1)(t+2)}$, since the true $\mu = 1 > 0$. Therefore, we should expect positive updates: because the prior mean (0) sits below the true value, each new observation pulls the estimate upward on average.
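As a sanity check, here is a minimal simulation sketch (Python/NumPy; the variable names are illustrative, not from the text) that draws the $Y_i$, tracks the posterior mean $E(t)$, and compares the average observed update $E(t+1)-E(t)$ against the expression $\mu/((t+1)(t+2))$ above.

```python
import numpy as np

rng = np.random.default_rng(0)

mu_true = 1.0      # true value of mu
sigma = 1.0        # known measurement standard deviation
T = 20             # observations per run
n_runs = 200_000   # Monte Carlo replications

# Draw all measurements: Y_i ~ N(mu_true, sigma^2), shape (n_runs, T)
Y = rng.normal(mu_true, sigma, size=(n_runs, T))

# Posterior mean after t observations with a N(0, 1) prior:
# E(t) = (sum_{i<=t} Y_i) / (1 + t)
t = np.arange(1, T + 1)
E = np.cumsum(Y, axis=1) / (1 + t)   # E[:, t-1] is E(t)

# Average update E(t+1) - E(t) across runs, for t = 1..T-1
empirical = (E[:, 1:] - E[:, :-1]).mean(axis=0)

# Expected update from the text: mu / ((t+1)(t+2))
ts = np.arange(1, T)
theoretical = mu_true / ((ts + 1) * (ts + 2))

for t_, emp, theo in zip(ts, empirical, theoretical):
    print(f"t={t_:2d}  empirical={emp:+.5f}  theoretical={theo:+.5f}")
```

With enough replications the empirical column should sit close to the theoretical one, and both stay positive while shrinking toward zero as $t$ grows.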
The probability of a positive update would, in reality, depend on how noisy the observations are and how good our prior is.
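To make that dependence concrete, here is a hedged Monte Carlo sketch (Python/NumPy again; `prob_positive_update` is a hypothetical helper, not something from the text). It uses the standard conjugate-normal posterior mean for a general $N(m_0, \tau^2)$ prior and $N(\mu, \sigma^2)$ noise, and estimates the probability of a positive update at a fixed step $t$ for a few noise levels and prior means.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_positive_update(mu_true, sigma, m0, tau, t, n_runs=100_000):
    """Estimate P(E(t+1) > E(t)) by simulation, where E(t) is the
    posterior mean after t observations under a N(m0, tau^2) prior
    and N(mu_true, sigma^2) measurement noise (conjugate-normal update)."""
    Y = rng.normal(mu_true, sigma, size=(n_runs, t + 1))

    def post_mean(k):
        s = Y[:, :k].sum(axis=1)
        return (m0 / tau**2 + s / sigma**2) / (1 / tau**2 + k / sigma**2)

    return np.mean(post_mean(t + 1) > post_mean(t))

# True value mu = 1, as in the text; vary the noise level and the prior mean.
for sigma in (0.5, 1.0, 2.0):
    for m0 in (0.0, 1.0, 2.0):
        p = prob_positive_update(mu_true=1.0, sigma=sigma, m0=m0, tau=1.0, t=5)
        print(f"sigma={sigma:.1f}  prior mean={m0:.1f}  P(update > 0) ~ {p:.3f}")
```

Roughly, with the prior centered below the truth (as in our setup) the probability sits above 1/2 and is higher when the noise is small; with a prior centered at or above the truth it falls to 1/2 or below.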