# Notes on the Kelly Criterion
Suppose you are given the choice between two bets, one of which has the larger expectation and the other the larger geometric expectation. You can only bet once. Which bet do you choose?
A variant of the question:
Suppose you are given the opportunity to take a bet with positive expectation and you are free to choose the bet size. You can only make the bet once. What bet size should you choose?
The answer to the question is not so simple and might depend a lot on additional information.
Let us first look at the problem of repeated bets.
## Repeated Bets
### Sums and Maximizing Expectation
$$X_t = \sum_{s = 1}^t A_{1, s}$$
$$Y_t = \sum_{s = 1}^t A_{2, s}$$
where the $(A_{i, s})_s$ are independent copies of $A_i$ and $\mathbb E[A_1] > \mathbb E[A_2]$. This models the following case:
- first bet has larger expectation
- you can take the bet over and over
- bet size fixed, or chosen once
- bet size small compared to bankroll or easy to get loan: no risk of ruin
the total wealth is the sum of all the outcomes
due to the law of large numbers one can show that eventually $X_t > Y_t$, i.e.,
$$\mathbb P(\exists t_0 \forall t \geq t_0: X_t > Y_t) = 1.$$
(assuming the $A_i$ are integrable, this follows from the strong law of large numbers: $(X_t - Y_t)/t \to \mathbb E[A_1] - \mathbb E[A_2] > 0$ almost surely, so $X_t > Y_t$ for all sufficiently large $t$; weaker statements of the form $\lim_{t\to\infty}\mathbb P(X_t > Y_t) = 1$ follow even more easily.)
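As a quick illustration (not part of the argument above), here is a minimal simulation sketch with two made-up additive bets: $A_1$ wins $2$ or loses $1$ with equal probability ($\mathbb E[A_1] = 0.5$), while $A_2$ pays $0.4$ every round ($\mathbb E[A_2] = 0.4$). The distributions and parameters are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000  # number of repeated bets

# made-up additive bets (parameters chosen for illustration only):
# A1: win 2 or lose 1 with equal probability -> E[A1] = 0.5
# A2: win 0.4 every round                    -> E[A2] = 0.4 < E[A1]
a1 = rng.choice([2.0, -1.0], size=T)
a2 = np.full(T, 0.4)

X = np.cumsum(a1)  # X_t = sum of A_{1,s} for s <= t
Y = np.cumsum(a2)  # Y_t = sum of A_{2,s} for s <= t

# by the strong law of large numbers X_t/t -> 0.5 and Y_t/t -> 0.4,
# so X_t > Y_t for all sufficiently large t (almost surely)
behind = np.nonzero(X <= Y)[0]
print("last time t with X_t <= Y_t:", behind[-1] + 1 if behind.size else "never")
```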
### Products and Maximizing Geometric Growth
$$X_t = \prod_{s = 1}^t M_{1, s}$$
$$Y_t = \prod_{s = 1}^t M_{2, s}$$
where the $(M_{i, s})_s$ are independent copies of $M_i$ and $\mathbb E[\log(M_1)] < \mathbb E[\log(M_2)]$. This models the following case:
- second bet has larger geometric expectation
- you take the bet over and over
- bet changes bankroll in relative manner
look at
$$\log(X_t) = \sum_{s = 1}^t \log(M_{1, s}), \quad \log(Y_t) = \sum_{s = 1}^t \log(M_{2, s})$$
instead. this is possible since inequalities of the type $X_t < Y_t$ are equivalent to $\log(X_t) < \log(Y_t)$, so we can choose to study the latter. in this multiplicative case, working with the logarithm is helpful since sums of random variables fit the setting of the law of large numbers and are therefore easier to analyse. essentially we can apply the techniques from above to the new bets $A_i^\text{geom} = \log(M_i)$.
since $\mathbb E[A_1^\text{geom}] = \mathbb E[\log(M_1)] < \mathbb E[\log(M_2)] = \mathbb E[A_2^\text{geom}]$, by the argument above we eventually have $\log(X_t) < \log(Y_t)$ and thus
$$ \mathbb P(\exists t_0 \forall t \geq t_0: X_t < Y_t) = 1.$$
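Again a minimal simulation sketch with made-up distributions: $M_1$ multiplies wealth by $1.8$ or $0.5$ (fair coin), $M_2$ by $1.1$ or $0.95$ (fair coin). Note that $\mathbb E[M_1] > \mathbb E[M_2]$, yet $\mathbb E[\log M_1] < \mathbb E[\log M_2]$, so the second bet should dominate eventually.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# made-up multiplicative bets (parameters for illustration only):
# M1: factor 1.8 or 0.5 (fair coin)  -> E[M1] = 1.15,  E[log M1] ~ -0.053
# M2: factor 1.1 or 0.95 (fair coin) -> E[M2] = 1.025, E[log M2] ~ +0.022
m1 = rng.choice([1.8, 0.5], size=T)
m2 = rng.choice([1.1, 0.95], size=T)

# work in log space: log X_t = sum of log M_{1,s} (also avoids under-/overflow)
logX = np.cumsum(np.log(m1))
logY = np.cumsum(np.log(m2))

# despite E[M1] > E[M2], the bet with the larger E[log M] wins out eventually
print("log X_T =", round(logX[-1], 1), " log Y_T =", round(logY[-1], 1))
print("fraction of rounds with X_t < Y_t:", np.mean(logX < logY))
```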
### Combining Products and Sums
$X_{t + 1} = M_t X_t + A_t$ does not easily fall into the categories above. cf. sushi rewards
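For reference, a minimal simulation sketch of this affine recursion with made-up distributions for $M_t$ and $A_t$; it only shows the mixed dynamics and does not answer which of two such strategies is better.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1_000

# made-up mixed dynamics: a multiplicative factor M_t plus an additive payoff A_t
M = rng.choice([1.05, 0.97], size=T)   # relative change of the existing bankroll
A = rng.choice([1.0, -0.5], size=T)    # additive payoff of the round

X = np.empty(T + 1)
X[0] = 10.0                            # initial bankroll (arbitrary)
for t in range(T):
    X[t + 1] = M[t] * X[t] + A[t]      # X_{t+1} = M_t X_t + A_t

print("final wealth X_T:", round(X[-1], 2))
```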
### A generalization
consider a growth compensation function $f \colon [0, \infty[ \times \mathbb R \to \mathbb R$ that is strictly increasing in the second argument: $x < y$ implies $f(t, x) < f(t, y)$.
if
$$\lim_{t \to \infty} \mathbb E[f(t, X_t)] = r_1 < r_2 = \lim_{t \to \infty} \mathbb E[f(t, Y_t)]$$
and
$$\lim_{t \to \infty} \mathbb V[f(t, X_t)] = \lim_{t \to \infty} \mathbb V[f(t, Y_t)] = 0$$
then, with probability tending to one,
$$X_t < Y_t.$$
- in the case of additive bets, the function $f(t, x) = \frac{x}{t}$ can be used; the bet with higher expectation wins since
$$r_1 = \lim_{t \to \infty} \mathbb E[f(t, X_t)] = \lim_{t \to \infty} \mathbb E[\frac{\sum_{s = 1}^t A_{1, s}}{t}] = \lim_{t \to \infty} \frac{1}{t} \sum_{s = 1}^t \mathbb E[A_{1, s}] = \mathbb E[A_1]$$
and similarly $r_2 = \lim_{t \to \infty} \mathbb E[f(t, Y_t)] = \mathbb E[A_2]$
$$\lim_{t \to \infty} \mathbb V[f(t, X_t)] = \lim_{t \to \infty} \mathbb V[\frac{\sum_{s = 1}^t A_{1, s}}{t}] = \lim_{t \to \infty} \frac{1}{t^2} \sum_{s = 1}^t \mathbb V[A_{1, s}] = 0$$
and similarly $\lim_{t \to \infty} \mathbb V[f(t, Y_t)] = 0$
- in the case of multiplicative bets, the function $f(t, x) = \frac{\log(x)}{t}$ can be used; the bet with higher geom. expectation wins since
$$r_1 = \lim_{t \to \infty} \mathbb E[f(t, X_t)] = \lim_{t \to \infty} \mathbb E[\frac{\sum_{s = 1}^t \log(M_{1, s})}{t}] = \mathbb E[\log(M_1)]$$
and similarly $r_2 = \lim_{t \to \infty} \mathbb E[f(t, Y_t)] = \mathbb E[\log(M_2)]$
$$\lim_{t \to \infty} \mathbb V[f(t, X_t)] = \lim_{t \to \infty} \mathbb V[\frac{\sum_{s = 1}^t \log(M_{1, s})}{t}] = \lim_{t \to \infty} \frac{1}{t^2} \sum_{s = 1}^t \mathbb V[\log(M_{1, s})] = 0$$
and similarly $\lim_{t \to \infty} \mathbb V[f(t, Y_t)] = 0$
- if one uses the function $f(t, x) = \frac{\log(x)}{t}$ for the additive case, then $r_1 = r_2 = 0$ and one cannot deduce, using this $f$, which strategy is better
- if one uses the function $f(t, x) = \frac{x}{t}$ for the multiplicative case (with $\mathbb E[M_i] > 1$), then $r_1 = r_2 = +\infty$ and one cannot deduce, using this $f$, which strategy is better
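The scalings in the list above can be checked numerically. Here is a minimal Monte Carlo sketch for the additive case with $f(t, x) = x/t$, using the made-up bet $A_1$ from earlier ($+2$ or $-1$ with equal probability): the estimated $\mathbb E[f(t, X_t)]$ stabilizes near $\mathbb E[A_1] = 0.5$ while the estimated variance shrinks roughly like $1/t$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 2_000, 2_000

# made-up additive bet A1: win 2 or lose 1 with equal probability (E[A1] = 0.5)
A = rng.choice([2.0, -1.0], size=(n_paths, T))
X = np.cumsum(A, axis=1)                       # X_t along each simulated path

for t in (10, 100, 2_000):
    f = X[:, t - 1] / t                        # f(t, X_t) = X_t / t
    # E[f(t, X_t)] should approach E[A1] = 0.5, V[f(t, X_t)] should shrink like 1/t
    print(f"t={t:5d}  E~{f.mean():.3f}  V~{f.var():.4f}")
```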
Proposition: Let $(X_t)_{t \in \mathbb N}$ and $(Y_t)_{t \in \mathbb N}$ be families of real-valued random variables, and let $f\colon \mathbb N \times \mathbb R \to \mathbb R$ be a function that is strictly increasing in the second argument.
If $\mathbb E[f(t, X_t)]$, $\mathbb E[f(t, Y_t)]$, $\mathbb V[f(t, X_t)]$, and $\mathbb V[f(t, Y_t)]$ all exist and
$$\lim_{t \to \infty} \mathbb E[f(t, X_t)] = r_1 < r_2 = \lim_{t \to \infty} \mathbb E[f(t, Y_t)]$$
and
$$\lim_{t \to \infty} \mathbb V[f(t, X_t)] = \lim_{t \to \infty} \mathbb V[f(t, Y_t)] = 0$$
then
$$\lim_{t\to\infty}\mathbb P(X_t < Y_t) = 1.$$
Proof:
We show that the probability of the complementary event $\mathbb P(X_t \geq Y_t)$ becomes small using [Chebyshev's inequality](https://en.wikipedia.org/wiki/Chebyshev%27s_inequality).
By the monotonicity of $f$, the inequality $X_t \geq Y_t$ is equivalent to $f(t, X_t) \geq f(t, Y_t)$, which is in turn equivalent to
$$f(t, X_t) - f(t, Y_t) - \mathbb E[f(t, X_t) - f(t, Y_t)] \geq -\mathbb E[f(t, X_t) - f(t, Y_t)].$$
Hence the probabilities of the two events coincide,
$$\mathbb P(X_t \geq Y_t) = \mathbb P(f(t, X_t) - f(t, Y_t) - \mathbb E[f(t, X_t) - f(t, Y_t)] \geq -\mathbb E[f(t, X_t) - f(t, Y_t)]).$$
Introducing absolute values can only make the probability larger,
$$\mathbb P(X_t \geq Y_t) \leq \mathbb P(|f(t, X_t) - f(t, Y_t) - \mathbb E[f(t, X_t) - f(t, Y_t)]| \geq -\mathbb E[f(t, X_t) - f(t, Y_t)]).$$
Since $r_1 < r_2$, for sufficiently large $t$ we will always have $-\mathbb E[f(t, X_t) - f(t, Y_t)] > 0$.
Thus, for such $t$, we can apply Chebyshev's inequality to obtain
$$\mathbb P(X_t \geq Y_t) \leq \frac{\mathbb V[f(t, X_t) - f(t, Y_t)]}{\mathbb E[f(t, X_t) - f(t, Y_t)]^2}.$$
Since, by the Cauchy-Schwarz inequality,
$$\mathbb V[f(t, X_t) - f(t, Y_t)] \leq \mathbb V[f(t, X_t)] + 2\mathbb V[f(t, X_t)]^\frac{1}{2}\mathbb V[f(t, Y_t)]^\frac{1}{2} + \mathbb V[f(t, Y_t)],$$
the numerator converges to $0$ for $t \to \infty$, while the denominator converges to $(r_1 - r_2)^2 > 0$. Thus $\lim_{t\to\infty}\mathbb P(X_t \geq Y_t) = 0$ and consequently
$$\lim_{t\to\infty}\mathbb P(X_t < Y_t) = 1.$$
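A Monte Carlo sanity check of this bound (not part of the proof), for the multiplicative example with $f(t, x) = \log(x)/t$ and the made-up distributions used earlier: both the empirical $\mathbb P(X_t \geq Y_t)$ and the Chebyshev bound $\mathbb V[f(t, X_t) - f(t, Y_t)] / \mathbb E[f(t, X_t) - f(t, Y_t)]^2$ go to $0$ as $t$ grows (the bound is vacuous for small $t$).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 2_000, 1_000
steps = np.arange(1, T + 1)

# multiplicative bets from the earlier sketch (made-up parameters)
logM1 = np.log(rng.choice([1.8, 0.5], size=(n_paths, T)))
logM2 = np.log(rng.choice([1.1, 0.95], size=(n_paths, T)))
fX = np.cumsum(logM1, axis=1) / steps          # f(t, X_t) = log(X_t) / t
fY = np.cumsum(logM2, axis=1) / steps

for t in (10, 100, 1_000):
    D = fX[:, t - 1] - fY[:, t - 1]            # f(t, X_t) - f(t, Y_t) across paths
    bound = D.var() / D.mean() ** 2            # V[...] / E[...]^2 (Chebyshev bound)
    print(f"t={t:5d}  P(X_t >= Y_t) ~ {np.mean(D >= 0):.3f}  bound ~ {bound:.3f}")
```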
### Remarks on Utility
utility did not play a role in the previous two subsections. this might seem unintuitive, but it is in fact so:
let $u$ be a (strictly increasing) monotone function and suppose we are interested in maximizing $u(X_t)$ for large $t$. then the monotonicity of $u$ implies that $u(X_t)$ is largest precisely when $X_t$ is. so all the arguments from above apply.
#### instructive examples
suppose we are in the multiplicative example from above and we are not interested in $X_t$ but only in $\log(X_t)$. looking directly at $\log(X_t) = \sum_{s = 1}^t \log(M_{1, s})$, we have an additive structure. the right tool for the asymptotic behavior is the expectation of the individual summands. it results in the same analysis as above.
suppose we are in the additive example from above and we are not interested in $X_t$ but only in $\log(X_t)$. looking directly at $\log(X_t) = \log(\sum_{s = 1}^t A_{1, s})$, we see that the easy-to-analyse additive structure emerges after exponentiation. this results in the same analysis as above.
## Single Bet
not clear to me at all, might depend on utility, difficult to reason about due to unclear meaning of probabilities