# TD 1 - Session 2
## A) Binomial law
### 4/ Limit $M\to\infty$
Using the Stirling formula
$$
\log(n!)\approx n\log n-n
$$
expand the expression for $\Pi_M(n)$ for large $M$ and $n$. Recall that $0\leq n\leq M$ and that
$$
\Pi_M(n)=\binom{M}{n}p^{n}(1-p)^{M-n}\,.
$$
Take the logarithm on both sides
$$
\log(\Pi_M(n))=\log[\binom{M}{n}p^{n}(1-p)^{M-n}]=\log(M!)-\log(n!)-\log((M-n)!)+n\log (p)+(M-n)\log(1-p)
$$
Using the Stirling approximation
$$
\log(\Pi_M(n))\approx M\log(M)-n\log n -(M-n)\log(M-n)+n\log(p)+(M-n)\log(1-p)
$$
We want to find the value $n^*$ that maximises the probability. This is also called the typical value.
$$
\partial_n \log(\Pi_M(n))\approx -\log n-1 +\log(M-n)+1+\log(p)-\log(1-p)=\log\left(\frac{M-n}{n}\,\frac{p}{1-p}\right)=0
$$
Setting the argument of the logarithm to one, i.e., $(M-n^*)p=n^*(1-p)$, gives $n^*=Mp=\langle n\rangle$.
$$
\partial_n^2 \log(\Pi_M(n))\approx -\frac{1}{n}-\frac{1}{M-n}
$$
$$
\partial_n^2 \log(\Pi_M(n^*))\approx -\frac{1}{Mp}-\frac{1}{M(1-p)}=-\frac{1}{Mp(1-p)}
$$
Taylor expanding a function $f$ to second order around a maximum $n^*$, where $f'(n^*)=0$, gives
$$
f(n)\approx f(n^*)+\frac12 f''(n^*)(n-n^*)^2
$$
In our case, we find
$$
\log(\Pi_M(n))\approx \log(\Pi_M(n^*)) -\frac{1}{2Mp(1-p)}(n-n^*)^2
$$
Taking the exponential on both sides
$$
\Pi_M(n)\approx \Pi_M(n^*)\exp( -\frac{(n-n^*)^2}{2Mp(1-p)})
$$
This is a Gaussian distribution with mean $\langle n\rangle=Mp$ and variance $Var(n)=Mp(1-p)$. The approximation is valid for $M\gg1$, $n\gg1$, and $M-n\gg1$. Since the distribution concentrates within a width $\sqrt{Var(n)}$ around $n^*$, these conditions require $n^*\gg \sqrt{Var(n)}$, which implies $Mp\gg 1$ and, similarly, $M(1-p)\gg 1$.
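As an illustration, here is a minimal numerical check that the exact binomial $\Pi_M(n)$ is close to the normalized Gaussian with the stated mean and variance near $n^*$; the values of $M$ and $p$ are arbitrary choices.

```python
# A minimal numerical check (M and p are arbitrary choices) that the exact
# binomial law is close to its Gaussian approximation near n* = Mp.
import math

M, p = 1000, 0.3
mean, var = M * p, M * p * (1 - p)

def binom_pmf(n):
    """Exact binomial probability Pi_M(n)."""
    return math.comb(M, n) * p**n * (1 - p) ** (M - n)

def gauss(n):
    """Gaussian approximation with mean Mp and variance Mp(1-p)."""
    return math.exp(-((n - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

for n in range(int(mean) - 30, int(mean) + 31, 15):
    print(f"n={n:4d}  exact={binom_pmf(n):.6f}  gaussian={gauss(n):.6f}")
```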
## B) Walker
We denote by $x$ the position of the walker. We assume that the length of a step is $a>0$ and that the time required for the walker to take a step is $\tau>0$.
### 1/
Write $x$ as a function of $a$, $M$, and $n$.
$$
x=an-a(M-n)=a(2n-M)
$$
We would like to compute the first two moments of $x$. Recall that the average of the sum of two random variables $X$ and $Y$ is
$$
\langle a X+b Y+c\rangle=a\langle X\rangle+b\langle Y\rangle +c
$$
where $a,b,$ and $c$ are constants. On the other hand, the variance satisfies
$$
Var(a X+b Y+c)=a^2 Var(X)+b^2 Var(Y)
$$
if $X$ and $Y$ are independent.
In our case,
$$
\langle x\rangle=\langle a(2n-M)\rangle=2a\langle n\rangle -aM=2a Mp-aM=aM(2p-1)
$$
In the particular case where $p=1/2$ (unbiased random walker) $\langle x\rangle=0$. Similarly, the variance of $x$ is
$$
Var(x)=Var( a(2n-M))=4a^2Var(n)=4a^2Mp(1-p)
$$
It is useful to introduce the total time $t=\tau M$.
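As a sanity check, here is a minimal Monte Carlo sketch comparing the empirical moments with $\langle x\rangle=aM(2p-1)$ and $Var(x)=4a^2Mp(1-p)$; the parameter values and sample size are arbitrary choices.

```python
# A Monte Carlo sketch (parameter values and sample size are arbitrary choices)
# checking <x> = aM(2p-1) and Var(x) = 4 a^2 M p(1-p).
import random

a, M, p = 1.0, 500, 0.6
samples = 20000

xs = []
for _ in range(samples):
    n = sum(random.random() < p for _ in range(M))  # number of steps to the right
    xs.append(a * (2 * n - M))

mean = sum(xs) / samples
var = sum((x - mean) ** 2 for x in xs) / samples

print("empirical: <x> =", round(mean, 2), " Var(x) =", round(var, 2))
print("predicted: <x> =", a * M * (2 * p - 1), " Var(x) =", 4 * a**2 * M * p * (1 - p))
```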
### 2/ Drift velocity
The drift velocity is defined as
$$
V=\lim_{t\to\infty}\frac{\langle x\rangle}{t}
$$
For our random walker we get
$$
V=\frac{a}{\tau}(2p-1)\,.
$$
For $p=1/2$, there is no drift: $V=0$.
### 3/ Diffusion constant
By definition
$$
D=\frac{Var(x)}{2t}
$$
In our case
$$
D=\frac{Var(x)}{2\tau M}=\frac{4a^2Mp(1-p)}{2\tau M}=2p(1-p)\frac{a^2}{\tau}\,.
$$
The dimension of this quantity is $[D]=L^2/T$. For $p=1/2$, we recover $D=a^2/(2\tau)$.
### 4/ Continuum limit
We want to describe the probability density function (PDF) $p_t(x)$ of $x$. By definition, the probability that $x\in (y,y+dy)$ is $p_t(y)dy$. We expect $x$ to be Gaussian with mean $\langle x\rangle =aM(2p-1)$ and variance $Var(x)=4a^2Mp(1-p)$:
$$
p_t(x)\approx \frac{1}{\sqrt{2\pi Var(x)}}e^{-(x-\langle x\rangle)^2/(2Var(x))}
$$
Using the definitions of $V$ and $D$, i.e., $\langle x\rangle=Vt$ and $Var(x)=2Dt$, we obtain
$$
p_t(x)\approx \frac{1}{\sqrt{4\pi D t}}e^{-(x-Vt)^2/(4Dt)}
$$
Since successive values of $n$ correspond to positions $x=a(2n-M)$ spaced by $dx=2a$, you can also check that
$$
\Pi_M(n)=p_t(x)dx=p_t(x)2a.
$$
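The sketch below (parameter values are arbitrary choices) compares the exact $\Pi_M(n)$ with the continuum approximation $p_t(x)\,2a$, using the drift velocity and diffusion constant derived above.

```python
# A sketch comparing the exact walker law Pi_M(n) with the continuum Gaussian
# p_t(x) times the spacing dx = 2a. All parameter values are arbitrary choices.
import math

a, tau, M, p = 1.0, 1.0, 400, 0.55
t = tau * M
V = (a / tau) * (2 * p - 1)          # drift velocity
D = 2 * p * (1 - p) * a**2 / tau     # diffusion constant

def discrete(n):
    """Exact Pi_M(n); the walker sits at x = a(2n - M)."""
    return math.comb(M, n) * p**n * (1 - p) ** (M - n)

def continuum(x):
    """Continuum density p_t(x) multiplied by the spacing dx = 2a."""
    return 2 * a * math.exp(-((x - V * t) ** 2) / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

for n in range(200, 241, 10):
    x = a * (2 * n - M)
    print(f"x={x:6.1f}  Pi_M(n)={discrete(n):.6f}  p_t(x)*2a={continuum(x):.6f}")
```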
## D) The d-dimensional case and applications
We consider a random walker in $d$ dimensions. At every step, the random walker takes a jump $\delta \vec{x}=h_1 \vec{e_1}+h_2\vec{e_2}+\ldots+h_d\vec{e_d}$, where $h_1,\ldots,h_d$ are random variables and $\vec{e_1},\ldots,\vec{e_d}$ are the unit vectors in the different directions. The random variables $h_i$ are independent and identically distributed (i.i.d.) from a probability distribution $p(h)$. For instance, it can be
$$
p(h)=\begin{cases}
p & \text{if } h=1\\
1-p & \text{if } h=-1.
\end{cases}
$$
Alternatively, you can consider continuous distributions, e.g., $p(h)\propto e^{-h^2/2}$, i.e., a Gaussian distribution. We simply assume that $Var(h)<\infty$, so that the central limit theorem (CLT) holds.
### 1/
If we look at the motion in the direction $i$ (let us call $x_i$ the position in the $i$-th direction), we have that
$$
x_i=\sum_{j=1}^{M}h_i^{(j)}\,,
$$
where $h_i^{(j)}$ is the jump in direction $i$ at step $j$. Therefore, by the CLT, the distribution of $x_i$ at time $t=\tau M$, for $M\gg 1$, is
$$
p_t(x_i)=\frac{1}{\sqrt{4\pi Dt}}e^{-x_i^2/(4Dt)}\,,
$$
where I have assumed that $\langle h\rangle=0$ and where now $D=Var(h)/(2\tau)$.
### 2/
Let's recall some notions of independence/joint distribution/marginal distribution. We consider two variables $X$ and $Y$. Let the joint distribution of $X$ and $Y$ be (for discrete variables)
$$
p(x,y)=Prob.(X=x , Y=y)
$$
You can also define the marginal probability distribution of $X$ as
$$
p(x)=Prob.(X=x )
$$
This quantity is related to the joint distribution $p(x,y)$ by
$$
p(x)=\sum_y p(x,y)
$$
If $X$ and $Y$ are independent, you have
$$
p(x,y)=p(x)p(y)
$$
In our case we have $d$ random variables $x_1,\ldots , x_d$ and we want to know the joint distribution
$$
p_t(\vec{x})=p_t(x_1)p_t(x_2)\ldots p_t(x_d)
$$
since the different variables are independent. Using the expression for $p_t(x_i)$, we find
$$
p_t(\vec{x})=\frac{1}{(4\pi D t)^{d/2}}e^{-(x_1^2+\ldots +x_d^2)/(4Dt)}=\frac{1}{(4\pi D t)^{d/2}}e^{-\vec{x}^2/(4Dt)}
$$
### 3/ Joint law versus marginal law
Let's consider the two-dimensional case. Let's call $x_1=x$ and $x_2=y$. We want to compute the marginal probability distribution $Q_t(r)$ of $r=\sqrt{x^2+y^2}$. We know the distribution of $x$ and $y$, which is
$$
p_t(x,y)=\frac{1}{4\pi D t}e^{-(x^2+y^2)/(4Dt)}
$$
To do this, we perform the change of variables $(x,y)\to (r,\theta)$, with $x=r \cos(\theta)$, $y=r\sin(\theta)$, and Jacobian $dx\,dy=r\,dr\,d\theta$. After a few steps of algebra, you get
$$
Q_t(r,\theta)=r \frac{1}{4\pi D t}e^{-r^2/(4Dt)}
$$
Integrating over $\theta\in(0,2\pi)$, we obtain
$$
Q_t(r)=r \frac{1}{2 D t}e^{-r^2/(4Dt)}
$$
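A short Monte Carlo sketch can check this result. Here I assume unit-variance Gaussian jumps, so that $D=Var(h)/(2\tau)=1/(2\tau)$ as in part 1/; the values of $\tau$, $M$, and the sample size are arbitrary choices.

```python
# A sketch checking Q_t(r) against a simulated 2D walk. Gaussian unit-variance
# steps are assumed, so D = Var(h)/(2*tau) = 1/(2*tau); parameters are arbitrary.
import math
import random

tau, M, samples = 1.0, 100, 20000
t = tau * M
D = 1.0 / (2.0 * tau)  # Var(h) = 1 for unit-variance Gaussian steps

rs = []
for _ in range(samples):
    x = sum(random.gauss(0.0, 1.0) for _ in range(M))
    y = sum(random.gauss(0.0, 1.0) for _ in range(M))
    rs.append(math.hypot(x, y))

# Compare a coarse histogram with Q_t(r) dr = r/(2 D t) exp(-r^2/(4 D t)) dr.
dr = 4.0
for k in range(6):
    lo, hi = k * dr, (k + 1) * dr
    frac = sum(lo <= r < hi for r in rs) / samples
    rmid = 0.5 * (lo + hi)
    pred = rmid / (2 * D * t) * math.exp(-(rmid**2) / (4 * D * t)) * dr
    print(f"r in [{lo:4.1f},{hi:4.1f})  empirical={frac:.4f}  predicted={pred:.4f}")
```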
The average value of $r$ is, by definition,
$$
\langle r\rangle=\int_{0}^{\infty}dr~r~Q_t(r)\,.
$$
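For completeness, this integral can be evaluated with the standard Gaussian integral $\int_0^\infty dr\,r^2 e^{-r^2/a^2}=\frac{\sqrt{\pi}}{4}a^3$, here with $a^2=4Dt$:
$$
\langle r\rangle=\frac{1}{2Dt}\int_{0}^{\infty}dr\,r^{2}e^{-r^{2}/(4Dt)}=\frac{1}{2Dt}\,\frac{\sqrt{\pi}}{4}\,(4Dt)^{3/2}=\sqrt{\pi D t}\,.
$$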