# Lectures 27-28: Anderson Localization
Anderson localization is a phenomenon discovered by P. W. Anderson in 1958. Roughly speaking, the observation is that certain solids with a regular lattice structure conduct electricity, but cease to do so when impurities are added --- this is sometimes called the "metal-insulator transition". Qualitatively, the physical explanation is that impurities cause electrons to become "trapped". The quantitative questions are: how much disorder is required to cause this transition, and what is its nature (gradual vs. sharp)?
Mathematically, the question is modeled as follows. Let $B$ be a $d$-dimensional lattice (with wraparound) on $n$ vertices, and let
$$H_0\psi(x) = \sum_{y\sim x} -\psi(y)$$
be the Anderson "tight-binding" Hamiltonian, where $\sim$ indicates adjacency in $B$. An explicit calculation with Fourier series reveals that the spectrum of $H_0$ is contained in $[-2d,2d]$ and that all of the eigenvectors are "delocalized" in the sense that the corresponding probability measures $|\psi(x)|^2$ are uniform.
Now consider the *random* Hamiltonian
$$H = H_0 + \lambda V$$
where $V$ is a diagonal matrix with i.i.d. entries $V_x$ sampled from the uniform distribution; the parameter $\lambda\ge 0$ is the disorder strength.
A MATLAB experiment reveals that the eigenvectors undergo a drastic change as $\lambda$ varies: for large enough $\lambda$ they become "localized", i.e., concentrated on a few vertices.
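A minimal Python analogue of such an experiment (a sketch; the ring size, the disorder values, and the use of the inverse participation ratio as a localization proxy are choices made here, not taken from the original experiment):

```python
import numpy as np

def anderson_hamiltonian(n, lam, rng):
    """H = H_0 + lam*V on a 1-d ring of n sites: hopping -1, i.i.d. uniform potential."""
    H0 = np.zeros((n, n))
    for x in range(n):
        H0[x, (x + 1) % n] = H0[(x + 1) % n, x] = -1.0
    V = np.diag(rng.uniform(0.0, 1.0, size=n))
    return H0 + lam * V

def mean_ipr(H):
    """Inverse participation ratio sum_x |psi(x)|^4, averaged over all eigenvectors.
    It is ~1/n for delocalized states and ~1 for states living on a few sites."""
    _, psi = np.linalg.eigh(H)          # eigenvectors are the columns of psi
    return np.mean(np.sum(psi**4, axis=0))

rng = np.random.default_rng(0)
n = 200
weak = mean_ipr(anderson_hamiltonian(n, 0.1, rng))
strong = mean_ipr(anderson_hamiltonian(n, 20.0, rng))
print(weak, strong)  # the strongly disordered IPR is much larger
```

At $\lambda=0.1$ the localization length exceeds the system size, so the states look uniform; at $\lambda=20$ they concentrate on a handful of vertices.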
Physicists predict that in $d=1,2$ any $\lambda>0$ leads to localization for sufficiently large $n$, while in $d\ge 3$ there is a critical $\lambda_c$ below which localization does not occur and above which it does. Mathematicians have proven the full prediction only in $d=1$. In higher $d$ there are many proofs of localization for large enough $\lambda$, but *no proofs* of delocalization in general for small $\lambda>0$ (except on the infinite regular tree, which is not a lattice; see Klein '96). Only very recently, Ding and Smart showed that in $d=2$ the *edge* eigenstates are localized for all $\lambda>0$.
In these lectures, we will prove a very general result of Aizenman and Molchanov (1993) which shows that for every $d$, localization occurs for large enough disorder. The notion of localization we use is not entirely obvious, but it implies all of the other reasonable notions (see Hundertmark Sec. 2).
$\newcommand{\E}{\mathbb{E}}$
$\newcommand{\P}{\mathbb{P}}$
**Theorem.** Fix $d\ge 1$ and $s\in (0,1)$, and suppose $\lambda$ is sufficiently large (depending on $d$ and $s$). Suppose $B$ is a graph on $n$ vertices with maximum degree $d$. Let $G(z):=(H-z)^{-1}$ denote the Green's function of the random Hamiltonian $H$ as above. Then for every $x,y\in B$ and $z=E+i\eta$ with $\eta>0$:
$$ \E |G_{xy}(z)|^s \le C\exp(-\alpha d_B(x,y)),$$
where $C,\alpha>0$ are constants depending only on $\lambda,d,s$ and $d_B(\cdot,\cdot)$ is the graph distance in $B$.
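At large disorder this decay is easy to observe numerically. The following sketch (the ring size, $s=1/2$, $\lambda=10$, $z$, and the Monte Carlo setup are all our choices, not part of the theorem) estimates $\E|G_{0y}(z)|^s$ at two distances:

```python
import numpy as np

def frac_moment(n, lam, s, z, dist, trials, rng):
    """Monte Carlo estimate of E|G_{0,dist}(z)|^s on a 1-d ring of n sites."""
    H0 = np.zeros((n, n))
    for x in range(n):
        H0[x, (x + 1) % n] = H0[(x + 1) % n, x] = -1.0
    acc = 0.0
    for _ in range(trials):
        H = H0 + lam * np.diag(rng.uniform(0.0, 1.0, size=n))
        G = np.linalg.inv(H - z * np.eye(n))
        acc += abs(G[0, dist]) ** s
    return acc / trials

rng = np.random.default_rng(1)
z = 0.5 + 0.1j
m1 = frac_moment(12, 10.0, 0.5, z, 1, 200, rng)
m5 = frac_moment(12, 10.0, 0.5, z, 5, 200, rng)
print(m1, m5)  # the moment at distance 5 is typically much smaller
```

The fractional power $s<1$ is what keeps the averages finite despite occasional resonances where $|G_{xy}|$ is large.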
We now explain why this property is a reasonable notion of "localization". We will need the following estimate on the minimum eigenvalue gap of $H$, which we will prove in the next lecture.
**Minami Estimate.** With probability at least $1-O(1/n)$, the minimum gap between distinct eigenvalues of $H$ is at least $\gamma:=1/n^3$.
**Corollary.** With probability $1-O(1/n)$, every normalized eigenvector $\psi_j$ of $H$ satisfies
$$|\psi_j(x)\psi_j(y)|\le O(1/n^5)$$
for every pair of vertices $x,y$ at distance at least $C\log n$, where $C$ depends only on $d,\lambda,s$.
*Proof.* Let $\eta>0$ be a parameter, to be set to $1/n^6$ at the end. Choose $z_1,\ldots,z_k$ with $\mathrm{Im}\,z_i=\eta$ so that every $E\in [-2d-\lambda,2d+\lambda]$ (an interval containing the spectrum of $H$) is within distance $\eta$ of the real part of some $z_i$; note that $k=O((d+\lambda)/\eta)$. For every $x,y\in B$ and $i\le k$, Markov's inequality applied to the theorem gives:
$$ \P [|G_{xy}(z_i)|\ge (Ct)^{1/s}\exp(-\alpha d(x,y)/s)]\le 1/t.$$
Taking a union bound over $x,y,i$ and setting $t=n^3/\eta,$ we have that with probability at least $1-O(n^2/\eta t)=1-O(1/n)$,
$$|G_{xy}(z_i)|\le (Cn^3/\eta)^{1/s}\exp(-\alpha d(x,y)/s).$$
Fix an index $j$. By our choice of the $z_i$ there is some $z_i=E_i+i\eta$ with $|E_i-\lambda_j|\le\eta$, and on the Minami event $|E_i-\lambda_{j'}|\ge\gamma/2$ for every $j'\neq j$ (as $\eta\ll\gamma$). Expanding the Green's function in the eigenbasis of $H$:
$$ \mathrm{Im}\, G_{xy}(z_i) = \eta\sum_{k}\frac{\psi_k(x)\psi_k(y)}{(E_i-\lambda_k)^2+\eta^2} = \sum_k c_k \psi_k(x)\psi_k(y),$$
where $c_j\ge \frac{1}{2\eta}$ and $0<c_k\le \frac{4\eta}{\gamma^2}$ for $k\neq j$. Since $\sum_k|\psi_k(x)\psi_k(y)|\le n$,
$$|\psi_j(x)\psi_j(y)| \le 2\eta\Big(\frac{4\eta}{\gamma^2}\, n + |G_{xy}(z_i)|\Big)\le O(1/n^5)$$
for $d(x,y)\ge C\log n$, by setting $\eta=1/n^6$ (so that $\eta/\gamma^2\le 1$) and using the decay of $|G_{xy}(z_i)|$ above. $\square$
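The expansion of $\mathrm{Im}\,G_{xy}$ in the eigenbasis used in this proof is a generic fact about resolvents of symmetric matrices, and can be checked directly (a sketch with an arbitrary random symmetric matrix; the size and the point $E+i\eta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                       # any real symmetric matrix
E, eta = 0.3, 1e-2
G = np.linalg.inv(H - (E + 1j * eta) * np.eye(n))
lam, psi = np.linalg.eigh(H)            # eigenvalues lam_k, eigenvectors psi[:, k]

x, y = 0, 3
# Im G_xy(E + i*eta) = eta * sum_k psi_k(x) psi_k(y) / ((E - lam_k)^2 + eta^2)
lhs = G[x, y].imag
rhs = eta * np.sum(psi[x, :] * psi[y, :] / ((E - lam) ** 2 + eta ** 2))
print(lhs, rhs)  # the two sides agree
```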
In particular, the corollary implies that for any eigenvector $\psi$, any two vertices $x,y$ with $|\psi(x)|,|\psi(y)|\ge 1/n^2$ must be at distance $O(\log n)$.
**Remark.** The key point of the Aizenman-Molchanov bound is that it does not deteriorate as $\eta\rightarrow 0$. To see why the fractional exponent $s<1$ is required, observe that already for $H_0=0$ the expectation $\E|G_{xx}(E)|$ is infinite whenever $E$ lies in the support of $\lambda V_x$.
The proof of the Aizenman-Molchanov theorem rests on two lemmas. The first lemma estimates the diagonal terms of the Green's function.
$\newcommand{\R}{\mathbb{R}}$
**Lemma 1. (Diagonal Estimate)** For every $z\notin \R$, $x\in B$ with $H$ as above:
$$ \E [|G_{xx}(z)|^s | V_y, y\neq x] \le \frac{1}{(1-s)\lambda^s}.$$
*Proof.* Write $H=\tilde{H}+\lambda V_xe_xe_x^*$. Using the resolvent identity $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$, one derives
$$G_{xx}(z) = \frac{1}{\widetilde{G}_{xx}(z)^{-1}+\lambda V_x},$$
where $\widetilde{G}(z)$ is the Green's function of $\tilde H$ and, crucially, $\widetilde{G}_{xx}(z)^{-1}$ does *not* depend on $V_x$. Thus, conditional on $(V_y)_{y\neq x}$, the expectation is an explicit univariate integral of $|a+\lambda v|^{-s}$ in $v=V_x$, which is easily bounded. $\square$
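The rank-one identity underlying this proof is exact and easy to check numerically (a sketch; the ring, $\lambda$, and $z$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, x = 6, 2.0, 0
H0 = np.zeros((n, n))
for u in range(n):
    H0[u, (u + 1) % n] = H0[(u + 1) % n, u] = -1.0
V = rng.uniform(0.0, 1.0, size=n)
H = H0 + lam * np.diag(V)
z = 0.4 + 0.2j

Htilde = H.copy()
Htilde[x, x] -= lam * V[x]              # remove the rank-one term lam*V_x e_x e_x^*
G = np.linalg.inv(H - z * np.eye(n))
Gt = np.linalg.inv(Htilde - z * np.eye(n))

# G_xx = 1 / (Gtilde_xx^{-1} + lam*V_x), and Gtilde_xx does not depend on V_x
print(G[x, x], 1.0 / (1.0 / Gt[x, x] + lam * V[x]))  # the two values agree
```

This is the Sherman-Morrison formula specialized to the perturbation $\lambda V_x e_xe_x^*$.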
**Lemma 2. (Self-Avoiding Walk Representation)** Assume $B$ is finite, $x\neq y$ are vertices in $B$, and $z\notin \R$. Then
$$ G^B_{xy}(z) = \sum_{w\,\mathrm{SAW}:\,x\rightarrow y} \prod_{j=0}^{|w|} G^{B_j}_{w_jw_j}(z),$$
where the sum is over self-avoiding walks $w$ with $w_0=x$, $w_{|w|}=y$, and $B_j:=B\setminus \{w_0,\ldots,w_{j-1}\}$ (so $B_0=B$).
*Proof.* Expand $G_{xy}^B$ in a Neumann series for large $|z|$. Resum the $xy$-walks as $xx$-loops followed by $x'y$-walks in $B\setminus \{x\}$. This yields the recurrence:
$$ G_{xy} = G_{xx}\sum_{x'\sim x}G_{x'y}^{B\setminus\{x\}}$$
(the hopping weight $-H_{xx'}=1$ per step is absorbed). Induction yields the claim for large $|z|$. Since both sides are rational functions of $z$ with no poles off the real axis, the identity extends to all $z\notin\R$. $\square$
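Lemma 2 can be verified by brute force on a small graph. The following sketch enumerates self-avoiding walks on a 5-cycle and compares the sum against a direct matrix inversion (the graph, disorder, and $z$ are arbitrary choices; the hopping factor $-H_{uv}=1$ is implicit for this Hamiltonian):

```python
import numpy as np

def green(H, z):
    return np.linalg.inv(H - z * np.eye(len(H)))

def saw_sum(H, x, y, z):
    """Sum over self-avoiding walks w: x -> y of prod_j G^{B_j}_{w_j w_j},
    where B_j is B with {w_0, ..., w_{j-1}} removed (Lemma 2)."""
    n = len(H)
    total = 0.0j

    def extend(walk, prod):
        nonlocal total
        u = walk[-1]
        alive = [v for v in range(n) if v not in walk[:-1]]   # B_j
        g = green(H[np.ix_(alive, alive)], z)[alive.index(u), alive.index(u)]
        if u == y:                       # the walk ends on first arrival at y
            total += prod * g
            return
        for v in range(n):               # extend to unvisited neighbors only
            if v != u and H[u, v] != 0 and v not in walk:
                extend(walk + [v], prod * g)

    extend([x], 1.0 + 0.0j)
    return total

rng = np.random.default_rng(4)
n = 5
H0 = np.zeros((n, n))
for u in range(n):
    H0[u, (u + 1) % n] = H0[(u + 1) % n, u] = -1.0
H = H0 + 2.0 * np.diag(rng.uniform(0.0, 1.0, size=n))
z = 0.3 + 0.5j
print(saw_sum(H, 0, 2, z), green(H, z)[0, 2])  # the two values agree
```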
Combining these results, we complete the proof of the theorem as follows.
$$ \E |G_{xy}|^s = \E \Big|\sum_{w\,\mathrm{SAW}:\,x\to y} \prod_{j=0}^{|w|}G_{w_jw_j}^{B_j(w)}\Big|^s \le \E \sum_{w\,\mathrm{SAW}:\,x\to y} \prod_{j=0}^{|w|}\big|G_{w_jw_j}^{B_j(w)}\big|^s,$$
using the inequality $|\sum_i a_i|^s\le \sum_i |a_i|^s$, valid for $s\in(0,1)$ and complex $a_i$. Now by linearity of expectation, each term on the right hand side is at most
$$ \E \Big[\prod_{j=1}^{|w|}\big|G_{w_jw_j}^{B_j(w)}\big|^s\; \E \big[|G_{xx}^{B_0}|^s \,\big|\, V_y, y\neq x\big]\Big] \le \frac{1}{(1-s)\lambda^{s}}\,\E \prod_{j=1}^{|w|}\big|G_{w_jw_j}^{B_j(w)}\big|^s,$$
by Lemma 1, since the factors with $j\ge 1$ do not depend on $V_x$. Iterating, peeling off one vertex of the walk at a time, each term is at most $\left(\frac{1}{(1-s)\lambda^s}\right)^{|w|+1}$. Since the number of self-avoiding walks of length $k$ originating at $x$ is at most $d(d-1)^{k-1}$, and every SAW from $x$ to $y$ has length at least $d_B(x,y)$, summing the geometric series yields the theorem whenever $d-1<(1-s)\lambda^s$, i.e., for $\lambda$ large enough. $\square$
### Minami-type Bound
We now sketch the proof that the minimum eigenvalue gap of $H$ is at least $1/n^3$ with high probability.
*Step 1.* For every $z\notin \R$,
$$ \E\, \mathrm{Im}(G_{xx}(z))\le O(\lambda^{-1}).$$
The proof uses the formula $G_{xx}=(\widetilde{G}_{xx}^{-1}+\lambda V_x)^{-1}$ from Lemma 1: taking the imaginary part yields a Cauchy kernel in $V_x$, whose integral is $O(\lambda^{-1})$ uniformly in the conditioning.
*Step 2.* If $I$ is an interval, then
$$ \E\, \mathrm{tr}(P_I(H))=O(n\lambda^{-1}|I|).$$
The proof combines Stone's formula with the bound from Step 1.
*Step 3.* If $I$ is an interval, then
$$ \P [\mathrm{tr}(P_I(H))\ge 2]\le O(n^2\lambda^{-2}|I|^2).$$
*Step 4.* Choose $k=O(1/\gamma)$ intervals $I_1,\ldots,I_k$ of width $2\gamma$, overlapping so that any pair of eigenvalues within distance $\gamma$ of each other falls inside one of them. Then a union bound with Step 3 gives:
$$\P[\mathrm{mingap}\le\gamma]\le \P[\exists j:\mathrm{tr}(P_{I_j}(H))\ge 2]\le O\Big( \tfrac{1}{\gamma}\cdot\gamma^2n^2 \Big)=O(\gamma n^2),$$
which is at most $O(1/n)$ for $\gamma=1/n^3$.
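As a sanity check, one can sample the minimum gap directly (a sketch; $n$, $\lambda$, and the trial count are arbitrary choices, and the comparison against $1/n^3$ is only a typical-case observation, not a proof):

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam, trials = 20, 1.0, 20
H0 = np.zeros((n, n))
for u in range(n):
    H0[u, (u + 1) % n] = H0[(u + 1) % n, u] = -1.0

min_gaps = []
for _ in range(trials):
    H = H0 + lam * np.diag(rng.uniform(0.0, 1.0, size=n))
    eigs = np.linalg.eigvalsh(H)         # sorted eigenvalues
    min_gaps.append(np.min(np.diff(eigs)))

print(min(min_gaps))  # typically far above gamma = 1/n^3
```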
(These notes are roughly based on https://faculty.math.illinois.edu/~dirk/preprints/localization3.pdf and Chapter 3 of https://uknowledge.uky.edu/cgi/viewcontent.cgi?article=1071&context=math_etds)
