
Lecture 6: Spectral Theorem for Compact Operators

More on Compact Operators

In the last lecture we defined compact operators, and stated that they are equivalent to norm limits of finite rank operators. We begin by proving one direction of this theorem (the other direction is left to the homework).

Proof. Assume $T \in K(H)$, let $B \subset H$ be the unit ball, and let $K = \overline{T(B)}$. Fix an integer $n$. The collection of balls
$$\{y : \|z - y\| < 1/n\}, \qquad z \in K,$$
is an open cover of $K$, so it must have a finite subcover; let $S_n$ denote the finite set of centers of the covering balls. Let $P_n$ be the orthogonal projection onto $\operatorname{span}(S_n)$. Note that $P_n T$ is a finite rank operator since $S_n$ is finite.

Consider a point $x \in B$, and choose $z \in S_n$ such that $\|Tx - z\| < 1/n$. Since $P_n Tx$ is the closest point to $Tx$ in $\operatorname{span}(S_n)$, we must have
$$\|P_n Tx - Tx\| \le \|z - Tx\| < 1/n,$$
whence $\|P_n T - T\| \le 1/n$, so that $\|P_n T - T\| \to 0$ as $n \to \infty$. $\square$

Two interesting families of compact operators are the following.

Integral Kernel Operators. Suppose $K \in L^2((0,1)^2)$. A Cauchy–Schwarz exercise shows that $\|T_K\| \le \|K\|_{L^2((0,1)^2)}$, where
$$T_K f := \int_0^1 K(x,y) f(y)\,dy.$$
Let $\{\phi_i\}$ be an ONB of $L^2(0,1)$. It is easy to check that the family of bivariate functions $\{\phi_i(x)\phi_j(y)\}_{i,j}$ is an ONB of $L^2((0,1)^2)$, so we may expand
$$K(x,y) = \sum_{i,j} c_{ij}\,\phi_i(x)\phi_j(y).$$
Let
$$K_n(x,y) = \sum_{i,j \le n} c_{ij}\,\phi_i(x)\phi_j(y),$$
so that $T_{K_n}$ has finite rank. As the difference $T_K - T_{K_n}$ is an integral kernel operator of the same type, we have
$$\|T_K - T_{K_n}\| \le \|K - K_n\|_{L^2} = \Big(\sum_{\max(i,j) > n} |c_{ij}|^2\Big)^{1/2} \to 0$$
as $n \to \infty$, showing that $T_K$ is compact by the previous theorem.
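As a numerical sanity check (not part of the lecture), one can discretize a concrete kernel and truncate its expansion in an ONB, watching the $L^2$ truncation error, which bounds $\|T_K - T_{K_n}\|$, shrink. The kernel $K(x,y) = \min(x,y)$ and the cosine basis below are illustrative assumptions; a minimal numpy sketch:

```python
import numpy as np

# Discretize the kernel K(x, y) = min(x, y) (an assumed illustrative choice)
# on an N-point midpoint grid and truncate its expansion in the cosine ONB
# of L^2(0,1). The L^2 error of the truncated kernel bounds ||T_K - T_{K_n}||.
N = 400
h = 1.0 / N
x = (np.arange(N) + 0.5) * h           # midpoint grid on (0, 1)
K = np.minimum.outer(x, x)             # kernel matrix K(x_i, y_j)

def phi(k):
    """Discretized cosine ONB of L^2(0,1): phi_0 = 1, phi_k = sqrt(2) cos(k pi x)."""
    return np.ones(N) if k == 0 else np.sqrt(2) * np.cos(k * np.pi * x)

def l2_truncation_error(n):
    """|| K - K_n ||_{L^2} where K_n keeps the coefficients c_ij with i, j < n."""
    Phi = np.stack([phi(k) for k in range(n)])   # n x N matrix of basis rows
    C = h * Phi @ K @ Phi.T * h                  # coefficients c_ij
    Kn = Phi.T @ C @ Phi                         # truncated kernel on the grid
    return np.sqrt(h * h * np.sum((K - Kn) ** 2))

errors = [l2_truncation_error(n) for n in (2, 4, 8, 16)]
print(errors)  # decreasing: the finite rank operators T_{K_n} converge to T_K
```

Since $K_n$ is the orthogonal projection of $K$ onto a growing subspace, the errors are guaranteed to be non-increasing in $n$.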

Diagonal Operators. If $(\alpha_n)$ is any sequence with $|\alpha_n| \to 0$, then the diagonal multiplication operator $T(e_i) = \alpha_i e_i$ is compact, since it is approximated in norm by the finite rank operators $T_n(e_i) = \alpha_i e_i \mathbf{1}\{i \le n\}$, with $\|T - T_n\| = \sup_{i > n} |\alpha_i| \to 0$.
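In a finite-dimensional truncation this is easy to see numerically; the sequence $\alpha_i = 1/i$ below is an assumed illustrative choice:

```python
import numpy as np

# Finite-dimensional sketch: truncating a diagonal operator with alpha_i -> 0
# (here alpha_i = 1/i, chosen for illustration) gives finite rank approximants
# with operator norm error ||T - T_n|| = sup_{i > n} |alpha_i|.
N = 1000
alpha = 1.0 / np.arange(1, N + 1)
T = np.diag(alpha)

def approx_error(n):
    Tn = np.diag(np.where(np.arange(1, N + 1) <= n, alpha, 0.0))
    return np.linalg.norm(T - Tn, ord=2)   # operator (spectral) norm

errs = [approx_error(n) for n in (1, 10, 100)]
print(errs)  # [1/2, 1/11, 1/101]: equals sup_{i > n} |alpha_i| -> 0
```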

Note that multiplication operators on $L^2(0,1)$, $T_g f = g f$ for $g \in L^\infty(0,1)$, are in general not compact. To see this, take $g$ to be the indicator of any set $E$ of positive Lebesgue measure: the image $T_g(B)$ then contains an infinite orthonormal family (supported in $E$ and fixed by $T_g$), i.e., infinitely many vectors pairwise separated by the constant distance $\sqrt{2}$, so no subsequence can converge.
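A discretized sketch of this obstruction (the choices $g = \mathbf{1}_{(0,1/2)}$ and the orthonormal family $e_n(x) = 2\sin(4\pi n x)\,\mathbf{1}_{\{x < 1/2\}}$ are illustrative assumptions, not from the lecture):

```python
import numpy as np

# g = indicator of (0, 1/2), a set of positive measure. The orthonormal
# functions e_n(x) = 2 sin(4 pi n x) on (0, 1/2) are fixed by T_g, so the
# image T_g(B) contains vectors at pairwise distance sqrt(2): no convergent
# subsequence, hence T_g is not compact.
N = 20000
h = 1.0 / N
x = (np.arange(N) + 0.5) * h                 # midpoint grid on (0, 1)
g = (x < 0.5).astype(float)                  # indicator of (0, 1/2)

def e(n):
    return 2.0 * np.sin(4 * np.pi * n * x) * (x < 0.5)

dists = [np.sqrt(h * np.sum((g * e(n) - g * e(m)) ** 2))
         for n in range(1, 5) for m in range(1, 5) if n < m]
print(dists)  # all approximately sqrt(2) = 1.4142...
```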

Finally, we mention that the set $K(H)$ of compact operators on $H$ is a two-sided ideal in $L(H)$, which means that if $T \in K(H)$ then $BT, TB \in K(H)$ for any $B \in L(H)$. These properties are easy to verify from the characterization of $K(H)$ as norm limits of finite rank operators.

The Spectral Theorem for Compact Operators

We will now show that the diagonal operators above are, up to unitary equivalence, essentially the only examples of compact self-adjoint operators.

Definition. If $Tf = \lambda f$ for some $f \in H \setminus \{0\}$, then $f$ is called an eigenvector of $T$ and $\lambda$ is called an eigenvalue.

Theorem. Suppose $T = T^* \in K(H)$. Then:

  1. The eigenvalues of $T$ are real and may be ordered $|\lambda_1| \ge |\lambda_2| \ge \cdots \to 0$.
  2. If $\lambda \ne 0$ is an eigenvalue of $T$, the eigenspace $V_\lambda := \ker(\lambda - T)$ is finite dimensional.
  3. There is an orthonormal basis of $H$ consisting of eigenvectors of $T$.

The last property implies that we have the expansion
$$T = \sum_n \lambda_n\, \phi_n \phi_n^*,$$
where $\phi_n^*$ denotes the linear functional $x \mapsto (\phi_n, x)$ dual to $\phi_n$. Alternately, this can be expressed as
$$T = U D U^*,$$
where $U(e_n) = \phi_n$ is unitary and $D$ is a diagonal multiplication operator.
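In finite dimensions this expansion is exactly the eigendecomposition of a symmetric matrix; a quick numpy check (the random test matrix is just an illustrative choice):

```python
import numpy as np

# Finite-dimensional sketch of T = sum_n lambda_n phi_n phi_n^*
# (equivalently T = U D U^*) for a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2                              # self-adjoint

lam, U = np.linalg.eigh(T)                     # columns of U: orthonormal eigenvectors
T_rebuilt = sum(lam[n] * np.outer(U[:, n], U[:, n]) for n in range(5))

print(np.allclose(T, T_rebuilt))               # True
print(np.allclose(T, U @ np.diag(lam) @ U.T))  # True
```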

Proof of Theorem. We begin by observing that for any eigenvectors $Tv = \lambda v$ and $Tw = \mu w$:
$$\bar{\mu}(v, w) = (v, Tw) = (T^* v, w) = (Tv, w) = \lambda (v, w).$$
Plugging in $v = w$ shows that $\bar{\lambda} = \lambda$, so $\lambda$ must be real. For $\mu \ne \lambda$ we then must have $(v, w) = 0$, so eigenvectors from distinct eigenvalues must be orthogonal.

Lemma 1. For every $\epsilon > 0$, the subspace
$$S_\epsilon := \operatorname{span}\{x : Tx = \lambda x,\ |\lambda| \ge \epsilon\}$$
is finite dimensional.

Proof of Lemma. We first show that for every nonzero eigenvalue $\lambda$, $V_\lambda$ is finite dimensional. Assume not, i.e., there is an infinite sequence of orthonormal vectors $\{x_n\}$ such that $Tx_n = \lambda x_n$. Then $Tx_n \in K = \overline{T(B)}$ and we have
$$\|Tx_n - Tx_m\| = |\lambda|\,\|x_n - x_m\| = \sqrt{2}\,|\lambda|$$
whenever $n \ne m$. But compactness implies that every sequence in $K$ must have a convergent subsequence, so this is impossible.

The orthogonality of distinct eigenspaces (which are closed since they are kernels) implies that
$$S_\epsilon = \bigoplus_{|\lambda| \ge \epsilon} V_\lambda.$$
Assume for contradiction that there are infinitely many direct summands, and choose one unit eigenvector $x_n$, with eigenvalue $\lambda_n$, from each eigenspace. By the same argument as above, we have
$$\|Tx_n - Tx_m\| = \|\lambda_n x_n - \lambda_m x_m\| = \sqrt{|\lambda_n|^2 + |\lambda_m|^2} \ge \epsilon$$
whenever $n \ne m$, since the hypotenuse of a right triangle is at least as long as either of its legs. This is impossible by compactness.
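The norm computation in the last display is just the Pythagorean theorem for the orthogonal vectors $\lambda_n x_n$ and $\lambda_m x_m$; a one-line numerical check (with arbitrary illustrative values):

```python
import numpy as np

# Pythagoras step: for orthonormal x_n, x_m and |lambda_n|, |lambda_m| >= eps,
# || lambda_n x_n - lambda_m x_m || = sqrt(lambda_n^2 + lambda_m^2) >= eps.
xn = np.array([1.0, 0.0])
xm = np.array([0.0, 1.0])
lam_n, lam_m, eps = 0.9, -0.3, 0.3
d = np.linalg.norm(lam_n * xn - lam_m * xm)
print(d, np.hypot(lam_n, lam_m))  # equal; both >= eps
```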

Lemma 1 implies properties (1) and (2) and that there are at most countably many eigenvalues. We now show that there are enough to form an orthonormal basis; the key is to show that we can always find one eigenvector, and the rest will follow by induction.

Lemma 2. Either $\|T\|$ or $-\|T\|$ is an eigenvalue of $T$.

Proof of Lemma. Let $c = \sup_{\|x\|=1} \|Tx\| = \|T\|$. If $c = 0$ then we are done, since $T = 0$ and any unit vector in the kernel will do. Otherwise assume $c > 0$, and let $\{x_n\} \subset B$ be a sequence of unit vectors such that $\|Tx_n\| \to c$. By compactness of $K$ we may pass to a subsequence such that $Tx_n$ converges to some vector $y \ne 0$ (note $\|y\| = \lim \|Tx_n\| = c > 0$).

Let $A = c^2 - T^2$, and observe that $(x, Ax) \ge 0$ for every unit vector $x \in H$, since $(x, T^2 x) = \|Tx\|^2 \le c^2$ for every such $x$. Observe that
$$(x_n, A x_n) = c^2 - \|Tx_n\|^2 \to 0.$$
Since $A$ is positive it has a square root $\sqrt{A}$, so we have
$$(x_n, A x_n) = \|\sqrt{A}\, x_n\|^2 \to 0.$$
Since $T\sqrt{A}$ is a bounded operator and $T\sqrt{A} = \sqrt{A}\,T$, this implies
$$\sqrt{A}\, T x_n = T \sqrt{A}\, x_n \to 0,$$
which by continuity of $\sqrt{A}$ implies $\sqrt{A}\, y = 0$, and hence $Ay = \sqrt{A}(\sqrt{A}\, y) = 0$. Thus we have
$$(c^2 - T^2)y = (c - T)(c + T)y = 0.$$
If $(c + T)y = 0$ then $-c$ is an eigenvalue (with eigenvector $y$); otherwise $c$ is an eigenvalue with eigenvector $(c + T)y \ne 0$.

To finish the proof of the theorem, let $\gamma_n$ denote the dimension of $V_{\lambda_n}$, and let $\{\phi_{nj}\}_{n \ge 1,\ 1 \le j \le \gamma_n}$ denote a sequence of orthonormal eigenvectors for all of the (countably many, by Lemma 1) nonzero eigenvalues $\lambda_n$. Set $M = \operatorname{span}\{\phi_{nj}\}_{n,j}$ (meaning the set of finite linear combinations). Observe that $T(M) \subset M$ by construction, and since $T$ is continuous this implies $T(\overline{M}) \subset \overline{M}$. On the other hand, if $y \in M^\perp$ and $x \in M$ we have
$$(Ty, x) = (y, Tx) = 0$$
since $Tx \in M$. Thus, both $\overline{M}$ and $M^\perp$ are closed invariant subspaces of $T$.

Observe that if $P = \operatorname{proj}(M^\perp)$, the restricted operator $PTP$ is also compact and self-adjoint. Since it has no nonzero eigenvalues (by construction), Lemma 2 implies that it must satisfy $PTP = 0$, which means $Tx = 0$ whenever $x \in M^\perp$ (using the invariance of $M^\perp$). Thus, every vector in $M^\perp$ is an eigenvector of $T$ with eigenvalue $0$.

To complete the proof, take $\{\psi_k\}$ to be any orthonormal basis of $M^\perp$. Since $H = \overline{M} \oplus M^\perp$, the union $\{\phi_{nj}\}_{n,j} \cup \{\psi_k\}_k$ is an ONB of $H$ consisting of eigenvectors of $T$, as desired. $\square$

Remark. As pointed out by Tarun in class, it is possible to prove Lemma 2 by considering $cI - T$ instead of $c^2 I - T^2$, if one assumes $\|T\| = \sup_{\|x\|=1} (x, Tx)$, which can be achieved by possibly replacing $T$ with $-T$.

Remark. Yeshwanth pointed out that one can also prove Lemma 2 by showing that $\|y/c - x_n\|^2 \to 0$, which can be seen by expanding the left hand side in inner products. This proof has the advantage of not using a square root.