# More Matrices

Ting-Yun Chang · Last edited on Feb 11, 2023
## Eigenvalue / Eigenvector

$A$: an $n \times n$ matrix, $v$: a vector $\in \mathbb{R}^n$, $\lambda$: a scalar

If there exists a nonzero vector $v$ such that $Av = \lambda v$, we say $v$ is an eigenvector and $\lambda$ is the eigenvalue for $v$.

**Fact:** eigenvectors corresponding to distinct eigenvalues are linearly independent.
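The definition $Av = \lambda v$ can be checked numerically. A minimal sketch with NumPy, using an arbitrary $2 \times 2$ matrix chosen for illustration (not from the note):

```python
import numpy as np

# An arbitrary example matrix (assumption: chosen only for illustration)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    lam = eigenvalues[i]
    # Each eigenpair satisfies A v = lambda v
    assert np.allclose(A @ v, lam * v)
```

For this particular matrix the eigenvalues work out to $1$ and $3$ (trace $4$, determinant $3$).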
## Matrix Diagonalization

**Fact:** not all matrices are diagonalizable.

If an $n \times n$ matrix $A$ is diagonalizable, it can be decomposed as $A = PDP^{-1}$:
- $D$ is a diagonal matrix with $\lambda_1, \cdots, \lambda_n$ as diagonal entries
- $P = [\, p_1 \cdots p_n \,]$ is an invertible matrix

$AP = PD$:
- $AP = [\, Ap_1 \cdots Ap_n \,]$
- $PD = [\, \lambda_1 p_1 \cdots \lambda_n p_n \,]$
- $Ap_i = \lambda_i p_i$. Thus, $p_i$ is an eigenvector of $A$ with eigenvalue $\lambda_i$.
- Because $P$ is invertible, the column vectors of $P$ (the eigenvectors of $A$) are independent.

$A^n = PD^nP^{-1}$ (computationally efficient!)
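The efficiency claim above can be sketched in NumPy: since $D$ is diagonal, $D^n$ only requires raising each diagonal entry to the $n$-th power. This assumes $A$ is diagonalizable; the example matrix is arbitrary.

```python
import numpy as np

# Arbitrary diagonalizable example matrix (assumption, not from the note)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
D5 = np.diag(eigvals ** 5)           # D^5: just 5th powers of the diagonal entries
A5 = P @ D5 @ np.linalg.inv(P)       # A^5 = P D^5 P^{-1}

# Agrees with direct repeated matrix multiplication
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```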
## Symmetric Matrix

Definition: $A = A^\top$

**Fact:** a symmetric matrix is diagonalizable, and its eigenvectors can be chosen to form an **orthonormal basis** of $\mathbb{R}^n$.

$A = UDU^{-1}$ (diagonalization)
- $U$ is an orthogonal matrix: its column vectors are orthonormal, $U^\top U = I = UU^\top$
- $U^{-1} = U^\top$
- $A = UDU^\top$
## Quadratic Form: $x^\top A x$

$A$ is a symmetric matrix, $x$ is a vector.

$$x^\top A x = x^\top U D U^\top x$$

Let $y = U^\top x$. Then

$$x^\top A x = (x^\top U)\, D\, (U^\top x) = y^\top D y = \sum_i \lambda_i y_i^2 \le \lambda_{\max} \sum_i y_i^2 = \lambda_{\max}\, y^\top y = \lambda_{\max}\, x^\top x$$

Proof that $y^\top y = x^\top x$:

$$y^\top y = (U^\top x)^\top (U^\top x) = x^\top U U^\top x = x^\top U U^{-1} x = x^\top x$$

(an orthogonal matrix preserves norm)

$\lambda_{\max}$ (resp. $\lambda_{\min}$): the largest (resp. smallest) eigenvalue among all eigenvalues of $A$

$$\lambda_{\min}\, x^\top x \le x^\top A x \le \lambda_{\max}\, x^\top x$$
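These two-sided bounds can be checked numerically. A sketch that symmetrizes a random matrix (an assumed setup, not from the note) and tests the inequality on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # symmetrize so that A = A^T

eigvals = np.linalg.eigvalsh(A)      # ascending order for symmetric input
lam_min, lam_max = eigvals[0], eigvals[-1]

for _ in range(100):
    x = rng.standard_normal(4)
    q = x @ A @ x                    # quadratic form x^T A x
    # lambda_min x^T x <= x^T A x <= lambda_max x^T x
    # (small tolerance for floating-point error)
    assert lam_min * (x @ x) - 1e-9 <= q <= lam_max * (x @ x) + 1e-9
```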