# Advanced Algebra
《高等代数》 (Advanced Algebra), 4th ed., by the former Algebra Group of the Department of Mathematics, Peking University; Higher Education Press, 2013
[TOC]
---
## Chapter 1 Polynomials
### Bézout's Identity
#### For the pair of integers
Let
$$
a,b \in \mathbb Z \wedge \gcd(a,b)=d,
$$
then
$$
\exists x, y \in \mathbb Z,
$$
such that
$$
ax + by = d.
$$
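The Bézout coefficients $x, y$ can be computed by the extended Euclidean algorithm; a minimal Python sketch (the function name `extended_gcd` and the example numbers are mine, not from the book):

```python
def extended_gcd(a, b):
    """Return (d, x, y) with a*x + b*y == d == gcd(a, b), for non-negative a, b."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_gcd(b, a % b)
    # From b*x + (a % b)*y == d and a % b == a - (a // b) * b:
    return (d, y, x - (a // b) * y)

d, x, y = extended_gcd(240, 46)
print(d, 240 * x + 46 * y)  # 2 2
```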
#### For three or more polynomials
Let $\mathcal F$ be the univariate polynomial ring over a field, and let
$$
\{f_i\} \subset \mathcal F \wedge \gcd(\{f_i\})=u,
$$
then
$$
\exists \{g_i\} \subset \mathcal F,
$$
such that
$$\sum_{i} f_i(x)g_i(x)=u(x).$$
### Theorem 6 (multiple factors)
Given that $p(x)$ is irreducible, then
$$
p(x)^k \mid f(x)\ (k \ge 1) \Rightarrow p(x)^{k-1} \mid f'(x).
$$
#### Counterexample for the converse ($\Leftarrow$)
Let
$$
p(x)=f'(x)=x+1,
$$
so
$$
f(x)=\frac{1}{2} x^2+x.
$$
Obviously, $p(x) \nmid f(x)$: here $p(x) \mid f'(x)$ (the case $k=2$), yet $p(x)^2 \nmid f(x)$.
#### Corollary 6.1
Given that $p(x)$ is irreducible,
$$
p(x)^k \mid f(x)\ (k \ge 2) \Leftrightarrow p(x) \mid \gcd(f(x),f'(x)).
$$
**_Note that_** in the other direction one only gets the first power:
$$
p(x) \mid f'(x) \Leftarrow p(x) \mid \gcd(f(x),f'(x)).
$$
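Corollary 6.1 can be checked mechanically: a repeated irreducible factor of $f$ shows up in $\gcd(f, f')$. A small sketch with exact rational arithmetic (coefficient lists are lowest degree first; all helper names are mine):

```python
from fractions import Fraction

def deriv(f):
    """Formal derivative of a coefficient list (lowest degree first)."""
    return [Fraction(i) * c for i, c in enumerate(f)][1:]

def polymod(f, g):
    """Remainder of f divided by g."""
    f = f[:]
    while len(f) >= len(g) and any(f):
        q = f[-1] / g[-1]
        shift = len(f) - len(g)
        for i, c in enumerate(g):
            f[shift + i] -= q * c
        while f and f[-1] == 0:
            f.pop()
    return f

def polygcd(f, g):
    """Monic gcd via the Euclidean algorithm."""
    while any(g):
        f, g = g, polymod(f, g)
    return [c / f[-1] for c in f]

# f(x) = (x + 1)^2 (x - 2) = x^3 - 3x - 2 has the repeated factor x + 1.
f = [Fraction(c) for c in (-2, -3, 0, 1)]
g = polygcd(f, deriv(f))
print(g)  # coefficients of the monic x + 1, detecting the repeated factor
```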
#### Corollary 6.2
Given that $\mathcal F$ is the univariate polynomial ring, then for any $f \in \mathcal F-\{0\}$,
$$
f(x)\ \text{has no repeated irreducible factor} \Leftrightarrow \gcd(f(x), f'(x))=1.
$$
### [Fundamental Theorem of Algebra](https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra)
Every **non-constant** **single-variable** polynomial with **complex** coefficients has **at least** one **complex** root.
#### Corollary 1
Given that
$$
\alpha \in \mathbb C \wedge f(x)=\sum_{i=0}^n a_i x^i,\ \{a_i\} \subset \mathbb R,
$$
then
$$
f(\alpha)=0 \Leftrightarrow f(\overline{\alpha})=0.
$$
**_Proof._**
$$
0=\overline{f(\alpha)}=\sum_{i=0}^n \overline{a_i \alpha^i}=\sum_{i=0}^n a_i \overline{\alpha}^i=f(\overline{\alpha}).
$$
$\square$
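A quick numeric illustration of the corollary (the quadratic below is my own example): a real-coefficient polynomial that vanishes at $\alpha$ also vanishes at $\overline{\alpha}$.

```python
# f(x) = x^2 - 2x + 5 has real coefficients and roots 1 ± 2j.
coeffs = [5, -2, 1]  # lowest degree first

def f(z):
    return sum(c * z**i for i, c in enumerate(coeffs))

alpha = 1 + 2j
print(f(alpha), f(alpha.conjugate()))  # 0j 0j
```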
#### Corollary 2 (Real coefficient polynomial factorization theorem)
Given that
$$
f(x)=\sum_{i=0}^n a_i x^i,\ \{a_i\} \subset \mathbb R,\ a_n \neq 0,
$$
then it can be factored, uniquely up to order, into irreducible real polynomials of degree at most two.
### Definition 8.5 Primitive Polynomial
Given that
$$
f(x)=\sum_{i=0}^n a_i x^i,\ \{a_i\} \subset \mathbb Z,\ a_n \neq 0,
$$
if $\gcd(\{a_i\})=1$, then the polynomial $f$ is called _primitive_.
### Gauss's Lemma
#### Lemma 1
If $f(x)$ and $g(x)$ are primitive polynomials over the integers, then the product $f(x)g(x)$ is also primitive.
#### Lemma 2
A non-constant primitive polynomial in $Z[X]$ is irreducible in $Z[X]$ if and only if it is irreducible in $Q[X]$.
#### Lemma 3
Let $f(x)$ and $g(x)$ be non-constant polynomials in $Z[X]$ with $f(x)=g(x)h(x)$ for some $h(x) \in Q[X]$. If $g(x)$ is primitive, then $h(x) \in Z[X]$.
### Eisenstein's Criterion
Given that
$$
f(x)=\sum_{i=0}^{n} a_i x^i,\ \{a_i\} \subset \mathbb Z,\ a_n \neq 0,
$$
if there is a prime $p$, such that
1. $p \nmid a_n;$
2. $p \mid a_{n-1},a_{n-2},...,a_0;$
3. $p^2 \nmid a_0;$
then $f$ is irreducible in $Q[X]$.
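The three conditions are easy to test on concrete coefficients; a minimal checker (the function name `eisenstein` and both example polynomials are mine):

```python
def eisenstein(coeffs, p):
    """coeffs = [a_0, a_1, ..., a_n] over Z; True if Eisenstein's criterion applies at p."""
    a0, an = coeffs[0], coeffs[-1]
    return (an % p != 0
            and all(a % p == 0 for a in coeffs[:-1])
            and a0 % (p * p) != 0)

# x^4 + 10x + 5 is irreducible over Q by Eisenstein at p = 5.
print(eisenstein([5, 10, 0, 0, 1], 5))  # True
# x^2 - 1 = (x - 1)(x + 1): the criterion fails at p = 3 (and at every prime).
print(eisenstein([-1, 0, 1], 3))  # False
```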
### Fundamental Theorem of Symmetric Polynomials
Every symmetric polynomial $P(x_1,...,x_n) \in A[x_1,...,x_n]^{S_n}$ has a unique representation
$$
P(x_1,...,x_n)=Q(e_1(x_1,...,x_n),...,e_n(x_1,...,x_n)),
$$
in which
$$
e_i(x_1,...,x_n)=\sum_{1 \le k_1 \lt k_2 \lt \cdots \lt k_i \le n}\prod_{j=1}^{i} x_{k_j}.
$$
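A small numeric check of the theorem on one concrete symmetric polynomial, $\sum_k x_k^2 = e_1^2 - 2e_2$ (the evaluation point and the helper name `e` are mine):

```python
from itertools import combinations

def e(i, xs):
    """Elementary symmetric polynomial e_i evaluated at the point xs."""
    total = 0
    for ks in combinations(range(len(xs)), i):
        prod = 1
        for k in ks:
            prod *= xs[k]
        total += prod
    return total

# The symmetric polynomial sum(x_k^2) has the representation Q = e_1^2 - 2*e_2.
xs = [2, 3, 5, 7]
print(sum(x * x for x in xs), e(1, xs)**2 - 2 * e(2, xs))  # 87 87
```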
### Univariate Polynomial Discriminant
Let
$$
A(x)=\sum_{i=0}^n a_i x^i = a_n \prod_{i=1}^n (x-x_i),
$$
according to the previous theorem,
$$
\exists D=a_n^{2n-2}\prod_{i \lt j} (x_i-x_j)^2=Q(e_1(x_1,...,x_n),...,e_n(x_1,...,x_n)),
$$
then $D$ is called the discriminant of the polynomial, and the polynomial has a multiple root if and only if $D=0$.
For low degrees:
$$
D_2=a_1^2-4 a_0 a_2,
$$
$$
D_3=a_1^2 a_2^2-4a_0 a_2^3-4a_1^3 a_3-27a_0^2 a_3^2 +18a_0 a_1 a_2 a_3
$$
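Both low-degree formulas can be verified against $a_n^{2n-2}\prod_{i<j}(x_i-x_j)^2$ on polynomials with known roots (the two example polynomials are mine):

```python
from fractions import Fraction

# A(x) = 2(x - 1)(x - 3) = 2x^2 - 8x + 6, roots x_1 = 1, x_2 = 3.
a0, a1, a2 = Fraction(6), Fraction(-8), Fraction(2)
x1, x2 = Fraction(1), Fraction(3)
D2 = a1**2 - 4 * a0 * a2
print(D2, a2**2 * (x1 - x2)**2)  # 16 16

# Likewise for the cubic (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6:
a0, a1, a2, a3 = Fraction(-6), Fraction(11), Fraction(-6), Fraction(1)
D3 = a1**2*a2**2 - 4*a0*a2**3 - 4*a1**3*a3 - 27*a0**2*a3**2 + 18*a0*a1*a2*a3
print(D3)  # 4 = (1-2)^2 (1-3)^2 (2-3)^2
```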
### Exercise
#### C2
Given that
$$
\partial(\frac{f(x)}{\gcd(f(x),g(x))}) \gt 0 \wedge \partial(\frac{g(x)}{\gcd(f(x),g(x))}) \gt 0,
$$
prove that
$$
\exists u(x),v(x) \wedge u(x)f(x)+v(x)g(x)=\gcd(f(x),g(x)),
$$
s.t.
$$
\partial(u(x)) \lt \partial(\frac{g(x)}{\gcd(f(x),g(x))}),
$$
$$
\partial(v(x)) \lt \partial(\frac{f(x)}{\gcd(f(x),g(x))}).
$$
**_Proof._**
Write $d(x)=\gcd(f(x),g(x))$. By Bézout's identity, $\exists u_1,v_1$, s.t.
$$
u_1(x)f(x)+v_1(x)g(x)=d(x).
$$
Divide $u_1(x)$ by $\frac{g(x)}{d(x)}$:
$$
u_1(x)=\frac{g(x)}{d(x)}q(x)+u(x),
\quad
\partial(u(x)) \lt \partial(\frac{g(x)}{d(x)}),
$$
and set
$$
v(x)=v_1(x)+\frac{f(x)}{d(x)}q(x),
$$
so that $u(x)f(x)+v(x)g(x)=d(x)$ still holds.
Here $u(x) \neq 0$: otherwise $v(x)g(x)=d(x)$ would force $\partial(\frac{g(x)}{d(x)})=0$, contradicting the hypothesis. Since $\partial(u(x)f(x)) \ge \partial(f(x)) \gt \partial(d(x))$, comparing degrees in $u(x)f(x)+v(x)g(x)=d(x)$ gives $\partial(v(x)g(x))=\partial(u(x)f(x))$, hence
$$
\partial(v(x))=\partial(u(x))+\partial(f(x))-\partial(g(x)) \lt \partial(\frac{g(x)}{d(x)})+\partial(f(x))-\partial(g(x))=\partial(\frac{f(x)}{d(x)}).
$$
$\square$
### Lagrange Interpolation Formula
Given distinct points $a_1,...,a_n$ and values $b_1,...,b_n$, we seek the polynomial $L(x)$ s.t. $L(a_i)=b_i\ (i=1,2,...,n)$.
Let
$$
F(x)=\prod_{i=1}^n(x-a_i),
$$
then
$$
L(x)=\sum_{i=1}^n \frac{b_i F(x)}{(x-a_i)F'(a_i)}
$$
**\*The formula can be constructed termwise, from a very geometric perspective.**
Let
$$
f_i(x)=\frac{F(x)}{(x-a_i)F'(a_i)};
$$
since $F'(a_i)=\prod_{j \neq i}(a_i-a_j)$,
$$
f_i(a_i)=1 \wedge \forall j \neq i,\ f_i(a_j)=0.
$$
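The formula evaluates directly; a sketch with exact rationals (the data points and the name `L` for the evaluator are mine):

```python
from fractions import Fraction

# Interpolation data: L(a_i) = b_i.
points = [(Fraction(0), Fraction(1)), (Fraction(1), Fraction(3)), (Fraction(2), Fraction(2))]

def L(x):
    """Evaluate the Lagrange interpolant, using F'(a_i) = prod_{j != i} (a_i - a_j)."""
    total = Fraction(0)
    for i, (ai, bi) in enumerate(points):
        num = Fraction(1)   # F(x) / (x - a_i)
        den = Fraction(1)   # F'(a_i)
        for j, (aj, _) in enumerate(points):
            if j != i:
                num *= x - aj
                den *= ai - aj
        total += bi * num / den
    return total

print([L(a) for a, _ in points])  # recovers the values 1, 3, 2
```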
$$
### [Newton Identities](https://app.yinxiang.com/shard/s24/nl/24842916/96920c73-ebca-46ae-8162-15d824eb2831/)
## Chapter 2 Determinants
### Vandermonde Determinant
Given the $n$-th order determinant
$$
d=
\begin{vmatrix}
1 & 1 & 1 & \cdots & 1\\
a_1 & a_2 & a_3 & \cdots & a_n\\
a_1^2 & a_2^2 & a_3^2 & \cdots & a_n^2\\
\vdots & \vdots & \vdots & & \vdots\\
a_1^{n-1} & a_2^{n-1} & a_3^{n-1} & \cdots & a_n^{n-1}\\
\end{vmatrix}_{n \times n},
$$
we have the following conclusion
$$
d=\prod_{1 \le j \lt i \le n}(a_i-a_j)
$$
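The product formula is easy to confirm numerically; a sketch with an exact Gaussian-elimination determinant (the helper name `det` and the sample nodes are mine):

```python
from fractions import Fraction

def det(m):
    """Determinant by fraction-exact Gaussian elimination with row swaps."""
    m = [row[:] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if m[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        d *= m[k][k]
        for r in range(k + 1, n):
            f = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= f * m[k][c]
    return sign * d

a = [Fraction(v) for v in (2, 3, 5, 7)]
n = len(a)
V = [[a[j]**i for j in range(n)] for i in range(n)]  # row i holds the i-th powers
prod = 1
for i in range(n):
    for j in range(i):
        prod *= a[i] - a[j]
print(det(V), prod)  # both equal prod_{j<i} (a_i - a_j)
```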
### [Laplace Expansion](https://app.yinxiang.com/shard/s24/nl/24842916/1cf416fa-863d-4fd6-bbaf-fcda9f6a468b/)
### Exercise
#### C3.1
Prove that
$$
\begin{vmatrix}
a_{11}+x & a_{12}+x & \cdots & a_{1n}+x\\
a_{21}+x & a_{22}+x & \cdots & a_{2n}+x\\
\vdots & \vdots & & \vdots\\
a_{n1}+x & a_{n2}+x & \cdots & a_{nn}+x\\
\end{vmatrix}_{n \times n}=
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}\\
\end{vmatrix}_{n \times n}
+
x \sum_{i=1}^{n} \sum_{j=1}^{n}A_{ij}.
$$
**_Proof._**
The left-hand side equals
$$
\begin{vmatrix}
a_{11}+x & a_{12}+x & \cdots & a_{1n}+x & 0\\
a_{21}+x & a_{22}+x & \cdots & a_{2n}+x & 0\\
\vdots & \vdots & & \vdots & \vdots\\
a_{n1}+x & a_{n2}+x & \cdots & a_{nn}+x & 0\\
-x & -x & \cdots & -x & 1\\
\end{vmatrix}_{(n+1) \times (n+1)}
$$
Adding the last row to each of the first $n$ rows gives
$$
=
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & 1\\
a_{21} & a_{22} & \cdots & a_{2n} & 1\\
\vdots & \vdots & & \vdots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn} & 1\\
-x & -x & \cdots & -x & 1\\
\end{vmatrix}_{(n+1) \times (n+1)}
= \text{RHS} \quad (\text{by Laplace expansion along the last row}).
$$
$\square$
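The identity can be sanity-checked on a concrete matrix; a sketch using a Leibniz-formula determinant and hand-rolled cofactors (the matrix, the value of $x$, and all helper names are mine):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (fine for small n)."""
    n, total = len(m), Fraction(0)
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1)**inv * prod
    return total

def cofactor(m, i, j):
    """Algebraic cofactor A_ij: signed minor with row i and column j deleted."""
    minor = [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]
    return (-1)**(i + j) * det(minor)

A = [[Fraction(v) for v in row] for row in ([1, 2, 0], [3, 1, 4], [0, 5, 2])]
x = Fraction(7)
lhs = det([[v + x for v in row] for row in A])
rhs = det(A) + x * sum(cofactor(A, i, j) for i in range(3) for j in range(3))
print(lhs == rhs)  # True
```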
---
## Chapter 3 Simultaneous Linear Equations
---
## Chapter 4 Matrices
### Theorem 1 (|AB|=|A||B|)
Given square matrices $A$ and $B$ of equal size, then
$$
|AB|=|A||B|.
$$
**_Proof._**
$$
|A||B|=
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}\\
\end{vmatrix}_{n \times n}
\begin{vmatrix}
b_{11} & b_{12} & \cdots & b_{1n}\\
b_{21} & b_{22} & \cdots & b_{2n}\\
\vdots & \vdots & & \vdots\\
b_{n1} & b_{n2} & \cdots & b_{nn}\\
\end{vmatrix}_{n \times n}
$$
$$
=
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & 0 & 0 & \cdots & 0\\
a_{21} & a_{22} & \cdots & a_{2n} & 0 & 0 & \cdots & 0\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn} & 0 & 0 & \cdots & 0\\
-1 & 0 & \cdots & 0 & b_{11} & b_{12} & \cdots & b_{1n}\\
0 & -1 & \cdots & 0 & b_{21} & b_{22} & \cdots & b_{2n}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & -1 & b_{n1} & b_{n2} & \cdots & b_{nn}\\
\end{vmatrix}_{2n \times 2n}
$$
Adding $a_{ik}$ times row $n+k$ to row $i$ ($1 \le i,k \le n$), which produces $c_{ij}=\sum_{k=1}^n a_{ik}b_{kj}$ (the entries of $C=AB$), gives
$$
=
\begin{vmatrix}
0 & 0 & \cdots & 0 & c_{11} & c_{12} & \cdots & c_{1n}\\
0 & 0 & \cdots & 0 & c_{21} & c_{22} & \cdots & c_{2n}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & 0 & c_{n1} & c_{n2} & \cdots & c_{nn}\\
-1 & 0 & \cdots & 0 & b_{11} & b_{12} & \cdots & b_{1n}\\
0 & -1 & \cdots & 0 & b_{21} & b_{22} & \cdots & b_{2n}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & -1 & b_{n1} & b_{n2} & \cdots & b_{nn}\\
\end{vmatrix}_{2n \times 2n}
$$
Laplace expansion along the first $n$ columns gives
$$
=
\begin{vmatrix}
c_{11} & c_{12} & \cdots & c_{1n}\\
c_{21} & c_{22} & \cdots & c_{2n}\\
\vdots & \vdots & & \vdots\\
c_{n1} & c_{n2} & \cdots & c_{nn}\\
\end{vmatrix}_{n \times n}
$$
$$
=
\begin{vmatrix}
\left(
\begin{matrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}\\
\end{matrix}
\right)_{n \times n}
\left(
\begin{matrix}
b_{11} & b_{12} & \cdots & b_{1n}\\
b_{21} & b_{22} & \cdots & b_{2n}\\
\vdots & \vdots & & \vdots\\
b_{n1} & b_{n2} & \cdots & b_{nn}\\
\end{matrix}
\right)_{n \times n}
\end{vmatrix}
=|AB|.
$$
$\square$
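A concrete check of $|AB|=|A||B|$ on integer matrices (the two matrices and the helper names are mine):

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (fine for small n)."""
    n, total = len(m), 0
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1)**inv * prod
    return total

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
B = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
print(det(matmul(A, B)), det(A) * det(B))  # 7 7
```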
### Definition 9 Adjugate Matrix
Given a square matrix
$$
A=
\left(
\begin{matrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}\\
\end{matrix}
\right)_{n \times n},
$$
let $A_{ij}$ be the _algebraic cofactor_ of $a_{ij}$; then the adjugate (the transpose of the cofactor matrix) is
$$
A^*=
\left(
\begin{matrix}
A_{11} & A_{21} & \cdots & A_{n1}\\
A_{12} & A_{22} & \cdots & A_{n2}\\
\vdots & \vdots & & \vdots\\
A_{1n} & A_{2n} & \cdots & A_{nn}\\
\end{matrix}
\right)_{n \times n}.
$$
We can get that
$$
AA^*=A^*A=
\left(
\begin{matrix}
d & 0 & \cdots & 0\\
0 & d & \cdots & 0\\
\vdots & \vdots & & \vdots\\
0 & 0 & \cdots & d\\
\end{matrix}
\right)_{n \times n}=
dE,
$$
in which $d=|A|$.
If $d=|A| \neq 0$, then
$$
A(\frac 1 d A^*)=(\frac 1 d A^*)A=E
$$
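The identity $AA^*=dE$ (note the transpose in $A^*$) can be checked directly; a sketch on a small symmetric matrix of my choosing, with hand-rolled determinant and cofactors:

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (fine for small n)."""
    n, total = len(m), 0
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1)**inv * prod
    return total

def cofactor(m, i, j):
    """Algebraic cofactor A_ij: signed minor with row i and column j deleted."""
    minor = [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]
    return (-1)**(i + j) * det(minor)

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
d = det(A)
adj = [[cofactor(A, j, i) for j in range(3)] for i in range(3)]  # transpose of the cofactor matrix
AAstar = [[sum(A[i][k] * adj[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(d, AAstar)  # d = 8 and A A* = 8 E
```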
### Theorem 3 (Invertibility)
Matrix $A$ is invertible $\Leftrightarrow d=|A| \neq 0$ (A is _non-degenerate_ or _non-singular_). If $A$ is invertible, then
$$
A^{-1}=\frac 1 d A^*
$$
### Another Proof for Cramer's Rule
Given the simultaneous linear equations
$$
\begin{cases}
a_{11}x_1+a_{12}x_2+...+a_{1n}x_n=b_1\\
a_{21}x_1+a_{22}x_2+...+a_{2n}x_n=b_2\\
...\\
a_{n1}x_1+a_{n2}x_2+...+a_{nn}x_n=b_n\\
\end{cases},
$$
which can be written as
$$
AX=B,
$$
if $|A| \neq 0$, then the unique solution is $X=A^{-1}B=\frac 1 d A^*B$.
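For a $2 \times 2$ system the adjugate is small enough to write out by hand, which makes $X=\frac 1 d A^*B$ a one-screen check (the system below is my own example):

```python
from fractions import Fraction

# Solve A X = B via X = (1/d) A* B for the 2x2 system
#   2x1 + 1x2 = 4
#   5x1 + 3x2 = 7
a11, a12, a21, a22 = map(Fraction, (2, 1, 5, 3))
b1, b2 = map(Fraction, (4, 7))
d = a11 * a22 - a12 * a21
adj = [[a22, -a12], [-a21, a11]]   # adjugate of a 2x2 matrix
x1 = (adj[0][0] * b1 + adj[0][1] * b2) / d
x2 = (adj[1][0] * b1 + adj[1][1] * b2) / d
print(x1, x2)  # 5 -6
```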
### LU Decomposition (Lower–Upper Decomposition)
Given $A=(a_{ij})_{n \times n}$ whose leading principal minors are all nonzero, i.e.
$$
\begin{vmatrix}
a_{11} & \cdots & a_{1k}\\
\vdots & & \vdots\\
a_{k1} & \cdots & a_{kk}\\
\end{vmatrix}
\neq
0,
1 \le k \le n,
$$
then there exists a lower triangular matrix $B_{n \times n}$ such that
$$
BA=\text{upper triangular matrix}.
$$
**_Proof._**
We use induction on $n$; the statement holds for $n=1$.
Assume it holds for $n-1$; then for
$$
A_1=
\left(
\begin{matrix}
a_{11} & \cdots & a_{1,n-1}\\
\vdots & & \vdots\\
a_{n-1,1} & \cdots & a_{n-1,n-1}\\
\end{matrix}
\right),
$$
there exists a lower triangular matrix $(B_1)_{(n-1) \times (n-1)}$ such that $B_1A_1$ is an upper triangular matrix.
$A$ can be partitioned into
$$
A=
\left(
\begin{matrix}
A_1 & \beta\\
\alpha & a_{nn}\\
\end{matrix}
\right),
$$
then
$$
BA=
\left(
\begin{matrix}
B_1 & O\\
-\alpha A_1^{-1} & 1\\
\end{matrix}
\right)
\left(
\begin{matrix}
A_1 & \beta\\
\alpha & a_{nn}\\
\end{matrix}
\right)=
\left(
\begin{matrix}
B_1 & O\\
O & 1\\
\end{matrix}
\right)
\left(
\begin{matrix}
E & O\\
-\alpha A_1^{-1} & 1\\
\end{matrix}
\right)
\left(
\begin{matrix}
A_1 & \beta\\
\alpha & a_{nn}\\
\end{matrix}
\right)
$$
$$
=
\left(
\begin{matrix}
B_1 & O\\
O & 1\\
\end{matrix}
\right)
\left(
\begin{matrix}
A_1 & \beta\\
O & -\alpha A_1^{-1} \beta + a_{nn}\\
\end{matrix}
\right)=
\left(
\begin{matrix}
B_1 A_1 & B_1 \beta\\
O & -\alpha A_1^{-1} \beta + a_{nn}\\
\end{matrix}
\right)
\quad (\text{an upper triangular matrix}).
$$
$\square$
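The same $B$ can be built constructively by Gaussian elimination: apply each row operation to $A$ and, in parallel, to $E$; the accumulated copy of $E$ is $B$. A sketch assuming no pivoting is needed, which the nonzero-leading-minors hypothesis guarantees (the matrix and the helper name `lower_factor` are mine):

```python
from fractions import Fraction

def lower_factor(A):
    """Return (B, U) with B lower triangular and B A = U upper triangular."""
    n = len(A)
    U = [[Fraction(v) for v in row] for row in A]
    B = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # starts as E
    for k in range(n):
        for r in range(k + 1, n):
            f = U[r][k] / U[k][k]
            for c in range(n):
                U[r][c] -= f * U[k][c]     # row operation on A
                B[r][c] -= f * B[k][c]     # the same operation applied to E yields B
    return B, U

# Leading principal minors: 2, 2, 4 -- all nonzero, so no pivoting is required.
A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
B, U = lower_factor(A)
print(U)  # upper triangular, with B lower triangular and B A = U
```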