---
tags: Giang's linear algebra notes
---

# Chapter 6: Matrices

[toc]

## Review on matrices

**Elementary row operations on matrices**: These operations take an $m\times n$ matrix as input and return a different $m\times n$ matrix as output
1. Row interchange, i.e. interchange row $i$ with row $j$ for some $i,j\in\{1,\dots,m\}$
2. Row multiplication, i.e. take a row $i$ and multiply it by a *nonzero* scalar $c$
3. Row addition, i.e. add $c$ multiples of row $i$ to row $j$ where $c$ is a scalar

**Elementary row matrices**: An elementary row matrix is the matrix obtained by applying a single elementary row operation to the identity matrix. Performing an elementary row operation on a matrix amounts to multiplying that matrix on the left by the corresponding *elementary row matrix*

**Invertibility of elementary matrices**: Every elementary row matrix is invertible, and its inverse is another elementary matrix

>**NOTE**: There are also *elementary column operations*, arising from multiplying a matrix on the right by an elementary matrix, instead of the left

>**NOTE**: Every matrix can be row-reduced into row-echelon form

## Rank of a matrix

**Linear transformation rank**: (Recall) The *rank* of a linear transformation $T:V\to W$ is
$$\text{rank}(T)=\dim[\text{range}(T)]$$

**Invertible linear transformation**: Let $V$ and $W$ be finite-dimensional spaces with $\dim(V)=\dim(W)=n$, and let $T:V\to W$ be a linear transformation. Then $T$ is invertible if and only if $\text{rank}(T)=n$, i.e. $T$ has the maximum rank possible

**Theorem**: Let $T:V\to W$ be a linear transformation from one finite-dimensional space to another, let $S:U\to V$ be an invertible transformation, and let $Q:W\to Z$ be another invertible transformation. Then
$$\text{rank}(T)=\text{rank}(QT)=\text{rank}(TS)=\text{rank}(QTS)$$

**Matrix rank**: If $A$ is a matrix, then *the rank of $A$*, denoted $\text{rank}(A)$, is
$$\text{rank}(A)=\text{rank}(L_{A})$$

**Matrix rank and column dimension**: The rank of a matrix $A$ is equal to the dimension of the space spanned by its columns

**Corollary**: If only $k$ of the columns of a matrix $A$ are nonzero, then the rank of the matrix can be at most $k$

**Invariance under elementary operations**: Elementary row or column operations do not change the rank of a matrix

**Theorem**: Let $A$ be a matrix in row-echelon form. Then $\text{rank}(A)$ equals the number of nonzero rows

**Procedure to compute the rank of a matrix**
1. Row reduce, or column reduce, until we reach row-echelon form
2. Count the number of nonzero rows

**Theorem**: Let $A$ be an $m\times n$ matrix with rank $r$. Then we can use elementary row and column matrices to place $A$ in the form
$$\begin{bmatrix}I_{r} & \boldsymbol{0}_{r\times n-r}\\ \boldsymbol{0}_{m-r\times r} & \boldsymbol{0}_{m-r\times n-r} \end{bmatrix}$$
where $I_{r}$ is the $r\times r$ identity matrix, and $\boldsymbol{0}_{m\times n}$ is the $m\times n$ zero matrix

**A factorization theorem**: Let $A$ be an $m\times n$ matrix with rank $r$. Then we have an $m\times m$ matrix $B$, which is a product of elementary matrices, and an $n\times n$ matrix $C$, also a product of elementary matrices, such that
$$A=B\begin{bmatrix}I_{r} & \boldsymbol{0}_{r\times n-r}\\ \boldsymbol{0}_{m-r\times r} & \boldsymbol{0}_{m-r\times n-r} \end{bmatrix}C$$

>**NOTE**: The theorem above is an example of a *factorization theorem*, which takes a general matrix and factors it into simpler pieces
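**Example (Python sketch)**: The rank procedure above can be checked numerically. Below is a minimal sketch, assuming NumPy is available; the helper names `row_echelon` and `rank_by_row_reduction` are mine, not from the notes, and the partial-pivoting choice is just one convenient way to carry out the row operations.

```python
import numpy as np

def row_echelon(A, tol=1e-10):
    """Reduce a copy of A to row-echelon form using elementary row operations."""
    R = A.astype(float).copy()
    m, n = R.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        # choose the entry of largest absolute value as the pivot (partial pivoting)
        pivot = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[pivot, col]) < tol:
            continue                              # no pivot in this column
        R[[row, pivot]] = R[[pivot, row]]         # row interchange
        R[row] = R[row] / R[row, col]             # row multiplication
        for r in range(row + 1, m):
            R[r] -= R[r, col] * R[row]            # row addition
        row += 1
    return R

def rank_by_row_reduction(A, tol=1e-10):
    """Rank = number of nonzero rows of the row-echelon form."""
    R = row_echelon(A, tol)
    return int(np.sum(np.any(np.abs(R) > tol, axis=1)))

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]])
print(rank_by_row_reduction(A))     # 2 (second row is a multiple of the first)
print(np.linalg.matrix_rank(A))     # NumPy's rank, for comparison
```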
**Rank of product of invertible matrices**: Let $A$ be an $m\times n$ matrix, $B$ be an $m\times m$ invertible matrix, and $C$ be an $n\times n$ invertible matrix. Then
$$\text{rank}(A)=\text{rank}(BA)=\text{rank}(AC)=\text{rank}(BAC)$$

**Rank of transpose matrices**: Let $A$ be an $m\times n$ matrix with rank $r$. Then $A^{T}$ has the same rank as $A$

**Matrix rank and dimension of row space**: The rank of a matrix is equal to the dimension of the space spanned by its rows

**Corollary**: If only $k$ of the rows of a matrix are nonzero, then the rank of the matrix can be at most $k$

**Rank of linear transformation under change of basis**: Let $T:V\to W$ be a linear transformation, and let $\beta,\gamma$ be finite bases for $V$ and $W$. Then
$$\text{rank}(T)=\text{rank}([T]_{\beta}^{\gamma})$$

>**NOTE**: The rank of a matrix measures, in some sense, how close to zero, or how degenerate, a matrix is

**Rank of zero matrix**: The only matrix with rank $0$ is the zero matrix

**Upper bound on matrix rank**: The largest rank an $m\times n$ matrix can have is $\min\{m,n\}$

## Inverting matrices via row operations

**Invertible matrix as product of elementary matrices**: Let $A$ be an $n\times n$ matrix. Then $A$ is invertible if and only if it is a product of elementary $n\times n$ matrices

**Method of computing $A^{-1}$**: Use row operations to invert a matrix.
* Explanation: Let $A$ be an $n\times n$ invertible matrix and suppose $$E_{a}\dots E_{1}A=I$$ Then $$A^{-1}=E_{a}\dots E_{1}I$$ Thus, if we concatenate $A$ and $I$ together, and apply row operations on the concatenated matrix to turn the $A$ component into $I$, then the $I$ component will automatically turn into $A^{-1}$
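**Example (Python sketch)**: The $[A\mid I]$ procedure can be carried out mechanically. The sketch below is an illustration under the assumption that NumPy is available; the function name `invert_by_row_ops` and the pivoting strategy are my own choices, not part of the notes.

```python
import numpy as np

def invert_by_row_ops(A, tol=1e-12):
    """Invert A by row reducing the concatenated matrix [A | I] to [I | A^{-1}]."""
    A = A.astype(float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])             # the concatenated matrix [A | I]
    for col in range(n):
        # row interchange: bring the largest available pivot into position
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < tol:
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]
        # row multiplication: scale the pivot row so the pivot becomes 1
        M[col] = M[col] / M[col, col]
        # row addition: clear the rest of the column
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                           # the I block has turned into A^{-1}

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(invert_by_row_ops(A))                   # [[ 3. -1.] [-5.  2.]]
print(np.linalg.inv(A))                       # NumPy's inverse, for comparison
```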
## Determinants

>**NOTE**: The best way to understand determinants is by means of *exterior algebra*. Without the tools of exterior algebra, i.e. the wedge product, proving any of the fundamental properties of determinants becomes very messy

**Minor**: Let $A$ be an $n\times n$ matrix. Then the *minor* $\tilde{A}_{ij}$ of $A$ is the $(n-1)\times(n-1)$ matrix obtained from $A$ by removing the $i$th row and $j$th column

**Cofactor expansion of determinant**: To compute the determinant of an $n\times n$ matrix $A$, we pick a row $i$ and set
$$\det(A)=\sum_{j=1}^{n}(-1)^{i+j}A_{ij}\det(\tilde{A}_{ij})$$
In the formula above, the quantities $(-1)^{i+j}\det(\tilde{A}_{ij})$ are called *cofactors*. One can also perform cofactor expansion along a column $j$ instead of a row $i$, i.e.
$$\det(A)=\sum_{i=1}^{n}(-1)^{i+j}A_{ij}\det(\tilde{A}_{ij})$$

**Theorem**: It does not matter along which row, or column, we perform the cofactor expansion

>**NOTE**: *Cofactor expansion of the determinant* is messy

**Symmetry property of determinant**: $\det(A^{T})=\det(A)$

**Determinant of a set of vectors**: An $n\times n$ matrix can be thought of as a collection of $n$ row vectors in $\textbf{R}^{n}$, or as a collection of $n$ column vectors in $\textbf{R}^{n}$. Thus we can define
$$\det(v_{1},\dots,v_{n})=\det\big(\begin{bmatrix}v_{1} & \cdots & v_{n}\end{bmatrix}\big)$$
where $\{v_{i}\}_{i=1}^{n}$ are the row, or column, vectors of interest

>**NOTE**: The order in which we arrange $\{v_{1},\dots,v_{n}\}$ will be important

## Basic properties of determinants

**Linearity of the determinant of $n$ vectors**:
1. $\det(v_{1},\dots,v_{j-1},v_{j}+w_{j},v_{j+1},\dots,v_{n})=\det(v_{1},\dots,v_{j-1},v_{j},v_{j+1},\dots,v_{n})+\det(v_{1},\dots,v_{j-1},w_{j},v_{j+1},\dots,v_{n})$, and
2. $\det(v_{1},\dots,v_{j-1},cv_{j},v_{j+1},\dots,v_{n})=c\cdot\det(v_{1},\dots,v_{j-1},v_{j},v_{j+1},\dots,v_{n})$

**Antisymmetry of determinant**: If we switch two column vectors, then the determinant changes sign

>**NOTE**: The antisymmetry of the determinant is not completely obvious from the cofactor expansion definition, although the presence of $(-1)^{i+j}$ does suggest some sort of sign change might occur after row switching or column switching

>**NOTE**: The determinant is, in fact, the only expression which obeys the linearity and antisymmetry properties, and which also has the property that the identity matrix has determinant one

## Properties relating to elementary row operations

**Determinant and row interchange**: If $A$ is an $n\times n$ matrix, and $B$ is the matrix $A$ with two distinct rows $i$ and $j$ interchanged, then
$$\det(B)=-\det(A)$$

**Corollary**: If two rows of a matrix $A$ are the same, then $\det(A)=0$

**Determinant and row multiplication**: If $A$ is an $n\times n$ matrix, and $B$ is the matrix $A$ but with one row $i$ multiplied by a scalar $c$, then
$$\det(B)=c\det(A)$$

**Corollary**: If a matrix $A$ has one of its rows equal to zero, then $\det(A)=0$

**Determinant and row addition**: If $A$ is an $n\times n$ matrix, and $B$ is the matrix $A$ but with $c$ copies of one row $i$ added to another row $j$, then
$$\det(B)=\det(A)$$

**Summary**: If $E$ is an elementary matrix, then
$$\det(EA)=\det(E)\det(A)$$
and
$$\det(AE)=\det(A)\det(E)$$

## Determinant and invertibility

**Nonzero determinant implies invertibility**: An $n\times n$ matrix is invertible if and only if its determinant is nonzero

>**NOTE**: The theorem above is one of the most important properties of the determinant, i.e. it measures how invertible a matrix is

**Nonzero determinant implies linear independence**: Let $v_{1},\dots,v_{n}$ be $n$ column vectors in $\textbf{R}^{n}$. Then $v_{1},\dots,v_{n}$ are linearly dependent if and only if $\det(v_{1},\dots,v_{n})=0$

**Corollary**: The theorem above means that $\det(v_{1},\dots,v_{n})$ measures linear independence

>**NOTE**: If we write down a typical $n\times n$ matrix $A$, then $\det(A)$ will in general just be some random number and will usually not be zero. Thus, most matrices are invertible, and most collections of $n$ vectors in $\textbf{R}^{n}$ are linearly independent

**Procedure to compute the determinant of a matrix**
1. Use row and column operations to convert the matrix into some sort of triangular or diagonal form, for which the determinant is easy to compute
2. Work backwards, using the rules above, to recover the determinant of the original matrix

**Determinant of product of matrices**: If $A$ and $B$ are $n\times n$ matrices, then
$$\det(AB)=\det(A)\det(B)$$

>**NOTE**: $\det(A+B)\neq\det(A)+\det(B)$ in general

## Expansions

**Determinant of similar matrices**: If two matrices are similar, then they have the same determinant

**Corollary**: Let $T:\textbf{R}^{n}\to\textbf{R}^{n}$ be a linear transformation, and $\beta,\beta'$ be two ordered bases for $\textbf{R}^{n}$. Then
$$\det([T]_{\beta}^{\beta})=\det([T]_{\beta'}^{\beta'})$$
Thus, to take the determinant of a matrix, it does not matter what basis we use
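**Example (Python sketch)**: To make the cofactor expansion definition from earlier in this chapter concrete, here is a short recursive determinant sketch, assuming NumPy; the name `det_cofactor` is my own, and the expansion is along the first row purely for illustration (this approach is far slower than row reduction for large matrices).

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion along the first row (illustration only)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor: A with row 0 and column j removed
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 2.0, 5.0]])
print(det_cofactor(A))       # 9.0, by cofactor expansion
print(np.linalg.det(A))      # NumPy's determinant, should agree
```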
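**Example (Python sketch)**: As a quick numerical sanity check of the properties collected in this chapter, the snippet below (a random experiment under the assumption that NumPy is available, not a proof) verifies on a random matrix that $\det(A^{T})=\det(A)$, that $\det(AB)=\det(A)\det(B)$, and that the similar matrices $P^{-1}AP$ and $A$ have the same determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))       # a random P is invertible with probability 1

det = np.linalg.det
print(np.isclose(det(A.T), det(A)))                          # symmetry property
print(np.isclose(det(A @ B), det(A) * det(B)))               # product rule
print(np.isclose(det(np.linalg.inv(P) @ A @ P), det(A)))     # similar matrices
```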