# Subspaces
### Big Idea
Subspaces of $\mathbb{R}^n$ are special geometric sets—like lines, planes, and hyperplanes—that must pass through the origin. A **basis** for a subspace is the most efficient set of vectors that both span the space and are linearly independent. The **Rank-Nullity Theorem** provides a fundamental link between the dimensions of the nullspace and the range of a matrix.
### Definition
:::info
**Definition**
A subset $U \subseteq \mathbb{R}^n$ is a **subspace** if it satisfies the following three conditions:
1. (Zero Vector) $U$ contains the zero vector, $\mathbf{0}$.
2. (Closed under Addition) $\mathbf{u}_1 + \mathbf{u}_2 \in U$ for all $\mathbf{u}_1,\mathbf{u}_2 \in U$.
3. (Closed under Scalar Multiplication) $c \mathbf{u} \in U$ for all $c \in \mathbb{R},\mathbf{u} \in U$.
:::
:::warning
**Example**
* The smallest subspace is the zero subspace, $\{ \mathbf{0} \}$, and the largest is the entire space, $\mathbb{R}^n$.
* In $\mathbb{R}^2$, any line passing through the origin is a subspace.
* In $\mathbb{R}^3$, any line or plane passing through the origin is a subspace.
* A set like $U = \left\{ \begin{bmatrix} x \\ y \end{bmatrix} : y \geq 0 \right\}$ is **not** a subspace because it is not closed under scalar multiplication (e.g., multiplying by $c = -1$ takes a vector out of the set).
:::
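The closure failure in the last example is easy to see with a two-line numerical check. The sketch below is illustrative only: the helper `in_U` is a hypothetical membership test for the half-plane $y \geq 0$.

```python
import numpy as np

def in_U(v):
    """Membership test for U = {(x, y) : y >= 0}."""
    return v[1] >= 0

u = np.array([0.0, 1.0])
print(in_U(u))       # True: u lies in U
print(in_U(-1 * u))  # False: (-1)*u = (0, -1) has y < 0, so U is not closed
```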
## Linear Independence and Span
:::info
**Definition**
* A **linear combination** of vectors $\mathbf{u}_1,\dots,\mathbf{u}_m$ is a sum of their scalar multiples: $$c_1 \mathbf{u}_1 + \cdots + c_m \mathbf{u}_m$$
* The **span** of vectors $\mathbf{u}_1,\dots,\mathbf{u}_m \in \mathbb{R}^n$, is the set of all possible linear combinations:
$$
\mathrm{span} \{ \mathbf{u}_1 , \dots , \mathbf{u}_m \} = \{ c_1 \mathbf{u}_1 + \cdots + c_m \mathbf{u}_m: c_1,\dots,c_m \in \mathbb{R} \}
$$
:::
:::danger
**Theorem** The span of any set of vectors is always a subspace of $\mathbb{R}^n$ and is the smallest subspace that contains all those vectors.
:::
:::info
**Definition** A set of vectors $\{ \mathbf{u}_1,\dots,\mathbf{u}_m \} \subset \mathbb{R}^n$ is **linearly independent** if the only way to form the zero vector, $\mathbf{0}$, as a linear combination is when all scalar coefficients are zero ($c_1 = \cdots = c_m = 0$). Equivalently, no vector in a linearly independent set can be written as a linear combination of the others.
:::
:::warning
**Examples**
* The set $\{\begin{bmatrix}1 \\ 2 \end{bmatrix},\begin{bmatrix}2 \\4\end{bmatrix}\}$ is linearly dependent because the second vector is twice the first, meaning you can find non-zero coefficients to form the zero vector (e.g., $-2\begin{bmatrix}1 \\2 \end{bmatrix} + \begin{bmatrix}2 \\4\end{bmatrix}= \mathbf{0}$). Their span is just a line.
* The standard basis vectors $\{ \begin{bmatrix}1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\}$ are linearly independent, and their span is the entire space, $\mathbb{R}^2$.
:::
#### Key Properties
The concept of linear independence is fundamentally linked to solving homogeneous matrix systems:
1. A set of vectors $\{ \mathbf{u}_1,\dots,\mathbf{u}_m \} \subset \mathbb{R}^n$ is **linearly independent** if and only if the matrix equation $U \mathbf{c}=\mathbf{0}$, where $U$ is the matrix whose columns are $\mathbf{u}_1,\dots,\mathbf{u}_m$, has only the trivial solution $\mathbf{c}=\mathbf{0}$.
2. The columns of a matrix are linearly independent if and only if every column is a **pivot column** in the reduced row echelon form ($\mathrm{rref}$).
3. If a matrix has fewer pivots than columns, the vectors are linearly dependent, and the free variables in the $\mathrm{rref}$ solution indicate the specific linear dependency.
:::warning
**Example**
For the set $\left\{\begin{bmatrix}1\\2\\3 \end{bmatrix},\begin{bmatrix}1\\1\\1 \end{bmatrix}, \begin{bmatrix}7\\10\\13 \end{bmatrix} \right\}$ , the matrix $U$ has only 2 pivots in its rref (shown below). Since there are 3 columns, the vectors are *linearly dependent*.
\begin{aligned}
U =
\begin{bmatrix}
1 & 1 & 7 \\
2 & 1 & 10 \\
3 & 1 & 13
\end{bmatrix}
&\xrightarrow{\substack{r_2:r_2-2r_1 \\ r_3: r_3 -3r_1}}
\begin{bmatrix}
1 & 1 & 7 \\
0 & -1 & -4 \\
0 & -2 & -8
\end{bmatrix} \\
&\xrightarrow{\substack{r_2:-r_2 \\ r_3: r_3 -2r_2}}
\underbrace{\begin{bmatrix}
1 & 1 & 7 \\
0 & 1 & 4 \\
0 & 0 & 0
\end{bmatrix}}_{\mathrm{ref}(U)} \\
&\xrightarrow{\substack{r_1:r_1-r_2}}
\underbrace{\begin{bmatrix}
1 & 0 & 3 \\
0 & 1 & 4 \\
0 & 0 & 0
\end{bmatrix}}_{\mathrm{rref}(U)}
\end{aligned}
The last column of the $\mathrm{rref}$ indicates the dependency: $3 \mathbf{u}_1 + 4 \mathbf{u}_2 = \mathbf{u}_3$, which rearranges to $-3 \mathbf{u}_1 - 4 \mathbf{u}_2 + \mathbf{u}_3 = \mathbf{0}$.
:::
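The hand computation above can be reproduced in a few lines with SymPy's `rref` method (a sketch, assuming SymPy is installed):

```python
from sympy import Matrix

# Columns are u1, u2, u3 from the example
U = Matrix([[1, 1, 7],
            [2, 1, 10],
            [3, 1, 13]])

R, pivots = U.rref()  # reduced row echelon form and pivot column indices
print(pivots)         # (0, 1): only 2 pivots for 3 columns -> dependent
print(R)              # last column (3, 4, 0) encodes u3 = 3*u1 + 4*u2
```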
## Basis and Dimension
A **basis** is the most efficient set of vectors that describes a subspace $U \subseteq \mathbb{R}^n$.
:::info
**Definition**
A set of vectors $\{ \mathbf{u}_1 , \dots , \mathbf{u}_m \}$ forms a **basis** of a subspace $U$ if it satisfies two conditions:
1. **Linear Independence:** No vector in the set can be written as a linear combination of the others.
2. **Spanning:** The span of the set equals the entire subspace ($\text{Span}\{\mathbf{u}_1,\ldots,\mathbf{u}_m\} = U$).
The **dimension** $\mathrm{dim} \ U$ of the subspace $U$ is the number $m$ of vectors in any of its bases.
:::
### Key Properties of a Basis
* The *choice of basis is not unique* (e.g., both $\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix} \}$ and $\{ \begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\-1\end{bmatrix} \}$ are bases for $\mathbb{R}^2$).
* The number of vectors in every basis for a given subspace is always the same; this fixed number is the dimension.
* If a set of $k$ vectors $\{\mathbf{u}_1,\dots,\mathbf{u}_k\}$ is linearly independent, it automatically forms a basis for the subspace they span, $\text{Span}\{\mathbf{u}_1,\ldots,\mathbf{u}_k\}$.
* If a subspace $U$ is known to be $k$-dimensional, any set of $k$ linearly independent vectors in $U$ is guaranteed to form a basis for $U$.
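The last two properties give a mechanical check: stack the candidate vectors as columns and compute the rank. A quick sketch with NumPy (assuming it is installed), using the second basis of $\mathbb{R}^2$ mentioned above:

```python
import numpy as np

# Candidate basis {(1, 1), (1, -1)} for R^2, stacked as columns
B = np.array([[1, 1],
              [1, -1]])

# Rank 2 = number of vectors, so they are linearly independent
# and therefore form a basis of the 2-dimensional space R^2
print(np.linalg.matrix_rank(B))  # 2
```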
:::warning
**Example**
* The standard basis for $\mathbb{R}^3$ is $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, where $\mathbf{e}_1=\begin{bmatrix}1\\0\\0\end{bmatrix}$, $\mathbf{e}_2=\begin{bmatrix}0\\1\\0\end{bmatrix}$, and $\mathbf{e}_3=\begin{bmatrix}0\\0\\1\end{bmatrix}$. Thus, $\dim \mathbb{R}^3 = 3$.
* The set $U = \text{Span}\left\{
\begin{bmatrix}0\\1\\0\end{bmatrix},
\begin{bmatrix}1\\0\\0\end{bmatrix},
\begin{bmatrix}1\\1\\0\end{bmatrix}
\right\}$ is the $xy$-plane in $\mathbb{R}^3$. Since the third vector is a linear combination of the first two, the set $\{\begin{bmatrix}0\\1\\0\end{bmatrix}, \begin{bmatrix}1\\0\\0\end{bmatrix}\}$ is a basis for $U$. Therefore, $\dim U=2$.
:::
## Nullspace and Range
The *nullspace* and *range* are two fundamental subspaces associated with any matrix $A$, and their dimensions are related by the Rank-Nullity Theorem.
### Nullspace (Kernel)
:::info
**Definition**
The **nullspace** of an $m \times n$ matrix $A$, denoted $\mathcal{N}(A)$, is the set of all vectors $\mathbf{x} \in \mathbb{R}^n$ that $A$ maps to the zero vector: $A\mathbf{x} = \mathbf{0}$.
:::
* $\mathcal{N}(A)$ is a subspace of $\mathbb{R}^n$.
* The nullspace can be found by solving the homogeneous system $A \mathbf{x}=\mathbf{0}$. The number of free variables in the reduced row echelon form ($\textrm{rref}$) of $A$ equals the dimension of $\mathcal{N}(A)$.
* If $A$ is invertible, $\mathcal{N}(A) = \{\mathbf{0}\}$, meaning its dimension is zero.
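SymPy's `nullspace` method carries out exactly this free-variable computation. A quick sketch (assuming SymPy is installed) on a hypothetical singular matrix:

```python
from sympy import Matrix

# Singular 2x2 matrix: the second column is twice the first
A = Matrix([[1, 2],
            [2, 4]])

basis = A.nullspace()  # one basis vector per free variable
print(basis)           # [Matrix([[-2], [1]])]
print(len(basis))      # 1 = dim N(A)
```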
:::warning
**Example**
$C = \begin{bmatrix}
1 & 3 & 3 & 10 \\ 2 & 6 & -1 & -1 \\ 1 & 3 & 1 & 4\end{bmatrix}$
* $\text{rref}(C) =
\begin{bmatrix}
1 & 3 & 0 & 1 \\
0 & 0 & 1 & 3 \\
0 & 0 & 0 & 0
\end{bmatrix}$ shows two free variables ($x_2$ and $x_4$).
* The solution $$\text{rref}(C)\mathbf{x} = \mathbf{0} \Leftrightarrow \begin{cases}
x_1 = -3s - t \\
x_2 = s \\
x_3 = -3t \\
x_4 = t
\end{cases}
\quad s,t \in \mathbb{R}$$ can be written as a linear combination of two vectors, $$\left\{ \; s \begin{bmatrix}-3\\1\\0\\0\end{bmatrix} + t \begin{bmatrix}-1\\0\\-3\\1\end{bmatrix} \;\middle|\; s,t \in \mathbb{R} \; \right\}$$ and these two vectors form a basis for $\mathcal{N}(C)$: $$\mathcal{N}(C) = \text{Span}\left\{
\begin{bmatrix}-3\\1\\0\\0\end{bmatrix},
\begin{bmatrix}-1\\0\\-3\\1\end{bmatrix}
\right\}$$
* Thus, $\mathrm{dim}\ \mathcal{N}(C)=2$.
:::
### Range (Column Space)
:::info
**Definition**
The **range** of an $m \times n$ matrix $A$, denoted $\mathcal{R}(A)$, is the set of all possible output vectors $A \mathbf{x}$. Writing $\mathbf{a}_1, \ldots, \mathbf{a}_n$ for the columns of $A$:
$$
\mathcal{R}(A) = \{ A \mathbf{x} : \mathbf{x} \in \mathbb{R}^n \} = \text{Span}\{\mathbf{a}_1, \ldots, \mathbf{a}_n\}
$$
The **rank of** $A$ is defined as the dimension of the range: $\mathrm{rank}(A) = \dim \mathcal{R}(A)$.
:::
* $\mathcal{R}(A)$ is a subspace of $\mathbb{R}^m$.
* A basis for $\mathcal{R}(A)$ is formed by the original columns of $A$ that correspond to the pivot columns in $\textrm{rref}(A)$.
:::warning
**Example**
For the matrix $C$ above, $\text{rank}(C) = 2$ (two pivot columns). The first and third columns of $C$ are the pivot columns in $\textrm{rref}(C)$, so a basis for $\mathcal{R}(C)$ is the corresponding columns from the original matrix:
$$
\mathcal{R}(C) = \text{Span}\left\{ \begin{bmatrix}1\\2\\1\end{bmatrix}, \begin{bmatrix}3\\-1\\1\end{bmatrix} \right\}
$$
:::
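The pivot-column rule can be reproduced with SymPy's `columnspace` method, which returns exactly the original columns at the pivot positions. A sketch on a hypothetical matrix (assuming SymPy is installed):

```python
from sympy import Matrix

# Hypothetical matrix: column 3 = -2*(column 1) + 3*(column 2)
A = Matrix([[1, 2, 4],
            [0, 1, 3],
            [1, 3, 7]])

basis = A.columnspace()  # original columns of A at the pivot positions
print(basis)             # [Matrix([[1], [0], [1]]), Matrix([[2], [1], [3]])]
print(A.rank())          # 2 = dim R(A)
```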
### Rank-Nullity Theorem
The **Rank-Nullity Theorem** formalizes the relationship between the dimensions of these subspaces for an $m \times n$ matrix $A$:
$$
\dim \mathcal{R}(A) + \dim \mathcal{N}(A) = n
$$
(The dimension of the column space plus the dimension of the nullspace equals the number of columns in the matrix.)
#### Other Key Dimensional Relationships
* The rank of a matrix equals the rank of its transpose: $\dim \mathcal{R}(A) = \dim \mathcal{R} (A^T) = \text{rank}(A)$.
* The dimension of the left nullspace $\mathcal{N}(A^T)$ is: $$\dim \mathcal{N}(A^T) = m - \dim \mathcal{R}(A^T) = m - \text{rank}(A)$$
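These dimension counts are easy to confirm computationally. The sketch below (assuming SymPy is installed) uses a hypothetical $2 \times 3$ matrix of rank 1:

```python
from sympy import Matrix

# m = 2, n = 3; the second row is twice the first, so rank(M) = 1
M = Matrix([[1, 2, 3],
            [2, 4, 6]])

rank = M.rank()               # dim R(M) = 1
nullity = len(M.nullspace())  # dim N(M) = 2
print(rank + nullity)         # 3 = n  (Rank-Nullity)
print(M.T.rank())             # 1: rank(M^T) = rank(M)
print(len(M.T.nullspace()))   # 1 = m - rank  (left nullspace)
```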
:::warning
**Example**
Let
$$
A = \begin{bmatrix}
1 & -2 & 1 & 0 & -1 \\
1 & -2 & 1 & 1 & -6 \\
0 & 0 & 1 & -1 & 1 \\
1 & -2 & 1 & 1 & -6
\end{bmatrix}
$$
* $\text{rref}(A)$:
$$
\text{rref}(A) =
\begin{bmatrix}
1 & -2 & 0 & 0 & 3 \\
0 & 0 & 1 & 0 & -4 \\
0 & 0 & 0 & 1 & -5 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
$$
* Dimensions:
* $\dim \mathcal{R}(A) = \#\text{pivots} = 3 = \text{rank}(A)$
* $\dim \mathcal{R}(A^T) = 3=\text{rank}(A^T)$
* $\dim \mathcal{N}(A) = n - \#\text{pivots} = 5-3 = 2$
* $\dim \mathcal{N}(A^T) = m - \#\text{pivots} = 4-3 = 1$
* Find a basis for $\mathcal{N}(A)$:
* $\mathcal{N}(A) = \mathcal{N}(\text{rref}(A))$, since row operations do not change the solution set of $A\mathbf{x} = \mathbf{0}$.
* Solve $\text{rref}(A)\mathbf{x} = \mathbf{0}$:
$$
\begin{cases}
x_1 = 2s - 3t \\
x_2 = s \\
x_3 = 4t \\
x_4 = 5t \\
x_5 = t
\end{cases}
\quad s,t \in \mathbb{R}
$$
* So
$$
\mathcal{N}(A) =
\left\{ s \begin{bmatrix}2\\1\\0\\0\\0\end{bmatrix} +
t \begin{bmatrix}-3\\0\\4\\5\\1\end{bmatrix}
\;\middle|\; s,t \in \mathbb{R} \right\}
= \text{Span}\left\{
\begin{bmatrix}2\\1\\0\\0\\0\end{bmatrix},
\begin{bmatrix}-3\\0\\4\\5\\1\end{bmatrix}
\right\}
$$
* Find a basis for $\mathcal{R}(A)$
$$
\mathcal{R}(A) = \text{Span}\left\{
\begin{bmatrix}1\\1\\0\\1\end{bmatrix},
\begin{bmatrix}1\\-1\\1\\1\end{bmatrix},
\begin{bmatrix}0\\1\\-1\\1\end{bmatrix}
\right\}
\quad (\text{pivot columns})
$$
* Find a basis for $\mathcal{R}(A^T)$
\begin{aligned}
\mathcal{R}(A^T) &= \mathcal{R}\big((\text{rref}(A))^T\big) && \text{(row operations preserve the row space)} \\
&= \text{Span}\{\text{nonzero (pivot) rows of } \text{rref}(A)\} \\
&= \text{Span}\left\{
\begin{bmatrix}1\\-2\\0\\0\\3\end{bmatrix},
\begin{bmatrix}0\\0\\1\\0\\-4\end{bmatrix},
\begin{bmatrix}0\\0\\0\\1\\-5\end{bmatrix}
\right\}
\end{aligned}
* Find a basis for $\mathcal{N}(A^T)$
* Direct computation gives
$$
\text{rref}(A^T) =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$
* Solve $\text{rref}(A^T)\mathbf{x} = \mathbf{0}$:
$$
\begin{cases}
x_1 = 0 \\
x_2 = -s \\
x_3 = 0 \\
x_4 = s
\end{cases}
\quad s \in \mathbb{R}
$$
* Thus
$$
\mathcal{N}(A^T) = \text{Span}\left\{
\begin{bmatrix}0\\-1\\0\\1\end{bmatrix}
\right\}
$$
:::
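An analysis like the one above can be double-checked with SymPy. The sketch below (assuming SymPy is installed) runs the same four-subspace bookkeeping on a smaller hypothetical matrix:

```python
from sympy import Matrix

# Hypothetical 3x3 matrix: col 2 = 2*col 1, and row 3 = row 1 + row 2
A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])
m, n = A.shape                    # m = 3, n = 3

R, pivots = A.rref()              # reduced row echelon form, pivot indices
rank = len(pivots)

print(rank)                       # 2 = dim R(A) = dim R(A^T)
print(len(A.nullspace()))         # 1 = n - rank  (Rank-Nullity)
print(len(A.T.nullspace()))       # 1 = m - rank  (left nullspace)
print([A[:, j] for j in pivots])  # pivot columns of A: a basis for R(A)
```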
:::success
**Exercise**
Let
$$
\mathbf{u} = \begin{bmatrix}1\\1\\2\\1\end{bmatrix}, \quad
\mathbf{v} = \begin{bmatrix}0\\1\\0\\0\end{bmatrix}, \quad
\mathbf{w} = \begin{bmatrix}0\\1\\1\\1\end{bmatrix}, \quad
\mathbf{x} = \begin{bmatrix}1\\3\\3\\2\end{bmatrix}, \quad
\mathbf{y} = \begin{bmatrix}2\\1\\3\\1\end{bmatrix}
$$
1. Explain why the set $\{\mathbf{u}, \mathbf{v}, \mathbf{w}, \mathbf{x}, \mathbf{y}\}$ is linearly dependent.
2. Let the matrix $A$ be such that $\mathbf{u}, \mathbf{v}, \mathbf{w}, \mathbf{x}, \mathbf{y}$ are the columns of $A$. That is
$$A = \begin{bmatrix} 1 & 0 & 0 & 1 & 2 \\ 1 & 1 & 1 & 3 & 1 \\ 2 & 0 & 1 & 3 & 3 \\ 1 & 0 & 1 & 2 & 1 \end{bmatrix}$$ Specify the dimensions of $\mathcal{R}(A), \; \mathcal{N}(A), \; \mathcal{R}(A^T), \; \mathcal{N}(A^T)$.
3. What is the dimension of the subspace $S = \text{Span}\{\mathbf{u}, \mathbf{v}, \mathbf{w}, \mathbf{x}, \mathbf{y}\}$? Find a basis for $S$.
:::