# Methods of deriving a formula for a linear recurrence relation sequence
## The Sequence Vector Space
Let $\mathcal{S}=\{\{a_n\}_{n=1}^{\infty}\mid a_n\in\mathbb{R},\forall n\in\mathbb{N}\}$
For $x=\{a_{n}\}_{n=1}^{\infty},y=\{b_n\}_{n=1}^{\infty}\in\mathcal{S}$ and $\alpha\in\mathbb{R}$,
define $x+y=\{a_n+b_n\}_{n=1}^{\infty}$ and $\alpha x=\{\alpha a_n\}_{n=1}^{\infty}$.
It's clear that $\mathcal{S}$ is a vector space over $\mathbb{R}$.
## $t$-th Order Linear Recurrence Relation Sequence
**Definition:**
A $t$-th order homogeneous linear recurrence relation sequence (for convenience, we call it a *linear sequence*) $\{a_n\}_{n=1}^{\infty}$ is a sequence that satisfies the following:
Given a positive integer $t\in\mathbb{N}$, the terms of the sequence have the recursive definition:
$$
\begin{cases}a_k\in\mathbb{R},&1\le k\le t\\a_{n+t}=\displaystyle\sum_{k=1}^{t}c_ka_{n+t-k},&n\ge 1\end{cases}
$$
where $c_i\in\mathbb{R}$ for all $1\le i \le t$ and $c_t\not=0$.
We denote it as
$$
\begin{pmatrix}a_1&a_2&\cdots&a_t\\c_1&c_2&\cdots&c_t\end{pmatrix}
$$
The reason we require $c_t\not=0$ is that if $c_t$ were $0$, the recursive definition would become:
$$
a_{n+t}=c_1a_{n+t-1}+c_2a_{n+t-2}+c_3a_{n+t-3}+\dots+c_{t-1}a_{n+1}
$$
By shifting the index, we would obtain:
$$
a_{n+(t-1)}=c_1a_{n+t-2}+\dots+c_{t-1}a_n
$$
This means the sequence could be defined with only $t-1$ initial terms, so it would really be a $(t-1)$-th order sequence rather than a $t$-th order one.
Therefore we require $c_t\not=0$.
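The recursive definition above is easy to turn into code. Below is a minimal sketch in Python; the function name `linear_sequence` and the example values are mine, not from the text (0-based list indices stand in for the 1-based $a_n$).

```python
def linear_sequence(initial, coeffs, n_terms):
    """First n_terms of the sequence with initial terms a_1..a_t
    and recurrence a_{n+t} = c_1 a_{n+t-1} + ... + c_t a_n."""
    a = list(initial)
    while len(a) < n_terms:
        # a[-k] is a_{n+t-k} when the next term to produce is a_{n+t}
        a.append(sum(c * a[-k] for k, c in enumerate(coeffs, start=1)))
    return a[:n_terms]

# Fibonacci: t = 2, initial terms (1, 1), coefficients (1, 1)
print(linear_sequence([1, 1], [1, 1], 8))  # → [1, 1, 2, 3, 5, 8, 13, 21]
```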
**Definition:**
Given scalars $c_i\in\mathbb{R}$, $i=1,2,\dots,t$, where $t\in\mathbb{N}$ and $c_t\not=0$, write the set $\left\{\begin{pmatrix}a_1&a_2&\cdots&a_t\\c_1&c_2&\cdots&c_t\end{pmatrix}\mid a_k\in\mathbb{R},\ k=1,2,\dots,t\right\}$ as $\mathcal{LS}(c_1,\dots,c_t)$.
For any given scalars $c_i\in\mathbb{R}$, $i=1,2,\dots,t$, where $t\in\mathbb{N}$ and $c_t\not=0$, we have the following two theorems:
**Theorem 1:**
$\mathcal{LS}(c_1,\dots,c_t)$ is a subspace of $\mathcal{S}$
**Theorem 2:**
$\mathcal{LS}(c_1,\dots,c_t)$ is isomorphic to $\mathbb{R}^t$
i.e., a linear sequence can be determined by its starting $t$ terms.
We can now derive two methods for finding a formula for a linear sequence.
## Method 1: Find a Geometric Sequence Basis
Let $c_i\in\mathbb{R},i=1,2,\dots,t$ where $t\in\mathbb{N}$, $c_t\not=0$
**Definition:**
The geometric polynomial $\mathbf{G}(r)$ corresponding to $\mathcal{LS}(c_1,\dots,c_t)$ is defined as:
$$
\mathbf{G}(r):=\sum_{k=1}^tc_kr^{t-k}-r^t
$$
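Numerically, the roots of $\mathbf{G}$ can be found with NumPy's `np.roots`, which expects coefficients from the highest degree down; we pass $r^t-\sum_{k=1}^t c_kr^{t-k}$, which has the same roots as $\mathbf{G}$. A small sketch (the function name is mine):

```python
import numpy as np

def geometric_roots(coeffs):
    """Roots of G(r) = c_1 r^{t-1} + ... + c_t - r^t, via the
    sign-flipped polynomial r^t - c_1 r^{t-1} - ... - c_t."""
    return np.roots([1.0] + [-c for c in coeffs])

# Fibonacci: G(r) = r + 1 - r^2, roots (1 ± sqrt 5)/2
print(sorted(geometric_roots([1, 1]).real))  # ≈ [-0.618..., 1.618...]
```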
**Theorem 3:**
Suppose the geometric polynomial $\mathbf{G}(r)$ corresponding to $\mathcal{LS}(c_1,\dots,c_t)$ has $t$ distinct real roots $\alpha_1,\alpha_2,\dots,\alpha_t$.
Write $(\mathbf{g}_i)^T:=(\alpha_i,{\alpha_i^2},\dots,\alpha_{i}^t)\in\mathbb{R}^t$ and $\mathbf{a}=(a_1,a_2,\dots,a_t)\in\mathbb{R}^t$ ,where $v^T$ means the transpose of $v$ for $v\in\mathbb{R}^t$.
Then $a_n=(\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}^{-1}\mathbf{a}^T)\cdot\begin{pmatrix}\alpha_1^n\\\alpha_2^n\\\vdots\\\alpha_t^n\end{pmatrix}$.
Note that since $c_t\not=0$, we have $\mathbf{G}(0)=c_t\not=0$; hence $0$ is not a root of $\mathbf{G}$, and every $\alpha_i\not=0$.
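Theorem 3 translates directly into a short numerical recipe: build the matrix with columns $\mathbf{g}_i$, solve for the weight vector, and sum the scaled geometric terms. A sketch under the theorem's hypotheses (distinct real roots); the names below are mine:

```python
import numpy as np

def closed_form(alphas, initial):
    """Return n ↦ a_n for the linear sequence with the given distinct
    real roots alpha_i and initial terms a_1..a_t (Theorem 3)."""
    t = len(alphas)
    # Column i is g_i = (alpha_i, alpha_i^2, ..., alpha_i^t)
    G = np.column_stack([[a**j for j in range(1, t + 1)] for a in alphas])
    x = np.linalg.solve(G, np.array(initial, dtype=float))
    return lambda n: sum(xi * a**n for xi, a in zip(x, alphas))

# Fibonacci: roots of r^2 = r + 1, initial terms 1, 1
phi, psi = (1 + 5**0.5) / 2, (1 - 5**0.5) / 2
f = closed_form([phi, psi], [1, 1])
print(round(f(10)))  # → 55
```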
::: spoiler Proof
We claim that $\beta=\left\{\begin{pmatrix}\mathbf{g_i}^T\\\mathbf{c}\end{pmatrix}\mid i=1,2,3,\dots,t\right\}$, where $\mathbf{c}=(c_1,c_2,\dots,c_t)$, is a basis of $\mathcal{LS}(\mathbf{c})$. Each geometric sequence $\{\alpha_i^n\}_{n=1}^{\infty}$ does lie in $\mathcal{LS}(\mathbf{c})$: since $\mathbf{G}(\alpha_i)=0$, multiplying by $\alpha_i^n$ gives $\alpha_i^{n+t}=\displaystyle\sum_{k=1}^{t}c_k\alpha_i^{n+t-k}$.
It suffices to prove linear independence, because a linearly independent set of $t$ vectors in a $t$-dimensional vector space is a basis.
Since $\mathcal{LS}(\mathbf{c})$ is isomorphic to $\mathbb{R}^t$, which is in turn isomorphic to $\text{P}_{t-1}(\mathbb{R})$, the space of polynomials of degree at most $t-1$, it suffices to exhibit a basis of $\text{P}_{t-1}(\mathbb{R})$ together with an isomorphism between $\text{P}_{t-1}(\mathbb{R})$ and $\mathcal{LS}(\mathbf{c})$.
Take $\gamma:=\{f_i\in\text{P}_{t-1}(\mathbb{R})\mid i=1,2,3,\dots,t\}$ where $f_i$ is defined as
$$
f_i(x):=\prod_{\substack{1\le j\le t\\ j\not=i}}\dfrac{x-\alpha_j}{\alpha_i-\alpha_j}
$$
Suppose that $\displaystyle\sum_{i=1}^tr_if_i(x)=0$.
Evaluating at $x=\alpha_j$ gives $\displaystyle\sum_{i=1}^tr_if_i(\alpha_j)=0$ for $j=1,2,3,\dots,t$.
Since $f_i(\alpha_j)=\begin{cases}1,&\text{if }i=j\\0,&\text{if }i\not=j\end{cases}$, for fixed $j$ we get $0=\displaystyle\sum_{i=1}^tr_if_i(\alpha_j)=r_j$. Letting $j$ run through $1,2,\dots,t$, we conclude that $r_i=0$ for every $1\le i\le t$.
Hence $\gamma$ is a basis for $\text{P}_{t-1}(\mathbb{R})$. We now show that the linear map $T:\text{P}_{t-1}(\mathbb{R})\to\mathcal{LS}(\mathbf{c})$ defined by $T(f):=\begin{pmatrix}f(\alpha_1)&f(\alpha_2)&\cdots&f(\alpha_t)\\c_1&c_2&\cdots&c_t\end{pmatrix}$ is an isomorphism.
It can be easily shown that $T(\gamma)=\{\begin{pmatrix}e_i\\\mathbf{c}\end{pmatrix}\mid i=1,2,3,\dots,t\}$, where $(e_1,e_2,\dots,e_t)$ is the standard ordered basis for $\mathbb{R}^t$, defined as $e_i=(\delta_{1,i},\delta_{2,i},\dots,\delta_{t,i}),i=1,2,\dots,t$.
The $\delta_{i,j}$ is the Kronecker delta of $i,j$, $\delta_{i,j}:=\begin{cases}1,&\text{if }i=j\\0,&\text{if }i\not=j\end{cases}$.
Hence $T$ maps the basis $\gamma$ to a basis of $\mathcal{LS}(\mathbf{c})$ (by Theorem 2, the sequences in $T(\gamma)$ correspond to the standard basis of $\mathbb{R}^t$), so $T$ is an isomorphism and $\dim\mathcal{LS}(\mathbf{c})=t$. Finally, under the initial-terms isomorphism of Theorem 2, $\beta$ corresponds to the vectors $\mathbf{g}_1,\dots,\mathbf{g}_t$; the matrix $\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}$ becomes a Vandermonde matrix after factoring $\alpha_i$ out of column $i$, so it is invertible because the $\alpha_i$ are distinct and nonzero. Hence $\beta$ is a linearly independent set of $t$ sequences in the $t$-dimensional space $\mathcal{LS}(\mathbf{c})$, and we conclude that $\beta$ is a basis for $\mathcal{LS}(\mathbf{c})$.
Consider the matrix equation
$$
\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}\mathbf{x}=\mathbf{a}^T
$$
Since $\beta$ is a basis for $\mathcal{LS}(\mathbf{c})$, $\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}$ is invertible.
Hence
$$
\mathbf{x}=\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}^{-1}\mathbf{a}^T
$$
This means we have found a linear combination of geometric sequences whose first $t$ terms add up to $\mathbf{a}$.
Thus the $n$-th term of the sequence can be written as:
$$
a_n=\sum_{1\le i\le t}x_i\alpha_i^n=\mathbf{x}\cdot\begin{pmatrix}\alpha_1^n\\\alpha_2^n\\\vdots\\\alpha_t^n\end{pmatrix}=\left(\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}^{-1}\mathbf{a}^T\right)\cdot\begin{pmatrix}\alpha_1^n\\\alpha_2^n\\\vdots\\\alpha_t^n\end{pmatrix}
$$
:::
Moreover, because the matrix $\begin{pmatrix}\mathbf{g_1}&\mathbf{g_2}&\cdots&\mathbf{g_t}\end{pmatrix}$ becomes a Vandermonde matrix once we factor $\alpha_i$ out of column $i$, its inverse can be written down explicitly.
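A quick numeric sanity check of this remark, with example roots of my choosing: column $i$ of $\begin{pmatrix}\mathbf{g_1}&\cdots&\mathbf{g_t}\end{pmatrix}$ is $\alpha_i$ times the Vandermonde column $(1,\alpha_i,\dots,\alpha_i^{t-1})^T$.

```python
import numpy as np

alphas = np.array([2.0, -1.0, 0.5])  # hypothetical distinct nonzero roots
t = len(alphas)
# Matrix with columns g_i = (alpha_i, alpha_i^2, ..., alpha_i^t)
G = np.column_stack([[a**j for j in range(1, t + 1)] for a in alphas])
# Vandermonde matrix with columns (1, alpha_i, ..., alpha_i^{t-1})
V = np.vander(alphas, t, increasing=True).T
# Scaling column i of V by alpha_i recovers G
print(np.allclose(G, V * alphas))  # → True
```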
### Examples
The Fibonacci sequence $\{F_n\}_{n=1}^{\infty}$ is defined by
$$
\begin{cases}F_1=F_2=1\\F_{n+2}=F_{n+1}+F_n,&\forall n\in\mathbb{N}\end{cases}
$$
Prove that $F_n=\dfrac{1}{\sqrt{5}}((\dfrac{1+\sqrt{5}}{2})^n-(\dfrac{1-\sqrt{5}}{2})^n),\forall n\in\mathbb{N}$
:::spoiler Proof
The geometric polynomial here is $\mathbf{G}(r)=r+1-r^2$, whose roots are $\dfrac{1+\sqrt{5}}{2}$ and $\dfrac{1-\sqrt{5}}{2}$. Let $x_n=(\dfrac{1+\sqrt{5}}{2})^{n-1}$ and $y_n=(\dfrac{1-\sqrt{5}}{2})^{n-1}$. Since these two geometric sequences form a basis, there exist unique $\alpha,\beta\in\mathbb{R}$ such that
$\{F_n\}=\alpha\{x_n\}_{n=1}^{\infty}+\beta\{y_n\}_{n=1}^{\infty}$
$\begin{cases}F_1=1=\alpha x_1+\beta y_1=\alpha+\beta
\\F_n=\alpha(\dfrac{1+\sqrt{5}}{2})^{n-1}+\beta(\dfrac{1-\sqrt{5}}{2})^{n-1}\end{cases}$
$F_n=\alpha(\dfrac{1+\sqrt{5}}{2})^{n-1}+(1-\alpha)(\dfrac{1-\sqrt{5}}{2})^{n-1}$
$=\alpha((\dfrac{1+\sqrt{5}}{2})^{n-1}-(\dfrac{1-\sqrt{5}}{2})^{n-1})+(\dfrac{1-\sqrt{5}}{2})^{n-1}$
Setting $n=2$: $F_2=1=\alpha((\dfrac{1+\sqrt{5}}{2})-(\dfrac{1-\sqrt{5}}{2}))+\dfrac{1-\sqrt{5}}{2}
=\sqrt{5}\alpha+\dfrac{1}{2}-\dfrac{\sqrt{5}}{2}$
$\implies\alpha=\dfrac{1}{\sqrt{5}}(\dfrac{\sqrt{5}}{2}+\dfrac{1}{2})$
$\implies\beta=1-\alpha=1-\dfrac{1}{\sqrt{5}}(\dfrac{\sqrt{5}}{2}+\dfrac{1}{2})
\\=\dfrac{2\sqrt{5}-\sqrt{5}-1}{2\sqrt{5}}=\dfrac{1}{\sqrt{5}}(\dfrac{\sqrt{5}-1}{2})=-\dfrac{1}{\sqrt{5}}(\dfrac{1-\sqrt{5}}{2})$
$F_n=\dfrac{1}{\sqrt{5}}(\dfrac{\sqrt{5}}{2}+\dfrac{1}{2})(\dfrac{1+\sqrt{5}}{2})^{n-1}-\dfrac{1}{\sqrt{5}}(\dfrac{1-\sqrt{5}}{2})(\dfrac{1-\sqrt{5}}{2})^{n-1}$
$=\dfrac{1}{\sqrt{5}}((\dfrac{1+\sqrt{5}}{2})^{n}-(\dfrac{1-\sqrt{5}}{2})^{n}),\forall n\in\mathbb{N}$
:::
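Binet's formula can also be checked numerically against the recurrence; a short sketch (floating point suffices here because rounding to the nearest integer absorbs the error for small $n$):

```python
# Compare Binet's formula with terms generated by the recurrence
phi, psi = (1 + 5**0.5) / 2, (1 - 5**0.5) / 2

def binet(n):
    return (phi**n - psi**n) / 5**0.5

fib = [1, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

print(all(round(binet(n + 1)) == fib[n] for n in range(20)))  # → True
```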
## Method 2: Diagonalization