# Homework 5

ntu_b05202054 何信佑 https://hackmd.io/@ulynx/SJ7YESnuL

###### tags: `微分方程特論`

5.6 (p. 410): 15, 25
5.7 (p. 423): 1, 9, 21, 31, 37

## Section 5.6

### Problem 15

Find the general solution of the system
$$\mathbf{x}'=\underbrace{\begin{pmatrix}-2&-9&0\\1&4&0\\1&3&1\end{pmatrix}}_{\mathsf{A}}\mathbf{x}.\tag{1}$$

#### Solution

The secular equation of $\mathsf{A}$ is
$$\begin{align}0&=\begin{vmatrix}-2-\lambda&-9&0\\1&4-\lambda&0\\1&3&1-\lambda\end{vmatrix}\\&=(-2-\lambda)(4-\lambda)(1-\lambda)+9(1-\lambda)\\&=(\lambda^2-2\lambda+1)(1-\lambda)\\&=(1-\lambda)^3.\end{align}$$
Thus $\mathsf{A}$ has the eigenvalue $\lambda=1$ of multiplicity $k=3$. The eigenvector equation is
$$(\mathsf{A}-\mathsf{I})\mathbf{v}=\begin{pmatrix}-3&-9&0\\1&3&0\\1&3&0\end{pmatrix}\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\tag{2}$$
and the nonzero vector $\mathbf{v}=\begin{pmatrix}\alpha&\beta&\gamma\end{pmatrix}^T$ is an eigenvector if and only if $\alpha+3\beta=0$, i.e. $\alpha=-3\beta$, with $\gamma$ arbitrary. If $\gamma=1$, we may choose $\alpha=\beta=0$; this gives the eigenvector ==$\mathbf{v}_1=\begin{pmatrix}0&0&1\end{pmatrix}^T$==. If $\gamma=0$ and we choose $\beta=1$, then $\alpha=-3$; this gives the other eigenvector ==$\mathbf{v}_2=\begin{pmatrix}-3&1&0\end{pmatrix}^T$==.

We can find only $p=2$ linearly independent eigenvectors associated with the eigenvalue $\lambda=1$ of multiplicity $k=3$, so the defect of $\lambda=1$ is $d=k-p=1$. The remaining independent solution has the form $\mathbf{x}_3=(\mathbf{w}t+\mathbf{v}_3)e^{\lambda t}$, where $\mathbf{v}_3$ satisfies $(\mathsf{A}-\mathsf{I})^2\mathbf{v}_3=\mathbf{0}$ with $(\mathsf{A}-\mathsf{I})\mathbf{v}_3\neq\mathbf{0}$, and $\mathbf{w}=(\mathsf{A}-\mathsf{I})\mathbf{v}_3$ is then an eigenvector. Here
$$(\mathsf{A}-\mathsf{I})^2\mathbf{v}_3=\begin{pmatrix}0&0&0\\0&0&0\\0&0&0\end{pmatrix}\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\tag{3}$$
so any vector outside the eigenspace meets the requirement. We may choose ==$\mathbf{v}_3=\begin{pmatrix}1&0&0\end{pmatrix}^T$==, which gives $\mathbf{w}=(\mathsf{A}-\mathsf{I})\mathbf{v}_3=\begin{pmatrix}-3&1&1\end{pmatrix}^T$ (indeed an eigenvector, since $-3+3\cdot1=0$).

Therefore, the general solution of System (1) is
$$\boxed{\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)+c_3\mathbf{x}_3(t)},$$
where the linearly independent solutions are
$$\boxed{\begin{align}\mathbf{x}_1(t)&=\mathbf{v}_1e^t=\begin{pmatrix}0\\0\\1\end{pmatrix}e^t,\\\mathbf{x}_2(t)&=\mathbf{v}_2e^t=\begin{pmatrix}-3\\1\\0\end{pmatrix}e^t,\\\mathbf{x}_3(t)&=(\mathbf{w}t+\mathbf{v}_3)e^t=\begin{pmatrix}1-3t\\t\\t\end{pmatrix}e^t,\end{align}}\tag{4}$$
and $c_1,c_2,c_3$ are arbitrary real constants. $\blacksquare$

### Problem 25

The eigenvalues of the coefficient matrix $\mathsf{A}$ are given in the following system
$$\mathbf{x}'=\underbrace{\begin{pmatrix}-2&17&4\\-1&6&1\\0&1&2\end{pmatrix}}_{\mathsf{A}}\mathbf{x};\quad \lambda=2,2,2.\tag{1}$$
Find its general solution.

#### Solution

The eigenvalue $\lambda=2$ of $\mathsf{A}$ is of multiplicity $k=3$. The eigenvector equation is
$$(\mathsf{A}-2\mathsf{I})\mathbf{v}=\begin{pmatrix}-4&17&4\\-1&4&1\\0&1&0\end{pmatrix}\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}.\tag{2}$$
The row echelon form of $\mathsf{A}-2\mathsf{I}$ is
$$\begin{pmatrix}1&0&-1\\0&1&0\\0&0&0\end{pmatrix},$$
which means $\beta=0$ and $\alpha=\gamma$, with $\gamma$ arbitrary. Choosing $\gamma=1$, Equation (2) gives only *one* linearly independent eigenvector ==$\mathbf{v}_1=\begin{pmatrix}1&0&1\end{pmatrix}^T$==, leading to the first linearly independent solution $\mathbf{x}_1(t)=\mathbf{v}_1e^{2t}$. The defect of the eigenvalue $\lambda=2$ is therefore $d=k-p=3-1=2$.
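As a quick cross-check of this eigenstructure, here is a minimal SymPy sketch (not part of the required hand computation; the names `A` and `M` are ours) confirming that $\mathsf{A}-2\mathsf{I}$ has rank $2$, hence a one-dimensional eigenspace and defect $d=2$, while $(\mathsf{A}-2\mathsf{I})^3=\mathsf{O}$:

```python
import sympy as sp

# Coefficient matrix of Problem 25 and the shifted matrix A - 2I
A = sp.Matrix([[-2, 17, 4],
               [-1,  6, 1],
               [ 0,  1, 2]])
M = A - 2 * sp.eye(3)

print(M.rank())               # 2  -> only one independent eigenvector
print(M.nullspace())          # [Matrix([[1], [0], [1]])], i.e. v1
print((M**3) == sp.zeros(3))  # True -> (A - 2I)^3 = O, so a chain of length 3 exists
```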
We look for a length-3 chain of generalized eigenvectors yielding two further linearly independent solutions $\mathbf{x}_2(t)=(\mathbf{w}_1t+\mathbf{v}_2)e^{2t}$ and $\mathbf{x}_3(t)=\left(\frac{1}{2}\mathbf{w}_1t^2+\mathbf{v}_2t+\mathbf{v}_3\right)e^{2t}$, where $\mathbf{v}_2$ satisfies
$$(\mathsf{A}-2\mathsf{I})^2\mathbf{v}_2=\begin{pmatrix}-1&4&1\\0&0&0\\-1&4&1\end{pmatrix}\mathbf{v}_2=\begin{pmatrix}0\\0\\0\end{pmatrix},\tag{3}$$
$\mathbf{v}_3$ satisfies
$$(\mathsf{A}-2\mathsf{I})^3\mathbf{v}_3=\begin{pmatrix}0&0&0\\0&0&0\\0&0&0\end{pmatrix}\mathbf{v}_3=\begin{pmatrix}0\\0\\0\end{pmatrix},\tag{4}$$
and $\mathbf{w}_1=(\mathsf{A}-2\mathsf{I})\mathbf{v}_2$ is an eigenvector. Equation (4) is satisfied by every vector, so any $\mathbf{v}_3$ with $(\mathsf{A}-2\mathsf{I})^2\mathbf{v}_3\neq\mathbf{0}$ is suitable. We may choose ==$\mathbf{v}_3=\begin{pmatrix}1&0&0\end{pmatrix}^T$==, and we calculate ==$\mathbf{v}_2=(\mathsf{A}-2\mathsf{I})\mathbf{v}_3=\begin{pmatrix}-4&-1&0\end{pmatrix}^T$== and $\mathbf{w}_1=(\mathsf{A}-2\mathsf{I})\mathbf{v}_2=\begin{pmatrix}-1&0&-1\end{pmatrix}^T$. Not surprisingly, $\mathbf{w}_1$ differs only by a factor from the eigenvector found previously: $\mathbf{w}_1=-\mathbf{v}_1$. Note that the chain solutions must be built on $\mathbf{w}_1$ itself, not on $\mathbf{v}_1$.

Therefore, the general solution of System (1) is
$$\boxed{\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)+c_3\mathbf{x}_3(t)},$$
where the linearly independent solutions are
$$\boxed{\begin{align}\mathbf{x}_1(t)&=\mathbf{v}_1e^{2t}=\begin{pmatrix}1\\0\\1\end{pmatrix}e^{2t},\\\mathbf{x}_2(t)&=(\mathbf{w}_1t+\mathbf{v}_2)e^{2t}=\begin{pmatrix}-t-4\\-1\\-t\end{pmatrix}e^{2t},\\\mathbf{x}_3(t)&=\left(\dfrac{1}{2}\mathbf{w}_1t^2+\mathbf{v}_2t+\mathbf{v}_3\right)e^{2t}=\begin{pmatrix}-\frac{1}{2}t^2-4t+1\\-t\\-\frac{1}{2}t^2\end{pmatrix}e^{2t},\end{align}}\tag{5}$$
and $c_1,c_2,c_3$ are arbitrary real constants. $\blacksquare$
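Because the sign of the chain vector is easy to get wrong here, the following is a minimal SymPy sketch (verification only; SymPy and the variable names are not part of the hand solution) confirming that $\mathbf{x}_2$ and $\mathbf{x}_3$ above satisfy $\mathbf{x}'=\mathsf{A}\mathbf{x}$:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-2, 17, 4],
               [-1,  6, 1],
               [ 0,  1, 2]])

# Chain solutions built on w1 = (A - 2I)v2 = (-1, 0, -1)^T
x2 = sp.Matrix([-t - 4, -1, -t]) * sp.exp(2*t)
x3 = sp.Matrix([-t**2/2 - 4*t + 1, -t, -t**2/2]) * sp.exp(2*t)

for x in (x2, x3):
    residual = sp.simplify(x.diff(t) - A * x)
    print(residual.T)  # Matrix([[0, 0, 0]]) for a genuine solution
```

The same check applied to $\mathbf{x}_1=\mathbf{v}_1e^{2t}$ of course also returns the zero vector.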
## Section 5.7

### Problem 1

Find a fundamental matrix of the system
$$\mathbf{x}'=\underbrace{\begin{pmatrix}2&1\\1&2\end{pmatrix}}_\mathsf{A}\mathbf{x},\tag{1}$$
then apply Eq. (8) in the textbook,
$$\mathbf{x}(t)=\mathsf{\Phi}(t)\mathsf{\Phi}(0)^{-1}\mathbf{x}(0),$$
to find a solution satisfying the given initial condition
$$\mathbf{x}(0)=\begin{pmatrix}3\\-2\end{pmatrix}.\tag{2}$$

#### Solution

The secular equation of $\mathsf{A}$ is
$$\begin{align}0&=\begin{vmatrix}2-\lambda&1\\1&2-\lambda\end{vmatrix}\\&=(2-\lambda)(2-\lambda)-1\\&=\lambda^2-4\lambda+3\\&=(\lambda-1)(\lambda-3).\end{align}$$
Thus the eigenvalues of $\mathsf{A}$ are $\lambda_1=1$ and $\lambda_2=3$.

For the eigenvalue $\lambda_1=1$, the eigenvector equation is
$$\begin{pmatrix}1&1\\1&1\end{pmatrix}\mathbf{v}=\mathbf{0}.\tag{3}$$
An appropriate choice of eigenvector is $\mathbf{v}_1=\begin{pmatrix}1&-1\end{pmatrix}^T$, so the first linearly independent solution is ==$\mathbf{x}_1=\mathbf{v}_1e^t=\begin{pmatrix}e^t&-e^t\end{pmatrix}^T$==.

For the eigenvalue $\lambda_2=3$, the eigenvector equation is
$$\begin{pmatrix}-1&1\\1&-1\end{pmatrix}\mathbf{v}=\mathbf{0}.\tag{4}$$
An appropriate choice of eigenvector is $\mathbf{v}_2=\begin{pmatrix}1&1\end{pmatrix}^T$, so the second linearly independent solution is ==$\mathbf{x}_2=\mathbf{v}_2e^{3t}=\begin{pmatrix}e^{3t}&e^{3t}\end{pmatrix}^T$==.

A fundamental matrix of System (1), according to Eq. (2) in the textbook, is
$$\mathsf{\Phi}(t)=\begin{pmatrix}|&|\\\mathbf{x}_1&\mathbf{x}_2\\|&|\end{pmatrix}=\boxed{\begin{pmatrix}e^t&e^{3t}\\-e^t&e^{3t}\end{pmatrix}}.\tag{5}$$
The particular solution of System (1) that satisfies the initial condition (2) is given by
$$\begin{align}\mathbf{x}(t)&=\mathsf{\Phi}(t)\mathsf{\Phi}(0)^{-1}\mathbf{x}(0)\\&=\begin{pmatrix}e^t&e^{3t}\\-e^t&e^{3t}\end{pmatrix}\begin{pmatrix}1&1\\-1&1\end{pmatrix}^{-1}\begin{pmatrix}3\\-2\end{pmatrix}\\&=\frac{1}{2}\begin{pmatrix}e^t&e^{3t}\\-e^t&e^{3t}\end{pmatrix}\begin{pmatrix}1&-1\\1&1\end{pmatrix}\begin{pmatrix}3\\-2\end{pmatrix}\\&=\frac{1}{2}\begin{pmatrix}e^t&e^{3t}\\-e^t&e^{3t}\end{pmatrix}\begin{pmatrix}5\\1\end{pmatrix}\\&=\boxed{\begin{pmatrix}\frac{1}{2}(5e^t+e^{3t})\\\frac{1}{2}(-5e^t+e^{3t})\end{pmatrix}}.\;\blacksquare\end{align}$$

### Problem 9

Compute the matrix exponential $e^{\mathsf{A}t}$ for the system
$$\left\{\begin{array}{l}x_1'=5x_1-4x_2\\x_2'=2x_1-x_2\end{array}\right.,\tag{1}$$
where $\mathsf{A}$ denotes the coefficient matrix of this system.

#### Solution

The coefficient matrix of System (1) is
$$\mathsf{A}=\begin{pmatrix}5&-4\\2&-1\end{pmatrix}.\tag{2}$$
Although the matrix exponential is defined by Expression (24) in the textbook as
$$e^{\mathsf{A}t}=\mathsf{I}+\mathsf{A}t+\mathsf{A}^2\frac{t^2}{2!}+\cdots+\mathsf{A}^n\frac{t^n}{n!}+\cdots,$$
it is impractical to determine $e^{\mathsf{A}t}$ directly from this expression, because the series generally has infinitely many nonzero terms. Instead, we shall apply Theorem 3 of Sec. 5.7 in the textbook.

First we find the eigenvalues of $\mathsf{A}$ from
$$\begin{align}0&=\begin{vmatrix}5-\lambda&-4\\2&-1-\lambda\end{vmatrix}\\&=(\lambda-5)(\lambda+1)+8\\&=\lambda^2-4\lambda+3\\&=(\lambda-1)(\lambda-3).\end{align}$$
They are $\lambda_1=1$ and $\lambda_2=3$.

For the eigenvalue $\lambda_1=1$, the eigenvector equation $\begin{pmatrix}4&-4\\2&-2\end{pmatrix}\mathbf{v}=\mathbf{0}$ suggests $\mathbf{v}_1=\begin{pmatrix}1&1\end{pmatrix}^T$ be chosen as the eigenvector. So one of the two linearly independent solutions of System (1) is ==$\mathbf{x}_1(t)=\mathbf{v}_1e^{t}=\begin{pmatrix}1&1\end{pmatrix}^Te^{t}$==. For the eigenvalue $\lambda_2=3$, the eigenvector equation $\begin{pmatrix}2&-4\\2&-4\end{pmatrix}\mathbf{v}=\mathbf{0}$ suggests $\mathbf{v}_2=\begin{pmatrix}2&1\end{pmatrix}^T$ be chosen as the eigenvector. So the other of the two linearly independent solutions of System (1) is ==$\mathbf{x}_2(t)=\mathbf{v}_2e^{3t}=\begin{pmatrix}2&1\end{pmatrix}^Te^{3t}$==. A fundamental matrix of System (1) is then
$$\mathsf{\Phi}(t)=\begin{pmatrix}e^t&2e^{3t}\\e^t&e^{3t}\end{pmatrix}.\tag{3}$$
Finally, Theorem 3 of Sec. 5.7 in the textbook enables us to write
$$\begin{align}e^{\mathsf{A}t}&=\mathsf{\Phi}(t)\mathsf{\Phi}(0)^{-1}\\&=\begin{pmatrix}e^t&2e^{3t}\\e^t&e^{3t}\end{pmatrix}\begin{pmatrix}1&2\\1&1\end{pmatrix}^{-1}\\&=-\begin{pmatrix}e^t&2e^{3t}\\e^t&e^{3t}\end{pmatrix}\begin{pmatrix}1&-2\\-1&1\end{pmatrix}\\&=\boxed{\begin{pmatrix}-e^t+2e^{3t}&2e^t-2e^{3t}\\-e^t+e^{3t}&2e^t-e^{3t}\end{pmatrix}},\end{align}$$
where the leading minus sign comes from $\det\mathsf{\Phi}(0)=-1$ in $\mathsf{\Phi}(0)^{-1}$. $\blacksquare$
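As a sanity check on the boxed result, here is a minimal SymPy sketch (assuming SymPy's built-in `Matrix.exp`, which is not part of the hand computation) comparing it with $e^{\mathsf{A}t}$ computed directly:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[5, -4],
               [2, -1]])

# Boxed result obtained from Phi(t) * Phi(0)^(-1)
boxed = sp.Matrix([[-sp.exp(t) + 2*sp.exp(3*t), 2*sp.exp(t) - 2*sp.exp(3*t)],
                   [-sp.exp(t) +   sp.exp(3*t), 2*sp.exp(t) -   sp.exp(3*t)]])

direct = (A * t).exp()              # matrix exponential e^(At)
print(sp.simplify(direct - boxed))  # zero matrix if the two agree
```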
### Problem 21

Show that the matrix
$$\mathsf{A}=\begin{pmatrix}1&-1\\1&-1\end{pmatrix}$$
is **nilpotent** and then use this fact to find (as in Example 3) the matrix exponential $e^{\mathsf{A}t}$.

#### Solution

A matrix $\mathsf{A}$ is called **nilpotent** if $\mathsf{A}^n=\mathsf{O}$ for some positive integer $n$. So we calculate
$$\begin{align}\mathsf{A}^1&=\begin{pmatrix}1&-1\\1&-1\end{pmatrix},\\\mathsf{A}^2&=\begin{pmatrix}1&-1\\1&-1\end{pmatrix}^2=\begin{pmatrix}1&-1\\1&-1\end{pmatrix}\begin{pmatrix}1&-1\\1&-1\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}=\mathsf{O}_{2\times 2}.\end{align}$$
Evidently, the matrix $\mathsf{A}$ is nilpotent. Because of the nilpotency of $\mathsf{A}$, the series for $e^{\mathsf{A}t}$ terminates after finitely many terms, as seen in
$$\begin{align}e^{\mathsf{A}t}&=\mathsf{I}+\mathsf{A}t+\mathsf{A}^2\frac{t^2}{2!}+\cdots+\mathsf{A}^n\frac{t^n}{n!}+\cdots\\&=\mathsf{I}+\mathsf{A}t+\underbrace{\mathsf{A}^2}_{=\mathsf{O}_{2\times 2}}\left(\sum^\infty_{k=0}\mathsf{A}^k\frac{t^{k+2}}{(k+2)!}\right)\\&=\begin{pmatrix}1&0\\0&1\end{pmatrix}+\begin{pmatrix}1&-1\\1&-1\end{pmatrix}t\\&=\boxed{\begin{pmatrix}1+t&-t\\t&1-t\end{pmatrix}}.\;\blacksquare\end{align}$$
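A brief SymPy sketch (verification only, again assuming SymPy's `Matrix.exp` is available) confirming both the nilpotency and the truncated series:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1],
               [1, -1]])

print(A**2 == sp.zeros(2))                   # True: A is nilpotent with A^2 = O
truncated = sp.eye(2) + A*t                  # the series stops after the linear term
print(sp.simplify((A*t).exp() - truncated))  # zero matrix: e^(At) = I + At
```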
### Problem 31

Suppose that the $n\times n$ matrices $\mathsf{A}$ and $\mathsf{B}$ **commute**; that is, that $\mathsf{A}\mathsf{B}=\mathsf{B}\mathsf{A}$. Prove that $e^{\mathsf{A}+\mathsf{B}}=e^\mathsf{A}e^\mathsf{B}$. (*Suggestion*: Group the terms in the product of the two series on the right-hand side to obtain the series on the left.)

#### Solution

By definition, the exponentials of the $n\times n$ matrices $\mathsf{A}$, $\mathsf{B}$, and $\mathsf{A}+\mathsf{B}$ are
$$e^\mathsf{A}=\sum^\infty_{k=0}\dfrac{\mathsf{A}^k}{k!},\quad e^\mathsf{B}=\sum^\infty_{m=0}\dfrac{\mathsf{B}^m}{m!},\quad\text{and}\quad e^{\mathsf{A}+\mathsf{B}}=\sum^\infty_{n=0}\dfrac{(\mathsf{A}+\mathsf{B})^n}{n!},\tag{1}$$
with the convenient convention that $\mathsf{X}^0:=\mathsf{I}_{n\times n}$ for an $n\times n$ matrix $\mathsf{X}$. It is seen without difficulty that $e^\mathsf{A}$ and $e^\mathsf{B}$ are $n\times n$ matrices as well. Hence we can calculate their product by writing
$$\begin{align}e^\mathsf{A}e^\mathsf{B}&=\left(\sum^\infty_{k=0}\dfrac{\mathsf{A}^k}{k!}\right)\left(\sum^\infty_{m=0}\dfrac{\mathsf{B}^m}{m!}\right)=\sum^\infty_{k=0}\sum^\infty_{m=0}\dfrac{\mathsf{A}^k\mathsf{B}^m}{k!\,m!}\\&=\sum^\infty_{k=0}\sum^\infty_{m=0}\dfrac{1}{(k+m)!}\dfrac{(k+m)!}{k!\,m!}\mathsf{A}^k\mathsf{B}^m\\&=\sum^\infty_{n=0}\sum^n_{k=0}\dfrac{1}{n!}\dfrac{n!}{k!(n-k)!}\mathsf{A}^k\mathsf{B}^{n-k}\\&=\sum^\infty_{n=0}\dfrac{1}{n!}\underbrace{\left[\sum^n_{k=0}\dfrac{n!}{k!(n-k)!}\mathsf{A}^k\mathsf{B}^{n-k}\right]}_{(\mathsf{A}+\mathsf{B})^n}\\&=\sum^\infty_{n=0}\dfrac{(\mathsf{A}+\mathsf{B})^n}{n!}=e^{\mathsf{A}+\mathsf{B}}.\end{align}$$

- In the third line, we introduced the new index $n=k+m$, so that $m=n-k$; the double sum was rearranged so that, for each fixed $n$, the inner sum runs over $k=0,1,\ldots,n$.
- In the fourth line, we recognized that the square bracket is just the binomial expansion of $(\mathsf{A}+\mathsf{B})^n$. But this is valid only if $\mathsf{A}\mathsf{B}=\mathsf{B}\mathsf{A}$; otherwise, the binomial expansion fails, even in the simplest case:
$$(\mathsf{A}+\mathsf{B})^2=\mathsf{A}^2+\mathsf{A}\mathsf{B}+\mathsf{B}\mathsf{A}+\mathsf{B}^2\neq\mathsf{A}^2+2\mathsf{A}\mathsf{B}+\mathsf{B}^2\quad\text{if}\quad\mathsf{A}\mathsf{B}\neq\mathsf{B}\mathsf{A}.$$
- In the fifth line, the third expression in Expression (1) appears, resulting in
$$e^\mathsf{A}e^\mathsf{B}=e^{\mathsf{A}+\mathsf{B}}.$$

Interestingly, even if the matrices $\mathsf{A}$ and $\mathsf{B}$ do not commute, it is still possible to express $e^\mathsf{A}e^\mathsf{B}$ as the exponential of a matrix series by means of the [Baker–Campbell–Hausdorff formula](https://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula):
$$e^\mathsf{A}e^\mathsf{B}=e^\mathsf{C},$$
where $\mathsf{C}=\mathsf{A}+\mathsf{B}+\frac{1}{2}[\mathsf{A},\mathsf{B}]+\frac{1}{12}[\mathsf{A},[\mathsf{A},\mathsf{B}]]+\cdots$, the remaining terms being iterated commutators of $\mathsf{A}$ and $\mathsf{B}$. $\blacksquare$
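To make the role of commutativity concrete, here is a small SymPy sketch (an illustration chosen by us, not part of the proof) with one commuting pair and one non-commuting pair of $2\times2$ matrices:

```python
import sympy as sp

# Commuting pair: B is a polynomial in A (here simply 2A), so AB = BA
A = sp.Matrix([[0, 1],
               [0, 0]])
B = 2 * A
print(sp.simplify((A + B).exp() - A.exp() * B.exp()))  # zero matrix

# Non-commuting pair: two nilpotent shift matrices
C = sp.Matrix([[0, 1],
               [0, 0]])
D = sp.Matrix([[0, 0],
               [1, 0]])
print(C*D == D*C)                                      # False
print(sp.simplify((C + D).exp() - C.exp() * D.exp()))  # NOT the zero matrix
```

The nonzero residual in the second case is exactly the kind of discrepancy the Baker–Campbell–Hausdorff correction terms account for.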
### Problem 37

Apply Theorem 3 of Sec. 5.7 to calculate the matrix exponential $e^{\mathsf{A}t}$ for the matrix
$$\mathsf{A}=\begin{pmatrix}2&3&4\\0&1&3\\0&0&1\end{pmatrix}.\tag{1}$$

#### Solution

Theorem 3 of Sec. 5.7 states that $e^{\mathsf{A}t}=\mathsf{\Phi}(t)\mathsf{\Phi}(0)^{-1}$, provided that $\mathsf{\Phi}(t)=\begin{pmatrix}\mathbf{x}_1(t)&\mathbf{x}_2(t)&\cdots&\mathbf{x}_n(t)\end{pmatrix}_{n\times n}$ is a fundamental matrix of the system $\mathbf{x}'=\mathsf{A}\mathbf{x}$, with $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n$ being $n$ linearly independent solutions of $\mathbf{x}'=\mathsf{A}\mathbf{x}$. To find such a fundamental matrix, we solve the eigenvalue problem $\mathsf{A}\mathbf{v}=\lambda\mathbf{v}$ with the secular equation $|\mathsf{A}-\lambda\mathsf{I}|=0$:
$$\begin{align}0&=\begin{vmatrix}2-\lambda&3&4\\0&1-\lambda&3\\0&0&1-\lambda\end{vmatrix}\\&=(2-\lambda)(1-\lambda)(1-\lambda).\end{align}$$
Thus the eigenvalues of $\mathsf{A}$ are:

- $\lambda_1=2$, of multiplicity $k=1$; and
- $\lambda_2=1$, of multiplicity $k=2$.

For the eigenvalue $\lambda_1=2$, the eigenvector equation is
$$(\mathsf{A}-2\mathsf{I})\mathbf{v}=\begin{pmatrix}0&3&4\\0&-1&3\\0&0&-1\end{pmatrix}\mathbf{v}=\mathbf{0}.\tag{2}$$
The row echelon form of $\mathsf{A}-2\mathsf{I}$,
$$\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix},$$
suggests $\mathbf{v}_1=\begin{pmatrix}1&0&0\end{pmatrix}^T$ be chosen as the eigenvector. So the first linearly independent solution of System (1) is ==$\mathbf{x}_1(t)=\mathbf{v}_1e^{2t}=\begin{pmatrix}1&0&0\end{pmatrix}^Te^{2t}$==.

For the eigenvalue $\lambda_2=1$, the eigenvector equation is
$$(\mathsf{A}-\mathsf{I})\mathbf{v}=\begin{pmatrix}1&3&4\\0&0&3\\0&0&0\end{pmatrix}\mathbf{v}=\mathbf{0}.\tag{3}$$
The row echelon form of $\mathsf{A}-\mathsf{I}$,
$$\begin{pmatrix}1&3&0\\0&0&1\\0&0&0\end{pmatrix},$$
suggests $\mathbf{v}_2=\begin{pmatrix}-3&1&0\end{pmatrix}^T$ be chosen as the eigenvector. So the second linearly independent solution of System (1) is ==$\mathbf{x}_2(t)=\mathbf{v}_2e^{t}=\begin{pmatrix}-3&1&0\end{pmatrix}^Te^{t}$==.

Yet another linearly independent solution is $\mathbf{x}_3=(\mathbf{v}_2t+\mathbf{v}_3)e^t$, where $\mathbf{v}_3$ satisfies $(\mathsf{A}-\mathsf{I})\mathbf{v}_3=\mathbf{v}_2$, or
$$\begin{pmatrix}1&3&4\\0&0&3\\0&0&0\end{pmatrix}\mathbf{v}_3=\begin{pmatrix}-3\\1\\0\end{pmatrix}.\tag{4}$$
The row echelon form of its augmented matrix is
$$\left( \begin{array}{ccc|c}1&3&0&-\frac{13}{3}\\0&0&1&\frac{1}{3}\\0&0&0&0\end{array} \right).$$
Choosing $\beta=1$ gives $\mathbf{v}_3=\begin{pmatrix}-\frac{22}{3}&1&\frac{1}{3}\end{pmatrix}^T$ and ==$\mathbf{x}_3=\begin{pmatrix}-3t-\frac{22}{3}&t+1&\frac{1}{3}\end{pmatrix}^Te^{t}$==.

Therefore,
$$\mathsf{\Phi}(t)=\begin{pmatrix}|&|&|\\\mathbf{x}_1&\mathbf{x}_2&\mathbf{x}_3\\|&|&|\end{pmatrix}=\begin{pmatrix}e^{2t}&-3e^{t}&-3te^{t}-\frac{22}{3}e^{t}\\0&e^{t}&te^{t}+e^{t}\\0&0&\frac{1}{3}e^{t}\end{pmatrix},\tag{5}$$
and
$$\begin{align}e^{\mathsf{A}t}&=\mathsf{\Phi}(t)\mathsf{\Phi}(0)^{-1}\\&=\begin{pmatrix}e^{2t}&-3e^{t}&-3te^{t}-\frac{22}{3}e^{t}\\0&e^{t}&te^{t}+e^{t}\\0&0&\frac{1}{3}e^{t}\end{pmatrix}\begin{pmatrix}1&-3&-\frac{22}{3}\\0&1&1\\0&0&\frac{1}{3}\end{pmatrix}^{-1}\\&=\begin{pmatrix}e^{2t}&-3e^{t}&-3te^{t}-\frac{22}{3}e^{t}\\0&e^{t}&te^{t}+e^{t}\\0&0&\frac{1}{3}e^{t}\end{pmatrix}\begin{pmatrix}1&3&13\\0&1&-3\\0&0&3\end{pmatrix}\\&=\boxed{\begin{pmatrix}e^{2t}&3e^{2t}-3e^{t}&13e^{2t}-9te^{t}-13e^{t}\\0&e^{t}&3te^{t}\\0&0&e^{t}\end{pmatrix}},\end{align}$$
where the inverse of $\mathsf{\Phi}(0)$ was calculated by means of its **adjugate matrix**, i.e. the transpose of its cofactor matrix:
$$\begin{align}\mathsf{\Phi}(0)^{-1}&=\dfrac{1}{\det \mathsf{\Phi}(0)}\text{adj}\,\mathsf{\Phi}(0)\\&=\dfrac{1}{1/3}\begin{pmatrix}\frac{1}{3}&0&0\\1&\frac{1}{3}&0\\\frac{13}{3}&-1&1\end{pmatrix}^T\\&=\begin{pmatrix}1&3&13\\0&1&-3\\0&0&3\end{pmatrix}.\;\blacksquare\end{align}$$
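Finally, the boxed matrix exponential can be cross-checked with one more minimal SymPy sketch (same caveats as before: SymPy's `Matrix.exp` is used only for verification):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 3, 4],
               [0, 1, 3],
               [0, 0, 1]])

e1, e2 = sp.exp(t), sp.exp(2*t)
boxed = sp.Matrix([[e2, 3*e2 - 3*e1, 13*e2 - 9*t*e1 - 13*e1],
                   [ 0,          e1,                  3*t*e1],
                   [ 0,           0,                     e1]])

print(sp.simplify((A*t).exp() - boxed))  # zero matrix confirms the boxed result
```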