---
tags: mth302
---
# Miniproject 8
**Initial due date: Sunday, April 9 at 11:59pm ET**
## Overview
Our final miniproject reaches back into linear algebra to look at *diagonalizable* matrices and their uses in solving systems of differential equations.
**Prerequisites:** You'll need to be able to solve basic systems of differential equations and find the eigenvalues and eigenvectors for a small matrix. You'll also need a basic comfort level with concepts of linear independence and matrix arithmetic from earlier in the course.
## Background
*This entire problem comes from Section 3.9.1 in your textbook. Here is a rephrased version of the introduction to that section.*
Some systems of differential equations are particularly easy to solve without using eigen "stuff" at all. Here is an example:
$$\begin{align*}
\frac{dx}{dt} &= 3x \\
\frac{dy}{dt} &= -2y
\end{align*}$$
This is easy because there is no $y$ in the $dx/dt$ equation and no $x$ in the $dy/dt$ equation. When this happens, we say that the system is **uncoupled** (or **decoupled**). This is not really a "system" because the dependent variables don't interact; it's just two basic DE's, and easy ones at that, since they are linear and homogeneous. Our previous work allows us to solve these in one step (each):
$$x(t) = C_1e^{3t} \quad y(t) = C_2e^{-2t}$$
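As a quick sanity check, SymPy's `dsolve` recovers these solutions. Here is a minimal sketch; since the equations don't interact, each is solved on its own, and SymPy names each constant of integration `C1`:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
y = sp.Function('y')

# Each equation involves only one unknown, so solve them independently
sol_x = sp.dsolve(sp.Eq(x(t).diff(t), 3*x(t)), x(t))
sol_y = sp.dsolve(sp.Eq(y(t).diff(t), -2*y(t)), y(t))

print(sol_x)  # Eq(x(t), C1*exp(3*t))
print(sol_y)  # Eq(y(t), C1*exp(-2*t))
```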
But, if we were to take the "system" above and write it in matrix form, we would have
$$\mathbf{x}'(t) = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}\mathbf{x}(t)$$
This is an example of a **diagonal matrix**: A matrix where all of the entries not on the main diagonal (going from the top-left to the bottom-right) are zero. Here is another example, this time a $4 \times 4$ diagonal matrix:
$$\begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 5 \end{bmatrix}$$
(There can be zeroes on the diagonal, but every entry *off* the diagonal *must* be zero.)
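In SymPy, diagonal matrices like the one above can be built with `diag` (a small sketch):

```python
import sympy as sp

# Build the 4x4 diagonal matrix from the example above
D = sp.diag(3, -1, 0, 5)
print(D)
```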
You can check that the $2 \times 2$ matrix that defined our system has eigenvalues of $\lambda_1 = 3$ and $\lambda_2 = -2$ --- just the entries on the diagonal. And an eigenvector corresponding to $\lambda_1$ is $[1,0]^T$, and an eigenvector corresponding to $\lambda_2$ is $[0,1]^T$, so if we were to set up the straight-line solutions and then the general solution to the system, we'd get $x(t) = C_1e^{3t}$ and $y(t) = C_2e^{-2t}$, same as when we did *not* use any matrices or eigenpairs.
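You can do this check in SymPy, whose `eigenvects` method returns (eigenvalue, multiplicity, eigenvectors) triples (a quick sketch):

```python
import sympy as sp

A = sp.Matrix([[3, 0], [0, -2]])

# Each entry of eigenvects() is an (eigenvalue, multiplicity, [eigenvectors]) triple
for val, mult, vecs in A.eigenvects():
    print(val, vecs[0].T)
```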
So **solving a decoupled system, where the matrix is diagonal, is super easy**. If we could make every system decoupled, that would be amazing! Unfortunately we can't always do that. But in many cases we can get close, using a technique called **diagonalization** to rewrite the matrix that defines the system into something that is very close to a diagonal matrix. Diagonalization is a standard simplifying trick used throughout applications of linear algebra, and here we'll see how it can be used to solve systems of DEs.
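SymPy automates this factorization through the `diagonalize` method. Here is a minimal sketch on a small example matrix (chosen purely for illustration; it is not one of the matrices in the assignment):

```python
import sympy as sp

M = sp.Matrix([[2, 1], [1, 2]])   # example matrix with eigenvalues 1 and 3

# diagonalize() returns P and D with M = P*D*P^(-1);
# the columns of P are eigenvectors, and the diagonal of D holds the eigenvalues
P, D = M.diagonalize()
assert M == P * D * P.inv()
```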
## Assignment
1. Consider the matrix
$$A = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}$$
which is definitely not diagonal. Use a computer to find the eigenvalue/eigenvector pairs. Are the eigenvectors linearly dependent or linearly independent? Explain your reasoning fully.
2. Create a new matrix $D$ that is a $2 \times 2$ diagonal matrix whose diagonal entries are $\lambda_1$ and $\lambda_2$, the eigenvalues of $A$. Also create a $2 \times 2$ matrix $P$ whose columns are $\mathbf{v}_1$ and $\mathbf{v}_2$, the eigenvectors corresponding to $\lambda_1$ and $\lambda_2$. Show by direct computation that $AP = PD$.
3. Now let's generalize what we just saw. Let $A$ be a $2 \times 2$ matrix that has two *real-valued, linearly independent* eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$. (In other words, assume that there are no complex numbers in the eigenvectors, and assume linear independence. If one of those assumptions fails, we'll deal with that later.) Suppose $\lambda_1$ is the real-number eigenvalue that corresponds to $\mathbf{v}_1$ and $\lambda_2$ is the real-number eigenvalue that corresponds to $\mathbf{v}_2$. As in the previous part, let $D$ and $P$ be the $2 \times 2$ matrices:
$$D = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \qquad P = \left[ \mathbf{v}_1 \ \mathbf{v}_2 \right]$$
In specific, concrete terms (such as a direct computation that is explained briefly in English), explain the following:
(a) Why $AP = PD$
(b) Why $P$ is invertible
(c) Why $A = PDP^{-1}$
4. A real $2 \times 2$ matrix $A$ that has two real, linearly independent eigenvectors is called **diagonalizable** because we can "factor" it into $A = PDP^{-1}$ where $D$ is a diagonal matrix and $P$ is an invertible matrix; the $D$ matrix holds the eigenvalues on the diagonal and the $P$ matrix is made up of the eigenvectors (in the same order as the eigenvalues on the diagonal of $D$). In SymPy, generate a random $2 \times 2$ matrix (with entries between $-10$ and $10$). Is it diagonalizable? Explain why or why not. If it is not diagonalizable, generate another and keep doing this until you get a matrix that is diagonalizable. Then, *diagonalize* it by writing it in the form $A = PDP^{-1}$ as we've discussed here.
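For the random-matrix step, one possible workflow (a sketch, not the required solution) uses SymPy's `randMatrix` together with the `is_diagonalizable` method; passing `reals_only=True` rejects matrices whose eigenvectors would involve complex numbers:

```python
import sympy as sp

# Keep generating random 2x2 matrices with entries in [-10, 10]
# until one is diagonalizable over the reals
A = sp.randMatrix(2, 2, min=-10, max=10)
while not A.is_diagonalizable(reals_only=True):
    A = sp.randMatrix(2, 2, min=-10, max=10)

P, D = A.diagonalize()   # A = P*D*P^(-1)
print(A, P, D, sep="\n")
```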
5. Diagonalization makes it much easier to solve systems of DE's, if you apply a trick. To see how this works in general, let $A$ be a $2 \times 2$ matrix that is diagonalizable. (Again, this means it has two real, linearly independent eigenvectors and so it can be written as $A = PDP^{-1}$ as discussed above.) Consider the system of DE's $\mathbf{x}' = A \mathbf{x}$. Let $\mathbf{y} = P^{-1}\mathbf{x}$.
   (a) See the Notes section below for an explanation for why $\mathbf{x}' = P\mathbf{y}'$. Start with the system $\mathbf{x}' = A \mathbf{x}$ and use the substitution $\mathbf{y} = P^{-1}\mathbf{x}$ and the fact that $A = PDP^{-1}$ to show that the original system $\mathbf{x}' = A \mathbf{x}$ is equivalent to the system $\mathbf{y}' = D \mathbf{y}$.
(b) Explain why the system $\mathbf{y}' = D \mathbf{y}$ is preferable to the system $\mathbf{x}' = A \mathbf{x}$.
6. Now apply the diagonalization trick from the previous item to solve the system $\mathbf{x}' = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix} \mathbf{x}$ as follows:
(a) Diagonalize $A= \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}$ by finding matrices $D$ and $P$ such that $A = PDP^{-1}$. (Note: You've done the math work already.)
(b) Follow your work from the previous question to introduce a substitution that will convert $\mathbf{x}' = A \mathbf{x}$ into an equivalent system in the variable $\mathbf{y}$ that is uncoupled and in the form $\mathbf{y}' = D \mathbf{y}$.
(c) Solve the uncoupled system for $\mathbf{y}$. Remember this is easy.
(d) Determine the solution $\mathbf{x}$ to the original system by showing that $\mathbf{x} = P \mathbf{y}$ and using this substitution appropriately.
## Notes on this Miniproject
In part 5, we introduce a trick by substituting $\mathbf{y} = P^{-1} \mathbf{x}$. We claim that with this substitution, $\mathbf{x}' = P \mathbf{y}'$. This is actually a more general result:
:::info
**Claim**: If $M$ is a $2 \times 2$ matrix whose entries are real numbers only (not functions), and $\mathbf{x}(t)$ is any vector function, then
$$\frac{d}{dt} \left[ M \mathbf{x} \right] = M \mathbf{x}'$$
This is the matrix-vector equivalent of the old-fashioned "constant multiple rule" from Calculus 1 that says $\frac{d}{dx}[k f(x)] = k f'(x)$ where $k$ is any constant.
**Proof**: Suppose $\mathbf{x}(t) = \begin{bmatrix} f(t) \\ g(t) \end{bmatrix}$ and $M = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Then if you multiply these two, you get
$$M \mathbf{x} = \begin{bmatrix} af(t) + bg(t)\\ c f(t) + d g(t) \end{bmatrix}$$
If we then take the derivative of both sides, we have $\frac{d}{dt}[M \mathbf{x}]$ on the left and the following on the right:
$$\begin{bmatrix} af'(t) + bg'(t)\\ c f'(t) + d g'(t) \end{bmatrix}$$
This is because of the constant multiple rule in Calculus 1. But notice this is the same as
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} f'(t) \\ g'(t) \end{bmatrix}$$
which is equal to $M \mathbf{x}'$.
:::
So, if $\mathbf{y} = P^{-1} \mathbf{x}$, then multiply by $P$ on the left of both sides to get $P \mathbf{y} = \mathbf{x}$. Now take derivatives and "pull out the constant" (the matrix $P$) to get $\mathbf{x}' = P \mathbf{y}'$.
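The claim in the box can also be verified symbolically in SymPy. Here is a quick sketch using generic constant entries $a, b, c, d$ and generic functions $f, g$:

```python
import sympy as sp

t = sp.symbols('t')
a, b, c, d = sp.symbols('a b c d')   # constant entries of M
f = sp.Function('f')(t)
g = sp.Function('g')(t)

M = sp.Matrix([[a, b], [c, d]])
x = sp.Matrix([f, g])

# d/dt [M x] versus M x' -- these agree entry by entry
lhs = (M * x).diff(t)
rhs = M * x.diff(t)
assert lhs == rhs
```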
### Formatting and special items for grading
Please review the section on Miniprojects in the document [Standards For Student Work in MTH 302](https://github.com/RobertTalbert/linalg-diffeq/blob/main/course-docs/standards-for-student-work.md#standards-for-miniprojects) before attempting to write up your submission. Note that *all* Miniprojects:
- **Must be typewritten**. If any portion of the submission has handwritten work or drawings, it will be marked *Incomplete* and returned without further comment.
- **Must represent a good-faith effort at a complete, correct, clearly communicated, and professionally presented solution.** Omissions, partial work, work that is poorly organized or sloppily presented, or work that has numerous errors will be marked *Incomplete* and returned without further comment.
- **Must include clear verbal explanations of your work when indicated, not just math or code**. You can tell when verbal explanations are required because the problems say something like "Explain your reasoning".
Your work here is being evaluated *partially* on whether your math and code are correct; but just as much on whether your reasoning is correct and clearly expressed. Make sure to pay close attention to both.
This Miniproject **must be done in a Jupyter notebook using SymPy or another computer tool to carry out all mathematical calculations**. [A sample notebook, demonstrating the solution to a Calculus problem, can be found here](https://github.com/RobertTalbert/linalg-diffeq/blob/main/tutorials/Example_of_solution_in_a_notebook.ipynb). Study this first before writing up your work.
And please review the requirements above for including your code.
### How to submit
You will submit your work on Blackboard in the *Miniproject 8* folder under *Assignments > Miniprojects*. But you will *not* upload a PDF for Miniprojects. Instead you will **share a link that allows me (Talbert) to comment on your work**. [As explained in one of the Jupyter and Colab tutorials](https://gvsu.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=ef5c0e24-5c1d-437f-be05-af730108b6d8), the process goes like this:
1. In the notebook, click "Share" in the upper right.
2. **Do not share with me by entering my email.** Instead, go to *General Access*, and in the pulldown menu select "Anyone with the link", then set the permissions to "Commenter".
3. Then click "Copy Link".
4. **On Blackboard**, go to the *Assignments* area, then *Miniprojects*. Select Miniproject 8.
5. Under **Assignment Submission**, where it says *Text Submission*, click "Write Submission".
6. **Paste the link to your notebook in the text area that appears.**
7. Then click "Submit" to submit your work.
I will then evaluate your work using the link. Specific comments will be left on the notebook itself. General comments will be left on Blackboard.
