# Differential Equations
## Introduction
In this course, we will explore **differential equations**, which are essential for describing *deterministic models* in science and engineering. We'll learn how to analyze and solve them, shifting our focus from tedious analytical methods to powerful computational tools.
### Big Ideas
* **Models & Equations:** Many real-world phenomena are described by systems of differential equations.
* **Analytical Solutions:** We'll learn to solve some special cases of equations by hand, like first-order separable and second-order constant coefficient equations.
* **Computational Power:** We'll use *SciPy* to find approximate solutions to any system of differential equations. This lets us focus on simulating models rather than just analytical problem-solving.
* **First-Order Systems:** Any higher-order system can be rewritten as a first-order system. This is crucial because numerical methods are designed to solve first-order systems.
### Learning Goals
By the end of this course, you will be able to:
* Classify differential equations by their order, dimension, and type (separable, linear, autonomous, homogeneous).
* Apply analytical methods to solve specific types of equations.
* Transform higher-order systems into first-order systems.
* Compute and visualize numerical solutions for differential equations.
## Scalar Equations
```python
import numpy as np
import scipy.integrate as spi
import matplotlib.pyplot as plt
```
### Classification
A **(scalar) differential equation** is an equation involving an unknown function $y(t)$ and its derivatives: $y',y'',y''',\dots$. Since there are many types of equations, the first step is always to classify it.
* The **order** is the highest derivative of the unknown function $y(t)$ in the equation.
* $y' = 1 - y$ is a first-order equation.
* $y'' + ty' + y = 1$ is a second-order equation.
* A first-order equation is **separable** if it can be written in the form $y' = f(t) g(y)$.
* $y' = t \cos(y)$ is separable.
* $y' = t + y$ is not separable.
* An equation of order $n$ is **linear** if it can be written as $$a_n(t) y^{(n)} + a_{n-1}(t) y^{(n-1)} + \cdots + a_1(t) y' + a_0(t) y = f(t),$$ where the coefficients are functions of the independent variable $t$.
* $t^2 y'' + t y' + y = \cos(t)$ is a second-order linear equation.
* $y' = ty^2$ is a first-order nonlinear equation.
* A linear differential equation of order $n$ is **homogeneous** if $y(t)=0$ is a solution. This means it must be in the form $$a_n(t) y^{(n)} + a_{n-1}(t) y^{(n-1)} + \cdots + a_1(t) y' + a_0(t) y = 0$$
* A first-order equation is **autonomous** if it can be written as $y' = f(y)$. Autonomous first-order equations are also separable (take the separable form with $f(t) = 1$).
* $y' = 1 - y$ is autonomous.
* $y' = t + y$ is not autonomous.
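As a quick cross-check of the classification above, SymPy (assumed available here, though not used elsewhere in these notes) can classify an equation automatically; `sympy.classify_ode` returns a tuple of solver hints that match the equation:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Classify y' = t*cos(y), which we identified as separable above
hints = sp.classify_ode(sp.Eq(y(t).diff(t), t*sp.cos(y(t))), y(t))
print(hints)  # 'separable' should appear among the hints
```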
### Analytical Methods
#### First-Order Separable Equations
To solve a first-order separable equation of the form $y' = f(t)g(y)$, you can use the **method of separation of variables**. Simply divide both sides by $g(y)$ and integrate:
$$
\int \frac{dy}{g(y)} = \int f(t) dt
$$
:::info
**Example** Consider the equation $y' = -ty$. Separating variables and integrating:
\begin{align}
&\int \frac{dy}{y} = -\int t \, dt \\
\Rightarrow& \quad \ln |y| = -\frac{t^2}{2} + C \\
\Rightarrow& \quad y(t) = k e^{-t^2/2}, \quad k = \pm e^C
\end{align}
Let's plot the solution for different initial values $y(0) = k$.
```python=
t = np.linspace(0,5,100)
for y0 in range(-3,4):
    y = y0*np.exp(-t**2/2)
    plt.plot(t,y,'b')
plt.grid(True)
plt.show()
```

:::
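The hand computation above can be confirmed symbolically. A minimal sketch using SymPy's `dsolve` (SymPy is an assumption here; the course otherwise uses NumPy and SciPy):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Solve y' = -t*y symbolically
sol = sp.dsolve(sp.Eq(y(t).diff(t), -t*y(t)), y(t))
print(sol)  # y(t) = C1*exp(-t**2/2), matching k*e^{-t^2/2} up to the constant's name
```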
#### Second-Order Homogeneous Equations with Constant Coefficients
A **second-order homogeneous equation with constant coefficients** is of the form:
$$
ay'' + by' + cy = 0, \quad a,b,c \in \mathbb{R}
$$
The solution depends on the roots of the **characteristic polynomial**, $p(s) = as^2 + bs + c$. The roots are given by the quadratic formula:
$$
r_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a} \quad r_2 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}
$$
* **Real and distinct roots ($r_1 \ne r_2$):** $$y(t) = C_1 e^{r_1 t} + C_2 e^{r_2 t}$$
* **Real and repeated roots ($r = r_1 = r_2$):** $$y(t) = C_1 e^{r t} + C_2 t e^{r t}$$
* **Complex conjugate roots ($r = \alpha \pm i \beta$):** $$y(t) = e^{\alpha t}(C_1 \cos(\beta t) + C_2 \sin(\beta t))$$
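Which case applies can be checked numerically with `numpy.roots` on the characteristic polynomial. A minimal sketch, using $y'' + y' + y = 0$ (coefficients chosen here as an example):

```python
import numpy as np

# Characteristic polynomial of y'' + y' + y = 0 is s^2 + s + 1
a, b, c = 1.0, 1.0, 1.0
r = np.roots([a, b, c])
print(r)  # complex conjugate pair with real part -1/2

# Complex conjugate roots alpha +/- i*beta give decaying oscillations
alpha, beta = r[0].real, abs(r[0].imag)
```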
### Numerical Methods
#### Euler's Method
**Euler's method** is the simplest numerical method for approximating solutions to differential equations. For a first-order equation $y' = f(t,y)$, with initial condition $y(t_0) = y_0$, the solution is approximated by the recursive formula:
$$
y_{n+1} = y_n + h f(t_n,y_n)
$$
where $t_n = t_0 + nh$ for a chosen step size $h$.
:::info
**Example** Let's implement a Python function for Euler's method for a scalar equation:
```python
def odeEuler(f,t,y0):
    y = np.zeros(len(t))
    y[0] = y0
    for n in range(0,len(t)-1):
        y[n+1] = y[n] + f(t[n],y[n])*(t[n+1] - t[n])
    return y
```
Let's approximate the solution of $y' = -ty$, with $y(0)=1$, over the interval $[0,4]$ and compare it to the exact solution $y(t) = e^{-t^2/2}$.
```python=
t_exact = np.linspace(0,4,100)
y_exact = np.exp(-t_exact**2/2)
plt.plot(t_exact,y_exact,'b')
f = lambda t,y: -t*y
y0 = 1
t = np.linspace(0,4,21)
y = odeEuler(f,t,y0)
plt.plot(t,y,'r.-')
plt.grid(True), plt.legend(['Exact','Euler'])
plt.show()
```
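Since Euler's method is first order, refining the step size should shrink the error roughly in proportion to $h$. A quick check against the exact solution (with `odeEuler` redefined so the snippet is self-contained):

```python
import numpy as np

def odeEuler(f,t,y0):
    y = np.zeros(len(t))
    y[0] = y0
    for n in range(0,len(t)-1):
        y[n+1] = y[n] + f(t[n],y[n])*(t[n+1] - t[n])
    return y

f = lambda t,y: -t*y
exact = lambda t: np.exp(-t**2/2)

# Maximum error over [0,4] for step sizes h = 0.2 and h = 0.1
errs = []
for N in [21, 41]:
    t = np.linspace(0,4,N)
    errs.append(np.max(np.abs(odeEuler(f,t,1) - exact(t))))
print(errs)  # the error with the smaller step size should be smaller
```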

:::
#### Numerical Solutions with `SciPy`
Euler's method is simple to understand but not accurate enough for practical use. The function `scipy.integrate.odeint` uses higher-order adaptive methods to approximate solutions and is the solver we'll use for real problems. The `odeint` function:
* Automatically determines the step size at each step to guarantee a certain level of accuracy. The input array `t` only specifies the values where a solution should be returned.
* Expects $y$ as the first variable in the right-hand side function $f(y,t)$ in the equation $y' = f(y,t)$.
:::info
**Example** Let's use `scipy.integrate.odeint` to approximate the solution of $y' = -ty$, $y(0)=1$, and compare to the exact solution.
```python=
t_exact = np.linspace(0,4,100)
y_exact = np.exp(-t_exact**2/2)
plt.plot(t_exact,y_exact,'b')
f = lambda y,t: -t*y
y0 = 1
t = np.linspace(0,4,21)
y = spi.odeint(f,y0,t)
plt.plot(t,y,'r.')
plt.grid(True), plt.legend(['Exact','odeint'])
plt.show()
```

:::
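Note that newer versions of SciPy also provide `scipy.integrate.solve_ivp`, which is the recommended modern interface. Beware that it expects the right-hand side as `f(t, y)`, the opposite argument order from `odeint`. A minimal sketch for the same problem:

```python
import numpy as np
import scipy.integrate as spi

# solve_ivp expects f(t, y) -- argument order reversed relative to odeint
sol = spi.solve_ivp(lambda t,y: -t*y, [0,4], [1], t_eval=np.linspace(0,4,21))

# sol.t holds the times, sol.y the solution (one row per component)
print(sol.y[0,-1], np.exp(-4**2/2))  # both close to e^{-8}
```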
## Systems of Equations
```python
import numpy as np
import scipy.integrate as spi
import scipy.linalg as la
import matplotlib.pyplot as plt
```
### Classification
A **system of differential equations** is a set of equations that involve several unknown functions of the same independent variable, such as $y_0(t),\dots,y_{d-1}(t)$. We classify systems based on their properties.
* **Dimension:** The number of unknown functions in the system is its dimension, $d$.
* **Order:** The highest derivative of any unknown function in the system determines its order.
* **Linear:** A system is linear if each of its equations is linear.
* **Autonomous:** A system is autonomous if each equation is autonomous.
:::info
**Example**
* The system: \begin{align} x' &= x - xy \\ y' &= xy - y \end{align} is a first-order, 2-dimensional, autonomous, and nonlinear system.
* In contrast, the system: \begin{align} x'' &= x - y \\ y' &= x - t \end{align} is a second-order, 2-dimensional, nonautonomous, and linear system.
:::
### First-Order Transformation
Numerical methods are designed to solve first-order systems. Fortunately, any higher-order system can be converted into an equivalent first-order system by introducing new variables. The general form of a first-order, $d$-dimensional system is:
$$
\frac{d \mathbf{u}}{dt} = \mathbf{f}(t,\mathbf{u})
$$
where the vector $\mathbf{u}(t)$ collects the unknown functions together with their derivatives up to one less than the order of each equation.
#### Procedure
1. For each unknown function $y$ of order $n$, introduce new variables for it and its derivatives: $$u_0 = y, u_1 = y', \dots, u_{n-1} = y^{(n-1)}$$
2. Rewrite the original equations in terms of these new variables.
:::info
**Example** To convert the second-order equation $ay'' + by' + cy = f(t)$ into a first-order system, we introduce two new variables, $u_0 = y$ and $u_1 = y'$. This gives us the system:
\begin{align}
u_0' &= u_1 \\
u_1' &= \frac{1}{a} \left( f(t) - bu_1 - cu_0 \right)
\end{align}
:::
:::info
**Example** Now consider a system with two second-order equations:
\begin{align*}
x'' &= 1 - y \\
y'' &= 1 - x
\end{align*}
We introduce four new variables, $u_0 = x, u_1 = x', u_2 = y, u_3 = y'$, to get the first-order system:
\begin{align}
u_0' &= u_1 \\
u_1' &= 1 - u_2 \\
u_2' &= u_3 \\
u_3' &= 1 - u_0
\end{align}
:::
### Euler's Method for Systems
To apply Euler's method to a system, first transform it into a first-order system, then apply the method to each equation simultaneously. Using vector notation, the recursive formula becomes:
$$
\mathbf{u}_{n+1} = \mathbf{u}_n + h \, \mathbf{f}(t_n,\mathbf{u}_n)
$$
where $\mathbf{u}_n$ is the vector of unknown functions at step $n$.
Here's a Python function to implement Euler's method for systems:
```python
def odeEulerSys(f,t,u0):
    u = np.zeros([len(t),len(u0)])
    u[0,:] = u0
    for n in range(0,len(t)-1):
        u[n+1,:] = u[n,:] + f(t[n],u[n,:])*(t[n+1] - t[n])
    return u
```
:::info
**Example:** Let's use `odeEulerSys` to approximate the solution of the equation $$y'' + y' + y = 0$$ with initial conditions $y(0)=1$ and $y'(0) = 0$.
After transforming it into a first-order system, we can plot the numerical approximation and compare it to the exact solution
$$
y(t) = e^{-t/2} \cos \left( \frac{\sqrt{3}}{2} t \right) + \frac{1}{\sqrt{3}} e^{-t/2} \sin \left( \frac{\sqrt{3}}{2} t \right)
$$
```python=
a = 1; b = 1; c = 1; tf = 10;
f = lambda t,u: np.array([u[1],(-b*u[1]-c*u[0])/a])
t = np.arange(0,tf,0.05)
u0 = [1,0]
u = odeEulerSys(f,t,u0)
plt.plot(t,u[:,0],'.')
T = np.linspace(0,tf,tf*50)
Y = np.exp(-T/2)*(np.cos(np.sqrt(3)/2*T) + 1/np.sqrt(3)*np.sin(np.sqrt(3)/2*T))
plt.plot(T,Y), plt.grid(True)
plt.show()
```

:::
### Numerical Solutions with `SciPy`
While Euler's method is easy to grasp, it's not accurate enough for practical use. We typically use the function `scipy.integrate.odeint`, which employs higher-order methods like Runge-Kutta, to get accurate solutions.
#### Procedure
1. Write the system as a first-order system $\mathbf{u}' = \mathbf{f}(\mathbf{u},t)$, with initial values $\mathbf{u}_0 = \mathbf{u}(t_0)$.
2. Define a Python function `f(u,t)` for the right-hand side $\mathbf{f}(\mathbf{u},t)$, noting that `u` is the first parameter and `t` is the second.
3. Create a NumPy array `t` of time values.
4. Define an initial value vector `u0`.
5. Compute the solution with `u = spi.odeint(f,u0,t)`. The output `u` is a matrix where columns represent the values of each function and rows are the solution vectors at different times.
:::info
**Example** Let's use `scipy.integrate.odeint` to approximate the solution of $y'' + y' + y = 0$, with initial values $y(0)=1$ and $y'(0) = 0$.
```python=
a = 1; b = 1; c = 1; tf = 10;
f = lambda u,t: np.array([u[1],(-b*u[1]-c*u[0])/a])
t = np.arange(0,tf,0.1)
u0 = [1,0]
u = spi.odeint(f,u0,t)
plt.plot(t,u[:,0],'.')
T = np.linspace(0,tf,tf*50)
Y = np.exp(-T/2)*(np.cos(np.sqrt(3)/2*T) + 1/np.sqrt(3)*np.sin(np.sqrt(3)/2*T))
plt.plot(T,Y), plt.grid(True)
plt.show()
```

:::
### Linear Stability Analysis
For a first-order, autonomous system
$$
\frac{d \mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}),
$$
a **critical point** $\mathbf{c}$ is a point where $\mathbf{f}(\mathbf{c}) = \mathbf{0}$. Critical points correspond to **equilibrium solutions** where the system is in a steady state.
We can classify critical points using the [linearization theorem](https://en.wikipedia.org/wiki/Hartman%E2%80%93Grobman_theorem), which analyzes the eigenvalues $\lambda$ of the **Jacobian matrix** $\mathbf{J}_{\mathbf{c}}$ evaluated at the critical point:
$$
\mathbf{J}_{\mathbf{c}} =
\begin{bmatrix}
\frac{\partial f_0}{\partial x_0} & \cdots & \frac{\partial f_0}{\partial x_{d-1}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_{d-1}}{\partial x_0} & \cdots & \frac{\partial f_{d-1}}{\partial x_{d-1}}
\end{bmatrix} _{\mathbf{x} = \mathbf{c}}
$$
* **Attractor:** If the real part of every eigenvalue is negative ($\mathrm{Re}(\lambda) < 0$), solutions near $\mathbf{c}$ converge to it.
* **Repeller:** If the real part of every eigenvalue is positive ($\mathrm{Re}(\lambda) > 0$), solutions diverge from $\mathbf{c}$.
* **Unstable (saddle):** If some eigenvalues have positive real parts and others have negative real parts, nearby solutions approach $\mathbf{c}$ along some directions and diverge along others, so the critical point is unstable.
:::info
**Example** Let's find and classify the critical points of the system:
\begin{align}
x' &= 1 - y^2 \\
y' &= x - y
\end{align}
* The critical points are $(1,1)$ and $(-1,-1)$.
* The Jacobian matrix is: $$\mathbf{J} = \begin{bmatrix} 0 & -2y \\ 1 & -1 \end{bmatrix}$$
* For the critical point $(1,1)$, the eigenvalues of the Jacobian are $-0.5 \pm 1.32i$. Since both eigenvalues have negative real part, $(1,1)$ is an **attractor**.
```python=
J1 = np.array([[0.,-2.],[1.,-1.]])
evals1,evecs1 = la.eig(J1)
print(evals1)
```
```
[-0.5+1.32287566j -0.5-1.32287566j]
```
* For $(-1,-1)$, the eigenvalues are $1$ and $-2$. Since one eigenvalue is positive and the other is negative, $(-1,-1)$ is **unstable**.
```python=+
J2 = np.array([[0.,2.],[1.,-1.]])
evals2,evecs2 = la.eig(J2)
print(evals2)
```
```
[ 1.+0.j -2.+0.j]
```
* Let's plot solutions:
```python=+
f = lambda u,t: np.array([1 - u[1]**2,u[0] - u[1]])
t = np.linspace(0,1,100)
for _ in range(200):
    u0 = 6*np.random.rand(2) - 3
    u = spi.odeint(f,u0,t)
    plt.plot(u[:,0],u[:,1],c='b',alpha=0.2)
plt.axis([-3,3,-3,3]), plt.grid(True)
plt.show()
```

:::
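The critical points above were found by hand. As a cross-check, `scipy.optimize.fsolve` (not used elsewhere in these notes) can locate them numerically, given a starting guess near each expected critical point:

```python
import numpy as np
from scipy.optimize import fsolve

# Right-hand side of the system x' = 1 - y^2, y' = x - y
F = lambda v: [1 - v[1]**2, v[0] - v[1]]

# Starting guesses chosen near each critical point found by hand
c1 = fsolve(F, [2.0, 2.0])
c2 = fsolve(F, [-2.0, -2.0])
print(c1, c2)  # approximately [1, 1] and [-1, -1]
```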