# Linear Systems Theory
###### tags: `Linear System Theory` `604`
###### instructor: Professor R. Janaswamy
###### ref.: [Differential Equations](https://ece.umass.edu/sites/default/files/Ordinary_Differential_Equations%5B296%5D_0.pdf)
###### ref2.: [Laplace Transforms](https://ece.umass.edu/sites/default/files/LaplaceTransforms%5B297%5D.pdf)
1/26
===
### Reviewing basic linear algebra
### Topic 1: How to determine whether two vectors are linearly independent?
Given vectors $\vec{X}$ and $\vec{Y}$, set up the equation $\alpha\vec{X}+\beta\vec{Y} = 0$.
If the only solution is $\alpha = \beta = 0$, then we say vectors $\vec{X}$ and $\vec{Y}$ are linearly independent.
More generally, given $\vec{X_1}$, $\vec{X_2}$, ... $\vec{X_n}$ ($n \in \mathbb{N}$), set the equation to:
$\alpha_1\vec{X_1}+ \alpha_2\vec{X_2} + \cdots + \alpha_n\vec{X_n} = 0$.
If the only solution is $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$, then the vectors $\vec{X_1}, \dots, \vec{X_n}$ are linearly independent.
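A quick numerical sanity check of this definition (my own sketch with `numpy`, not from the lecture): stacking the vectors as columns and testing for full column rank is equivalent to asking whether only $\alpha_1 = \cdots = \alpha_n = 0$ solves the equation.
```python
# Linear independence check: vectors are independent iff the matrix
# that stacks them as columns has full column rank.
import numpy as np

x1 = np.array([1.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([1.0, 4.0, 2.0])   # = x1 + 2*x2, so the set is dependent

V = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(V) == V.shape[1])   # False: dependent set
```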
### Topic 2: Norm
$||x|| > 0$ for any $x \neq 0$, and for any scalar $\alpha$, $||\alpha x|| = |\alpha| \times ||x||$.
Also, the triangle inequality holds: $||x + y|| \leq ||x|| + ||y||$
#### Euclidean norm
Given a column vector $\vec{X}$, $||\vec{X}|| = \sqrt{\vec{X}^T\vec{X}}$
#### Inner product between two vectors X, Y
The inner product is defined as $X^TY = \sum_{i=1}^{n}X_iY_i$
$|X^TY| \leq ||X|| \, ||Y||$,
known as [Cauchy Schwarz inequality.](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality)
#### norm of an $n \times n$ matrix
$||A|| = \max\limits_{x \neq 0} \{
\dfrac{||Ax||}{||x||}
\}$
, where $A$ is an `nxn` matrix and $\vec{x}$ is an `nx1` column vector
$A_{nn} = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}$
**[Ref. example-10 on page9&10](https://learn.lboro.ac.uk/archive/olmp/olmp_resources/pages/workbooks_1_50_jan2008/Workbook30/30_4_mtrx_norms.pdf)**
$\max|a_{ij}| \leq ||A|| \leq n \times \max|a_{ij}|$
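A hedged numerical sketch of these two facts (the random matrix `A` is my own stand-in, not from the lecture): `np.linalg.norm(A, 2)` computes the induced 2-norm, which equals the largest singular value of $A$.
```python
# Verify the sandwich bound max|a_ij| <= ||A|| <= n * max|a_ij| and
# brute-force the definition ||A|| = max ||Ax|| / ||x||.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

norm_A = np.linalg.norm(A, 2)          # induced 2-norm (max singular value)
amax = np.abs(A).max()
print(amax <= norm_A <= n * amax)      # True

# no ratio ||Ax||/||x|| over random directions exceeds the norm
ratios = [np.linalg.norm(A @ x) / np.linalg.norm(x)
          for x in rng.standard_normal((2000, n))]
print(max(ratios) <= norm_A + 1e-12)   # True
```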
# 1/28
## Eigenvalues and Eigenvectors
### Right eigenvector `p`
Given an `nxn` matrix $A$, we have $Ap = \lambda p$, where $p$ is called the right eigenvector (`nx1`, and it can be complex) and $\lambda$ is called the eigenvalue.
$(A - \lambda I)p = 0$, where `I` is the `nxn` identity matrix
Note: for real $A$, complex eigenvalues and eigenvectors appear in complex-conjugate pairs ($\bar{\lambda}$, $\bar{p}$ along with $\lambda$, $p$).
If the eigenvalues of $A$ are distinct, the eigenvectors are linearly independent, and there are $n$ of them.
$M = [\ \vec{p}_1 \ \vec{p}_2 \ \cdots \ \vec{p}_n\ ]$
$M$ is non-singular since all the $\vec{p}_i$ are independent, so
$M^{-1}$ exists
**$M^{-1}AM = \mathrm{diag}(\lambda_1 \ \lambda_2 \dots \lambda_n)$, if the eigenvalues are distinct**
This is also called a **similarity transformation**:
given $A_{n\times n}$ and an invertible $P_{n \times n}$, the map $A \mapsto P^{-1}AP$ preserves the eigenvalues.
If the eigenvalues are repeated, we cannot always expect $n$ linearly independent eigenvectors.
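A minimal sketch of this diagonalization (my own toy matrix, assuming distinct eigenvalues):
```python
# Similarity transformation: with distinct eigenvalues, M = [p1 ... pn]
# is invertible and M^{-1} A M is diagonal.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, -1.0]])           # distinct eigenvalues 2 and -1

lam, M = np.linalg.eig(A)             # columns of M are right eigenvectors
D = np.linalg.inv(M) @ A @ M
print(np.allclose(D, np.diag(lam)))   # True
```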
### Left Eigenvectors `q`
Given a `1xn` row vector `q` (called the left eigenvector) and an `nxn` matrix $A$, we have:
$qA = \lambda q \Rightarrow q(A-\lambda I) = 0$
***The eigenvalues are the same whether we use the right- or the left-eigenvector method.***
$\det(A-\lambda I) = 0$ gives a polynomial of degree `n` in $\lambda$.
**Figure of 1/28_1: characteristic polynomial.**
### $a_{n}A^{n}+a_{n-1}A^{n-1} + \cdots + a_{1}A + a_{0}I = 0$
[Cayley-Hamilton theorem](https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton_theorem): Every matrix satisfies its own characteristic polynomial.
$A^{k} = AA \cdots AA$ ($k$ factors), where each $A$ is an $n \times n$ matrix
example ($n=3$): $a_3A^{3}+a_2A^{2}+a_1A+a_0I = 0$
**Review the definitions of eigenvalue and eigenvector.**
$q_i \cdot p_j$ = 0 (inner product) for i $\neq$ j, where $q_i$ is a left eigenvector and $p_j$ a right eigenvector.
With this property, we could say:
A$p_j$ = $\lambda_jp_j$
Key: eigenvalues are unique. However, eigenvectors are NOT unique (an eigenvector can be scaled by any constant `k` and still fit).
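A small check of this bi-orthogonality property between left and right eigenvectors (the matrix and the eigen-pair matching logic are my own illustration):
```python
# q_i . p_j = 0 for i != j: left eigenvectors of A are the
# eigenvectors of A^T, read as rows.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # distinct eigenvalues 5 and 2

lam_r, P = np.linalg.eig(A)           # right eigenvectors p_j (columns)
lam_l, Q = np.linalg.eig(A.T)         # left eigenvectors q_i (columns of Q)

# align the two eigenvalue orderings, then test all products q_i . p_j
order = [int(np.argmin(np.abs(lam_l - l))) for l in lam_r]
Q = Q[:, order]
G = Q.T @ P                           # G[i, j] = q_i . p_j
print(np.allclose(G - np.diag(np.diag(G)), 0.0))   # off-diagonals ~ 0
```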
# 1/31
$Ax=y$: the collection of all such $y$ is known as the range space of $A$.
1. The rank of $A$ is defined as the dimension of the range space, i.e., the number of linearly independent vectors in the range space (denoted $r \geq 0$).
Also, it equals the number of independent columns (rows) of $A$.
2. The null space is defined by $Ax=0$: the collection of all such $x$. Its dimension is the nullity $v$.
3. $r+v = n$ (rank-nullity), which implies $r \leq n$
### Solvability of $Ax=b$ (given $b \neq 0$)
Equation 1 has a solution if and only if $b^{T}x = 0$ for every $x$ in the null space of $A^T$ (for symmetric $A$, that is the null space of $A$ itself).
4. Given $A_{m \times n}$ and $B_{n \times p}$: $\mathrm{rank}(A) + \mathrm{rank}(B) - n \leq \mathrm{rank}(AB) \leq \min \{\mathrm{rank}(A), \mathrm{rank}(B)\}$ (Sylvester)
5. $||A_{n \times n}|| = max \{ \frac{||AX||}{||X||} \}$, norm of A
**Review the definitions of $\mathrm{Trace}(A)$ and $\det(A)$.**
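A short SVD-based sketch of rank, null space, and the rank-nullity identity $r + v = n$; the test matrix is the symmetric one used in the 2/2 example below.
```python
# Rank = number of nonzero singular values; the trailing right singular
# vectors span the null space {x : Ax = 0}.
import numpy as np

A = np.array([[ 1.0, 0.0, -2.0],
              [ 0.0, 0.0,  0.0],
              [-2.0, 0.0,  4.0]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10
r = int((s > tol).sum())              # rank
N = Vt[r:].T                          # columns span the null space
print(r, N.shape[1], r + N.shape[1] == A.shape[1])   # 1 2 True
print(np.allclose(A @ N, 0.0))                       # True
```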
# 2/2
### finishing the example from last lecture.
$A = \begin{bmatrix}
1 & 0 & -2 \\
0 & 0 & 0 \\
-2 & 0 & 4
\end{bmatrix} = A^T$
, where $\det(A)=0$, and since $A = A^T$ the left eigenvectors equal the (transposed) right eigenvectors ($q = p^T$)
$\vec{e_1} = \begin{bmatrix}
0 \\
1 \\
0 \\
\end{bmatrix}$ $\vec{e_2} = \begin{bmatrix}
2 \\
0 \\
1 \\
\end{bmatrix}$ $\vec{e_3} = \begin{bmatrix}
1 \\
0 \\
-2 \\
\end{bmatrix}$
### key:$\vec{e_1}, \vec{e_2}$ come from $\lambda = 0$, and $\vec{e_3}$ comes from $\lambda = 5$
Then $\vec{e_1}, \vec{e_2}$, and $\vec{e_3}$ form a basis, also $\vec{e_1} \perp \vec{e_2} \perp \vec{e_3}$
## Note: default basis
$\vec{b_1} = \begin{bmatrix}
1 \\
0 \\
0 \\
\end{bmatrix}$ $\vec{b_2} = \begin{bmatrix}
0 \\
1 \\
0 \\
\end{bmatrix}$ $\vec{b_3} = \begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}$, given $X= \begin{bmatrix}
x_1 \\
x_2 \\
x_3 \end{bmatrix} = x_1 \vec{b_1} + x_2 \vec{b_2} + x_3 \vec{b_3}$
$P = \begin{bmatrix}
\vec{e_1} \ \vec{e_2} \ \vec{e_3} \end{bmatrix} = \begin{bmatrix}
0 & 2 & 1 \\
1 & 0 & 0 \\
0 & 1 & -2 \end{bmatrix}$
$P^{-1} = \frac{1}{5} \begin{bmatrix}
0 & 5 & 0\\
2 & 0 & 1\\
1 & 0 & -2 \end{bmatrix}$
$D=P^{-1}AP = \begin{bmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 5 \end{bmatrix} = \begin{bmatrix}
\lambda_1 & 0 & 0 \\
0 & \lambda_2 & 0 \\
0 & 0 & \lambda_3 \end{bmatrix}$
$Ax = b$; by the change of variable $z=P^{-1}x$ (i.e., $x = Pz$):
$APz = b$
$\Rightarrow P^{-1}APz = P^{-1}b$
$\Rightarrow Dz = P^{-1}b = c$, where $c$ is an $n \times 1$ vector
( $\because D = P^{-1}AP$ )
**The key here is that $D$ is diagonal, so the `nxn` system decouples into `n` scalar equations, which makes the calculation easier**
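A numerical replay of this diagonalization with the exact $A$ and $P$ above, plus a decoupled-solve illustration; the particular right-hand side $b$ below is a hypothetical choice, constructed to be solvable.
```python
# D = P^{-1} A P should come out diag(0, 0, 5) for the lecture's matrices.
import numpy as np

A = np.array([[ 1.0, 0.0, -2.0],
              [ 0.0, 0.0,  0.0],
              [-2.0, 0.0,  4.0]])
P = np.array([[0.0, 2.0,  1.0],
              [1.0, 0.0,  0.0],
              [0.0, 1.0, -2.0]])

Pinv = np.linalg.inv(P)
D = Pinv @ A @ P
print(np.round(D, 10))                # diag(0, 0, 5)

# Dz = P^{-1} b decouples into scalar equations; b is chosen in the
# range of A so a solution exists.
b = A @ np.array([1.0, 0.0, 0.0])
c = Pinv @ b
print(c)                              # rows with D_kk = 0 have c_k = 0
```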
### Norm of A
By definition: $||A|| = \sqrt{\max \{ \text{eigenvalue of } A^TA \}} = \max \{ \frac{||AX||}{||X||} \}$
Since $A = A^T$ here, the eigenvalues of $A^TA = A^2$ are $[\text{eigenvalues of } A]^2 = [0, 0, 25]$
Thus, $||A|| = 5$
Characteristic polynomial: $\lambda^3 - 5\lambda^2 = 0$
Cayley-Hamilton: $A^3 - 5A^2 = 0$, where `0` means the $3 \times 3$ zero matrix here.
$A^2 = \begin{bmatrix}
5 & 0 & -10 \\
0 & 0 & 0 \\
-10 & 0 & 20 \end{bmatrix}$ $A^3 = \begin{bmatrix}
25 & 0 &-50 \\
0 & 0 & 0 \\
-50 & 0 & 100 \end{bmatrix}$
Thus, $A^3 = 5A^2$
$A^4 = 5A^3 = 25A^2$
$A^5 = AA^4 = 25A^3 = 125A^2 \cdots A^k = 5^{k-2}A^2, k \geq 2$
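A quick check of the Cayley-Hamilton shortcut $A^k = 5^{k-2}A^2$ derived above:
```python
# Verify A^k = 5^(k-2) A^2 for k >= 2 with the lecture's matrix.
import numpy as np

A = np.array([[ 1.0, 0.0, -2.0],
              [ 0.0, 0.0,  0.0],
              [-2.0, 0.0,  4.0]])
A2 = A @ A

for k in range(2, 8):
    Ak = np.linalg.matrix_power(A, k)
    assert np.allclose(Ak, 5.0**(k - 2) * A2)
print("A^k = 5^(k-2) A^2 verified for k = 2..7")
```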
# 2/7
## Example 4.
Given the following $n^{th}$-order differential equation for the unknown function $y(t)$:
$\frac{d^{n}y}{dt^n} + \sum_{p= 1}^{n} a_{n-p}(t) \frac{d^{n-p}y}{dt^{n-p}} = b_0(t)u(t)$, where $t \in [ t_0, +\infty )$
Define state variables $x_k(t) = \frac{d^{k-1}y}{dt^{k-1}}$:
$x_1(t) = y(t)$
$x_2(t) = dy/dt$
$\vdots$
$x_n(t) = d^{n-1} / dt^{n-1} y(t)$
$\dot{x_{n}} = - \sum_{p=1}^{n}a_{n-p}\,x_{n-p+1}+ b_0(t)u(t)$
$\dot{X} = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 &\cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & \cdots & \cdots & 0 & 1\\
-a_0 & -a_1 & \cdots & \cdots & -a_{n-1}
\end{bmatrix} X+ \begin{bmatrix}
0 \\
\vdots \\
0 \\
b_0(t)
\end{bmatrix}u(t)$, where the first matrix is called $A(t)$ and the second matrix is called $B(t)$
**The initial-condition vector is below.**
$\begin{bmatrix}
x_1(t_0) \\
x_2(t_0) \\
\vdots \\
x_n(t_0) \\
\end{bmatrix} = \begin{bmatrix}
y(t_0) \\
\frac{dy}{dt}(t_0) \\
\vdots \\
\frac{d^{n-1}y}{dt^{n-1}}(t_0) \\
\end{bmatrix}$
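A small helper (my own sketch, not the lecture's code) that builds the companion-form $A$ and $B$ just derived, here with constant coefficients for simplicity:
```python
# Companion form for y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = b0 * u.
import numpy as np

def companion(a, b0):
    """a = [a_0, ..., a_{n-1}] coefficients, b0 the input gain."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # super-diagonal of ones
    A[-1, :] = -np.asarray(a)         # last row: -a_0 ... -a_{n-1}
    B = np.zeros((n, 1))
    B[-1, 0] = b0
    return A, B

A, B = companion([2.0, 3.0], 1.0)     # y'' + 3y' + 2y = u
print(A)                              # [[0, 1], [-2, -3]]
print(B.ravel())                      # [0, 1]
```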
### Converting non-linear equations: Linearization
Given $\dot{X}(t) = f [ x(t); u(t); t ]$, where $f$ is an `nx1` vector function, $x$ is an `nx1` state vector, and $u$ is an `mx1` input vector
$x(t_0) = x_0$
$k^{th}$ component: $\dot{x}_k = f_k[x_1, x_2, \dots, x_n, u_1, \dots, u_m, t], \ k = 1, \dots, n$
$x_k(t_0) = x_{0k}$
Say the solution is known for some inputs
$\tilde{u}_1, \tilde{u}_2, \dots, \tilde{u}_m$ (nominal inputs), and the corresponding initial conditions $\tilde{x}_{01}, \tilde{x}_{02}, \dots, \tilde{x}_{0n}$ (nominal initial conditions)
### Q. 1 How does the non-linear system behave **close** to the nominal solution?
Let $u(t) = \tilde{u}(t) + u_\delta(t)$
$x(t_0) = \tilde{x_0} + x_{0\delta}$
Here we make an assumption: $x(t) = \tilde{x}(t) + x_\delta(t)$
$||u_{\delta}(t)|| \ll ||\tilde{u}(t)||$
$||x_{0\delta}|| \ll ||\tilde{x_0}||$
#### Reminder: $\ll$ means a ratio of roughly $\leq 0.1$
Assumption: $x(t) = \tilde{x}(t) + x_{\delta}(t)$
$||x_{\delta}(t)|| \ll ||\tilde{x}(t)||, \ t \in (t_0, T)$
Eq1 = $\begin{cases} \dot{\tilde{x} }(t) + \dot{x_{\delta}}(t) = f( \tilde{x} + x_\delta, \tilde{u}+u_\delta; t) \\
\tilde{x_0} + x_{0\delta} = x_0 \\
\end{cases}$
Eq2 = $\begin{cases}\dot{\tilde{x}}(t) = f(\tilde{x}; \tilde{u}; t) \\
\tilde{x}(t_0) = \tilde{x}_0 \\
\end{cases}$
Eq1 - Eq2 => $\dot{x}_\delta = f(\tilde{X} + X_\delta; \tilde{u}+ u_\delta; t)-f(\tilde{X}; \tilde{u}; t)$
$x_\delta(t_0) = x_0 - \tilde{x}_0$
$f_k(\tilde{X} + X_\delta; \tilde{u}+ u_\delta; t)-f_k(\tilde{X}; \tilde{u}; t)$, for k = 1, 2, ..., n
$= \frac{\partial{f_k}}{\partial{x_1}}\Big|_{\tilde{x}}x_{\delta 1}+\frac{\partial{f_k}}{\partial{x_2}}\Big|_{\tilde{x}}x_{\delta 2}+
\cdots + \frac{\partial{f_k}}{\partial{x_n}}\Big|_{\tilde{x}}x_{\delta n}+
\frac{\partial{f_k}}{\partial{u_1}}\Big|_{\tilde{u}}u_{\delta 1}+
\cdots + \frac{\partial{f_k}}{\partial{u_m}}\Big|_{\tilde{u}}u_{\delta m} + h.o.t.$
### Reminder: h.o.t. = higher-order terms (quadratic and higher in $x_\delta$, $u_\delta$), which the linearization neglects.
Finally, we have $\dot{x}_{\delta}$, which is an `nx1` vector:
$\dot{x}_{\delta} = \begin{bmatrix}
\frac{\partial{f_1}}{\partial{x_1}} & \frac{\partial{f_1}}{\partial{x_2}} & \cdots & \frac{\partial{f_1}}{\partial{x_n}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial{f_n}}{\partial{x_1}} &
\frac{\partial{f_n}}{\partial{x_2}} & \cdots & \frac{\partial{f_n}}{\partial{x_n}} \\
\end{bmatrix} x_{\delta} + \begin{bmatrix}
\frac{\partial{f_1}}{\partial{u_1}} & \frac{\partial{f_1}}{\partial{u_2}} & \cdots & \frac{\partial{f_1}}{\partial{u_m}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial{f_n}}{\partial{u_1}} &
\frac{\partial{f_n}}{\partial{u_2}} & \cdots & \frac{\partial{f_n}}{\partial{u_m}} \\
\end{bmatrix}u_{\delta}$
#### **Reminder: $x_\delta$ is an `nx1` vector, the first matrix is `nxn`, the second matrix is `nxm`, and $u_\delta$ is an `mx1` vector**
#### The same decomposition applies to the initial condition.
$\dot{x}_\delta = (\text{matrix}) \times x_\delta + (\text{matrix}) \times u_\delta$, where each matrix is filled with partial derivatives; such a matrix is called a `Jacobian matrix`.
The first matrix, evaluated along the nominal solution, plays the role of $A(t)$ in a linear time-varying system.
2/24 office hour:
$\dot{x_1} = f_1(x_1, x_2)$; $x_1 = \tilde{x_1} + x_{\delta1}$
$\dot{x_2} = f_2(x_1, x_2)$; $x_2 = \tilde{x_2} + x_{\delta2}$
$\begin{bmatrix}
f_1 \\
f_2 \\
\end{bmatrix}(x_1, x_2) \approx f(\tilde{x_1}, \tilde{x_2}) + \frac{\partial{f}}{\partial{x_1}}\Big|_{\tilde{x_1} \tilde{x_2}}x_{\delta 1} + \frac{\partial{f}}{\partial{x_2}}\Big|_{\tilde{x_1} \tilde{x_2}}x_{\delta 2}$
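A hedged sketch of the linearization recipe: finite-difference Jacobians of $f$ at the nominal point give the two matrices above. The pendulum-style $f$ and the function names are my own illustration, not the lecture's.
```python
# Numerical linearization: perturb each state / input coordinate and
# difference f to approximate the Jacobians A and B.
import numpy as np

def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def jacobians(f, x, u, eps=1e-6):
    n, m = len(x), len(u)
    fx = f(x, u)
    A = np.column_stack([(f(x + eps * np.eye(n)[:, j], u) - fx) / eps
                         for j in range(n)])
    B = np.column_stack([(f(x, u + eps * np.eye(m)[:, j]) - fx) / eps
                         for j in range(m)])
    return A, B

A, B = jacobians(f, np.zeros(2), np.zeros(1))   # nominal point (0, 0), u = 0
print(np.round(A, 4))                 # ~[[0, 1], [-1, 0]]
print(np.round(B, 4))                 # ~[[0], [1]]
```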
2/14
===
$\vdots$
$\vdots$
$\vdots$
2/16
===
Linearizing the non-linear system lets us describe its behavior near the nominal solution accurately with linear tools.
Outline
---
* linear discrete system
1. summer
```mermaid
flowchart LR
input(("input1"))--" + "-->summer(summer)
input_2(("input2"))--" - "-->summer
summer-->output(("output"))
```
**input1 = $x_1(t)$; input2 = $x_2(t)$; output = $x_1(t) - x_2(t)$**
2. time-varying amplifier
```mermaid
flowchart LR
input(("input"))-->amplifier(("#alpha;(t)"))
amplifier-->output(("output"))
```
**input = $x_1(t)$**
**$\alpha(t)$ is a scalar gain and can be positive or negative**
**output = $\alpha(t)x_1(t)$**
3. Integrator $\int_{t_0}^{t} \dot{x}_1(\sigma)\,d\sigma$
```mermaid
flowchart LR
input(("input")) --> integrator(("integrator"))
intput2(("InitialCondition")) --> integrator
integrator --> output(("output"))
```
**input = $\dot{x_1}(t)$, and InitialCondition = $x_1(t_0)$**
**Output = $\int_{t_0}^{t} \dot{x_1}(\sigma)d\sigma + x_1(t_0) = x_1(t)$**
since $\int_{t_0}^{t} \dot{x_1}(\sigma)d\sigma = x_1(t) - x_1(t_0)$
Example
---
Given $\frac{d^{2}y}{dt^2} + a_1(t)\frac{dy}{dt}+a_0(t)y(t) = b_0(t)u(t)$
Define: $\begin{cases}x_1(t) = y(t) \\ x_2(t) = \dot{y}(t) = \frac{dy}{dt}\end{cases} \Rightarrow \dot{x_1}(t) = x_2(t)$
$\dot{x_2}(t) = \frac{d^2y}{dt^2} = -a_1(t)x_2(t) -a_0(t)x_1(t) + b_0(t)u(t)$
Thus, $\begin{bmatrix}
\dot{x_1} \\
\dot{x_2} \\
\end{bmatrix} = \begin{bmatrix}
0 & 1 \\
-a_0 & -a_1 \\
\end{bmatrix}\begin{bmatrix}
x_1 \\
x_2 \\
\end{bmatrix} + \begin{bmatrix}
0 \\
b_0 \\
\end{bmatrix} u(t)$
**Flow of the example system: $u(t)$ passes through a time-varying amplifier $b_0(t)$; the amplifier output feeds a summer (+) together with the outputs of two feedback amplifiers $a_1(t)$ (-) and $a_0(t)$ (-). The summer produces $\dot{x_2}$, which is integrated to give $x_2 = \dot{x_1}$, integrated again to give $x_1 = y$, and so on.**
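A minimal forward-Euler simulation sketch of the state-space form above; the constant values for $a_0$, $a_1$, $b_0$ are my stand-ins for the time-varying coefficients.
```python
# Simulate x' = Ax + Bu for the 2nd-order example with zero input.
import numpy as np

a0, a1, b0 = 2.0, 3.0, 1.0
A = np.array([[0.0, 1.0], [-a0, -a1]])
B = np.array([0.0, b0])

dt, T = 1e-3, 5.0
x = np.array([1.0, 0.0])              # y(0) = 1, y'(0) = 0
u = 0.0                               # zero input: free response
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B * u)      # Euler step
print(x)                              # decays toward [0, 0]: poles -1, -2
```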
Discrete Linear system, Chapter 3
---
Time variable: $k = 0, 1, 2, \dots$
state equation: $x(k+1) = A(k)x(k) + B(k)u(k)$, where $x(k)$ is an `nx1` vector, $A(k)$ an `nxn` matrix, $B(k)$ an `nxm` matrix, and $u(k)$ an `mx1` vector
Output equation: $y(k) = C(k)x(k) + D(k)u(k)$
1. Zero-input linear system: $u(k) \equiv 0$; the system is driven by the initial state condition $x(l)$
2. Zero-state linear system: $x(l) \equiv 0$; the system is driven by the input $u(k)$. ($x(l)$ means the state at the initial time $l$)
Example: Zero-input discrete time linear system. Part(1)
---
Given $x(k+1) = A(k)x(k)$, what is $x(k)$ for $k \ge l$?
Solution:
$for \ k = l: x(l+1) = A(l)x(l)$
$for \ k = l+1: x(l+2) = A(l+1)x(l+1)$
What we get:
---
$x(k) = [ A(k-1)A(k-2) \cdots A(l) ] x(l)$, where each $A(\cdot)$ is an `nxn` matrix, and the bracketed product $[...]$ is called the `Transition Matrix` $\Phi{(k;l)}$, also `nxn`.
Properties of the Transition Matrix
---
$\Phi{(l; l)} = I$, which I is a `nxn` identity matrix.
$\Phi{(k+1; l)} = A(k)\Phi(k; l)$
With these properties, $x(k)$ can be rewritten as $x(k) = \Phi{(k; l)}x(l)$
Question: Is it possible to construct the transition matrix in a different way?
---
```
Answer: Yes
```
We can construct a fundamental matrix $X(k)_{n \times n} = [x^{(1)}(k), x^{(2)}(k), \cdots, x^{(n)}(k)]$, where each $x^{(i)}(k)$ is a linearly independent solution of `part(1)`, obtained from $n$ independent initial conditions.
Thus, $\Phi{(k; l)} = X(k)X^{-1}(l)$
With this idea, we can avoid the repeated matrix multiplications.
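A numerical comparison (my own, with a random $A(k)$) of the two constructions of $\Phi(k; l)$: the direct product versus the fundamental-matrix form $X(k)X^{-1}(l)$.
```python
# Phi(k; l) = A(k-1) A(k-2) ... A(l) versus X(k) X(l)^{-1}.
import numpy as np

rng = np.random.default_rng(1)
As = [rng.standard_normal((2, 2)) for _ in range(6)]   # A(0) ... A(5)

def phi(k, l):
    out = np.eye(2)
    for j in range(l, k):
        out = As[j] @ out             # left-multiply A(j)
    return out

def X(k):
    return phi(k, 0)                  # fundamental matrix with X(0) = I

k, l = 5, 2
print(np.allclose(phi(k, l), X(k) @ np.linalg.inv(X(l))))   # True
```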
2/18
===
Recalling from last lecture
---
$x(k+1) = A(k)x(k) + B(k)u(k)$, where $B(k)u(k) = 0$ in the zero-input case.
### $x(k) = \Phi{(k; l)}x(l)$: zero-input solution with initial condition specified at time `l`.
Now, re-write the $x(k)$
---
$x(k) = \Phi{(k; 0)}x(0) + \sum_{l=0}^{k-1} \Phi{(k; l+1)} B(l)u(l) = \Phi{(k; 0)}x(0) + \sum^{k-1}_{l=0}H(k;l)u(l)$, where $H(k;l) = \Phi(k; l+1)B(l)$; the sum $\sum^{k-1}_{l=0}H(k;l)u(l)$ is the (time-varying) input response.
Also, $x(k+1) = A(k)x(k)$ is called the homogeneous case, meaning no input.
Example: Homogeneous system
---
$\begin{bmatrix}
x_1(k+1) \\
x_2(k+1) \\
\end{bmatrix} = \begin{bmatrix}
1 & k+1 \\
0 & 1 \\
\end{bmatrix}\begin{bmatrix}
x_1(k) \\
x_2(k) \\
\end{bmatrix}$
Given $A(k) = \begin{bmatrix}
1 & k+1 \\
0 & 1 \\
\end{bmatrix}$. What is $\Phi{(k;l)}$?
Sol:
---
* $\Phi{(k; l)} = X(k)X^{-1}(l)$
* $\Phi{(k; l)} = A(k-1)A(k-2) \cdots A(l+1)A(l)$
$A(k-1)A(k-2) \cdots A(l+1)A(l) =
\begin{bmatrix}
1 & k \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & k-1 \\
0 & 1 \\
\end{bmatrix}
\cdots
\begin{bmatrix}
1 & l+2 \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & l+1 \\
0 & 1 \\
\end{bmatrix}$
$= \begin{bmatrix}
1 & k \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & k-1 \\
0 & 1 \\
\end{bmatrix}
\cdots
\begin{bmatrix}
1 & l+3 \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2l+3 \\
0 & 1 \\
\end{bmatrix}$
With $\begin{bmatrix}
1 & l+3 \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 2l+3 \\
0 & 1 \\
\end{bmatrix}$ forming the matrix $\begin{bmatrix}
1 & 3l+6 \\
0 & 1 \\
\end{bmatrix}$.
Also, $\begin{bmatrix}
1 & l+4 \\
0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
1 & 3l+6 \\
0 & 1 \\
\end{bmatrix}$ forming the matrix $\begin{bmatrix}
1 & 4l+10 \\
0 & 1 \\
\end{bmatrix}$
The upper-right entries accumulate as $l+1, \ 2l+3, \ 3l+6, \ 4l+10, \dots$; continuing the pattern up through $A(k-1)$ gives the full sum
$\sum_{j=l}^{k-1}(j+1) = \frac{k(k+1)-l(l+1)}{2}$, so
$\Phi{(k; l)} = \begin{bmatrix}
1 & \frac{k(k+1)-l(l+1)}{2} \\
0 & 1 \\
\end{bmatrix}$
Method-2
---
$x_1(k+1) = x_1(k)+(k+1)x_2(k)$
$x_2(k+1) = x_2(k)$ -> $x_2(k) = x_2(0)$
Thus, $x_1(k+1) = x_1(k) + (k+1)x_2(0)$
$x_1(1) = x_1(0)+x_2(0)$
$x_1(2) = x_1(1)+2x_2(0) = x_1(0)+3x_2(0)$
$\vdots$
$x_1(k) = x_1(0) + \frac{k(k+1)}{2}x_2(0)$
With this final function, we can write it in matrix term.
$\begin{bmatrix}
x_1(k) \\
x_2(k) \\
\end{bmatrix} = \begin{bmatrix}
1 & \frac{k(k+1)}{2} \\
0 & 1 \\
\end{bmatrix}\begin{bmatrix}
x_1(0) \\
x_2(0) \\
\end{bmatrix}$
Choice-1
---
With $x_1(0)=1$ and $x_2(0)=0$,
$x^{(1)}(k)=\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}$.
Choice-2
---
With $x_1(0)=0$ and $x_2(0)=1$,
$x^{(2)}(k)=\begin{bmatrix}
\frac{k(k+1)}{2} \\
1 \\
\end{bmatrix}$.
Thus, $X(k) = \begin{bmatrix}
1 &\frac{k(k+1)}{2} \\
0 & 1 \\
\end{bmatrix}$
### Conclusion: Both ways give the same answer
$\Phi{(k;l)} = X(k)X^{-1}(l)$
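A numerical confirmation (mine) that the product construction and the closed form above agree for this example:
```python
# Compare Phi(k; l) built as A(k-1)...A(l) with the closed form.
import numpy as np

def A(k):
    return np.array([[1.0, k + 1.0], [0.0, 1.0]])

def phi_product(k, l):
    out = np.eye(2)
    for j in range(l, k):
        out = A(j) @ out
    return out

def phi_closed(k, l):
    return np.array([[1.0, (k * (k + 1) - l * (l + 1)) / 2.0],
                     [0.0, 1.0]])

for k, l in [(5, 0), (7, 3), (10, 10)]:
    assert np.allclose(phi_product(k, l), phi_closed(k, l))
print("product form and closed form agree")
```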
2/22:
===
Example from textbook P.105, with reference *"DiscreteTimeExample.pdf"*
---
> Assumption:
> * Each individual marries only once.
> * Each couple has exactly one son and one daughter.
> * Number of males = number of females
> $x_i(k)$ = number of males in class $i$ in generation $k$
### Transition matrix:
$\Phi{(k;l)} = A(k-1)A(k-2) \cdots A(l)$, with $k-l$ factors.
>If $l = 0$, $\Phi{(k;0)} = A^k$
$\Phi{(k;0)} = I + kB + \frac{k(k-1)}{2}B^2$
$\vdots$
~~~
The key is to get the Transition matrix
~~~
Continuous Time system:
---
### With zero-input continuous time case:
$\dot{X}(t) = A(t)X(t) \ ; t > t_0$
$X(t_0) = X_I$
Transition matrix would be:
$\Phi{(t; t_0)}_{n \times n}$
$X(t) = \Phi{(t; t_0)} X(t_0)$
### Scalar case:
$\dot{x}(t) = a(t)x(t), \ t > 0$
$x(0) = x_0$
$\vdots$ *(figure 2/22-1)*
Finally, the transition matrix is $\Phi{(t; 0)} = e^{\int^{t}_{\tau = 0}a(\tau)d\tau}$
### Case 1: at $t = 0$,
$\Phi{(0;0)} = 1$
### Solution by successive approximation (iterative series solution) `(1)`
$\dot{x}(t) = A(t)x(t)$
$dx(t) = A(t)x(t)\,dt$, i.e., $x(t) = x_I + \int_{t_0}^{t}A(\tau)x(\tau)\,d\tau$
### Zero order solution:
$x_0(t) = x_I, for \ all \ t \geq t_0$
### First order solution is obtained by inserting $x_0(t)$ on the right hand side of `(1)` and integrating.
$x_1(t) - x_I = \int_{t_0}^tA(\tau)x_0(\tau)d\tau$
Also, the function can be re-written as $x_1(t) = x_I + [ \int_{t_0}^tA(\tau)d\tau]x_I$
$\vdots$
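A discretized sketch of this successive-approximation (Picard) iteration on a grid; the time-varying $A(t)$ below is my own toy example, and the quadrature is a crude rectangle rule.
```python
# Iterate x_{m+1}(t) = x_I + integral_{t0}^{t} A(s) x_m(s) ds on a grid,
# then compare against a fine Euler integration of the same ODE.
import numpy as np

t0, T, N = 0.0, 1.0, 2001
ts = np.linspace(t0, T, N)
dt = ts[1] - ts[0]

def A(t):
    return np.array([[0.0, 1.0], [-1.0, -0.1 * t]])

xI = np.array([1.0, 0.0])
x = np.tile(xI, (N, 1))               # zero-order solution x_0(t) = x_I

for _ in range(25):                   # a few Picard iterations
    integrand = np.array([A(t) @ xk for t, xk in zip(ts, x)])
    x = xI + np.cumsum(integrand, axis=0) * dt

y = xI.copy()                         # reference: Euler integration
for t in ts[:-1]:
    y = y + dt * (A(t) @ y)
print(np.allclose(x[-1], y, atol=1e-2))   # True
```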
2/28
---
$\vdots$
#### By the **characteristic equation** $0 = {A_0}^n + \sum_{k=0}^{n-1} a_k {A_0}^k$ (Cayley-Hamilton), all higher powers of $A_0$ reduce to the first $n$, so
$e^{(t-t_0)A_0} = \sum_{k=0}^{n-1} \alpha_k (t-t_0){A_0}^k$
$\dot{\alpha}(t) = (\text{matrix}) \times \alpha$, with $\alpha(0) = \begin{bmatrix}
1 \\
0 \\
\vdots \\
0 \\
\end{bmatrix}$, where $\alpha(t)$ is an `nx1` vector
### Theorem: Suppose $f(\lambda)$ is an arbitrary function of a scalar variable $\lambda$ and $g(\lambda)$ is an $(n-1)^{th}$-degree polynomial in $\lambda$ (there are $n$ constants involved).
### If $f(\lambda_i) = g(\lambda_i)$ for every eigenvalue $\lambda_i$ of an `nxn` matrix $A_0$, then **$f(A_0) = g(A_0)$**.
## Method-2
### In our case we seek $f(A_0) = e^{tA_0}$, so we construct $g$ such that $e^{tA_0} = g(A_0)$.
---
### $x(t) = \Phi{(t; t_0)}x(t_0)$
### $L[x(t)] = \int_0^\infty x(t)e^{-st}dt$
### $L[\dot{x}(t)] = sX(s) - x(0)$
## For $t_0 = 0$
$sX(s) - x(0) = A_0X(s)$
$[sI - A_0]X(s) = x(0)$
$X(s) = [sI - A_0]^{-1}x(0)$
Thus, $x(t) = L^{-1}[X(s)] = L^{-1}[(sI - A_0)^{-1}]x(0) = e^{A_0t}x(0)$
## Method-3
### $e^{tA_0} = L^{-1}[(sI - A_0)^{-1}]$
---
Example:
---
$A_0 = \begin{bmatrix}
-3 & 1 \\
0 & -2 \\
\end{bmatrix}$, what is $e^{A_0t}$?
Sol:
---
$det(A_0 - \lambda I) = 0$
$0 = (\lambda + 3)(\lambda + 2)$, Thus $\lambda = -2, -3$
$f(A_0) = e^{A_0t}$; $f(\lambda) = e^{\lambda t}$
$g(\lambda) = a_0 + a_1 \lambda$, $a_0 = ?, a_1 = ?$
$f(\lambda_1) = e^{-3t} = a_0 -3a_1$
$f(\lambda_2) = e^{-2t} = a_0 -2a_1$
$g(A_0) = a_0I + a_1A_0$
**$f(A_0)= e^{A_0t} = a_0I + a_1A_0$**
(These will be function of time)
$a_1 = e^{-2t} - e^{-3t}$
$a_0 = e^{-2t} + 2a_1 = 3e^{-2t} -2e^{-3t}$
$\vdots$
$e^{A_0t} = \begin{bmatrix}
e^{-3t} & e^{-2t}-e^{-3t} \\
0 & e^{-2t} \\
\end{bmatrix}$
### Thus, the transition matrix is:
$\Phi{(t; t_0)} = \begin{bmatrix}
e^{-3(t-t_0)} & e^{-2(t-t_0)}-e^{-3(t-t_0)} \\
0 & e^{-2(t-t_0)} \\
\end{bmatrix}$
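A one-line check of the worked answer against `scipy.linalg.expm` at a sample time:
```python
# Compare the closed-form e^{A0 t} from the example with expm.
import numpy as np
from scipy.linalg import expm

A0 = np.array([[-3.0, 1.0], [0.0, -2.0]])
t = 0.7
closed = np.array([[np.exp(-3 * t), np.exp(-2 * t) - np.exp(-3 * t)],
                   [0.0, np.exp(-2 * t)]])
print(np.allclose(expm(A0 * t), closed))   # True
```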
## Laplace Transform:
$sI - A_0 = \begin{bmatrix}
s+3 & -1 \\
0 & s+2 \\
\end{bmatrix}$
$(sI - A_0)^{-1} = \frac{1}{(s+3)(s+2)}\begin{bmatrix}
s+2 & 1 \\
0 & s+3 \\
\end{bmatrix}$
$L^{-1}(sI - A_0)^{-1} = \begin{bmatrix}
L^{-1}\frac{(s+2)}{(s+3)(s+2)} & L^{-1}\frac{1}{(s+3)(s+2)} \\
L^{-1}(0) & L^{-1}\frac{(s+3)}{(s+3)(s+2)} \\
\end{bmatrix} = \begin{bmatrix}
e^{-3t} & e^{-2t}-e^{-3t} \\
0 & e^{-2t} \\
\end{bmatrix}$
* partial fraction expansion: $\frac{1}{(s+3)(s+2)} = \frac{1}{s+2} - \frac{1}{s+3}$
3/2
===
Properties of Transition Matrix
---
$\dot{x}(t) = A(t)x(t)$, which $x(t_0) = x_I$
$x(t) = \Phi_A{(t; t_0)}x(t_0)$.
$\Phi_A{(t; \tau)} = I + \int_\tau^tA(\delta_1)\,d\delta_1 + \int_\tau^t A(\delta_1)\left[\int_\tau^{\delta_1}A(\delta_2)\,d\delta_2\right]d\delta_1 + \cdots$
$\vdots$
### Peano-Baker series
#### Special case 1: Linear Time-Invariant (LTI)
$A(t) = A_0$, **constant**
Transition Matrix : $\Phi_{A_0}(t; \tau) = e^{(t-\tau)A_0}$
### $= \Phi_{A_0}(t-\tau; 0)$
#### Special case 2 (Time-Varying)
##### If for every $t, \tau$
$A(t) \int_\tau^tA(\delta)d\delta
= [ \int_\tau^tA(\delta)d\delta]A(t)$,
then $\Phi_A(t; \tau) = e^{\int_\tau^tA(\delta)d\delta}$.
#### Case 3:
when $t = \tau$
### $\Phi_A(t; t) = I$
#### Case 4:
$\Phi_A(t; \tau) = X(t)X^{-1}(\tau)$
## Case 5: Differentiating the transition matrix
$\frac{d}{dt}\Phi_A(t; \tau) = A(t)\Phi_A(t; \tau)$
$\frac{d}{d\tau}\Phi_A(t; \tau) = - \Phi_A(t, \tau)A(\tau)$
## Case 6: Composition property
$\Phi_A(t; \tau) = \Phi_A(t; \delta)\Phi_A(\delta; \tau)$
#### In addition,
setting $\tau = t$:
$\Phi_A(t; \delta)\Phi_A(\delta; t) = \Phi_A(t; t) = I$
$\Rightarrow \Phi_A(t; \delta)^{-1} = \Phi_A(\delta; t)$
#### Case 7
If $\dot{z}(t) = - A^T(t)z(t)$, adjoint problem
$\dot{x}(t) = A(t)x(t)$, Original problem
then, $\Phi_{-A^T}(t; \tau) = {\Phi_A}^T(\tau; t)$
#### Case 8
### $det(\Phi_A(t; \tau)) = e^{\int_\tau^tTr[A(\delta)]d\delta}$
#### Case 9, Similarity Transformation
$x(t) = P(t)z(t) \Longrightarrow z(t) = P^{-1}(t)x(t)$, keep in mind that `P(t)` should be invertible.
$\dot{z}(t) = F(t)z(t); F(t) = P^{-1}(t)A(t)P(t) - P^{-1}(t)\dot{P(t)}$
Then, $\Phi_F(t; \tau) = P^{-1}(t)\Phi_A(t;\tau)P(\tau)$
Choosing $P(t) = \Phi_A(t; t_0)$ and using property 9, we can prove the solution of `*`:
`*` $\dot{x}(t) = A(t)x(t) + B(t)u(t)$
Solution for `*`: $x(t) = \Phi(t; t_0)x(t_0) + \int_{t_0}^t\Phi(t; \delta)B(\delta)u(\delta)d\delta$
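A numerical sanity check (mine) of this variation-of-constants formula in the LTI special case, where $\Phi(t; \delta) = e^{(t-\delta)A}$; the system and input are toy choices.
```python
# Compare the formula x(t) = Phi(t;0)x0 + int Phi(t;s)Bu(s)ds against
# brute-force Euler integration of x' = Ax + Bu.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
u = lambda t: 1.0                     # unit-step input
x0 = np.array([1.0, 0.0])
t, N = 2.0, 2000
dt = t / N

x_formula = expm(t * A) @ x0 + sum(
    expm((t - s) * A) @ (B * u(s)) * dt
    for s in np.linspace(0, t, N, endpoint=False))

x = x0.copy()
for k in range(N):
    x = x + dt * (A @ x + B * u(k * dt))
print(np.allclose(x_formula, x, atol=1e-2))   # True
```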
3/4
===
LTI
---
$\dot{x} = Ax+Bu$, which A and B are constant.
$y(t) = Cx+Du$, which C and D are constant.
#### with Laplace Transform
$sX(s) - x(0) = AX(s) + BU(s)$
$Y(s) = CX(s) +DU(s)$
#### Take initial condition to be zero
$\vdots$
### Transfer Function Matrix
$G(s) = C(sI-A)^{-1}B+D$, where $(sI-A)^{-1} = L[\Phi(t; 0)]$
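A short sketch evaluating $C(sI-A)^{-1}B+D$ at one frequency point; the matrices are my own toy values, chosen so the closed-form answer is known.
```python
# Transfer function from state space: G(s) = C (sI - A)^{-1} B + D.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(s):
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

# For this companion form, G(s) = 1 / (s^2 + 3s + 2).
s = 1.0 + 2.0j
print(np.allclose(G(s), 1.0 / (s**2 + 3 * s + 2)))   # True
```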
#### For discrete-time systems: use the z-transform
#### Another way to look at the transition matrix: the unit pulse
$\sigma(k) = \begin{cases} 1, & k = 0 \\ 0, & k \neq 0 \end{cases}$
#### recall, $X(s) = \int_{t=0}^\infty x(t)e^{-st}dt$
$\vdots$
Inverse of Laplace transform
$\vdots$
Properties
---
$x(k): x(0), x(1), x(2), \cdots$
$x(k-1): 0, x(0), x(1), \cdots$, delayed.
$x(k+1): x(1), x(2), x(3), \cdots$, advanced.
3/7
===
Three notions of stability
---
1. Uniform stability
2. Uniform exponential stability
3. Uniform asymptotic stability
Mathematical Definition - Uniform stability
---
$\dot{x} = A(t)x(t)$, which $u(t) \equiv 0$
$x(t_0) = x_0$
The linear state equation (1) is uniformly stable **if** there exists a positive constant $\gamma \ge 1$ such that for any $t_0$ and $x_0$, the solution of (1) satisfies:
$||x(t)|| \le \gamma ||x_0||$, for all $t \ge t_0$
### Equivalent Definition
Given any positive constant $\epsilon$, there exists a positive constant $\delta$ such that, regardless of $t_0$, if $||x_0|| < \delta$, then the corresponding solution satisfies $||x(t)|| \le \epsilon$ for all $t \ge t_0$.
Thus, $\delta \le \epsilon$ (one can take $\delta = \epsilon / \gamma$).
Mathematical Definition - Uniform Exponential stability
---
The linear state equation (1) is uniformly exponentially stable **if** there exist positive constants $\gamma \ge 1$ and $\lambda$ such that for any $t_0$ and $x_0$ the solution of (1) satisfies:
$||x(t)|| \le \gamma ||x_0||e^{-\lambda (t - t_0)}$, for all $t \ge t_0$
### Theorem_1:
System (1) is uniformly stable **if and only if** there exists a positive constant $\gamma$ such that $||\Phi(t; \tau)|| \le \gamma$ for all $t$ and $\tau$ with $t \ge \tau$
Since the solution of system (1):
$x(t) = \Phi(t; t_0)x_0 \Rightarrow ||x(t)|| \le ||\Phi(t; t_0)|| \times ||x_0||$
### Theorem_2:
The system (1) is uniformly exponentially stable if there exist positive constants $\gamma \ge 1$ and $\lambda$ such that:
$||\Phi(t; \tau)|| \le \gamma e^{-\lambda(t-\tau)}$, for all $t \ge \tau$
Example:
---
$\dot{x}(t) = 2t(2\sin(t) - 1)x(t)$, $x(t_0) = x_0$
Solution :
---
### Is the solution uniformly stable?
Choose $t_0 = 2 \pi k$, $k=0,1,2,3,\dots$, and
$t= t_0+ \pi$.
Then $x(t_0 + \pi) = x_0\,e^{(4-\pi)\pi(4k+1)}$, which grows without bound as $k$ increases; since one single $\gamma$ won't serve the purpose, the equation is **not** uniformly stable.
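A numerical check (mine) of this conclusion: integrating $a(t) = 2t(2\sin t - 1)$ exactly over $[t_0, t_0+\pi]$ for $t_0 = 2\pi k$ reproduces the growth factor, which blows up with $k$.
```python
# Growth of |x(t0 + pi)| / |x0| for t0 = 2*pi*k, using the exact
# antiderivative of a(t) = 4t sin t - 2t.
import numpy as np

def growth(k):
    t0 = 2 * np.pi * k
    F = lambda t: 4 * np.sin(t) - 4 * t * np.cos(t) - t**2   # F' = a
    return np.exp(F(t0 + np.pi) - F(t0))

for k in range(3):
    # matches the closed form exp((4 - pi) * pi * (4k + 1))
    print(k, growth(k), np.exp((4 - np.pi) * np.pi * (4 * k + 1)))
```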