---
title: 1. Finite difference approximations
tags: Finite difference method
---
# 1. Finite difference approximations
**Textbook**
1. Finite Difference Methods for Ordinary and Partial Differential Equations by *Randall J. LeVeque*
* [exercises and m-files](http://faculty.washington.edu/rjl/fdmbook/chapter1.html)
**Reading notes**
1. Rounding error
* [用電腦算極限](https://teshenglin.github.io/post/2019_limit_evaluate/)
* [用電腦算微分](https://teshenglin.github.io/post/2019_derivate_evaluate/)
---
#### Exercise 1
Let $u(x)= \sin(x)$. We approximate $u'(1) = \cos(1)$ using finite difference approximations. The table below shows the error $E=|Du(\bar{x})-u'(\bar{x})|$ at $\bar{x}=1$ for various values of $h$, for each of the formulas given in class.
$$
D_+ u(\bar{x}) = \frac{1}{h}\left(u(\bar{x}+h)-u(\bar{x})\right)
$$
$$
D_- u(\bar{x}) = \frac{1}{h}\left(u(\bar{x}) - u(\bar{x}-h) \right)
$$
$$
D_0 u(\bar{x}) = \frac{1}{2h}\left(u(\bar{x}+h) - u(\bar{x}-h)\right)
$$
$$
D_3 u(\bar{x}) = \frac{1}{6h}\left(2u(\bar{x}+h) + 3u(\bar{x}) - 6u(\bar{x}-h) + u(\bar{x}-2h)\right)
$$
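The four quotients are straightforward to evaluate; a minimal Python sketch (with $\bar{x}=1$; the helper name `fd_errors` is chosen here for illustration) reproduces the first row of the table below:
```
import math

def fd_errors(h, xbar=1.0):
    """Errors of the four difference quotients for u = sin; exact u'(xbar) = cos(xbar)."""
    u, exact = math.sin, math.cos(xbar)
    Dp = (u(xbar + h) - u(xbar)) / h                        # forward
    Dm = (u(xbar) - u(xbar - h)) / h                        # backward
    D0 = (u(xbar + h) - u(xbar - h)) / (2 * h)              # centered
    D3 = (2*u(xbar + h) + 3*u(xbar) - 6*u(xbar - h) + u(xbar - 2*h)) / (6 * h)
    return [abs(D - exact) for D in (Dp, Dm, D0, D3)]

print(fd_errors(0.1))  # matches the first row of the table below
```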
| $h$ | $E_+$ | $E_-$ | $E_0$ | $E_3$ |
|---|---|---|---|---|
| $10^{-1}$ | $0.0429386$ | $0.0411384$ | $0.000900054$ | $6.82069e-5$ |
| $10^{-2}$| $0.00421632$ | $0.00419831$ | $9.00499e-6$ | $6.99413e-8$ |
| $10^{-3}$| $0.000420826$ | $0.000420645$ | $9.00505e-8$ | $7.0044e-11$ |
| $10^{-4}$| $4.20744e-5$ | $4.20726e-5$ | $9.00586e-10$ | $8.68194e-13$ |
| $10^{-5}$| $4.20736e-6$ | $4.20733e-6$ | $1.4738e-11$ | $2.20139e-11$ |
| $10^{-6}$| $4.20762e-7$ | $4.20805e-7$ | $2.16418e-11$ | $7.98495e-11$ |
| $10^{-7}$| $4.04327e-8$ | $4.15237e-8$ | $5.45511e-10$ | $3.1268e-10$ |
| $10^{-8}$| $5.45511e-10$ | $5.45511e-10$ | $5.45511e-10$ | $8.76771e-9$ |
| $10^{-9}$| $8.99525e-8$ | $2.92568e-8$ | $3.03478e-8$ | $3.03478e-8$ |
| $10^{-10}$| $2.92568e-8$ | $2.92568e-8$ | $2.92568e-8$ | $2.09162e-7$ |
| $10^{-11}$| $3.84395e-6$ | $1.14148e-5$ | $3.78544e-6$ | $1.71369e-5$ |
| $10^{-12}$| $0.000102968$ | $0.000141173$ | $1.91027e-5$ | $4.96203e-5$ |
| $10^{-13}$| $0.000263243$ | $0.000263243$ | $0.000263243$ | $0.000507384$ |
| $10^{-14}$| $0.00905231$ | $0.00657269$ | $0.00123981$ | $0.00461957$ |
| $10^{-15}$| $0.0403023$ | $0.0403023$ | $0.0403023$ | $0.0559273$ |
| $10^{-16}$| $0.540302$ | $0.459698$ | $0.0403023$ | $0.459698$ |
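The convergence order of each method can be estimated from two successive rows of the table via $\log(E(h_1)/E(h_2))/\log(h_1/h_2)$; a quick sketch using the first two rows:
```
import math

def order(E1, E2, h1=1e-1, h2=1e-2):
    """Estimate the convergence order from errors at two step sizes."""
    return math.log(E1 / E2) / math.log(h1 / h2)

# Using the first two rows of the table:
print(order(0.0429386, 0.00421632))    # D+ : about 1
print(order(0.000900054, 9.00499e-6))  # D0 : about 2
print(order(6.82069e-5, 6.99413e-8))   # D3 : about 3
```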

From the table we find that $D_0$ is a second-order method, $D_3$ is a third-order method, and $D_+$ and $D_-$ are first-order methods. For all four methods the error grows again once $h$ becomes small, and this growth is of first order in $1/h$:
$$
E_i = \vert D_i u(\bar{x}) - u'(\bar{x}) \vert \approx C h^k + \frac{\varepsilon}{h},
$$
where $k$ is the order of the method and $\varepsilon$ is the absolute rounding error incurred in computing the difference of function values $\Big(\text{e.g. } \sin(1+h)-\sin(1) \text{ in the forward difference method}\Big)$.
When $h \to 0$, $\frac{\varepsilon}{h} \to \infty$. That is, the rounding error shows up and dominates once $h$ becomes very small, and it grows at first order in $1/h$.
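Under the model $E \approx C h^k + \varepsilon/h$, the optimal step for a first-order method sits near $h^{*} = \sqrt{\varepsilon/C}$, about $10^{-8}$ in double precision; a short sketch locates the empirical minimum for $D_+$:
```
import math

def forward_error(h, x=1.0):
    """Error of the forward difference D+ for u = sin at x."""
    return abs((math.sin(x + h) - math.sin(x)) / h - math.cos(x))

# Scan h = 1e-1, ..., 1e-16 and locate the step with the smallest error.
hs = [10.0**-i for i in range(1, 17)]
best_h = min(hs, key=forward_error)
print(best_h)  # typically near 1e-8, i.e. about sqrt(machine epsilon)
```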
Julia code:
```
using Plots

E = zeros(16, 4);      # errors of D+, D-, D0, D3 for h = 10^-1, ..., 10^-16
logE = zeros(16, 4);
for i in 1:16
    E[i,1] = abs(sin(1+10.0^-i)*10^i - sin(1)*10^i - cos(1));
    E[i,2] = abs(sin(1)*10.0^i - sin(1-10.0^-i)*10^i - cos(1));
    E[i,3] = abs(sin(1+10.0^-i)*10^i/2 - sin(1-10.0^-i)*10^i/2 - cos(1));
    E[i,4] = abs(sin(1+10.0^-i)*10^i/3 + sin(1)*10.0^i/2 - sin(1-10.0^-i)*10^i + sin(1-2*10.0^-i)*10^i/6 - cos(1));
    logE[i,:] = log10.(E[i,:]);
end
test = [-2.5 -6; -4.5 -9; -6.5 -12]    # reference lines of slope 2 and 3
test2 = [-1;-2;-3;-4;-5;-6;-7;-8;-7;-6;-5;-4;-3;-2;-1;0]  # first-order reference, including the 1/h rounding branch
x = 1:16;
y = 1:3;
plot(x, logE[:,1], label="E+", marker=(:circle,8), legend=:bottomright)
plot!(x, logE[:,2:4], label=["E-" "E0" "E3"], marker=:hexagon)
xlabel!("-log10(h)")
ylabel!("log10(E)")
plot!(x, test2, label="First order", marker=:utriangle)
plot!(y, test, label=["Second order" "Third order"], marker=:utriangle)
```
---
### A general approach to deriving the coefficients: Method of undetermined coefficients
#### Exercise 2: fdcoeffV.m
This code is
```
% input:  x    - row vector of points where u is evaluated
%         k    - order of the derivative to approximate
%         xbar - point at which the derivative is approximated
% output: c    - row vector of coefficients of the approximation to
%                u^(k)(xbar), found by the method of undetermined coefficients
n = length(x);
if k >= n
  error('*** length(x) must be larger than k')
end
A = ones(n,n);
xrow = (x(:)-xbar)';      % displacements x-xbar as a row vector
for i=2:n
  A(i,:) = (xrow .^ (i-1)) ./ factorial(i-1);
end
b = zeros(n,1);           % b is the right-hand side,
b(k+1) = 1;               % so the k'th derivative term remains
c = A\b;                  % solve the n-by-n system for the coefficients
c = c';                   % return a row vector
```
In MATLAB, indexing starts from 1, so if ```x = [1,2,3]``` then ```x(1)``` is ```1```.
Also, the following lists the operations used in this code:
(1) ```x'``` means $x^T$
(2) ```x.^n``` means $\begin{pmatrix} x_1^n& x_2^n & ... & x_n^n\end{pmatrix}$ i.e. pointwise operation
(3) ```x(:)``` reshapes ```x``` (a ```1xn``` matrix) into a column vector
(4) if ```x``` is a vector and ```xbar``` a scalar, then ```x-xbar``` is interpreted as $x-\text{xbar}\,\vec{1}$
So the code is just solving the equation
$$ \begin{pmatrix}1&1&...&1\\ (x_1-\bar{x}) & (x_2-\bar{x}) &...&(x_n-\bar{x}) \\ (x_1-\bar{x})^2/2! & (x_2-\bar{x})^2/2! &...&(x_n-\bar{x})^2/2! \\ ... \\ (x_1-\bar{x})^{k-1}/(k-1)! & (x_2-\bar{x})^{k-1}/(k-1)! &...&(x_n-\bar{x})^{k-1}/(k-1)!\\ ... \\ (x_1-\bar{x})^{n-1}/(n-1)! & (x_2-\bar{x})^{n-1}/(n-1)! &...&(x_n-\bar{x})^{n-1}/(n-1)! \end{pmatrix}c=\begin{pmatrix} 0\\0\\0 \\ ... \\ 1 \\ ... \\ 0\end{pmatrix}$$
where $A$ is constructed row by row in the ```for i=2:n``` loop. Also notice that in $b$ the sole ```1``` sits at position ```k+1```.
The result ```A\b``` is a column vector of length $n$, so the final ```c = c'``` transposes it into a row vector.
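For reference, the same construction can be sketched in Python/NumPy (the function name `fdcoeffV` and the zero-based index `b[k]` are this sketch's own conventions, mirroring MATLAB's `b(k+1) = 1`):
```
import numpy as np
from math import factorial

def fdcoeffV(k, xbar, x):
    # Build the same Vandermonde-type system as fdcoeffV.m: row i holds
    # (x - xbar)^i / i!, and the right-hand side picks out the k-th
    # derivative term.
    x = np.asarray(x, dtype=float)
    n = len(x)
    if k >= n:
        raise ValueError("length(x) must be larger than k")
    A = np.ones((n, n))
    xrow = x - xbar
    for i in range(1, n):
        A[i, :] = xrow**i / factorial(i)
    b = np.zeros(n)
    b[k] = 1.0
    return np.linalg.solve(A, b)

# Centered first-derivative stencil on {-1, 0, 1}:
print(fdcoeffV(1, 0.0, [-1.0, 0.0, 1.0]))  # -> [-0.5, 0, 0.5]
```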
---
#### Exercise 3
1. Use the method of undetermined coefficients to set up the $5 \times 5$ Vandermonde system that would determine a fourth-order accurate finite difference approximation to $u''(x)$ based on 5 equally spaced points,
$$
u''(x) = c_{-2} u(x-2h) + c_{-1} u(x-h) + c_0u(x) + c_1u(x+h) +
c_2u(x+2h) + O(h^4).
$$
++Ans++:
For each $i=-2,\dots,2$, the Taylor expansion of $u$ at $x+ih$ about $x$ is
$$
u(x+ih) = \sum_{k=0}^{\infty} \frac{(ih)^k}{k!}u^{(k)}(x).
$$
The approximation $D_x(h) \triangleq \sum\limits_{i=-2}^2 c_i u(x+ih)$ can be written as
\begin{align}
\sum_{i=-2}^2 c_i u(x+ih)
&= \sum_{i=-2}^2 c_i \sum_{k=0}^{\infty} \frac{(ih)^k}{k!}u^{(k)}(x)\\
&= \sum_{k=0}^{\infty}\left( \sum_{i=-2}^2 c_i \frac{(ih)^k}{k!} \right)u^{(k)}(x) \\
&= u(x)\Big(\sum_i c_i\Big) + u'(x)\Big(\sum_i c_i i\Big)h + u''(x)\Big(\sum_i c_i\frac{i^2}{2}\Big)h^2\\
&\quad + u^{(3)}(x)\Big(\sum_i c_i\frac{i^3}{3!}\Big)h^3 + u^{(4)}(x)\Big(\sum_i c_i\frac{i^4}{4!}\Big)h^4+\dots
\end{align}
To make $D_x(h)$ a fourth-order accurate finite difference approximation to $u''(x)$, we impose the constraints
\begin{align}
&\sum_{i=-2}^2 c_i = \sum_{i=-2}^2 c_i i = \sum_{i=-2}^2 c_i i^3 = 0,\\
&\sum_{i=-2}^2 c_i i^2 = \frac{2}{h^2},\\
&\sum_{i=-2}^2 c_i i^4 = a \in \mathbf{R}.
\end{align}
Note that the last moment need not vanish: as long as $a$ is a constant independent of $h$, the corresponding term $u^{(4)}(x)\,a\,h^4/4!$ is already $O(h^4)$. These constraints are equivalent to
$$
\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
-2 & -1 & 0 & 1 & 2\\
(-2)^2 & (-1)^2 & 0 & 1^2 & 2^2\\
(-2)^3 & (-1)^3 & 0 & 1^3 & 2^3\\ (-2)^4 & (-1)^4 & 0 & 1^4 & 2^4\\
\end{pmatrix}
\begin{pmatrix}
c_{-2} \\ c_{-1} \\ c_0 \\ c_1 \\ c_2
\end{pmatrix}=
\begin{pmatrix}
0\\0\\\frac{2}{h^2}\\0\\a
\end{pmatrix},\quad \forall a \in \mathbf{R}
$$
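Before varying $a$, the system can be solved directly for a fixed $h$; a minimal NumPy sketch with $a = 0$ recovers the classical fourth-order five-point stencil:
```
import numpy as np

# Solve the 5x5 system above with a = 0 and a fixed h.  The solution,
# rescaled by h^2, should be the standard fourth-order stencil.
h = 0.1
i = np.array([-2, -1, 0, 1, 2], dtype=float)
V = np.vander(i, 5, increasing=True).T   # rows are i^0, i^1, ..., i^4
b = np.array([0.0, 0.0, 2.0/h**2, 0.0, 0.0])
c = np.linalg.solve(V, b)
print(c * h**2)  # -> approximately [-1/12, 4/3, -5/2, 4/3, -1/12]
```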
To verify the results, consider the case that $u(x)=\sin x$ and $x=1$. Then our goal is to estimate the truncation error $|D(h)-u''(1)| = |\sum_{i=-2}^{2} c_i \sin(1+ih)+\sin(1)|$. The parameter $a$ ranges from $10^{-5}$ to $10^5$.
The corresponding Matlab code is
```
v = [-2,-1,0,1,2];
V = [v.^0; v.^1; v.^2; v.^3; v.^4];
Exp = 0:15;
A = 10.^(-5:5);                      % values of the free parameter a
for j = 1:length(A)
    a = A(j);
    err = zeros(1,16);
    for i = 1:length(Exp)
        h = 10^(-Exp(i));
        b = [0; 0; 2/h^2; 0; a];
        coefs = V\b;
        u = sin([1-2*h; 1-h; 1; 1+h; 1+2*h]);
        Du = dot(coefs, u);
        err(i) = abs(Du + sin(1));   % u''(1) = -sin(1)
    end
    plot(Exp, log10(err))
    hold on
end
```
The figure below plots the curves of $|D(h)-u''(1)|$ for the different values of the parameter $a$.

As the figure shows, the truncation error is $O(h^4)$ for every choice of $a$.
2. Compute the coefficients using the matlab code **fdstencil.m** available from the website, and check that they satisfy the system you determined in part (a).
++Ans++:
**fdstencil($k,j$)** computes the stencil coefficients for a finite difference approximation of the $k$-th derivative on a uniform grid, and prints the stencil together with the dominant terms of the truncation error.
$j$ is a vector of grid-point indices: the values $u(x_0 + j(i)\,h)$, $i = 1,\dots,\text{length}(j)$, are used, where $x_0$ is an arbitrary grid point and $h$ the mesh spacing.
Since we want to compute the coefficients of 2nd order derivative using points $u(x-2h), u(x-h),u(x),u(x+h),u(x+2h)$, we set $k=2, j=-2:2$.
The result of **fdstencil(2,-2:2)** is
```
The derivative u^(2) of u at x0 is approximated by
1/h^2 * [
-8.333333333333333e-02 * u(x0-2*h) +
1.333333333333333e+00 * u(x0-1*h) +
-2.500000000000000e+00 * u(x0) +
1.333333333333333e+00 * u(x0+1*h) +
-8.333333333333333e-02 * u(x0+2*h) ]
For smooth u,
Error = 0 * h^3*u^(5) + -0.0111111 * h^4*u^(6) + ...
```
Since
$$
\frac{1}{h^2}\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
-2h & -h & 0 & h & 2h\\
(-2h)^2 & (-h)^2 & 0 & h^2 & (2h)^2\\
(-2h)^3 & (-h)^3 & 0 & h^3 & (2h)^3\\ (-2h)^4 & (-h)^4 & 0 & h^4 & (2h)^4\\
\end{pmatrix}
\begin{pmatrix}
-8.333333333333333 \times 10^{-2} \\ 1.333333333333333 \\ -2.500000000000000 \\ 1.333333333333333 \\ -8.333333333333333 \times 10^{-2}
\end{pmatrix}=
\begin{pmatrix}
0\\0\\2\\0\\0
\end{pmatrix}
$$
the coefficients computed by **fdstencil(2,-2:2)** satisfy the system in part (a) (with $a = 0$).
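The same check can be scripted; a small NumPy sketch multiplies the dimensionless moment matrix by the printed coefficients (which equal $h^2 c_i$, so the right-hand side reduces to $(0,0,2,0,0)$):
```
import numpy as np

# The printed stencil values are h^2 * c_i, so the part-(a) system with
# these values reduces to dimensionless moments: sum over i of c_i * i^m
# must be (0, 0, 2, 0, 0) for m = 0, ..., 4.
i = np.array([-2, -1, 0, 1, 2], dtype=float)
V = np.vander(i, 5, increasing=True).T        # rows are i^0, ..., i^4
c = np.array([-1/12, 4/3, -5/2, 4/3, -1/12])  # fdstencil(2,-2:2) output
print(V @ c)  # -> approximately [0, 0, 2, 0, 0]
```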
3. Test this finite difference formula to approximate $u''(1)$ for $u(x) = \sin(2x)$ with values of $h$ from the array **hvals = logspace(-1, -4, 13)**. Make a table of the error vs. $h$ for several values of $h$ and compare against the predicted error from the leading term of the expression printed by **fdstencil**. You may want to look at the m-file **chap1example1.m** for guidance on how to make such a table.
Also produce a log-log plot of the absolute value of the error vs. $h$.
You should observe the predicted accuracy for larger values of $h$. For smaller values, numerical cancellation in computing the linear combination of $u$ values impacts the accuracy observed.
$$Err_c=\left|\frac{u(x-h) -2u(x) +u(x+h)}{h^2}-u''(x)\right|$$
$$Err_v=\left|c_{-2} u(x-2h) + c_{-1} u(x-h) + c_0u(x) + c_1u(x+h) + c_2u(x+2h)-u''(x)\right|$$
$Err_{predict}$ is the predicted error from the leading term printed by **fdstencil.m**.
| $h$ | $Err_c$ | $Err_v$ | $Err_{predict}$ |
| ---------- |:-------------- |:---------- |:--------------- |
| $0.100000$ | $1.210781e-02$ | $6.443065e-05$ | $6.466109e-05$ |
| $0.056234$ | $3.832318e-03$ | $6.458817e-06$ | $6.466109e-06$ |
| $0.031623$ | $1.212235e-03$ | $6.463808e-07$ | $6.466109e-07$ |
| $0.017783$ | $3.833773e-04$ | $6.465302e-08$ | $6.466109e-08$ |
| $0.010000$ | $1.212380e-04$ | $6.462467e-09$ | $6.466109e-09$ |
| $0.005623$ | $3.833919e-05$ | $6.322654e-10$ | $6.466109e-10$ |
| $0.003162$ | $1.212397e-05$ | $6.314194e-11$ | $6.466109e-11$ |
| $0.001778$ | $3.833901e-06$ | $7.053202e-11$ | $6.466109e-12$ |
| $0.001000$ | $1.212471e-06$ | $3.365388e-10$ | $6.466109e-13$ |
| $0.000562$ | $3.837344e-07$ | $3.507674e-10$ | $6.466109e-14$ |
| $0.000316$ | $1.200116e-07$ | $6.553787e-09$ | $6.466109e-15$ |
| $0.000178$ | $3.616190e-08$ | $5.968102e-09$ | $6.466109e-16$ |
| $0.000100$ | $2.327996e-09$ | $2.327996e-09$ | $6.466109e-17$ |

Python code:
```
import numpy as np
import pandas as pd

x = 1
h = np.logspace(-1, -4, 13)
u = np.sin(2*x)        # the function value u(1)
utr = -4*np.sin(2*x)   # exact value of u''(1)
Err = []
# coefficients from fdstencil.m
c = [-0.083333333333333, 1.333333333333333, -2.500000000000000, 1.333333333333333, -0.083333333333333]
Err_predict = -0.0111111 * h**4 * (-64*np.sin(2*x))   # leading error term; u^(6)(x) = -64 sin(2x)
for i in range(len(h)):
    Df_c = (np.sin(2*(x+h[i])) + np.sin(2*(x-h[i])) - 2*u) / h[i]**2
    us = [np.sin(2*(x-2*h[i])), np.sin(2*(x-h[i])), np.sin(2*x), np.sin(2*(x+h[i])), np.sin(2*(x+2*h[i]))]
    Df_v = np.dot(c, us) / h[i]**2
    Err.append([h[i], np.abs(Df_c-utr), np.abs(Df_v-utr), np.abs(Err_predict[i])])
result = pd.DataFrame(Err, columns=["h", "Err_c", "Err_v", "Err_predict"])  # assign column labels
print(result)
```
```
import matplotlib.pyplot as plt

E_c = []
E_v = []
hvals = []
for i in range(len(h)):
    E_c.append(Err[i][1])
    E_v.append(Err[i][2])
    hvals.append(h[i])
plt.plot(np.log10(hvals), np.log10(E_c), label='Err_c')
plt.plot(np.log10(hvals), np.log10(E_v), 'o-', label='Err_v')
plt.plot(np.log10(hvals), np.log10(Err_predict), label='Err_predict')
plt.plot(np.log10(hvals), np.log10(np.array(hvals)**2)+1, label='h^2')  # reference slope 2
plt.plot(np.log10(hvals), np.log10(np.array(hvals)**4)+1, label='h^4')  # reference slope 4
plt.xlabel("log(h)")
plt.ylabel("log(Error)")
plt.legend()
plt.savefig("loglog.png")
```