---
title: 2. Boundary value problems
tags: Finite difference method
---

# Boundary value problems

**Textbook**

1. Finite Difference Methods for Ordinary and Partial Differential Equations by *Randall J. LeVeque*
    * [exercises and m-files](http://faculty.washington.edu/rjl/fdmbook/chapter2.html)

---

### Exercise 1

Consider the boundary value problem:
$$
u'' = e^{\sin(x)}, \quad u'(0)=1, \quad u(1)=0.
$$
Use methods 1, 2 and 3 to solve the problem and check the accuracy of your solutions.

**Solution.**

#### Method 1

First divide the interval $[0,1]$ into $N+1$ pieces, i.e. insert $N$ points in $(0,1)$.

* Partition: $x_0=0,\ x_{N+1}=1,\ x_i=ih$ with $h=\dfrac{1}{N+1}$ and $F_i=e^{\sin(x_i)}$, $i=1,\dots,N$. $U_{N+1}=u(x_{N+1})=u(1)=0.$
* Unknowns: $U_i=u(x_i)$ for $i=0,1,\dots,N$.

Since we do not know the true value of $u(0)$, we use the derivative condition $u'(0)$ instead:
$u'(0)\approx\dfrac{U_1-U_0}{h}=\sigma\quad\Rightarrow\quad \dfrac{1}{h^2}(-hU_0+hU_1)=\sigma$.

The corresponding system is
$$
\frac{1}{h^2}
\begin{pmatrix}
-h & h & & & \\
1 & -2 & 1 & & \\
 & 1 & -2 & 1 & \\
 & & & \ddots & \\
 & & & 1 & -2\\
\end{pmatrix}
\begin{pmatrix}
U_0 \\ U_1 \\ \vdots \\ \vdots \\ U_N
\end{pmatrix}=
\begin{pmatrix}
\sigma\\F_1\\ F_2\\ \vdots \\F_N-\frac{\beta}{h^2}
\end{pmatrix}.
$$
However, a non-symmetric tridiagonal matrix is not guaranteed to have real eigenvalues. Since $\dfrac{1}{h^2}(-hU_0+hU_1)=\sigma$, we use the equivalent row $\dfrac{1}{h^2}(-U_0+U_1)=\dfrac{\sigma}{h}$ instead. Then the system becomes
$$\frac{1}{h^2}
\begin{pmatrix}
-1 & 1 & & & \\
1 & -2 & 1 & & \\
 & 1 & -2 & 1 & \\
 & & & \ddots & \\
 & & & 1 & -2\\
\end{pmatrix}
\begin{pmatrix}
U_0 \\ U_1 \\ \vdots \\ \vdots \\ U_N
\end{pmatrix}=
\begin{pmatrix}
1/h\\F_1\\ F_2\\ \vdots \\F_N-\frac{\beta}{h^2}
\end{pmatrix}.
$$
In this exercise, $\sigma=1$ and $\beta=0$. To solve this system, we use the Thomas algorithm.

Code (C++):
```C++
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

const double SMALL = 1.0E-30;

// right-hand side f(x) = exp(sin(x))
long double f(long double input){
    long double output;
    output = sinl(input);
    output = expl(output);
    return output;
}

bool tdma(
    const vector<double> &a,
    const vector<double> &b,
    const vector<double> &c,
    const vector<double> &d,
    vector<double> &X )
/*
   Solve a tri-diagonal system by the Thomas Algorithm (TDMA):
      a_i X_{i-1} + b_i X_i + c_i X_{i+1} = d_i,   i = 0, ..., n-1.
   Since this is an n x n matrix equation, a[i], b[i], c[i] are the
   non-zero diagonals of the matrix and d[i] is the RHS,
   so a[0] and c[n-1] aren't used.
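   Forward sweep:   P[i] = -c[i] / (b[i] + a[i]*P[i-1]),
                    Q[i] = (d[i] - a[i]*Q[i-1]) / (b[i] + a[i]*P[i-1]);
   backward sweep:  X[i] = P[i]*X[i+1] + Q[i], starting from X[n-1] = Q[n-1].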
   Code reference: http://www.cplusplus.com/forum/beginner/247330/
*/
{
    int n = d.size();
    vector<double> P( n, 0 );
    vector<double> Q( n, 0 );
    X = P;

    // Forward pass
    int i = 0;
    double denominator = b[i];
    P[i] = -c[i] / denominator;
    Q[i] = d[i] / denominator;
    for ( i = 1; i < n; i++ )
    {
        denominator = b[i] + a[i] * P[i-1];
        if ( abs( denominator ) < SMALL ) return false;
        P[i] = -c[i] / denominator;
        Q[i] = ( d[i] - a[i] * Q[i-1] ) / denominator;
    }

    // Backward pass
    X[n-1] = Q[n-1];
    for ( i = n - 2; i >= 0; i-- ) X[i] = P[i] * X[i+1] + Q[i];

    return true;
}

// Output the vector
void Peek(const vector<double> &v){
    for(int i = 0; i < (int)v.size(); i++){
        if(i < (int)v.size() - 1){
            cout << i << " " << v[i] << endl;
        }
        else cout << i << " " << v[i];
    }
}

int main(void)
{
    cout.precision(10);                          // print more digits

    int N;
    cout << "Please enter N (an integer):";
    cin >> N;

    long double h = 1.0/(N+1);

    vector<double> x(N+1, 0.0);                  // grid points x_0, ..., x_N (x_{N+1} = 1 carries the known value beta = 0)
    vector<double> F(N+1, 0.0);                  // right-hand side
    vector<double> U(N+1, 0.0);                  // unknowns U_0, ..., U_N

    for(int i = 0; i <= N; i++){                 // divide [0,1] into N+1 pieces: x_i = i*h
        x[i] = i*h;
    }
    for(int i = 0; i <= N; i++){
        F[i] = f(x[i]);
    }
    F[0] = 1.0/h;                                // boundary row: sigma/h with sigma = 1 (beta = 0 needs no correction)

    vector<double> MainDiag(N+1, -2.0);          // creating the tridiagonal matrix;
    vector<double> LowDiag(N+1, 1.0);            // tdma expects all four vectors of length N+1,
    vector<double> HighDiag(N+1, 1.0);           // with LowDiag[0] and HighDiag[N] unused
    MainDiag[0] = -1.0;

    tdma(LowDiag, MainDiag, HighDiag, F, U);     // solve the tridiagonal system by the Thomas algorithm

    vector<double> U_star(N+1, 0.0);
    for(int i = 0; i < N+1; i++){                // the LHS of the system carries a factor 1/h^2,
        U_star[i] = pow(h, 2.0) * U[i];          // so let U*[i] = h^2 * U[i]
    }

    cout << "U_star = [" << endl;
    Peek(U_star);
    cout << endl << " ]" << endl;
}
```

Output: the solution vector (printed as `U_star[i]`).

Plot: set $N=100,200,400,800$.

![Method1](https://i.imgur.com/ldakEah.png)

Moreover, we can estimate the order of accuracy from the refinement ratio
$$R=\dfrac{||U_{h/2}-U_{h/4}||}{||U_h-U_{h/2}||}\approx\dfrac{C(\frac{h}{2})^p-C(\frac{h}{4})^p}{Ch^p-C(\frac{h}{2})^p}\longrightarrow\dfrac{1}{2^p}$$
if the error is $O(h^p)$.

Now we calculate $R$ using $N=100,200$ and $400$ in the 1-norm, 2-norm and $\infty$-norm.

$R_1=\dfrac{||U_{h/2}-U_{h/4}||_1}{||U_h-U_{h/2}||_1}=\dfrac{||U_{N=200}-U_{N=400}||_1}{||U_{N=100}-U_{N=200}||_1}=0.5071579,$

$R_2=\dfrac{||U_{h/2}-U_{h/4}||_2}{||U_h-U_{h/2}||_2}=\dfrac{||U_{N=200}-U_{N=400}||_2}{||U_{N=100}-U_{N=200}||_2}=0.5071939,$

$R_{\infty}=\dfrac{||U_{h/2}-U_{h/4}||_{\infty}}{||U_h-U_{h/2}||_{\infty}} =\dfrac{||U_{N=200}-U_{N=400}||_{\infty}}{||U_{N=100}-U_{N=200}||_{\infty}} =0.5077866$.

Since $R\approx0.5=1/2$, the error of method 1 is $O(h)$.

@RyanHsieh: the step sizes $h$ for $N=100$ and $N=200$ do not seem to differ by exactly a factor of two?

#### Method 2 (__Ghost Point Method__)

* Partition: $x_0=0,\ x_{N+1}=1,\ x_i=ih$ with $h=\dfrac{1}{N+1}$ and $F_i=e^{\sin(x_i)}$, $i=1,\dots,N$. $U_{N+1}=u(x_{N+1})=u(1)=0.$
* Unknowns: $U_i=u(x_i)$ for $i=0,1,\dots,N$.
* Introduce a **ghost point** $U_{-1}$ so that we can approximate $u'(0)$ more accurately. Since $u'(0)=\sigma\approx\dfrac{U_1-U_{-1}}{2h}$, we have $U_{-1}=U_1-2h\sigma$.
Now $u''=f\quad\Rightarrow\quad\dfrac{U_1-2U_0+U_{-1}}{h^2}=F_0.$ Substituting $U_{-1}=U_1-2h\sigma$ into this equation gives
$$\dfrac{U_1-2U_0+(U_1-2h\sigma)}{h^2}=F_0\quad\Rightarrow\quad\dfrac{2U_1-2U_0}{h^2}=F_0+\dfrac{2\sigma}{h}\quad\Rightarrow\quad \dfrac{1}{h^2}(-U_0+U_1)=\dfrac{F_0}{2}+\dfrac{\sigma}{h}.$$
Hence the system becomes
$$\frac{1}{h^2}
\begin{pmatrix}
-1 & 1 & & & \\
1 & -2 & 1 & & \\
 & 1 & -2 & 1 & \\
 & & & \ddots & \\
 & & & 1 & -2\\
\end{pmatrix}
\begin{pmatrix}
U_0 \\ U_1 \\ \vdots \\ \vdots \\ U_N
\end{pmatrix}=
\begin{pmatrix}
\dfrac{F_0}{2}+\dfrac{\sigma}{h}\\F_1\\ F_2\\ \vdots \\F_N-\dfrac{\beta}{h^2}
\end{pmatrix}.
$$
In this exercise, $\sigma=1$ and $\beta=0$. The code is the same as that of Method 1, except that

```C++
F[0] = F[0]/2 + 1.0/h;
```

Output: the vector `U[i]`.

Plot: set $N=100,200,400,800$.

![Method2](https://imgur.com/BrpYTn0.png)

Now we calculate $R$ using $N=100,200$ and $400$ in the 1-norm, 2-norm and $\infty$-norm.

$R_1=\dfrac{||U_{h/2}-U_{h/4}||_1}{||U_h-U_{h/2}||_1}=0.5073046,$

$R_2=\dfrac{||U_{h/2}-U_{h/4}||_2}{||U_h-U_{h/2}||_2}=0.5072257,$

$R_{\infty}=\dfrac{||U_{h/2}-U_{h/4}||_{\infty}}{||U_h-U_{h/2}||_{\infty}}=0.5070567$.

Since $R\approx0.5=1/2$, the error of method 2 is $O(h)$.

#### Method 3 (__Staggered Grid__)

Set $x_i=(i-\tfrac{1}{2})h$, so $x_0=-\tfrac{h}{2},\ x_1=\tfrac{h}{2},\ x_{N+1}=(N+\tfrac{1}{2})h=1$ with $h=\dfrac{1}{N+1/2}$.

Now $U_0$ becomes the ghost point: since $\dfrac{U_1-U_0}{h}=\sigma$, we have $U_0=U_1-\sigma h$. The equation at $x_1$ reads $u''(x_1)\approx\dfrac{U_2-2U_1+U_0}{h^2}=F_1$, and substituting $U_0=U_1-\sigma h$ gives
$$\dfrac{U_2-U_1}{h^2}=F_1+\dfrac{\sigma}{h}.$$
Hence the system becomes
$$\frac{1}{h^2}
\begin{pmatrix}
-1 & 1 & & & \\
1 & -2 & 1 & & \\
 & 1 & -2 & 1 & \\
 & & & \ddots & \\
 & & & 1 & -2\\
\end{pmatrix}
\begin{pmatrix}
U_1 \\ U_2 \\ \vdots \\ \vdots \\ U_N
\end{pmatrix}=
\begin{pmatrix}
F_1+\frac{\sigma}{h}\\F_2\\ \vdots\\ \vdots \\F_N-\frac{\beta}{h^2}
\end{pmatrix}.
$$
In this exercise, $\sigma=1$ and $\beta=0$.

Code:
```C++
int main(void)
{
    cout.precision(10);                          // print more digits

    int N;
    cout << "Please enter N (an integer):";
    cin >> N;

    long double h = 1.0/(N+0.5);

    vector<double> x(N+2, 0.0);                  // staggered grid points x_0, ..., x_{N+1}
    vector<double> F(N+2, 0.0);                  // F_i = f(x_i)
    vector<double> U(N+1, 0.0);                  // U_1, ..., U_N (U[0] is the ghost point, left at 0)
    vector<double> G(N, 0.0);                    // right-hand side G = (F_1 + sigma/h, F_2, ..., F_N)
    vector<double> T(N, 0.0);                    // solution of the scaled system A*T = G

    for(int i = 0; i <= N+1; i++){               // staggered grid: x_i = (i - 1/2)h, so x_{N+1} = 1
        x[i] = (i-0.5)*h;
    }
    for(int i = 0; i <= N+1; i++){
        F[i] = f(x[i]);
    }
    F[1] = F[1] + 1.0/h;                         // boundary row: F_1 + sigma/h with sigma = 1

    for(int i = 0; i < N; i++){                  // shift so that G[0] corresponds to U_1
        G[i] = F[i+1];
    }

    vector<double> MainDiag(N, -2.0);            // creating the tridiagonal matrix;
    vector<double> LowDiag(N, 1.0);              // tdma expects all four vectors of length N,
    vector<double> HighDiag(N, 1.0);             // with LowDiag[0] and HighDiag[N-1] unused
    MainDiag[0] = -1.0;

    tdma(LowDiag, MainDiag, HighDiag, G, T);     // solve the tridiagonal system by the Thomas algorithm

    for(int i = 1; i < N+1; i++){                // the LHS of the system carries a factor 1/h^2,
        U[i] = pow(h, 2.0) * T[i-1];             // so U[i] = h^2 * T[i-1]
    }

    cout << "U = [" << endl;
    Peek(U);
    cout << endl << " ]" << endl;
}
```

Output: the vector `U[i]`.

Plot: set $N=100,200,400,800$.

![Method3](https://imgur.com/tIsIsLW.png)

Now we calculate $R$ using $N=100,200$ and $400$ in the 1-norm, 2-norm and $\infty$-norm.

$R_1=\dfrac{||U_{h/2}-U_{h/4}||_1}{||U_h-U_{h/2}||_1}=0.5001927,$

$R_2=\dfrac{||U_{h/2}-U_{h/4}||_2}{||U_h-U_{h/2}||_2}=0.5007565,$

$R_{\infty}=\dfrac{||U_{h/2}-U_{h/4}||_{\infty}}{||U_h-U_{h/2}||_{\infty}}=0.5018442.$

Since $R\approx0.5=1/2$, the error of method 3 is $O(h)$.

@RyanHsieh: the step sizes $h$ for $N=100$ and $N=200$ do not seem to differ by a factor of two?
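To address the comments above about $h$ not being halved exactly, the refinement ratio $R$ can be recomputed on nested grids. Below is a minimal sketch (in Python with `numpy`, not the C++ program above; the helper name `solve_method1` is an illustrative choice) that rebuilds the Method 1 system with $N+1=100,200,400$ cells, so that $h$ is exactly halved each time and the coarse nodes are a subset of the fine nodes.

```python
import numpy as np

def solve_method1(N, sigma=1.0, beta=0.0):
    """Method 1 system for u'' = exp(sin x), u'(0) = sigma, u(1) = beta."""
    h = 1.0 / (N + 1)
    x = np.arange(N + 1) * h                     # unknowns live at x_0, ..., x_N
    b = np.exp(np.sin(x))
    b[0] = sigma / h                             # one-sided boundary row
    b[-1] -= beta / h**2                         # Dirichlet value at x_{N+1} = 1
    A = (np.diag(-2.0 * np.ones(N + 1))
         + np.diag(np.ones(N), 1) + np.diag(np.ones(N), -1))
    A[0, 0] = -1.0                               # row for u'(0) ~ (U_1 - U_0)/h
    return np.linalg.solve(A / h**2, b)          # solve (1/h^2) A U = b

# nested grids: N + 1 = 100, 200, 400 cells, so h is exactly halved each time
U1, U2, U3 = solve_method1(99), solve_method1(199), solve_method1(399)

# restrict the finer solutions to the coarse nodes (every 2nd / 4th entry)
R = np.linalg.norm(U2[::2] - U3[::4], np.inf) / np.linalg.norm(U1 - U2[::2], np.inf)
print("R =", R, "  estimated order p =", -np.log2(R))
```

With these choices the coarse nodes $x_i=i/100$ coincide with every second node of the $N=199$ grid and every fourth node of the $N=399$ grid, so the difference norms compare values at the same locations.
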

---

### Exercise 2

Consider the boundary value problem:
$$
u'' = e^{\sin(x)}, \quad u'(0)=0, \quad u'(1)=1.631869608418051.
$$
Solve the problem and check the accuracy of your solutions.

ANS:

Code (Python):
```python
import numpy as np

# boundary conditions
sigma = 0
ksi = 1.631869608418051        # prescribed u'(1), used below to check the solution

# divide [0,1] into n = 100 pieces
n = 100
x = np.linspace(0, 1, n+1)
h = x[1] - x[0]                # grid size

# tridiagonal matrix
A = -2*np.eye(n+1)
for i in range(n):
    A[i][i+1] = 1
    A[i+1][i] = 1
A[0][0] = -1
A[n][n] = -2

# right-hand side
F = np.exp(np.sin(x))
F[0] = sigma/h
F[n] = F[n] - 0/h**2           # u(1) = 0 contributes nothing
F = F.reshape(n+1, 1)

A_inv = np.linalg.inv(A)
U = h**2 * A_inv.dot(F)

# plot
import matplotlib.pyplot as plt
plt.plot(x, U)
plt.ylabel('U')
plt.xlabel('x range is [0,1]')
plt.show()
```
Result:

![](https://i.imgur.com/VU8avVo.png)

This code solves Q2 with the same discretization as Method 1 of Q1. Since both boundary conditions are Neumann conditions, the solution is determined only up to an additive constant, so we fix $u(1)=0$ and afterwards check the computed $u'(1)$. The linear system is
$$
\frac{1}{h^2}
\begin{pmatrix}
-1 & 1 & 0 & \cdots & 0\\
1 & -2 & 1 & \cdots & 0\\
 & & \ddots & & \\
0 & \cdots & 1 & -2 & 1\\
0 & \cdots & 0 & 1 & -2
\end{pmatrix}U
=\begin{pmatrix} \frac{\sigma}{h}\\ F_1\\ \vdots \\ F_{N-1}\\ F_N-\frac{u(1)}{h^2}\end{pmatrix}.
$$
The result is shown above; the x-axis is the interval $[0,1]$ divided into 100 pieces and the y-axis is the vector $U$. Notice that the one-sided difference between the last two entries, $(U[101]-U[100])/h=1.62527284$ (1-based indexing), is close to $u'(1)=1.631869608418051$.

Discussion of convergence:

This figure shows $n=100$ (blue), $n=200$ (orange) and $n=400$ (green). There are still visible differences, so we multiply $n$ by 10 and compare again.

![](https://i.imgur.com/lR4mDvr.png)

This figure shows $n=1000$ (blue), $n=2000$ (orange) and $n=4000$ (green). The differences are very small and the curves almost coincide, so the method appears to converge.

![](https://i.imgur.com/5zQWpSE.png)

Discussion of the error:

* for $n_1=100$: $U_4[800]-U_3[400]=0.00204243$
* for $n_2=200$: $U_3[400]-U_2[200]=0.0041108$
* for $n_3=400$: $U_2[200]-U_1[100]=0.00828386$

where $U_i$ is the result with grid size $n=100,200,400,800$, respectively, and $h_i=1/n_i$.

In this figure the blue line is $U_2-U_1$, the orange line is $U_3-U_2$, and the green line is $U_4-U_3$.

![](https://i.imgur.com/ZkMKd01.png)

In this figure the blue line is $\frac{U_2-U_1}{U_3-U_2}$ and the orange line is $\frac{U_3-U_2}{U_4-U_3}$.

![](https://i.imgur.com/GgD7RC7.png)

Thus the truncation error is $O(h)$.

---

### Exercise 3

Consider the nonlinear boundary value problem:
$$
u'' + \sin(u) = 0, \quad u(0)=1, \quad u(1)=1.
$$
Solve the problem and check the accuracy of your solutions.

ANS:

![](https://i.imgur.com/Fk3JeVe.jpg)
![](https://i.imgur.com/4c24mKm.jpg)
![](https://i.imgur.com/diLM1na.jpg)

The discrete nonlinear system $G(U)=AU+h^2\sin(U)+g_b=0$, where $A$ is the usual tridiagonal matrix and $g_b$ carries the boundary values $u(0)=u(1)=1$, is solved by Newton's method with Jacobian $J(U)=A+h^2\,\mathrm{diag}(\cos(U))$:

```Matlab
clc
clear
format long

n = [8 16 32];
UU = zeros(7,3);

for i = 1:3
    N = n(i);
    h = 1/N;
    U = zeros(N-1,1);
    A = diag(ones(N-2,1),-1) - 2*diag(ones(N-1,1)) + diag(ones(N-2,1),1);

    % Newton iteration for A*U + h^2*sin(U) + g = 0
    TOL = 1e-16;
    Error = 1;
    while Error > TOL
        Uold = U;
        g = h^2*sin(U) + [1; zeros(N-3,1); 1];   % boundary values u(0) = u(1) = 1
        F = A*U + g;
        JF = A + h^2*diag(cos(U));               % Jacobian
        U = U - JF\F;
        Error = norm(U-Uold,inf);
    end

    U2 = [1; U; 1];
    if i == 1
        UU(:,i) = U2(1+(1:7));                   % sample at x = 1/8, ..., 7/8
        plot(0:h:1,U2,'r'); hold on;
    elseif i == 2
        UU(:,i) = U2(1+(1:7)*2);
        plot(0:h:1,U2,'bo-'); hold on;
    elseif i == 3
        UU(:,i) = U2(1+(1:7)*4);
        plot(0:h:1,U2,'k*-'); hold off;
    end
    legend('h = 1/8','h = 1/16','h = 1/32')
    xlabel('x')
    ylabel('U')
end

E_1 = norm(UU(:,2)-UU(:,3),1)/norm(UU(:,1)-UU(:,2),1)
E_2 = norm(UU(:,2)-UU(:,3),2)/norm(UU(:,1)-UU(:,2),2)
E_inf = norm(UU(:,2)-UU(:,3),inf)/norm(UU(:,1)-UU(:,2),inf)
```

![](https://i.imgur.com/25XpOAi.jpg)

---

### Exercise 4

Consider the linear boundary value problem:
$$
\epsilon u'' + (1+\epsilon) u' + u = 0, \quad u(0)=0, \quad u(1)=1.
$$
Solve the problem and check the accuracy of your solutions. Choose $\epsilon=0.01$.

**Ans**

For $\epsilon=0.01$ the problem has the exact solution $u(x)=\dfrac{e^{-x}-e^{-100x}}{e^{-1}-e^{-100}}\approx e^{1-x}-e^{1-100x}$.

Let $A_2$ and $A_1$ be the difference matrices approximating the derivatives, $A_2u\approx u''$ and $A_1u\approx u'$, so that the discretization of
$$\epsilon u'' + (1+\epsilon) u' + u = 0$$
reads
$$(\epsilon A_{2} + (1+\epsilon) A_{1} + I)\,u = 0, \qquad\text{i.e.}\qquad Au = 0.$$
The right-hand side of $Au=0$ is zero (the boundary data has not yet been inserted), so we cannot solve for $u$ from it directly; instead we add $1$ to every component of $u$.
This gives
$$u_{\Delta} = u +\left[\begin{matrix} 1 \\ 1 \\ \vdots \\ 1 \end{matrix}\right].$$
We use $u_{\Delta}$ to solve the problem:
$$Au_{\Delta} = Au + A\left[\begin{matrix} 1 \\ 1 \\ \vdots \\ 1 \end{matrix}\right]
= \left[\begin{matrix} 1 \\ 1 \\ \vdots \\ 1 \end{matrix}\right],
\qquad\text{so}\qquad
u_{\Delta} = A^{-1}\left[\begin{matrix} 1 \\ 1 \\ \vdots \\ 1 \end{matrix}\right],$$
where $A = \epsilon A_{2} + (1+\epsilon) A_{1} + I$. (For the interior rows $A_2\mathbf{1}=A_1\mathbf{1}=0$, so $A\mathbf{1}=\mathbf{1}$; the first and last rows receive boundary corrections below.)

Now we look at $A_{1}$ and $A_{2}$.

For the first derivative we use the central difference
$$U'_{i} = \frac{U_{i+1}-U_{i-1}}{2h}.$$
The unknowns are the interior values $U_2,\dots,U_N$; the boundary values $U_{1}$ and $U_{N+1}$ are known, so at $i=2$ and $i=N$ the stencil reaches the boundary and those terms move to the right-hand side. The matrix $A_{1}$ is
$$\frac{1}{2h}\left[\begin{matrix} 0 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 0 \\ \end{matrix} \right].$$
Here $N$ is the number of mesh cells and $h=1/N$ is the mesh width.

For the second derivative we use the central difference
$$U''_{i} = \frac{U_{i+1}-2U_{i}+U_{i-1}}{h^{2}},$$
and again the stencils at $i=2$ and $i=N$ involve the boundary values $U_{1}$ and $U_{N+1}$. The matrix $A_{2}$ is
$$\frac{1}{h^{2}}\left[\begin{matrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \\ \end{matrix} \right].$$
A second-order boundary value problem needs boundary values to determine the solution, so we impose the values given in the problem, $U_{1} = u(0) = 0$ and $U_{N+1} = u(1) = 1$.
$$Au_{\Delta} = \left[\begin{matrix} 1 \\ 1 \\ \vdots \\ 1 \end{matrix}\right]$$

Written out,
$$[\epsilon A_{2} + (1+\epsilon) A_{1} + I]\,u_{\Delta} = \left[\begin{matrix} f_{2} \\ f_{3} \\ f_{4} \\ \vdots \\ f_{N-1} \\ f_{N} \end{matrix}\right] = \left[\begin{matrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \\ 1 \end{matrix}\right],$$
that is,
$$\left\{ \epsilon \frac{1}{h^{2}} \left[\begin{matrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \\ \end{matrix} \right] + (1 + \epsilon) \frac{1}{2h} \left[\begin{matrix} 0 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 0 \\ \end{matrix} \right] + I\right\}u_{\Delta} = \left[\begin{matrix} f_{2} \\ f_{3} \\ f_{4} \\ \vdots \\ f_{N-1} \\ f_{N} \end{matrix}\right].$$

However, the first and the last equations involve the boundary values, so $f_{2}$ must include the boundary contribution $(1+\epsilon)\dfrac{U_{\Delta 1}}{2h}-\epsilon\dfrac{U_{\Delta 1}}{h^{2}}$ and $f_{N}$ must include $-(1+\epsilon)\dfrac{U_{\Delta N+1}}{2h}-\epsilon\dfrac{U_{\Delta N+1}}{h^{2}}$.

We can therefore compute
$$u_{\Delta}=A^{-1}\left[\begin{matrix} 1+(1+\epsilon)\frac{U_{\Delta 1}}{2h}-\epsilon\frac{U_{\Delta 1}}{h^{2}} \\ 1 \\ \vdots \\ 1 \\ 1-(1+\epsilon)\frac{U_{\Delta N+1}}{2h}-\epsilon\frac{U_{\Delta N+1}}{h^{2}} \end{matrix}\right]$$
and subtract $1$ from every component of $u_{\Delta}$ to recover $u_{2},\dots,u_{N}$.

Matlab implementation:

```Matlab
clear;

bound_head = 0;            % u(0)
bound_end  = 1;            % u(1)
epsilon = 0.01;
N_a = [1000 2000 4000 8000];

for i = 1:4
    N = N_a(i);
    a0 = ones(1,N-1);
    a1 = ones(1,N-2);

    u = zeros(N+1,1);
    u(1)   = bound_head;
    u(N+1) = bound_end;

    A1 = -diag(a1,-1) + diag(a1,1);                     % central difference for u'
    A2 =  diag(a1,-1) - 2*diag(a0) + diag(a1,1);        % central difference for u''
    h = 1/N;
    A = epsilon*h^-2*A2 + (1+epsilon)/2*h^-1*A1 + diag(a0);

    % right-hand side with boundary corrections on the first and last rows
    f = a0';
    f(1)   = a0(1)   + (1+epsilon)/2*h^-1*(bound_head+1) - epsilon*h^-2*(bound_head+1);
    f(N-1) = a0(N-1) - (1+epsilon)/2*h^-1*(bound_end+1)  - epsilon*h^-2*(bound_end+1);

    u_delta = A^-1*f;
    u(2:N) = u_delta - a0';                              % undo the shift by 1

    x = 0:h:1;
    U = exp(1-x) - exp(1-100*x);                         % exact solution
    error = U' - u;
    norm(error,2);

    hold on
    plot(x,u);
    hold off
end

xlabel('x');
ylabel('u(x)')
title('Compare different mesh');
legend('N=1000','N=2000','N=4000','N=8000');

figure;
plot(x,abs(error));
xlabel('x');
ylabel('|U-u|')
title('error of U and u in N=8000');
```

Plot of the numerical solution on different meshes:

![](https://i.imgur.com/LZNrND8.png)

Plot of the error of the numerical solution:

![](https://i.imgur.com/NTfgEHg.png)

---

### Exercise 5

Consider the linear boundary value problem:
$$
\epsilon u'' - u' = 0, \quad u(0)=0, \quad u(1)=1.
$$
Use a non-uniform grid to solve the problem and check the accuracy of your solutions. Choose $\epsilon=0.01$.
**Solution** (code in Julia)

We have the exact solution
$$u(x)=\frac{e^{x/\varepsilon}-1}{e^{1/\varepsilon}-1}.$$
We try different step sizes $h$ and divide the domain using two ghost points $x_0$ and $x_{N+1}$:
$$x=[x_0=-\tfrac{h}{2},\,x_1=\tfrac{h}{2},\,\cdots,\,x_N=1-\tfrac{h}{2},\,x_{N+1}=1+\tfrac{h}{2}]^T.$$

```julia=
using LinearAlgebra
using Plots
using LaTeXStrings

### eps*u''-u'=0, u(0)=0, u(1)=1

## parameters
epsilon = 0.01;
h = [0.01, 0.01/2, 0.01/4, 0.01/8];
N = [Int64(1/h[k]) for k = 1:length(h)];

## data points x and the exact solution for each h
x = zeros(length(h),N[length(N)]);
u = zeros(length(h),N[length(N)]);
for k = 1:length(h)
    # data points x (uniform grid)
    x[k,1:N[k]] = [h[k]/2:h[k]:1-h[k]/2;];
    # exact solution u(x) = (exp(x/eps)-1)/(exp(1/eps)-1)
    u[k,1:N[k]] = [(exp(x[k,i]/epsilon)-1)/(exp(1/epsilon)-1) for i in 1:N[k]];
end
```

Approximate $U_0$ and $U_{N+1}$:
$$
0 = u(0) \approx \frac{U_0+U_1}{2} \Rightarrow U_0 = -U_1 \\
1 = u(1) \approx \frac{U_N+U_{N+1}}{2} \Rightarrow U_{N+1} = 2-U_N
$$

**First attempt:** we want a second-order method, so we use the central difference to approximate the first derivative of $u$:
$$
\frac{\varepsilon}{h^2}(U_{i+1}-2U_i+U_{i-1})-\frac{1}{2h}(U_{i+1}-U_{i-1})=F_i=0
$$
For $i=1$,
$$
\frac{\varepsilon}{h^2}(U_2-2U_1+U_0)-\frac{1}{2h}(U_2-U_0)\\=\frac{\varepsilon}{h^2}(U_2-3U_1)-\frac{1}{2h}(U_2+U_1)=0
$$
For $i=N$,
$$
\frac{\varepsilon}{h^2}(U_{N+1}-2U_N+U_{N-1})-\frac{1}{2h}(U_{N+1}-U_{N-1})\\
=\frac{\varepsilon}{h^2}(2-3U_N+U_{N-1})-\frac{1}{2h}(2-U_N-U_{N-1})=0
$$
Thus, we get the following system $(A_c\,U_c=F_c)$:
$$
\begin{pmatrix}
\frac{-3\varepsilon}{h^2}-\frac{1}{2h} & \frac{\varepsilon}{h^2}-\frac{1}{2h} & 0 & \cdots & 0 \\
\frac{\varepsilon}{h^2}+\frac{1}{2h} & \frac{-2\varepsilon}{h^2} & \frac{\varepsilon}{h^2}-\frac{1}{2h} & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \frac{\varepsilon}{h^2}+\frac{1}{2h} & \frac{-2\varepsilon}{h^2} & \frac{\varepsilon}{h^2}-\frac{1}{2h} \\
0 & \cdots & 0 & \frac{\varepsilon}{h^2}+\frac{1}{2h} & \frac{-3\varepsilon}{h^2}+\frac{1}{2h} &
\end{pmatrix}U=
\begin{pmatrix}
0\\ 0\\ \vdots\\ 0\\ \frac{-2\varepsilon}{h^2}+\frac{1}{h}
\end{pmatrix}
$$
We solve the system by the Thomas algorithm.

```julia=
#### approximate u' by central difference
### Try different h
UU_c = zeros(length(h),N[length(N)]);
for k = 1:length(h)
    ## construct A and F
    du_c = repeat([epsilon/(h[k]*h[k])-1/(2*h[k])],N[k]-1);
    d_c = [-3*epsilon/(h[k]*h[k])-1/(2*h[k]);
           repeat([-2*epsilon/(h[k]*h[k])],N[k]-2);
           -3*epsilon/(h[k]*h[k])+1/(2*h[k]);];
    dl_c = repeat([epsilon/(h[k]*h[k])+1/(2*h[k])],N[k]-1);
    F_c = [zeros(N[k]-1,1); -2*epsilon/(h[k]*h[k])+1/h[k];];

    ## Using Thomas algorithm to solve AU=F
    # step 1
    du_c_new = zeros(N[k]-1);
    F_c_new = zeros(N[k]);
    du_c_new[1] = du_c[1]/d_c[1];
    F_c_new[1] = F_c[1]/d_c[1];
    for i = 2:N[k]-1
        du_c_new[i] = du_c[i]/(d_c[i]-dl_c[i-1]*du_c_new[i-1]);
        F_c_new[i] = (F_c[i]-dl_c[i-1]*F_c_new[i-1])/(d_c[i]-dl_c[i-1]*du_c_new[i-1]);
    end
    F_c_new[N[k]] = (F_c[N[k]]-dl_c[N[k]-1]*F_c_new[N[k]-1])/(d_c[N[k]]-dl_c[N[k]-1]*du_c_new[N[k]-1]);
    # step 2
    U_c = zeros(N[k],1);
    U_c[N[k]] = F_c_new[N[k]];
    for j = (N[k]-1):-1:1
        U_c[j] = F_c_new[j]-du_c_new[j]*U_c[j+1];
    end

    ## save approximate solution
    UU_c[k,1:length(U_c)] = U_c;
end
```

And we get the approximate solution $U_c$ (when $h=0.01$):

![](https://i.imgur.com/JzDI8yJ.png)

**Another attempt:** use backward and forward differences to approximate $u'$.
Backward method:
$$
\frac{\varepsilon}{h^2}(U_{i+1}-2U_i+U_{i-1})-\frac{1}{h}(U_i-U_{i-1})=F_i=0
$$
For $i=1$,
$$
\frac{\varepsilon}{h^2}(U_2-2U_1+U_0)-\frac{1}{h}(U_1-U_0)\\=\frac{\varepsilon}{h^2}(U_2-3U_1)-\frac{1}{h}(2U_1)=0
$$
For $i=N$,
$$
\frac{\varepsilon}{h^2}(U_{N+1}-2U_N+U_{N-1})-\frac{1}{h}(U_N-U_{N-1})\\
=\frac{\varepsilon}{h^2}(2-3U_N+U_{N-1})-\frac{1}{h}(U_N-U_{N-1})=0
$$
System $(A_b\,U_b=F_b)$:
$$
\begin{pmatrix}
\frac{-3\varepsilon}{h^2}-\frac{2}{h} & \frac{\varepsilon}{h^2} & 0 & \cdots & 0 \\
\frac{\varepsilon}{h^2}+\frac{1}{h} & \frac{-2\varepsilon}{h^2}-\frac{1}{h} & \frac{\varepsilon}{h^2} & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \frac{\varepsilon}{h^2}+\frac{1}{h} & \frac{-2\varepsilon}{h^2}-\frac{1}{h} & \frac{\varepsilon}{h^2} \\
0 & \cdots & 0 & \frac{\varepsilon}{h^2}+\frac{1}{h} & \frac{-3\varepsilon}{h^2}-\frac{1}{h} &
\end{pmatrix}U=
\begin{pmatrix}
0\\ 0\\ \vdots\\ 0\\ \frac{-2\varepsilon}{h^2}
\end{pmatrix}
$$
Code and result:

```julia=
#### approximate u' by backward difference
### Try different h
UU_b = zeros(length(h),N[length(N)]);
for k = 1:length(h)
    ## construct A and F
    du_b = repeat([epsilon/(h[k]*h[k])],N[k]-1);
    d_b = [-3*epsilon/(h[k]*h[k])-2/h[k];
           repeat([-2*epsilon/(h[k]*h[k])-1/h[k]],N[k]-2);
           -3*epsilon/(h[k]*h[k])-1/h[k];];
    dl_b = repeat([epsilon/(h[k]*h[k])+1/h[k]],N[k]-1);
    F_b = [zeros(N[k]-1,1); -2*epsilon/(h[k]*h[k]);];

    ## Using Thomas algorithm to solve AU=F
    # step 1
    du_b_new = zeros(N[k]-1);
    F_b_new = zeros(N[k]);
    du_b_new[1] = du_b[1]/d_b[1];
    F_b_new[1] = F_b[1]/d_b[1];
    for i = 2:N[k]-1
        du_b_new[i] = du_b[i]/(d_b[i]-dl_b[i-1]*du_b_new[i-1]);
        F_b_new[i] = (F_b[i]-dl_b[i-1]*F_b_new[i-1])/(d_b[i]-dl_b[i-1]*du_b_new[i-1]);
    end
    F_b_new[N[k]] = (F_b[N[k]]-dl_b[N[k]-1]*F_b_new[N[k]-1])/(d_b[N[k]]-dl_b[N[k]-1]*du_b_new[N[k]-1]);
    # step 2
    U_b = zeros(N[k],1);
    U_b[N[k]] = F_b_new[N[k]];
    for j = (N[k]-1):(-1):1
        U_b[j] = F_b_new[j]-du_b_new[j]*U_b[j+1];
    end

    ## save approximate solution
    UU_b[k,1:length(U_b)] = U_b;
end
```

For $h=0.01$:

![](https://i.imgur.com/D3Hg6Td.png)

Forward method:
$$
\frac{\varepsilon}{h^2}(U_{i+1}-2U_i+U_{i-1})-\frac{1}{h}(U_{i+1}-U_i)=F_i=0
$$
For $i=1$,
$$
\frac{\varepsilon}{h^2}(U_2-2U_1+U_0)-\frac{1}{h}(U_2-U_1)\\=\frac{\varepsilon}{h^2}(U_2-3U_1)-\frac{1}{h}(U_2-U_1)=0
$$
For $i=N$,
$$
\frac{\varepsilon}{h^2}(U_{N+1}-2U_N+U_{N-1})-\frac{1}{h}(U_{N+1}-U_N)\\
=\frac{\varepsilon}{h^2}(2-3U_N+U_{N-1})-\frac{1}{h}(2-2U_N)=0
$$
System $(A_f\,U_f=F_f)$:
$$
\begin{pmatrix}
\frac{-3\varepsilon}{h^2}+\frac{1}{h} & \frac{\varepsilon}{h^2}-\frac{1}{h} & 0 & \cdots & 0 \\
\frac{\varepsilon}{h^2} & \frac{-2\varepsilon}{h^2}+\frac{1}{h} & \frac{\varepsilon}{h^2}-\frac{1}{h} & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \frac{\varepsilon}{h^2} & \frac{-2\varepsilon}{h^2}+\frac{1}{h} & \frac{\varepsilon}{h^2}-\frac{1}{h} \\
0 & \cdots & 0 & \frac{\varepsilon}{h^2} & \frac{-3\varepsilon}{h^2}+\frac{2}{h}
\end{pmatrix}U=
\begin{pmatrix}
0\\ 0\\ \vdots\\ 0\\ \frac{-2\varepsilon}{h^2}+\frac{2}{h}
\end{pmatrix}
$$
Code and result:

```julia=
#### approximate u' by forward difference
### Try different h
UU_f = zeros(length(h),N[length(N)]);
for k = 1:length(h)
    ## construct A and F
    du_f = repeat([epsilon/(h[k]*h[k])-1/h[k]],N[k]-1);
    d_f = [-3*epsilon/(h[k]*h[k])+1/h[k];
           repeat([-2*epsilon/(h[k]*h[k])+1/h[k]],N[k]-2);
           -3*epsilon/(h[k]*h[k])+2/h[k];];
    dl_f = repeat([epsilon/(h[k]*h[k])],N[k]-1);
    F_f = [zeros(N[k]-1,1); -2*epsilon/(h[k]*h[k])+2/h[k];];

    ## Using Thomas algorithm to solve AU=F
    # step 1
    du_f_new = zeros(N[k]-1);
    F_f_new = zeros(N[k]);
    du_f_new[1] = du_f[1]/d_f[1];
    F_f_new[1] = F_f[1]/d_f[1];
    for i = 2:N[k]-1
        du_f_new[i] = du_f[i]/(d_f[i]-dl_f[i-1]*du_f_new[i-1]);
        F_f_new[i] = (F_f[i]-dl_f[i-1]*F_f_new[i-1])/(d_f[i]-dl_f[i-1]*du_f_new[i-1]);
    end
    F_f_new[N[k]] = (F_f[N[k]]-dl_f[N[k]-1]*F_f_new[N[k]-1])/(d_f[N[k]]-dl_f[N[k]-1]*du_f_new[N[k]-1]);
    # step 2
    U_f = zeros(N[k],1);
    U_f[N[k]] = F_f_new[N[k]];
    for j = (N[k]-1):(-1):1
        U_f[j] = F_f_new[j]-du_f_new[j]*U_f[j+1];
    end

    ## save approximate solution
    UU_f[k,1:length(U_f)] = U_f;
end
```

For $h=0.01$:

![](https://i.imgur.com/i8IzK1z.png)

Code for plotting:

```julia=
## choose the m-th step size in h
m = 1;

## plot the exact solution
plot([0;x[m,1:N[m]];1;],[0;u[m,1:N[m]];1;],label="exact", marker=:circle,legend=:topleft)

## choose the approximate solution for each method (comment out the other two with '#')
plot!([0;x[m,1:N[m]];1;],[0;UU_c[m,1:N[m]];1;],label="appro with central", marker=:hexagon)
plot!([0;x[m,1:N[m]];1;],[0;UU_b[m,1:N[m]];1;],label="appro with backward", marker=:utriangle)
plot!([0;x[m,1:N[m]];1;],[0;UU_f[m,1:N[m]];1;],label="appro with forward", marker=:diamond)

## label of axes
xlabel!("x")
ylabel!("u(x)")
```

**Error analysis:** since we have the exact solution of the differential equation, we can check the order of each method. Let $E_* = h^{0.5}\,\vert\vert u-U_* \vert\vert_2$, where "$*$" stands for the different approximations of the first derivative of $u$ used above (c = central, b = backward, f = forward).

| $h$ | $E_c$ | $E_b$ | $E_f$ |
|---|---|---|---|
|$0.01$|$0.012412248256619364$|$0.016450438745052955$|$0.06522722316024658$|
|$\frac{0.01}{2}$|$0.003053663340819821$|$0.009883132492984063$|$0.0175681009123342$|
|$\frac{0.01}{4}$|$0.0007602847444437904$|$0.005514090442135259$|$0.007258661787488097$|
|$\frac{0.01}{8}$|$0.0001898750455770974$|$0.0029283911381194293$|$0.0033547320688701596$|

![](https://i.imgur.com/fAe2Vlu.png)

@andy82332: It looks like $E_c$ is below second order, and $E_b$ and $E_f$ are below first order as well; something seems to be wrong somewhere.

```julia=
## Error for each method with all different h
# Need to use the norm of "function" !!
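# Note: multiplying the discrete 2-norm by sqrt(h) turns it into the grid-function norm
# ||e||_2 = (h * sum_i e_i^2)^(1/2), which approximates the continuous L2 norm of the
# error and makes the values computed on different grids comparable.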
E_c = [norm(u[i,1:N[i]]-UU_c[i,1:N[i]])*h[i]^(0.5) for i = 1:length(h)];
E_b = [norm(u[i,1:N[i]]-UU_b[i,1:N[i]])*h[i]^(0.5) for i = 1:length(h)];
E_f = [norm(u[i,1:N[i]]-UU_f[i,1:N[i]])*h[i]^(0.5) for i = 1:length(h)];

## Take 'log' of every h and error
h1 = [log10(h[i]) for i = 1:length(h)];
E_c1 = [log10(E_c[i]) for i = 1:length(h)];
E_b1 = [log10(E_b[i]) for i = 1:length(h)];
E_f1 = [log10(E_f[i]) for i = 1:length(h)];

## plot the log(h)-log(E) graph
plot(h1,E_c1,label=L"E_c", marker=:circle,legend=:topleft)
plot!(h1,E_b1,label=L"E_b", marker=:circle)
plot!(h1,E_f1,label=L"E_f", marker=:circle)

## plot reference lines with different orders
plot!([-2,-2.2,-2.4,-2.6,-2.8],[-1.6,-1.8,-2,-2.2,-2.4],label="m=1",linestyle=:dash) # m = 1
plot!([-2,-2.2,-2.4,-2.6,-2.8],[-2,-2.4,-2.8,-3.2,-3.6],label="m=2",linestyle=:dash) # m = 2
xlabel!(L"log_{10}h")
ylabel!(L"log_{10}E")
```

**Non-uniform grid**

Here $bh$ denotes the grid spacing on the left part of the domain and $sh$ the spacing on the right part, near the boundary layer of the exact solution at $x=1$; the middle row below is the interface node where the spacing changes. The system matrix and right-hand side are

$A=$
$$
\begin{pmatrix}
\frac{-3\varepsilon}{bh^2}-\frac{1}{2bh} & \frac{\varepsilon}{bh^2}-\frac{1}{2bh} & 0 & \cdots &\cdots &\cdots & 0 \\
\frac{\varepsilon}{bh^2}+\frac{1}{2bh} & \frac{-2\varepsilon}{bh^2} & \frac{\varepsilon}{bh^2}-\frac{1}{2bh} & \ddots &\cdots &\cdots & 0 \\
0 & \ddots & \ddots & \ddots & \ddots &\cdots &0 \\
\vdots &0&\frac{\varepsilon}{sh\,\cdot\, bh}+\frac{1}{2bh} &\frac{-2\varepsilon}{sh^2}+\frac{bh-sh}{sh\,\cdot\, bh}(\frac{\varepsilon}{h}+\frac{1}{2}) &\frac{\varepsilon}{sh^2}-\frac{1}{2sh} &0&\vdots \\
\vdots &\vdots &0 & \frac{\varepsilon}{sh^2}+\frac{1}{2sh} & \frac{-2\varepsilon}{sh^2} & \frac{\varepsilon}{sh^2}-\frac{1}{2sh} &0\\
\vdots&\cdots& \cdots&\ddots & \ddots & \ddots & \ddots \\
0 & \cdots & \cdots & \cdots & 0 & \frac{\varepsilon}{sh^2}+\frac{1}{2sh} & \frac{-3\varepsilon}{sh^2}+\frac{1}{2sh} &
\end{pmatrix}
$$
$$
F=
\begin{pmatrix}
0\\ 0\\ \vdots\\ 0\\ \frac{-2\varepsilon}{sh^2}+\frac{1}{sh}
\end{pmatrix}
$$
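The section stops at the matrices above, so as a cross-check here is a minimal sketch of a non-uniform-grid solver (in Python with `numpy`, not the Julia code above, and not the $bh$/$sh$ two-block scheme of that matrix): it uses the standard three-point formulas for $u''$ and $u'$ on a general non-uniform grid, plain Dirichlet rows instead of ghost points, and a simple graded grid clustered near the boundary layer at $x=1$. The mapping $x = 1-(1-s)^2$ and the variable names are illustrative choices, not taken from the notes.

```python
import numpy as np

eps = 0.01

# graded grid on [0,1], clustered near x = 1 where the boundary layer sits
N = 200
s = np.linspace(0.0, 1.0, N + 1)
x = 1.0 - (1.0 - s)**2

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0; b[0] = 0.0            # Dirichlet row: u(0) = 0
A[N, N] = 1.0; b[N] = 1.0            # Dirichlet row: u(1) = 1

for i in range(1, N):
    hl = x[i] - x[i - 1]             # spacing to the left of x_i
    hr = x[i + 1] - x[i]             # spacing to the right of x_i
    # three-point formulas on a non-uniform grid:
    #   u''(x_i) ~ 2*( U_{i-1}/(hl*(hl+hr)) - U_i/(hl*hr) + U_{i+1}/(hr*(hl+hr)) )
    #   u'(x_i)  ~ ( -hr/hl*U_{i-1} + (hr/hl - hl/hr)*U_i + hl/hr*U_{i+1} ) / (hl+hr)
    # row i of eps*u'' - u' = 0:
    A[i, i - 1] = eps * 2.0 / (hl * (hl + hr)) + (hr / hl) / (hl + hr)
    A[i, i]     = eps * (-2.0 / (hl * hr))     - (hr / hl - hl / hr) / (hl + hr)
    A[i, i + 1] = eps * 2.0 / (hr * (hl + hr)) - (hl / hr) / (hl + hr)

U = np.linalg.solve(A, b)
u_exact = (np.exp(x / eps) - 1.0) / (np.exp(1.0 / eps) - 1.0)
print("max nodal error:", np.abs(U - u_exact).max())
```

Since the exact solution is known, the maximal nodal error can be printed directly; refining `N` (or changing the grading) should show the error decreasing.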