---
title: 11-4. Chebfun. Advection-Diffusion equation
tags: Final reports
---

**Reference**
1. Chebfun: Boundary layer problem: [BoundaryLayer](https://www.chebfun.org/examples/ode-linear/BoundaryLayer.html)
2. Chebfun: Advection-diffusion equation with a jump: [AdvDiffJump](http://www.chebfun.org/examples/ode-linear/AdvDiffJump.html)

---

## Advection-Diffusion Equation (沛聖)

### Mathematical Formulation

Consider the advection-diffusion equation
$$
\partial_t u = 0.1 u_{xx} + u_x,\quad x\in[-3,3],\, t\in\mathbf{R}_{\geq 0}
$$
with boundary conditions
$$
\begin{cases}
u_x(-3,t) &= 0 \quad \text{(Neumann)}\\
u(3,t) &= 0 \quad \text{(Dirichlet)}
\end{cases}
$$
and initial condition
$$
u(x,0) = (\frac{1}{6}x+\frac{1}{2})^2 \sin(\pi x).
$$

In order to use the Chebyshev spectral method, we need to transform the domain $[-3,3]$ to $[-1,1]$. To this end, define
\begin{align*}
v: &[-1,1]\times \mathbf{R}_{\geq 0} \to \mathbf{R}\\
&\textrm{s.t. }\, v(x,t) = u(3x,t).
\end{align*}
Since $v_x(x,t) = 3u_x(3x,t)$ and $v_{xx}(x,t) = 9u_{xx}(3x,t)$ by the chain rule, the preceding formulation is equivalent to
$$
\begin{cases}
\partial_t v = \frac{0.1}{9} v_{xx} + \frac{1}{3} v_x \\
v_x(-1,t) =0 \\
v(1,t) = 0 \\
v(x,0) = (\frac{1}{2}x+\frac{1}{2})^2 \sin(3\pi x)
\end{cases},\quad x\in [-1,1],\, t\in\mathbf{R}_{\geq 0}
$$

### Numerical Formulation

Given $N\in\mathbf{N}$, consider the grid points $(x_i,t_n),\, 0\leq i \leq N,\, n\geq 0$, such that
\begin{align*}
t_n &= n \Delta t,\quad n\geq 0\\
x_i &= \cos(\frac{\pi i}{N}),\quad 0\leq i \leq N \quad \text{(Chebyshev-Gauss-Lobatto points)}
\end{align*}
Let $V_i^n \approx v(x_i,t_n)$ denote the numerical approximation at grid point $(x_i,t_n)$. Using the backward Euler scheme, the equation
$$
\partial_t v = \frac{0.1}{9} v_{xx} + \frac{1}{3} v_x
$$
can be approximated by
\begin{align*}
\frac{V^{n+1}-V^n}{\Delta t} &= \frac{0.1}{9} D^2 V^{n+1} + \frac{1}{3} D V^{n+1}\\
&= L V^{n+1}
\end{align*}
where
1. $D \in \mathbf{R}^{(N+1)\times(N+1)}$ is the Chebyshev differentiation matrix
2. $L = \frac{0.1}{9} D^2 + \frac{1}{3} D$
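As a quick sanity check (an addition, not part of the original report), here is a minimal sketch that builds the Chebyshev differentiation matrix $D$ on the Chebyshev-Gauss-Lobatto points, following the same Trefethen-style construction as `chebDiffMtx` below, and verifies its spectral accuracy on a smooth test function:

```python
import numpy as np

# Minimal sketch: Chebyshev differentiation matrix on [-1, 1].
N = 20
x = np.cos(np.pi * np.arange(N + 1) / N).reshape(-1, 1)      # x_i = cos(pi*i/N)
c = (np.hstack(([2.0], np.ones(N - 1), [2.0]))
     * (-1.0) ** np.arange(N + 1)).reshape(-1, 1)
dX = x - x.T                                                 # x_i - x_j
D = (c @ (1 / c).T) / (dX + np.eye(N + 1))                   # off-diagonal entries
D -= np.diag(D.sum(axis=1))                                  # diagonal entries

# D applied to a smooth function should reproduce its derivative to spectral accuracy.
f = np.sin(np.pi * x)
err = np.max(np.abs(D @ f - np.pi * np.cos(np.pi * x)))
print(f"max |D f - f'| = {err:.2e}")   # spectrally small (near machine precision here)
```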
Rearranging the backward Euler scheme gives the linear system
$$
(I - \Delta t L) V^{n+1} = V^n.
$$

#### Boundary Bordering

The boundary conditions
$$
\begin{cases}
v_x(-1,t_n) = [DV^n]_N= 0\\
v(1,t_n) = V^n_0 =0
\end{cases},\quad n \geq 0
$$
are imposed on the linear system $(I - \Delta t L) V^{n+1} = V^n$ by altering its first and last rows, so that it becomes
$$
\left(
\begin{array}{cc}
1 & \begin{matrix} 0 & \cdots &0 \end{matrix} \\
\hline
&(I - \Delta t L)[1:N-1,:]\\
\hline
&D[N,:]
\end{array}
\right)
\begin{pmatrix} V_0^{n+1} \\ V_1^{n+1} \\ \vdots \\ V_{N-1}^{n+1} \\ V_N^{n+1} \end{pmatrix}
=
\begin{pmatrix} 0 \\ V_1^n \\ \vdots \\ V_{N-1}^n \\ 0 \end{pmatrix}
$$
or, equivalently,
$$
\left(
\begin{array}{cc}
(I - \Delta t L)[1:N-1,1:N]\\
\hline
D[N,1:N]
\end{array}
\right)
\begin{pmatrix} V_1^{n+1} \\ \vdots \\ V_{N-1}^{n+1} \\ V_N^{n+1} \end{pmatrix}
=
\begin{pmatrix} V_1^n \\ \vdots \\ V_{N-1}^n \\ 0 \end{pmatrix},\quad V^{n+1}_0 = 0.
$$

#### Code

```python=
import numpy as np
from numpy.linalg import solve

# ==========
# Method 1
# == Output ==
# D: Chebyshev differentiation matrix
# x: Chebyshev collocation points
# ==========
def chebDiffMtx(N):
    if N == 0:
        D = 0
        x = 1
    else:
        theta = np.linspace(0, np.pi, num=N+1, endpoint=True).reshape((N+1, 1))
        #x = np.cos(theta)
        x = -np.sin(theta - np.pi/2)
        c = np.vstack((2, np.ones((N-1, 1)), 2)) * (-1)**np.arange(0, N+1).reshape((N+1, 1))
        X = np.tile(x, (1, N+1))
        dX = X - X.T
        # off-diagonal entries
        D = (c @ (1/c).T) / (dX + np.eye(N+1))
        # diagonal entries
        D = D - np.diag(np.sum(D, axis=1))
    return (D, x)

def AdvDiffusion(N, dt, T):
    iter_max = int(T/dt)
    # Initialize an (N+1) x (iter_max+1) array to store v(x_i, t_n)
    V = np.zeros((N+1, iter_max+1))
    # Create the coefficient matrix of the linear system
    [D, x] = chebDiffMtx(N)
    L = 0.1/9 * (D@D) + 1/3 * D
    L = np.eye(N+1) - dt*L
    A = np.vstack((L[1:N, 1:N+1], D[N, 1:N+1]))
    # Create the initial function
    v = 1/4 * ((x+1)**2) * np.sin(3*np.pi*x)
    V[:, 0] = v.flatten()
    for k in range(1, iter_max+1):
        # Create the right-hand-side vector of the linear system
        b = np.vstack((v[1:N], 0))
        # Solve the linear system
        v = solve(A, b)
        # Add V_0 = 0 to the solution
        v = np.vstack((0, v))
        # Store v in V[:, k]
        V[:, k] = v.flatten()
    return V
```
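Not part of the original report, but a minimal usage sketch (assuming the functions above and `matplotlib` are available): run `AdvDiffusion` and plot the solution, mapping $v$ on $[-1,1]$ back to $u$ on $[-3,3]$ via $u(3x,t)=v(x,t)$.

```python
import matplotlib.pyplot as plt

# Usage sketch: N = 70 points, dt = 0.05, final time T = 10.
N, dt, T = 70, 0.05, 10
V = AdvDiffusion(N, dt, T)          # V[:, n] approximates v(x_i, t_n)
_, x = chebDiffMtx(N)
x = x.flatten()                     # Chebyshev points on [-1, 1]

# u(3x, t) = v(x, t), so plot against 3*x to recover the original domain [-3, 3].
plt.plot(3*x, V[:, 0], label="t = 0")
plt.plot(3*x, V[:, -1], label=f"t = {T}")
plt.xlabel("x")
plt.ylabel("u")
plt.legend()
plt.show()
```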
### Numerical Results

![](https://i.imgur.com/DovJLns.png)
![](https://i.imgur.com/AtNivm0.gif)

### Convergence Analysis

Here we consider the case $N=70,\,T=10,\, \Delta t \in \{0.05,\, 0.05/2,\, 0.05/4 \}$. To analyze the convergence of the solution, we treat the solution obtained from `chebfun` in Matlab as the exact solution.

The first step is to import the exact solutions, stored in `.mat` files, into Python.

```python=
import scipy.io

# dt = 0.05
mat = scipy.io.loadmat('solution1.mat')
V1_exact = mat['U']
# dt = 0.05/2
mat = scipy.io.loadmat('solution2.mat')
V2_exact = mat['U']
# dt = 0.05/4
mat = scipy.io.loadmat('solution3.mat')
V3_exact = mat['U']
```

```python=
# Obtain solutions for different dt
N = 70
dt1 = 0.05
dt2 = dt1/2
dt3 = dt1/4
T = 10

V1 = AdvDiffusion(N, dt1, T)
V2 = AdvDiffusion(N, dt2, T)
V3 = AdvDiffusion(N, dt3, T)
```

The error ( $\lVert V^n-V_{exact}^n \rVert_\infty$ ) and semilog error ( $\log\lVert V^n-V_{exact}^n \rVert_\infty$ ) for the different $\Delta t$ are shown below.

![](https://i.imgur.com/uOFezAG.png)

The relative error ( $\frac{\lVert V^n-V_{exact}^n \rVert_\infty}{\lVert V_{exact}^n \rVert_\infty}$ ) and semilog relative error ( $\log\frac{\lVert V^n-V_{exact}^n \rVert_\infty}{\lVert V_{exact}^n \rVert_\infty}$ ) for the different $\Delta t$ are shown below.

![](https://i.imgur.com/IucTgVY.png)

## Burgers equation (昆毅)

$$
u_t = -(u^2)_x + 0.02u_{xx}, \quad x\in[-1, 1], \quad t\in[0, 6],
$$
with initial and boundary conditions
$$
u(x,0) = (1-x^2)e^{-30(x+0.5)^2}, \quad u(-1, t)=u(1, t) =0.
$$

**Solution** (code in Julia)

First, we construct the differentiation matrix $D$ to approximate $u_x$. Before constructing it, we compute the Chebyshev coefficients of the initial condition $u(x,0)$ to determine how many grid points $N$ we need. (We use Chebyshev points of the second kind throughout.)

```
#(code from 浩恩)
using FFTW, Plots

# CHEBFFT2  Chebyshev differentiation via FFT in another way
# If v is complex, delete "real" commands.
# f = f(x_i) is a column vector
function chebfft2(f)
    N = length(f)-1;
    if N==0
        f_prime = 0;
        return f_prime
    end
    # extend the function and take scaled FFT
    f_extend = [f; f[N:-1:2]];
    f_hat = fft(f_extend)/(2*N);
    # obtain the Chebyshev coefficients of f
    a = zeros(N+1,1);
    a[1] = real(f_hat[1]);
    a[2:N] = 2*real(f_hat[2:N]);
    a[N+1] = real(f_hat[N+1]);
    # compute the Chebyshev coefficients of f'
    b = zeros(N+1,1);
    b[N] = 2*N*a[N+1];
    for i = N:-1:3
        b[i-1] = 2*(i-1)*a[i] + b[i+1];
    end
    b[1] = .5*(2*a[2]+b[3]);
    # compute the Fourier coefficients of f'
    f_prime_hat = zeros(2*N,1);
    f_prime_hat[1] = b[1];
    f_prime_hat[2:N] = 0.5*b[2:N];
    f_prime_hat[N+1] = b[N+1];
    f_prime_hat[N+2:end] = 0.5*b[N:-1:2];
    # take rescaled ifft to get the extended f'
    f_prime_extend = real(ifft(f_prime_hat)*2*N);
    f_prime = f_prime_extend[1:N+1];   # get the unextended f'
    return f_prime, a, b
end

N0 = 100
(D,xtest) = cheb(N0);   # cheb is defined below
u = ((-xtest.^2).+1).*exp.(-30*(xtest.+.5).^2)
(uprime, C, Cprime) = chebfft2(u);
plot(log10.(abs.(C)),label="coefficient of u_0")
xlabel!("k")
ylabel!("log10(u_k)")
```

![](https://i.imgur.com/RR5bliB.png)

The plot shows that $N=100$ is enough for $u(x,0)$. When $t \neq 0$, $u(x,t)$ may need more grid points, so an automatic adjustment of $N$ or de-aliasing would be needed; I have not finished that part of the program yet. What I have done is try $N=250$, which gives almost the same accuracy as $N=100$.
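For reference (an addition, not in the original report), the backward loop in `chebfft2` implements the standard recurrence relating the Chebyshev coefficients $a_k$ of $f=\sum_{k=0}^{N} a_k T_k$ to the coefficients $b_k$ of $f'$:
$$
b_{N+1} = b_N = 0, \qquad c_{k-1}\, b_{k-1} = b_{k+1} + 2k\, a_k, \quad k = N, N-1, \dots, 1,
$$
where $c_0 = 2$ and $c_k = 1$ for $k \geq 1$.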
Then we construct the differentiation matrix with $N=100$ and combine it with the boundary conditions to obtain the ODE system of this problem:
$$
\frac{\partial U}{\partial t} = -2\,U\odot(\widetilde{D} U)+0.02\,(\widetilde{D}^2 U)
$$
where $\odot$ denotes the elementwise product,
$$
U=[U_0, \cdots , U_N]', \quad U_j=u(x_j,t)
\text{ with } x_j=\cos(\tfrac{j \pi}{N}),\ j=0,1, \cdots ,N
\text{ (Chebyshev points of the second kind),}
$$
and
$$
\widetilde{D}=
\begin{pmatrix}
0 \\
D(2:N,:)\\
0
\end{pmatrix}
$$
i.e. $D$ with its first and last rows replaced by zeros. ($\widetilde{D}$ incorporates the boundary conditions, as described below.)

#### Boundary Bordering

The boundary conditions
$$
\begin{cases}
u(-1,t) = U_N = 0\\
u(1,t) = U_0 = 0
\end{cases}
$$
or, equivalently,
$$
\begin{cases}
u(-1,0) = U_N(0) = 0\\
u_t(-1,t) = \frac{\partial U_N(t)}{\partial t} = 0\\
u(1,0) = U_0(0) = 0\\
u_t(1,t) = \frac{\partial U_0(t)}{\partial t} = 0
\end{cases}
$$
are imposed on the differentiation matrix $D$ by zeroing its first and last rows, so that the semi-discrete system becomes
$$
\begin{pmatrix} \frac{d U_0(t)}{d t} \\ \vdots \\ \frac{d U_j(t)}{d t} \\ \vdots \\ \frac{d U_N(t)}{d t} \end{pmatrix}
=
-2
\begin{pmatrix} U_0 \\ U_1 \\ \vdots \\ U_{N-1} \\ U_N \end{pmatrix}
\odot
\left[
\begin{pmatrix} 0 \\ D(2:N,:)\\ 0 \end{pmatrix}
\begin{pmatrix} U_0 \\ U_1 \\ \vdots \\ U_{N-1} \\ U_N \end{pmatrix}
\right]
+
0.02
\begin{pmatrix} 0 \\ D(2:N,:)\\ 0 \end{pmatrix}^2
\begin{pmatrix} U_0 \\ U_1 \\ \vdots \\ U_{N-1} \\ U_N \end{pmatrix}
$$

```
#(code of differentiation matrix from 浩恩)
using LinearAlgebra

function cheb(N)
    if N == 0
        D = 0;
        x = 1;
        return
    end
    x = cos.(pi*(0:N)/N);                 # generate grid x_i = cos(i*pi/N), i = 0~N
    c = [2; ones(N-1,1); 2].*(-1).^(0:N); # c_i*(-1)^i
    X = repeat(x,1,N+1);
    dX = X-X';                            # compute x_i - x_j
    D = (c*(1 ./c)')./(dX+(I(N+1)));      # off-diagonal entries
    D = D - diagm(vec(sum(D,dims=2)));    # diagonal entries
    return D, x
end
```

After discretizing in space with the spectral method, we obtain an ODE system in time $t$, which we solve with one of Julia's differential equation solvers.

Method: Rodas4P, a 4th-order A-stable, stiffly stable Rosenbrock method (adaptive time step with `abstol=1e-14`, `reltol=1e-14`).

```
using DifferentialEquations

T = 6;   # end time
N = 100
(D,x) = cheb(N);
u0 = ((-x.^2).+1).*exp.(-30*(x.+.5).^2)

# construct the bordered matrices
D1 = -D
D1[1,:] = zeros(1,N+1)
D1[end,:] = zeros(1,N+1)
D2 = 0.02*D*D
D2[1,:] = zeros(1,N+1)
D2[end,:] = zeros(1,N+1)

@time begin
    f(u,p,t) = 2*u.*(D1*u) + D2*u
    tspan = (0.0,T)
    prob = ODEProblem(f,u0,tspan)
    usol = solve(prob,Rodas4P(),abstol=1e-14,reltol=1e-14)
end
```

![](https://i.imgur.com/FFhv1SQ.gif)
![](https://i.imgur.com/mFfyXjp.png)
![](https://i.imgur.com/opYXM0c.png)

($N=100$ is enough for $u(x,T)$.)

To check the accuracy of this solution (`usol`), let `uexact` be the chebfun solution with `RelTol = 2.22045e-14` and $e=\lVert \text{usol}-\text{uexact}\rVert$:

| $e$ | $N=100$ |
| -------- | -------- |
| $L_1$ norm | 3.052528302648485e-11 |
| $L_2$ norm | 2.8311098571767894e-11 |
| $L_\infty$ norm | 4.0291561753669214e-11 |

So if we control the tolerances of both chebfun and the Julia ODE solver, we obtain roughly $10^{-11}$ accuracy.

![](https://i.imgur.com/hdVnSuv.png)
![](https://i.imgur.com/y6Fvjrm.png)

#### Simple case

I also tried a simpler case,
$$
u_t = \frac{3}{\pi^2}u_{xx}+\frac{\sqrt{3}}{\pi}u_x, \quad x\in[-1, 1], \quad t\ge 0,
$$
with initial and boundary conditions
$$
u(x,0) = \cos(\frac{\pi x}{2})e^{\frac{-\pi x}{2\sqrt{3}}}, \quad u(-1, t) = u(1,t)=0,
$$
and exact solution
$$
u(x,t)=\cos(\frac{\pi x}{2})e^{\frac{-\pi x}{2\sqrt{3}}}e^{-t},
$$
and obtained

| $L_2$ norm of | $N=30$ |
| -------- | -------- |
| usol - uexact | 1.1934290230047646e-15 |
| uchebfun - uexact | 3.705719024883476e-13 |
| usol - uchebfun | 3.7107987298718886e-13 |

The errors in this simple case are smaller than in the Burgers case, but I think the reason is that this case is linear while the Burgers equation is not.
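As a closing check (an addition, not from the original report), here is a short symbolic verification, sketched in Python with SymPy, that the claimed exact solution of the simple case satisfies the PDE and the homogeneous Dirichlet conditions at $x=\pm 1$:

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
u = sp.cos(sp.pi*x/2) * sp.exp(-sp.pi*x/(2*sp.sqrt(3))) * sp.exp(-t)

# Residual of u_t = 3/pi^2 u_xx + sqrt(3)/pi u_x; should simplify to 0.
residual = sp.diff(u, t) - (3/sp.pi**2 * sp.diff(u, x, 2)
                            + sp.sqrt(3)/sp.pi * sp.diff(u, x))
print(sp.simplify(residual))                   # 0

# Boundary values at x = -1 and x = 1 (both vanish since cos(±pi/2) = 0).
print(sp.simplify(u.subs(x, -1)), sp.simplify(u.subs(x, 1)))   # 0 0
```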