Morris Huang

@morris8934

My Personal Website: https://physics-morris.github.io/

Joined on Mar 6, 2023

  • The zero-field Ising model on a square lattice consists of $L \times L$ spins, each of which can take 2 values, either $S_i=+1$ or $S_i=-1$. Each spin interacts with its neighbors and with an external field. The Hamiltonian of the system is \begin{equation} H = -\sum_{\langle ij\rangle} J_{ij}S_iS_j - \sum_i H_iS_i \end{equation} An exact solution for the zero-field case was obtained by Lars Onsager, in which a second-order phase transition occurs at $\dfrac{J}{k_BT_c} = \dfrac{\ln(1+\sqrt{2})}{2}$. With this in mind, we can use the Metropolis algorithm to simulate the Ising model with good accuracy. The following four figures show, at different MC steps, the spins of a $100 \times 100$ system, with blue representing spin up and red spin down. Since the temperature is near the critical temperature, the magnetization takes a very long time to reach equilibrium. First of all, we can measure the mean energy at different temperatures after reaching equilibrium.
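A minimal Python sketch of one Metropolis sweep for the zero-field case (an illustrative stand-in, not the post's own code; spin values and the acceptance rule follow the Hamiltonian above):

```python
import math
import random

def metropolis_sweep(spins, L, beta, J=1.0):
    """One Monte Carlo sweep of the zero-field Ising model on an L x L
    periodic lattice: attempt L*L single-spin flips, accepting each
    with the Metropolis probability min(1, exp(-beta * dE))."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four nearest neighbors (periodic boundaries).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * nn      # energy cost of flipping S_ij
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]
    return spins

def magnetization(spins):
    """Total magnetization, sum of all spins."""
    return sum(map(sum, spins))
```

Well below $T_c$ ($\beta J \gtrsim 0.44$) an ordered start stays ordered for a long time, which is the slow equilibration near criticality mentioned above.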
  • A non-reversal random walk (NRRW) is a random walk that cannot reverse its previous step. The following two figures show an NRRW at different step counts; compared to the previous figure, it travels farther since it cannot undo its previous step. I then measure the probability $P(R)$ as a function of $R$ after a large enough number of steps. Unlike the ordinary random walk, the peak of the probability does not occur at the origin. A semi-log plot shows that it does not look quite like a Gaussian distribution. We can verify this again with a QQ plot, which indicates a Gaussian only if the points lie along the $45^\circ$ line. I then measure $\langle|R|\rangle$ and $\langle R^2\rangle$; in particular, $\langle R^2\rangle$ scales roughly as $N^1$ at large $N$. The simulation is implemented in the Fortran program rw2.
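A minimal Python sketch of the NRRW rule (a stand-in for the Fortran program rw2, which is not shown in full here): at each step the walker chooses uniformly among the three lattice moves that do not undo the previous one.

```python
import random

def nrrw_2d(n_steps, rng=random):
    """Non-reversal random walk on the square lattice: at each step pick
    uniformly among the 3 directions that do not reverse the last step."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = 0
    last = None
    for _ in range(n_steps):
        allowed = [m for m in moves
                   if last is None or m != (-last[0], -last[1])]
        dx, dy = rng.choice(allowed)
        x, y = x + dx, y + dy
        last = (dx, dy)
    return x, y
```

Averaging $R^2 = x^2 + y^2$ over many such walks gives the $\langle R^2\rangle \sim N$ scaling measured in the post (with a larger prefactor than the ordinary walk).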
  • 1D An ideal one-dimensional random walk with probability $p$ of stepping to the right and $1-p$ to the left is represented by the figure below, showing the number of particles over the first 100 steps. From this we can clearly see that the probability $P(x, N)$ of being at location $x$ after $N$ steps is higher around the center and lower at the edges, just like a Gaussian distribution. The following three figures show, for different $p$, the relation between $N$ and $\langle x\rangle$. Next we show the relation between $N$ and $\langle x^2\rangle$, which has a maximum slope of $1$ when $p=0.5$. The probability $P(x, N)$ follows the central limit theorem, which tells us that \begin{equation} \mu = (p-q)Na \end{equation}
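A minimal Python sketch of the biased 1D walk (illustrative only; step length $a=1$), which reproduces the mean displacement $\mu = (p-q)N$ when averaged over many walkers:

```python
import random

def walk_1d(n_steps, p, rng=random):
    """One biased 1D random walk: step +1 with probability p, else -1.
    Returns the final position x."""
    return sum(1 if rng.random() < p else -1 for _ in range(n_steps))

def mean_displacement(n_walkers, n_steps, p, rng=random):
    """Estimate <x> after n_steps by averaging over n_walkers walks."""
    return sum(walk_1d(n_steps, p, rng) for _ in range(n_walkers)) / n_walkers
```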
  • To generate two independent Gaussian random variables $x$ and $y$ we use the Box–Muller equations \begin{equation} x = \sigma \sqrt{-2\ln\xi_1}\cos(2\pi \xi_2) \quad,\quad y = \sigma \sqrt{-2\ln\xi_1}\sin(2\pi \xi_2) \end{equation} The figure shows a total of $N=10^5$ Gaussian random numbers. We can make a semi-log plot to verify the result again. To generate two independent Gaussian variables $x_1$ and $x_2$ and form the distribution of $y=x_1+x_2$, we use the same approach as above. I then use KDE (kernel density estimation) to fit the two Gaussian distributions.
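The Box–Muller equations above translate directly into code; a minimal Python sketch (using $1-\xi_1$ so the logarithm never sees zero):

```python
import math
import random

def box_muller(sigma=1.0, rng=random):
    """Return two independent N(0, sigma^2) samples built from two
    uniform variates xi1, xi2 via the Box-Muller transform."""
    xi1 = 1.0 - rng.random()          # in (0, 1], keeps log(xi1) finite
    xi2 = rng.random()
    r = sigma * math.sqrt(-2.0 * math.log(xi1))
    return r * math.cos(2 * math.pi * xi2), r * math.sin(2 * math.pi * xi2)
```

Summing pairs of such samples gives the $y=x_1+x_2$ distribution, a Gaussian with variance $2\sigma^2$.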
  • Poisson's equation The Poisson equation has the form \begin{equation} \dfrac{\partial^2 u}{\partial x^2}+\dfrac{\partial^2 u}{\partial y^2} = -\dfrac{q}{K} \tag{1} \end{equation} In this particular case $u$ is the steady-state temperature. Given the boundary conditions, we can apply finite differences to this PDE and solve it numerically. The finite difference scheme has the following relation. \begin{equation} w_{ij} = \dfrac{1}{2+2\left( \dfrac{h}{k}\right)^2} \left[ w_{i+1,j} + w_{i-1,j} + \left(\dfrac{h}{k}\right)^2(w_{i,j+1}+w_{i,j-1}) + h^2\dfrac{q}{K} \right] \end{equation}
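For the special case of equal spacing $h = k$ the update above reduces to averaging the four neighbors plus the source term, which can be iterated to convergence (Gauss–Seidel relaxation). A minimal Python sketch under that equal-spacing assumption, with the boundary values of `w` held fixed:

```python
def relax_poisson(w, source, h, n_iter=500):
    """Gauss-Seidel relaxation of u_xx + u_yy = -q/K on a uniform grid
    with equal spacing h in both directions.  source[i][j] holds q/K at
    each grid point; boundary entries of w are never updated."""
    n, m = len(w), len(w[0])
    for _ in range(n_iter):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Equal-spacing form of the finite-difference relation.
                w[i][j] = 0.25 * (w[i + 1][j] + w[i - 1][j]
                                  + w[i][j + 1] + w[i][j - 1]
                                  + h * h * source[i][j])
    return w
```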
  • Linear equation Linear solvers address equations of the form \begin{equation} \textbf{A}\textbf{x} = \textbf{b} \tag{1} \end{equation} where $\textbf{A}$ is an $N\times N$ matrix and $\textbf{x}$ and $\textbf{b}$ are $N\times 1$ columns. There are two ways to solve it using the linear algebra package LAPACK. The first is the subroutine $dgesv$, which solves for $\textbf{x}$ directly. The second method is to find the inverse matrix of $\textbf{A}$ and then multiply it by $\textbf{b}$. Finding the inverse matrix in LAPACK requires two subroutines, $dgetrf$ and $dgetri$. Example 1 In the first example we first solve for $\textbf{x}$ and then verify the result by multiplying $\textbf{x}$ by the matrix $\textbf{A}$. I then solve this equation $10^6$ times to compare its run time with the other method. Finally, I compute the error of the solution by calculating the difference between $\textbf{A}\textbf{x}$ and $\textbf{b}$.
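What $dgesv$ does internally is Gaussian elimination with partial pivoting; a miniature Python version of that algorithm (a sketch, not the LAPACK interface itself):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    the same algorithm LAPACK's dgesv uses.  A is n x n, b length n;
    inputs are copied, not modified."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # Partial pivot: bring the largest entry in column k to row k.
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):                 # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c]
                              for c in range(k + 1, n))) / M[k][k]
    return x
```

The same "verify by multiplying back" check from the post applies: compute $\textbf{A}\textbf{x}$ and compare it with $\textbf{b}$.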
  • In this problem we can separate the system into four ODEs: \begin{equation} \begin{split} \dfrac{dx}{dt} &= v_x \\ \dfrac{dy}{dt} &= v_y \\ \dfrac{dv_x}{dt} &= 2\omega v_y + \omega^2x - \dfrac{GM_1(x+a_1)}{[(x+a_1)^2+y^2]^{3/2}} - \dfrac{GM_2(x-a_2)}{[(x-a_2)^2+y^2]^{3/2}} \\ \dfrac{dv_y}{dt} &= -2\omega v_x + \omega^2y - \dfrac{GM_1y}{[(x+a_1)^2+y^2]^{3/2}} - \dfrac{GM_2y}{[(x-a_2)^2+y^2]^{3/2}} \end{split} \end{equation}
  • In this problem we try to simulate a soft-disk fluid consisting of small particles that interact with each other through the Lennard-Jones potential, which is repulsive at short distances and attractive at longer distances. \begin{equation} V\left(r\right)=4\epsilon\left[\left({\dfrac{\sigma}{r}}\right)^{12}-\left({\dfrac{\sigma}{r}}\right)^{6}\right] \tag{1} \end{equation} And the force is minus the gradient of the potential \begin{equation} F\left(r\right)= \dfrac{24\epsilon}{r}\left(\dfrac{\sigma}{r}\right)^{6}\left[2\left({\dfrac{\sigma}{r}}\right)^{6}-1\right] \tag{2} \end{equation}
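Equations (1) and (2) in code, as a minimal Python sketch (reduced units, $\epsilon = \sigma = 1$ by default); the force (2) is indeed $-dV/dr$ of (1), which the test below checks numerically:

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential V(r), Eq. (1)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Radial Lennard-Jones force magnitude F(r) = -dV/dr, Eq. (2).
    Positive (repulsive) below r = 2^(1/6) sigma, negative above."""
    s6 = (sigma / r) ** 6
    return 24.0 * epsilon / r * s6 * (2.0 * s6 - 1.0)
```

The force vanishes at the potential minimum $r = 2^{1/6}\sigma$, where $V = -\epsilon$.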
  • Package Installation I wrote a simple package that simulates dusty particles, including the following forces: gravity, electrostatic force, thermophoresis force, and neutral drag force.
  • Electromagnetics Formulation Example Hard Source Soft Source Numerical Boundary Condition
  • Using Newton's second law we can derive the following equation of motion. \begin{equation} \ddot{\theta} + \gamma\dot{\theta} + \dfrac{g}{l}\sin{\theta} = \dfrac{F}{ml}\cos{\omega t} \tag{1} \end{equation} It can also be rewritten in normalized units as \begin{equation} \ddot{\theta} + \dfrac{1}{q}\dot{\theta} + \sin{\theta} = f\cos{\omega_D t} \tag{2} \end{equation}
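Equation (2) splits into the first-order pair $\dot\theta = \omega$, $\dot\omega = -\omega/q - \sin\theta + f\cos(\omega_D t)$, which can be advanced with any ODE integrator; a minimal fourth-order Runge–Kutta step in Python (an illustrative sketch, not necessarily the integrator the post uses):

```python
import math

def pendulum_step(theta, omega, t, dt, q, f, omega_d):
    """One RK4 step of Eq. (2): theta'' + theta'/q + sin(theta)
    = f cos(omega_d t), written as a first-order system."""
    def accel(th, om, tt):
        return -om / q - math.sin(th) + f * math.cos(omega_d * tt)
    k1t, k1o = omega, accel(theta, omega, t)
    k2t, k2o = omega + 0.5 * dt * k1o, accel(theta + 0.5 * dt * k1t,
                                             omega + 0.5 * dt * k1o, t + 0.5 * dt)
    k3t, k3o = omega + 0.5 * dt * k2o, accel(theta + 0.5 * dt * k2t,
                                             omega + 0.5 * dt * k2o, t + 0.5 * dt)
    k4t, k4o = omega + dt * k3o, accel(theta + dt * k3t,
                                       omega + dt * k3o, t + dt)
    theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6.0
    omega += dt * (k1o + 2 * k2o + 2 * k3o + k4o) / 6.0
    return theta, omega
```

As a sanity check, with no drive ($f=0$) and negligible damping (large $q$), a small initial angle oscillates with period close to $2\pi$ in these units.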
  • There are many ways to evaluate the derivative of a function numerically, one of which is the three-point method. Because the method requires three points to evaluate a derivative, the first and last points require a different form of the formula, given below. \begin{equation} f'(x_0) = \dfrac{1}{2h} [-3f(x_0)+4f(x_0+h)-f(x_0+2h)] + \dfrac{h^2}{3}f^{(3)}(\zeta_1) \tag{1} \end{equation}
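A minimal Python sketch of the three variants (forward Eq. (1) for the first point, its mirrored backward form for the last point, and the central difference for interior points); the endpoint formulas are $O(h^2)$ accurate, matching the error term in (1):

```python
def three_point(f, x, h, kind="central"):
    """Three-point numerical derivative of f at x with spacing h.
    'forward' is Eq. (1) (first grid point), 'backward' its mirror
    (last grid point), 'central' the interior formula."""
    if kind == "forward":
        return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)
    if kind == "backward":
        return (3 * f(x) - 4 * f(x - h) + f(x - 2 * h)) / (2 * h)
    return (f(x + h) - f(x - h)) / (2 * h)
```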
  • For data analysis, I read a text file knowing only that it contains 3 columns and an unknown number of lines. I first go through the data and then allocate memory for three arrays to store the columns separately. Next, I need to perform a least squares fit of the form $y=P(x)=a_0+a_1x+a_2x^2$ by minimizing the weighted mean square error given by the following equation. \begin{equation} S=\dfrac{1}{M}\sum_{i=1}^{M}\left(\dfrac{y_i - P(x_i)}{e_i}\right)^2 \tag{1} \end{equation} So, to find the coefficients that minimize $S$, I differentiate with respect to $a_0$, $a_1$ and $a_2$. \begin{equation} \dfrac{\partial S}{\partial a_0}=\dfrac{\partial S}{\partial a_1}=\dfrac{\partial S}{\partial a_2}=0 \tag{2}
\end{equation}
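Setting the three derivatives in (2) to zero gives a $3\times 3$ linear system (the normal equations) for $a_0, a_1, a_2$. A minimal Python sketch that builds and solves it by elimination (illustrative; columns `x`, `y`, `e` stand in for the three arrays read from the file):

```python
def fit_quadratic(x, y, e):
    """Weighted least squares for y = a0 + a1 x + a2 x^2: minimize
    Eq. (1) by solving the 3x3 normal equations M a = v, where
    M[r][c] = sum w x^(r+c), v[r] = sum w y x^r, w = 1/e^2."""
    w = [1.0 / ei ** 2 for ei in e]
    M = [[sum(wi * xi ** (r + c) for wi, xi in zip(w, x)) for c in range(3)]
         for r in range(3)]
    v = [sum(wi * yi * xi ** r for wi, xi, yi in zip(w, x, y))
         for r in range(3)]
    A = [row[:] + [vi] for row, vi in zip(M, v)]   # augmented system
    for k in range(3):                             # Gaussian elimination
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 4):
                A[r][c] -= f * A[k][c]
    a = [0.0] * 3
    for k in (2, 1, 0):                            # back substitution
        a[k] = (A[k][3] - sum(A[k][c] * a[c]
                              for c in range(k + 1, 3))) / A[k][k]
    return a
```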
  • This problem tries to solve the following integral, whose analytic solution is also given, \begin{equation} \int_0^2 e^{2x}\sin{3x}\,dx = \dfrac{1}{13}\left( -3e^4\cos(6) + 2e^4\sin(6) + 3 \right) \tag{1} \end{equation} Trapezoidal method The trapezoidal method divides the interval into $n$ subintervals and averages the left and right Riemann sums. \begin{equation} \int_a^b f(x)dx \approx \sum_{k=1}^{n} \dfrac{f(x_{k-1}) + f(x_k)}{2}h \tag{2} \end{equation}
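Equation (2) in code, as a minimal Python sketch; the interior points appear in both adjacent trapezoids, so each is counted once and the endpoints half:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule, Eq. (2), with n equal subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h
```

Comparing against the analytic value in (1) shows the expected $O(h^2)$ convergence.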
  • The program below uses simple iteration to find a root of a function of the form \begin{equation} f(x) = x - rx\left(1+\dfrac{x}{4}-x^3\right) \tag{1} \end{equation} First, I use the following iterative equation. \begin{equation} x = rx\left(1+\dfrac{x}{4}-x^3\right) \tag{2} \end{equation}
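A minimal Python sketch of the fixed-point iteration (2) (illustrative; the starting guess and tolerance are my choices, not the post's):

```python
def fixed_point(r, x0=0.5, eps=1e-10, max_iter=10000):
    """Iterate Eq. (2), x <- r*x*(1 + x/4 - x^3), until successive
    iterates differ by less than eps.  The result is a root of Eq. (1),
    provided the iteration converges (|g'(x)| < 1 at the fixed point)."""
    x = x0
    for _ in range(max_iter):
        x_new = r * x * (1.0 + x / 4.0 - x ** 3)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")
```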
  • When evaluating a polynomial function (1) in a computer program, the naive way usually results in unnecessary computational complexity. To speed up the process, Horner's scheme is used. \begin{equation} P(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n \tag{1} \end{equation} Rather than evaluating (1) directly, Horner's scheme rewrites (1) as (2), which gives the same result in a more efficient way. The naive way of evaluating a polynomial requires $n$ additions and up to $(n^2+n)/2$ multiplications, while Horner's scheme requires only $n$ additions and $n$ multiplications. \begin{equation} P(x) = a_0 + x\left(a_1 + x\left(a_2 + x\left(a_3+...+x\left(a_{n-1}+xa_n\right)\right)\right)\right) \end{equation}
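The nested form (2) evaluates from the innermost parenthesis outward, one multiply and one add per coefficient; a minimal Python sketch:

```python
def horner(coeffs, x):
    """Evaluate P(x) = a0 + a1 x + ... + an x^n by Horner's scheme,
    Eq. (2), using exactly n multiplications and n additions.
    coeffs = [a0, a1, ..., an]."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result
```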
  • The equation for the eccentric anomaly is given by \begin{equation} \psi - e\sin \psi = \omega t \tag{1} \end{equation} Using simple iteration, which can be written as \begin{equation} \psi_{n+1} = \omega t + e \sin \psi_n \tag{2} \end{equation} I iterate over and over until the difference between $\psi_{n+1}$ and $\psi_n$ is less than $\varepsilon$. In this case I let $\varepsilon=1.D0$.
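A minimal Python sketch of the iteration (2) (the tolerance here is my own choice for illustration); for $e < 1$ the map is a contraction, so the iteration always converges:

```python
import math

def eccentric_anomaly(omega_t, e, eps=1e-10, max_iter=1000):
    """Solve Kepler's equation psi - e*sin(psi) = omega*t by the simple
    iteration of Eq. (2), starting from psi = omega*t."""
    psi = omega_t
    for _ in range(max_iter):
        psi_new = omega_t + e * math.sin(psi)
        if abs(psi_new - psi) < eps:
            return psi_new
        psi = psi_new
    raise RuntimeError("did not converge")
```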
  • The Fibonacci numbers are given by \begin{equation} F_n = F_{n-1} + F_{n-2}\tag{1} \end{equation} where \begin{equation} F_1 = F_2 = 1\tag{2} \end{equation} So the first few numbers are $1, 1, 2, 3, 5, 8, 13, \dots$
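The recurrence (1) with seed (2), as a minimal Python sketch:

```python
def fibonacci(n):
    """Return [F_1, ..., F_n] from F_n = F_{n-1} + F_{n-2},
    with F_1 = F_2 = 1."""
    fib = [1, 1]
    while len(fib) < n:
        fib.append(fib[-1] + fib[-2])
    return fib[:n]
```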
  • Roots for quadratic equation In this problem we try to solve a quadratic equation of the form \begin{equation} ax^2 + bx + c = 0. \tag{1} \end{equation} We can complete the square in (1) to obtain \begin{equation} \left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}. \tag{2} \end{equation}
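Taking the square root of (2) gives the quadratic formula. A minimal Python sketch for the real-root case; note it computes the larger-magnitude root first and the other as $c/(a x_1)$ to avoid cancellation, a common numerical refinement that goes beyond what (2) alone states:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a x^2 + b x + c = 0.  The larger-magnitude root is
    taken from the quadratic formula with the copysign trick; the other
    follows from the product of roots x1*x2 = c/a, avoiding cancellation."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex roots")
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    x1 = q / a
    x2 = c / q if q != 0 else 0.0
    return x1, x2
```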