# Possible existence of a glassy interface

###### tags: `PhD projects`

## General idea

*Found in /Hera/glass/CofM analysis*

We know that glassy systems show heterogeneous dynamical behaviour: over time, patches of fast- and slow-moving particles arise.

<img src="https://i.imgur.com/RcBGAO8.jpg" alt="drawing" width="400" />

One of the questions is whether the regions of fast- and slow-moving particles are two ends of a continuum, or whether we are truly looking at two different phases. One possible way to answer this question is to look for a boundary layer, since the existence of a boundary layer would mean that there are two phases. The difficulty in finding this (possibly existing) boundary layer is that we are dealing with a highly disordered system; i.e. we are looking for a disordered boundary layer that lies in a disordered environment.

In the following sections we discuss various approaches to showing whether or not there is a boundary layer. To make a good analysis we use the Monte Carlo (MC) coordinates instead of the initial coordinates, since the MC coordinates have the thermal fluctuations removed.

### Cage frustration

:warning: *Some of the pictures in this analysis were made with a slightly different cutoff. Since I thought the exponential weight would decay quickly enough, I took into account all particles in the box. However, as it turns out, the particles far away still have an influence. As a result, particles at distances larger than half the box size still contribute, which is something you do not want. Although this does not affect the overall trend of the results, it needs to be fixed. I will do that in the future.* :warning:

First we consider a method that looks at the cage frustration of each particle (we discuss below how we measure this cage frustration) and show that the particles with the largest cage frustration lie at the boundary between fast- and slow-moving particles. For each particle we measure the weighted centre of mass using the following formula:

$$
\mathbf{r}^{\text{CofM}}_i =\sum_{i\neq j} \mathbf{r}^\text{MC}_{ij}\cdot\left(\frac{\sigma_j}{2}\right)^3\exp\!\left[-(r^\text{MC}_{ij}-\sigma_i/2)\right]
$$

Afterwards we look at the vector between the weighted CofM and the average cage position. The length of this vector is what we take as a measure for the 'frustration' of the cage (a minimal code sketch is given at the end of this section). We normalize the lengths of these vectors and then only consider the particles whose normalized length exceeds 0.5. Note that this threshold is rather arbitrary; it is just a way to select the particles with the largest frustration.

*:warning: This is one of the figures with a wrong cutoff :warning:*

<img src="https://i.imgur.com/Swbb17k.png" alt="drawing" width="350" /><img src="https://i.imgur.com/IwPRxls.png" alt="drawing" width="350" />

We want to compare how these selected particles relate to the fast- and slow-moving particles using the contour surface (only particles with normalized lengths larger than 0.65 are shown).

<img src="https://i.imgur.com/9Btla17.png" alt="drawing" width="230" /><img src="https://i.imgur.com/hDRX0P1.png" alt="drawing" width="230" /><img src="https://i.imgur.com/hwuOGwe.png" alt="drawing" width="240" />

As we can see, the frustrated particles clearly lie between the fast- and slow-moving regions. Moreover, the vector between the CofM and the MC coordinates points from the fast- to the slow-moving areas (perpendicular to the surface).

*:warning: This is one of the figures with a wrong cutoff :warning:*

<img src="https://i.imgur.com/iayLAv6.png" alt="drawing" width="230" />
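For reference, the listing below is a minimal C sketch of the weighted-CofM and frustration computation described above; it is not the code in /Hera/glass/CofM analysis. The array names (`rMC`, `sigma`, `cage`), the explicit cutoff argument and the cubic-box minimum-image convention are assumptions.

```c
/* Minimal sketch (not the production code on Hera): the weighted CofM vector of
 * each particle, following the formula in the section above.  MC coordinates
 * rMC[], diameters sigma[] and the average cage position cage[] (relative to
 * each particle) are assumed to be loaded already; names and layout are
 * hypothetical. */
#include <math.h>

#define DIM 3

/* Minimum-image convention for a cubic box of side L. */
static double min_image(double d, double L)
{
    while (d >  0.5 * L) d -= L;
    while (d < -0.5 * L) d += L;
    return d;
}

/* Cage frustration of particle i: the length of the vector between the
 * weighted CofM and the average cage position (both relative to particle i). */
double cage_frustration(int i, int N, const double (*rMC)[DIM],
                        const double *sigma, const double (*cage)[DIM],
                        double L, double rcut)
{
    double cofm[DIM] = {0.0, 0.0, 0.0};

    for (int j = 0; j < N; j++) {
        if (j == i) continue;

        double rij[DIM], r2 = 0.0;
        for (int d = 0; d < DIM; d++) {
            rij[d] = min_image(rMC[j][d] - rMC[i][d], L);
            r2 += rij[d] * rij[d];
        }
        double r = sqrt(r2);
        if (r > rcut) continue;   /* cutoff (at most L/2, see the warning above) */

        /* weight: (sigma_j/2)^3 * exp[-(r_ij - sigma_i/2)] */
        double w = pow(0.5 * sigma[j], 3) * exp(-(r - 0.5 * sigma[i]));
        for (int d = 0; d < DIM; d++)
            cofm[d] += rij[d] * w;
    }

    /* Frustration = distance between the weighted CofM and the cage position. */
    double f2 = 0.0;
    for (int d = 0; d < DIM; d++) {
        double diff = cofm[d] - cage[i][d];
        f2 += diff * diff;
    }
    return sqrt(f2);
}
```

The resulting lengths can then be normalized and thresholded at 0.5 (or 0.65 for the contour-surface comparison) as described above.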
## Quantitative analysis: BOPs of density

*Found on Hera in the folder glass/BOPSofBOPS/, we use the code BOPS_of_density_weighted.c*

### Weighted density analysis

*For this analysis we use the weighted density analysis that can also be found in the cage-improvement paper; we include the discussion here as well. Below we show the correlation between this weighted density (computed from the initial, quench and MC coordinates, respectively) and the propensity for the A particles in a HS system, where the density is computed within a radius of 4.5 $\sigma_A$.*

<img src="https://i.imgur.com/AwJzbWi.png" alt="drawing" width="500"/>

*As we can see, the correlation is very high, especially for times close to $\tau_\alpha$. Since the absolute distance between the initial and the quench/MC coordinates is a very good predictor of the propensity in the caging regime, this means that we can now make a fairly decent prediction of the propensity in both the caging and the diffusive regime with only two sets of coordinates (instead of the roughly $2000$ normally used).*

*Note that this correlation is only very high when we take into account particles far away. If we use different numbers of shells to compute the density, we see that the correlation increases as we include more shells.*

<img src="https://i.imgur.com/rSIPaHV.png" alt="drawing" width="500"/>

*Moreover, the high correlation around $\tau_\alpha$ also tells us that the mobility of the particles is mainly density driven. However, to compute this density one needs to get rid of the thermal noise (which is what the quench and MC methods do).*

*We also tried including higher-order generations of the density. This mainly increases the correlation at times between the caging and the diffusive regime, but the increase is not very large.*

<img src="https://i.imgur.com/QjV0tNg.png" alt="drawing" width="500"/>

*Finally, we compute the correlation between the propensity and both the distance between the initial snapshot and the MC coordinates, and the MC density.*

<img src="https://i.imgur.com/2TXB5vv.png" alt="drawing" width="500"/>

#### Old BOPs

Normally one evaluates the BOPs by summing over all particles within a certain radius:
\begin{equation}
q_i^{(0)}(l, m, r, \delta) = \frac{1}{Z} \sum_{i\neq j} e^{-\frac{(r_{ij} -r)^2}{2\delta^2}}Y^m_l(\mathbf{r}_{ij}),
\end{equation}
where
\begin{equation}
Z = \sum_{i\neq j} e^{-\frac{(r_{ij} -r)^2}{2\delta^2}}.
\end{equation}
Afterwards we average over all $m$ to get the actual BOPs:
\begin{equation}
q_i^{(0)}(l, r, \delta) = \sqrt{\frac{4\pi}{2l+1}\sum_{m =-l}^{m =l}|q_i^{(0)}(l, m, r, \delta)|^2}.
\end{equation}

#### New parameters

The new BOPs that we suggest differ from the original BOPs in several respects. First of all, to evaluate the new BOPs we do not sum over the particles themselves, but over the density associated with these particles. This density is normalized using
$$
\rho_i^{w,\text{norm}} = \frac{\rho_i^{w}- \bar{\rho}^{w}}{\mathrm{SD}_{\rho^{w}}}.
$$
Moreover, for the new BOPs we only consider the $l=1$ BOP, as this BOP picks up on gradient-like changes. Finally, instead of looking at various shells, we again weight the contributions with an exponentially decaying function. The new parameter is thus defined as
\begin{equation}
q_i^\rho(m)= \frac{1}{Z} \sum_{i\neq j} e^{-(r_{ij} -\sigma_i/2)}\rho_j^{w,\text{norm}}\cdot Y^m_1(\mathbf{r}_{ij}),
\end{equation}
where
\begin{equation}
Z = \sum_{i\neq j} e^{-(r_{ij} -\sigma_i/2)}.
\end{equation}
Finally, we again average over $m$ to obtain the final parameter
\begin{equation}
q_i^\rho = \sqrt{\frac{4\pi}{3}\sum_{m =-1}^{m =1}|q_i^\rho(m)|^2}.
\end{equation}
Note that this $q_i^\rho$ parameter measures the planar gradient in the density. If there is a boundary, we expect the structure on the two sides of a boundary particle to be different, which means that we expect high values of this BOPs-of-BOPs parameter for surface particles.
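To make the definition concrete, the listing below is a minimal C sketch of $q_i^\rho$; it is not BOPS_of_density_weighted.c itself. It assumes the standardized densities `rho_norm[]` have already been computed, reuses the hypothetical `min_image()` helper from the earlier sketch, and hard-codes the $l=1$ spherical harmonics so no general recursion is needed.

```c
/* Minimal sketch of the l = 1, density-weighted bond-order parameter q_i^rho
 * defined above (not BOPS_of_density_weighted.c).  The arrays rMC[], sigma[]
 * and rho_norm[] (standardized weighted densities) are hypothetical. */
#include <complex.h>
#include <math.h>

#define DIM 3

extern double min_image(double d, double L);   /* helper from the earlier sketch */

/* Y_1^m for m = -1, 0, +1, evaluated for the direction (x,y,z)/r. */
static double complex Y1(int m, double x, double y, double z, double r)
{
    switch (m) {
    case -1: return  sqrt(3.0 / (8.0 * M_PI)) * (x - I * y) / r;
    case  0: return  sqrt(3.0 / (4.0 * M_PI)) * z / r;
    default: return -sqrt(3.0 / (8.0 * M_PI)) * (x + I * y) / r;   /* m = +1 */
    }
}

/* q_i^rho: exponentially weighted, density-weighted l = 1 BOP of particle i. */
double q_rho(int i, int N, const double (*rMC)[DIM], const double *sigma,
             const double *rho_norm, double L, double rcut)
{
    double complex q[3] = {0.0, 0.0, 0.0};   /* m = -1, 0, +1 */
    double Z = 0.0;

    for (int j = 0; j < N; j++) {
        if (j == i) continue;

        double dx = min_image(rMC[j][0] - rMC[i][0], L);
        double dy = min_image(rMC[j][1] - rMC[i][1], L);
        double dz = min_image(rMC[j][2] - rMC[i][2], L);
        double r  = sqrt(dx * dx + dy * dy + dz * dz);
        if (r > rcut) continue;                 /* keep the cutoff <= L/2 */

        double w = exp(-(r - 0.5 * sigma[i]));  /* exponential weight */
        Z += w;
        for (int m = -1; m <= 1; m++)
            q[m + 1] += w * rho_norm[j] * Y1(m, dx, dy, dz, r);
    }

    /* Average over m with the 4*pi/(2l+1) normalization (l = 1). */
    double sum = 0.0;
    for (int m = 0; m < 3; m++) {
        double complex qm = q[m] / Z;
        sum += creal(qm) * creal(qm) + cimag(qm) * cimag(qm);
    }
    return sqrt(4.0 * M_PI / 3.0 * sum);
}
```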
We compute the gradient for each particle using two different cutoffs. The first considers all particles within a distance of half the box length; the second considers only particles within two shells of nearest neighbours (i.e. within $2.1 \sigma_A$).

### Within half the box length

We standardize the gradient (i.e. subtract the mean and divide by the SD) and say that a particle is part of the interface if its standardized gradient lies above $1$, i.e. more than one SD above the mean (a minimal sketch of this selection is given at the end of this subsection). If we then plot the particles with the highest gradient, we see that these particles follow the surface very nicely.

<img src="https://i.imgur.com/w7c3JUG.png" alt="drawing" width="250"/><img src="https://i.imgur.com/aUahs1K.png" alt="drawing" width="250"/><img src="https://i.imgur.com/W54xeWM.png" alt="drawing" width="250"/>

From this we can seemingly conclude that the particles with the highest gradient are positioned in between the fast- and slow-moving regions, again strongly suggesting that there is a boundary.

#### Also plot high- and low-density particles

We divide the particles into two lists, where one list contains the particles with the highest density and the other the particles with the lowest density. Afterwards we remove the interface particles from these lists and plot the high-density (red) and low-density (blue) particles together with the interface particles. Then we do the same analysis, but instead of dividing particles based on density, we divide them based on propensity. Below we show the results (left: density analysis, right: propensity analysis). Note that the interface is in both cases based on the density analysis.

<img src="https://i.imgur.com/jVDZPc8.png" alt="drawing" width="300"/> <img src="https://i.imgur.com/oq01ho7.png" alt="drawing" width="300"/>

#### Density and propensity histogram

Below we show the histograms of the weighted densities (normalized between 0 and 1) and of the propensities at the relaxation time for the different particle groups defined above.

<img src="https://i.imgur.com/9JjBzZ7.png" alt="drawing" width="370"/><img src="https://i.imgur.com/CS1YUfn.png" alt="drawing" width="340"/>
<img src="https://i.imgur.com/J023aL7.png" alt="drawing" width="370"/><img src="https://i.imgur.com/EybdNRM.png" alt="drawing" width="340"/>

Finally we consider the correlation between the propensity and the different particle groups.

<img src="https://i.imgur.com/DncJycl.png" alt="drawing" width="340"/>

As we can see, the interface particles show a significantly lower correlation with the propensity than the other particle groups.
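The listing below is a minimal C sketch of the selection just described: standardize the gradient, flag particles more than one SD above the mean as interface particles, and split the remaining particles into a high- and a low-density group. The median split and the array names are assumptions; the text above does not specify exactly how the remaining particles are divided.

```c
/* Minimal sketch of the interface selection described above (not the analysis
 * code on Hera): standardize the density gradient q_i^rho, flag particles more
 * than one SD above the mean as interface particles, and split the rest into a
 * high- and a low-density group at their median density (assumed split). */
#include <stdlib.h>
#include <math.h>

enum group { LOW_DENSITY = 0, HIGH_DENSITY = 1, INTERFACE = 2 };

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* grad[i]: density gradient q_i^rho; rho[i]: weighted density; group[i]: output. */
void classify_particles(int N, const double *grad, const double *rho, int *group)
{
    /* Mean and standard deviation of the gradient. */
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < N; i++) mean += grad[i];
    mean /= N;
    for (int i = 0; i < N; i++) var += (grad[i] - mean) * (grad[i] - mean);
    double sd = sqrt(var / N);

    /* Interface particles: standardized gradient above 1 (one SD above the mean). */
    int n_rest = 0;
    double *rest_rho = malloc(N * sizeof *rest_rho);
    for (int i = 0; i < N; i++) {
        if ((grad[i] - mean) / sd > 1.0) {
            group[i] = INTERFACE;
        } else {
            group[i] = LOW_DENSITY;          /* provisional assignment */
            rest_rho[n_rest++] = rho[i];
        }
    }

    /* Split the remaining particles at their median density. */
    qsort(rest_rho, n_rest, sizeof *rest_rho, cmp_double);
    double median = rest_rho[n_rest / 2];
    for (int i = 0; i < N; i++)
        if (group[i] != INTERFACE && rho[i] >= median)
            group[i] = HIGH_DENSITY;

    free(rest_rho);
}
```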
### Within two shells ($2.1 \sigma_A$)

We again look at the interface particles, but now the gradient is calculated using only two shells of particles.

<img src="https://i.imgur.com/g1z0Vdr.png" alt="drawing" width="250"/><img src="https://i.imgur.com/0OHWeUf.png" alt="drawing" width="250"/><img src="https://i.imgur.com/snc9RP8.png" alt="drawing" width="250"/>

As we can see, these particles follow the boundary much less closely.

#### Also plot high- and low-density particles

As before, we divide the remaining particles into a high-density and a low-density group (or, alternatively, divide them based on propensity) and plot them together with the interface particles. Below we show the results (left: density analysis, right: propensity analysis). Note that the interface is in both cases based on the density analysis.

<img src="https://i.imgur.com/dQ1t6v2.png" alt="drawing" width="300"/> <img src="https://i.imgur.com/Kam1GFe.png" alt="drawing" width="300"/>

#### Density and propensity histogram

Below we show the histograms of the weighted densities (normalized between 0 and 1) and of the propensities at the relaxation time for the different particle groups defined above.

<img src="https://i.imgur.com/MGwSJto.png" alt="drawing" width="370"/><img src="https://i.imgur.com/V1FXUfO.png" alt="drawing" width="340"/>
<img src="https://i.imgur.com/JQGwnYY.png" alt="drawing" width="370"/><img src="https://i.imgur.com/8EijEaQ.png" alt="drawing" width="340"/>

### Initial coordinates

As a point of interest: we also tried this interface analysis using the initial coordinates. This turns out to give more or less the same results, but the correlation between the interface particles and the fast- and slow-moving areas is much weaker.

Finally we consider the correlation between the propensity and the different particle groups.

<img src="https://i.imgur.com/RUnLlMl.png" alt="drawing" width="340"/>

As we can see, the interface particles show a significantly lower correlation with the propensity than the other particle groups.

## Considering larger systems

*Found on HERA in the folder glass_largersystem/*

Since the systems that we have looked at so far are relatively small ($N=2000$), it is hard to get a clear picture of what the interface looks like. Moreover, we would like to see how the interface moves over time. To do so, we perform a large simulation with 100,000 particles. Instead of computing the propensity (which would cost a lot of computational power), we run a single simulation starting from one snapshot and then use the BOPs to predict what the surface will look like over time.

### System parameters

We consider a HS system at $\eta= 0.58$, meaning that $\tau_\alpha \approx 10^4$. In total we want to measure for $100\,\tau_\alpha$, writing out coordinates every $0.1\,\tau_\alpha$.

### MC simulation

*Found on HERA in the folder glass/Analysis/code_Monte_Carlo/MC_less_cycles/HS*

In order to predict where the interface will be for these large snapshots, we of course have to relax them using the MC method. To keep the simulations as short as possible, we want to know how long we need to run the system to get sufficient accuracy. To do so, we perform simulations for different numbers of initialization cycles ($I$), measurement cycles ($M$) and write intervals ($\Delta W$). To have a good reference, we perform the run with $I = 5\cdot 10^5$, $M= 10^6$, $\Delta W= 10^2$ twice and check how much the two runs differ. Afterwards we compare this difference to the difference between the runs with fewer cycles and the reference run with $I = 5\cdot 10^5$, $M= 10^6$, $\Delta W= 10^2$.
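As a reference for the table below, the listing is a minimal C sketch of this comparison; the names are hypothetical, `min_image()` is the helper from the first sketch, and taking $\Delta r/\Delta r_{MC}$ as a ratio of the two averages (rather than averaging per-particle ratios) is an assumption.

```c
/* Minimal sketch of the run-to-run comparison described above: the average
 * displacement <dr> between the same particles in two MC-relaxed snapshots,
 * and the relative difference <dr>/<dr_MC>, where <dr_MC> is the average
 * displacement between the initial snapshot and the reference MC run. */
#include <math.h>

#define DIM 3

extern double min_image(double d, double L);   /* helper from the first sketch */

/* Average distance between corresponding particles in snapshots a and b. */
static double avg_displacement(int N, const double (*a)[DIM],
                               const double (*b)[DIM], double L)
{
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        double r2 = 0.0;
        for (int d = 0; d < DIM; d++) {
            double diff = min_image(b[i][d] - a[i][d], L);
            r2 += diff * diff;
        }
        sum += sqrt(r2);
    }
    return sum / N;
}

/* dr      : <dr> between the short run and the reference run (first table row)
 * dr_ratio: <dr>/<dr_MC>, the relative difference (second table row)          */
void compare_runs(int N, const double (*initial)[DIM], const double (*mc_short)[DIM],
                  const double (*mc_ref)[DIM], double L,
                  double *dr, double *dr_ratio)
{
    *dr       = avg_displacement(N, mc_short, mc_ref, L);
    *dr_ratio = *dr / avg_displacement(N, initial, mc_ref, L);
}
```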
Below we show the results. The first row gives the average distance $\Delta r$ between the same particle in the run with fewer cycles and in the reference run. For the second row we first compute the average distance between the initial snapshot and the MC coordinates of the reference run, $\Delta r_{MC}$, and then compute the relative difference $\Delta r/\Delta r_{MC}$.

| | $I = 10^5$, $M= 10^5$, $\Delta W= 10^1$ | $I = 10^5$, $M= 10^5$, $\Delta W= 10^2$ | $I = 10^5$, $M= 5\cdot10^5$, $\Delta W= 5\cdot10^1$ | $I = 5\cdot 10^5$, $M= 10^6$, $\Delta W= 10^2$ |
| ---- | --- | --- | --- | --- |
| Average difference per particle $\Delta r/\sigma_A$ | 0.023469 | 0.0295396 | 0.0130657 | 0.0104697 |
| Relative difference $\Delta r/\Delta r_{MC}$ | 0.230696 | 0.290368 | 0.128433 | 0.102915 |

N.B. The results in the column for $I = 5\cdot 10^5$, $M= 10^6$, $\Delta W= 10^2$ are obtained by performing two independent runs with these settings and comparing them to each other.

From this we see that we can probably get away with $6\cdot10^5$ cycles instead of $1.5\cdot10^6$, which already speeds up the simulation by a factor of $2.5$. The simulation with $I = 10^5$, $M= 5\cdot10^5$, $\Delta W= 5\cdot10^1$ takes about $30$ min for $2000$ particles, which means that for $10^5$ particles it should take about $0.5\cdot50= 25 \text{ h} \approx 1 \text{ day}$.

#### CHECKS

*See the folder on HERA glass_largersystem/Structural_MC_relaxation/code_MC/CHECKS/*

We use a new code that only checks the periodic boundary conditions when particles are close to the edge of the box. The average difference between the MC coordinates of the old and the new code is $\approx0.01\sigma_A$ for $I = r\cdot 10^5$, $M= 10^6$, $\Delta W= 5\cdot10^1$. Since this is in the same ballpark as the error we found above, we conclude that the code works.

#### System parameters

The particles are restricted to a radius of $1.25$ times their own radius. We chose this number because we saw in the other project (*improving caging*) that for this radius the MC coordinates agree best with the MC coordinates found with the Voronoi restriction. As mentioned above, we use $I = r\cdot 10^5$, $M= 10^6$, $\Delta W= 5\cdot10^1$.

### Density code and gradient-of-density code

*Found on HERA in the folder glass_largersystem/code_density*

Just as for the MC code, we slightly adjust the existing density code and the BOPs-of-density code to be able to handle more particles, making use of, among other things, cell lists (a sketch of the cell-list idea is given at the end of this section).

#### CHECKS

*Found on HERA in the folder glass_largersystem/code_density/CHECKS*

We checked this code by running it on the system of 2000 particles with the maximum number of cells to check set such that all particles were included (since the original code included all particle pairs in the calculation of the density). The files *densityMCtest.dat* and *density_MC_control.dat* show that the two codes yield the same results, so we conclude that the new code works. We also checked the BOPs-of-density code (*BOPS_of_density_weighted.c*) by checking whether the new code (see the file *density_gradient.dat*) gives the same gradient as the old code (*density_gradient_control.dat*; note that this file might be named differently in the original HS folder). Both the density code and the BOPs-of-density code were checked on the *snap_MC.sph* file from the first dataset and first run of the HS system.
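For completeness, the listing below is a minimal C sketch of the cell-list idea used to make the density and gradient codes scale to $10^5$ particles; it is not the code in glass_largersystem/code_density. The linked-list layout and all names are textbook-style assumptions, positions are assumed to be wrapped into $[0, L)$, and at least three cells per dimension are assumed.

```c
/* Minimal sketch of a cell list for neighbour searches within a cutoff rc in a
 * periodic cubic box of side L (not the code in glass_largersystem/code_density).
 * With the cell list, each neighbour search visits only the 27 surrounding
 * cells instead of all N particles. */
#include <stdlib.h>
#include <math.h>

#define DIM 3

extern double min_image(double d, double L);   /* helper from the first sketch */

typedef struct {
    int nc;        /* number of cells per dimension (assumed >= 3)   */
    double cell;   /* cell size (>= rc)                              */
    int *head;     /* head[c]: first particle in cell c, or -1       */
    int *next;     /* next[i]: next particle in i's cell, or -1      */
} CellList;

/* Build the cell list; positions are assumed to be wrapped into [0, L). */
CellList *cell_list_build(int N, const double (*r)[DIM], double L, double rc)
{
    CellList *cl = malloc(sizeof *cl);
    cl->nc   = (int)(L / rc);                  /* each cell at least rc wide */
    cl->cell = L / cl->nc;
    cl->head = malloc(cl->nc * cl->nc * cl->nc * sizeof *cl->head);
    cl->next = malloc(N * sizeof *cl->next);

    for (int c = 0; c < cl->nc * cl->nc * cl->nc; c++) cl->head[c] = -1;
    for (int i = 0; i < N; i++) {              /* push particle i onto its cell */
        int ix = (int)(r[i][0] / cl->cell) % cl->nc;
        int iy = (int)(r[i][1] / cl->cell) % cl->nc;
        int iz = (int)(r[i][2] / cl->cell) % cl->nc;
        int c  = (ix * cl->nc + iy) * cl->nc + iz;
        cl->next[i] = cl->head[c];
        cl->head[c] = i;
    }
    return cl;
}

/* Count neighbours of particle i within rc by visiting only the 27 cells
 * around i's cell (e.g. to compute the local density). */
int count_neighbours(const CellList *cl, int i, const double (*r)[DIM],
                     double L, double rc)
{
    int ix = (int)(r[i][0] / cl->cell) % cl->nc;
    int iy = (int)(r[i][1] / cl->cell) % cl->nc;
    int iz = (int)(r[i][2] / cl->cell) % cl->nc;
    int count = 0;

    for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++)
    for (int dz = -1; dz <= 1; dz++) {
        int cx = (ix + dx + cl->nc) % cl->nc;
        int cy = (iy + dy + cl->nc) % cl->nc;
        int cz = (iz + dz + cl->nc) % cl->nc;
        for (int j = cl->head[(cx * cl->nc + cy) * cl->nc + cz]; j != -1; j = cl->next[j]) {
            if (j == i) continue;
            double r2 = 0.0;
            for (int d = 0; d < DIM; d++) {
                double diff = min_image(r[j][d] - r[i][d], L);
                r2 += diff * diff;
            }
            if (r2 < rc * rc) count++;
        }
    }
    return count;
}
```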
### Results

We compute the density of the system for both a cutoff distance of $2.1\sigma_A$ and one of $5.3\sigma_A$ (2 and 5 shells of particles, respectively). The gradient of the density is again calculated based on either 2 or 5 particle shells (i.e. again a cutoff of $2.1 \sigma_A$ or $5.3 \sigma_A$). To find the particles that are part of the interface, we calculate the average density gradient and the corresponding standard deviation; all particles whose gradient is more than $1$ SD above the average gradient are said to be part of the interface. The remaining particles are divided into two groups based on their density (N.B. *density*, not *density gradient*), i.e. we have a group with the highest densities and a group with the lowest densities. Note that this analysis is performed separately for the *A* and *B* particles.

The particles are then coloured according to the density found with a cutoff of $5.3 \sigma_A$ (even for the systems where the density gradient is based on a cutoff of $2.1\sigma_A$): the blacker the colour, the higher the density; the redder the particle, the lower the density.

:question: *Why do the high- and low-density particles lie so close to each other? This is not something you would expect, since the density is based on a large volume.*

| | Density cutoff $2.1\sigma_A$ | Density cutoff $5.3\sigma_A$ |
| ----------------------------- | ---------------------------- | ---------------------------- |
| Gradient cutoff $2.1 \sigma_A$ | ![](https://i.imgur.com/FErUH2X.png) | ![](https://i.imgur.com/QkaktZL.png) |
| Gradient cutoff $5.3 \sigma_A$ | ![](https://i.imgur.com/JrAdvYz.png) | ![](https://i.imgur.com/IYaQtL1.png) |

| | Density cutoff $2.1\sigma_A$ | Density cutoff $5.3\sigma_A$ |
| ----------------------------- | ---------------------------- | ---------------------------- |
| Gradient cutoff $2.1 \sigma_A$ | ![](https://i.imgur.com/jNAjtzm.png) | ![](https://i.imgur.com/SUKGtSK.png) |
| Gradient cutoff $5.3 \sigma_A$ | ![](https://i.imgur.com/mCuaAKG.png) | ![](https://i.imgur.com/k7E6BQ9.png) |

## To do

* Look at the Voronoi volume

## To discuss

* How many cycles for the MC? See the proposal above.