# IDM Notes
## Numerical Implementation
accel.c:
After computing the hydro forces for the current timestep, enter the "additional forces" block
gravity/ags_hsml.c:
## Tests
https://docs.google.com/presentation/d/1TM4ulLqvsmLYz3vxf4-07hHuLJucXsSxTc9HiV1rbJo/edit?usp=sharing
### Momentum/energy conservation
Write out the momentum/energy equations as implemented in the code
1D/2D/3D - Done
Mass diff - Done
Vel diff - Done
Vel disp - Done
### Check the no-interaction limit against AGORA
### Isothermal sphere
Notebook from Ethan
Order-of-magnitude core size
Need analytic prescription
Isothermal Jeans equation
Solve for expected core size (see the order-of-magnitude sketch at the end of this section)
Jeans equation -> radius within which particles are expected to scatter
Density profile -> Ethan's notebook (may need changes to account for differences between the baryons and the DM)
Gravothermal fluid
Coupling between DM and baryons
Solve the differential equations
How semianalytic models are calibrated
Given initial density profile what is final density profile
Products: Density/VDisp profiles
Make sure we get reasonable core size
Spherically symmetric gas sphere:
NFW halo with Hernquist sphere
Make masses equivalent to AGORA
./spheric -halo -Nhalo 100000 -Mhalo 1254 -a 1 -b 3 -c 1 -rs 15 -rcutoff 150 -hernquist -Mstar 8.59 -Nstar 100000 -rhern 0.005 -name isotherm -ogb && mv isotherm-gadget.bin isotherm_test
./spheric -halo -Nhalo 100000 -Mhalo 1254 -a 0.25 -b 3 -c 1 -rs 15 -rcutoff 150 -hernquist -Mstar 8.59 -Nstar 100000 -rhern 0.25 -name isotherm -ogb && mv isotherm-gadget.bin isotherm_test
./spheric -halo -Nhalo 100000 -Mhalo 1254 -a 1 -b 3 -c 1 -rs 15 -rcutoff 150 -plummer -Mstar 8.59 -Nstar 100000 -rp 0.5 -name isotherm -ogb && mv isotherm-gadget.bin isotherm100000_10
./spheric -halo -Nhalo 1000000 -Mhalo 125.44361 -a 10 -b 2.2 -c 1.45 -rs 4 -rcutoff 110 -name nfw -ogb && mv nfw-gadget.bin nfw
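A rough order-of-magnitude sketch of the expected core scale for this test: find the radius r1 inside which a typical DM particle scatters at least once over the halo age, i.e. Γ(r1)·t_age ≈ 1 with Γ = ρ(σ/m)v. The NFW parameters, cross section, and halo age below are illustrative assumptions, not values taken from the runs above; a proper treatment would solve the isothermal Jeans equation for the velocity dispersion profile.
```python
import numpy as np
from scipy.optimize import brentq

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def nfw_density(r, rho_s, r_s):
    """NFW density profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_rho_s(M200, c, r_s):
    """Characteristic density from the mass enclosed within r200 = c * r_s."""
    return M200 / (4.0 * np.pi * r_s ** 3 * (np.log(1.0 + c) - c / (1.0 + c)))

# Illustrative halo (assumed): ~1.25e12 Msun, r_s = 15 kpc, concentration c = 10
M200, r_s, c = 1.25e12, 15.0, 10.0
rho_s = nfw_rho_s(M200, c, r_s)

# Crude velocity scale: circular velocity at r_s (stand-in for the Jeans solution)
M_rs = 4.0 * np.pi * rho_s * r_s ** 3 * (np.log(2.0) - 0.5)
v_rms = np.sqrt(G * M_rs / r_s)  # km/s

sigma_m = 1.0   # cm^2/g, assumed cross section per unit mass
t_age = 10.0    # Gyr, assumed halo age

# Unit conversions: Msun/kpc^3 -> g/cm^3, km/s -> cm/s, Gyr -> s
MSUN_G, KPC_CM, GYR_S = 1.989e33, 3.086e21, 3.156e16

def scatters_per_particle(r):
    """Expected number of scatters per particle at radius r over t_age."""
    rho_cgs = nfw_density(r, rho_s, r_s) * MSUN_G / KPC_CM ** 3
    return rho_cgs * sigma_m * (v_rms * 1e5) * (t_age * GYR_S)

# r1: radius where the expected number of scatters per particle drops to ~1
r1 = brentq(lambda r: scatters_per_particle(r) - 1.0, 1e-3, 10.0 * r_s)
print(f"expected core-scale radius r1 ~ {r1:.1f} kpc")
```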
### Proper scattering
Need scatter tracking
Scattering rate profile (scatters per radial bin per unit time), as plotted in Vogelsberger et al. 2012; see the sketch below
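Once scatter tracking exists, a minimal sketch of how the rate profile could be built, assuming each logged event provides the radius at which it occurred (hypothetical `scatter_radii` array) and the total elapsed time:
```python
import numpy as np

def scattering_rate_profile(scatter_radii, t_total, r_edges):
    """Scatters per unit volume per unit time in spherical radial shells."""
    counts, _ = np.histogram(scatter_radii, bins=r_edges)
    shell_vol = 4.0 / 3.0 * np.pi * (r_edges[1:] ** 3 - r_edges[:-1] ** 3)
    r_mid = 0.5 * (r_edges[1:] + r_edges[:-1])
    return r_mid, counts / (shell_vol * t_total)

# Example with fake data
r_edges = np.logspace(-1, 2, 30)                      # kpc
scatter_radii = np.abs(np.random.randn(10000)) * 10.0
r_mid, rate = scattering_rate_profile(scatter_radii, t_total=1.0, r_edges=r_edges)
```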
### Softening & other parameters
Looping over DM/Bary
Read up on adaptive softening
Why might we expect any issues here?
Different softenings for each species
Search length = 3x softening of root particle
What does `SEARCHBOTHWAYS` do (check whether the particles are within each other's softening lengths?)
If `SEARCHBOTHWAYS`, the max of the softenings is used as the search radius - comparing 3x to x? Why not take the min?
Iterate through particles; keep those within the search radius, discard the rest?
### Mass
Increase resolution, look at velocity/density profiles
DM to scattering target microparticle mass ratio
### Timestep
Decrease the max timestep until the code complains
Note which warning is thrown
### Symmetry (Rocha tests)
Sphere-in-stationary - Done, but get the analytic result; redo with SPH
Angular deflection:
Phi - Done
Theta - Done but redo
Post-scatter magnitude - Done but redo
Phase-space distributions - Done
### Thermalization (Fig 2 Fischer)
Re-run with a different approach
? Interplay between timestep warning and softening length
### Bullet cluster
Down the line but keep in mind
### Hybrid
See if hybrid accurately predicts
## Halo
### Core-size
#### Estimate
#### Actual
## LSS
---
# GIZMO Notes:
General overview:
1. Initialize
2. Domain Decomposition
3. Particle Map
4. Force Tree/PM/MFM/MFV
5. Calculate Forces (Gravity, Hydro)
6. Additional Physics Modules (Cooling, Chemistry, SFR, Feedback, etc.)
7. Timestep
8. Update Particles
9. Split/Merge
10. Output
### Tree+Gravity:
1. Treebuild: Tree construction, updating, drifting.
2. Treewalk: Traversing the tree to find neighbors and calculate the forces between them
3. Treecomm: Communicating computed forces between CPUs (a gather operation?)
4. Treeimbal: Time lost to load imbalance between processors during the tree walk
### PM-Gravity:
If TreePM is turned on, a particle mesh is used for long-range forces and the tree for short-range forces
### Hydro/Fluids:
1. Density:
A. dens+grad: calculation of the initial density and the gradients of the density
B. denscomm: communication step for the density calculation, exchanging density data between different processors
C. densimbal: time lost to load imbalance between processors during the density calculation
2. Hydro:
A. hydrofrc: determine the pressure/viscous forces at each cell and use them to determine the local acceleration of the fluid at each cell
B. hydcomm: communication between processors
C. hydimbal: time lost to load imbalance between processors during the hydro step
D. hmaxupdate: update kernel lengths in the tree based on the density computations
E. hydmisc: miscellaneous remaining overhead in the hydro step
### Domain:
Domain decomposition step. Divide the simulation into cells according to the chosen approach (Tree, TreePM, MFV, MFM, etc.) and assign properties (density, temperature, velocity) to the cells
### Peano:
Peano-Hilbert ordering. Keeps spatially neighboring cells close in memory and on the same processor, which reduces communication and computation costs
### Drift/Split:
1. Drift: advance particle positions in time between kicks (with the appropriate cosmological factors in comoving runs)
2. Split/Merge: split cells that are too large into smaller cells; merge cells that are too small into larger cells
### Kicks:
1. Determine timesteps
2. Calculate accelerations due to external forces
3. Update particles according to the acceleration and timestep (kick-drift-kick; see the sketch below)
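A minimal kick-drift-kick sketch of steps 1-3, assuming a single global timestep and a hypothetical `accel(pos)` function (GIZMO itself uses hierarchical per-particle timesteps):
```python
import numpy as np

def kdk_step(pos, vel, dt, accel):
    """One kick-drift-kick (leapfrog) update for all particles."""
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel
```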
### IO:
Writing of snapshot files
### Misc:
Miscellaneous remaining overhead
### Physics modules:
1. Cooling+Chem
2. Blackholes
3. Feedback
4. Local wind
5. AGS
6. FOF/Subfind
7. Grains
8. SFR-Cooling
---
# General notes
## Hydrodynamic treatments:
### Smoothed particle hydrodynamics (SPH)


SPH employs a kernel function to smooth out discontinuities in particle properties so that the fluid equations can be applied. Interactions between particles are calculated according to their smoothing kernels.
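A minimal, self-contained sketch of the idea, using the standard M4 cubic-spline kernel and a brute-force pair sum with a fixed smoothing length (real SPH codes use neighbor lists and adaptive smoothing lengths):
```python
import numpy as np

def cubic_spline_W(r, h):
    """M4 cubic-spline kernel in 3D with support 2h (Monaghan & Lattanzio 1985)."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 1.5 * q[inner] ** 2 + 0.75 * q[inner] ** 3
    w[outer] = 0.25 * (2.0 - q[outer]) ** 3
    return sigma * w

def sph_density(pos, mass, h):
    """Brute-force SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    dr = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (N, N)
    return (mass[None, :] * cubic_spline_W(dr, h)).sum(axis=1)

# Example: 1000 equal-mass particles in a unit box, fixed smoothing length
pos = np.random.rand(1000, 3)
mass = np.full(1000, 1.0 / 1000)
rho = sph_density(pos, mass, h=0.08)
```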
### Adaptive mesh refinement (AMR)

[Example visualization](https://www.youtube.com/watch?v=u-VV3euIsXo&ab_channel=HolzmannCFD)
AMR tessellates the simulation domain into grid blocks, recursively splitting the domain into more resolved chunks where better refinement is needed. The need for refinement is determined from the complexity of local gradients (the rate of change of variables like temperature, pressure, and velocity): more complex regions (shock waves, sudden temperature/pressure jumps, etc.) require greater resolution. The region of increased resolution spans the cells whose gradients exceed some threshold.
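A toy sketch of a gradient-based refinement criterion on a uniform 2D grid, flagging cells whose relative density gradient exceeds a threshold (illustrative only; production AMR codes use block-structured hierarchies and more careful error estimators):
```python
import numpy as np

def refinement_flags(rho, threshold=0.5):
    """Flag cells where |grad(rho)| * dx / rho exceeds the threshold (dx = 1 cell)."""
    grad_x, grad_y = np.gradient(rho)
    rel_grad = np.sqrt(grad_x ** 2 + grad_y ** 2) / rho
    return rel_grad > threshold

# Example: smooth density blob on a 64x64 grid
x = np.linspace(-3, 3, 64)
rho = 1.0 + np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
flags = refinement_flags(rho, threshold=0.1)
```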
### Moving-mesh finite volume (MMFV)

Instead of a static rectangular grid, MMFV grids are flexible: just as AMR increases refinement levels based on the complexity of local gradients, here the grid cells themselves deform in response to large gradients, with the mesh moving along with the fluid.
### Meshless finite volume/mass (MFV/MFM)
 

In MFV approaches, the domain is divided into control volumes by drawing triangles between neighboring particles. Neighbors are determined by looking at particles within some maximum radius.
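A minimal sketch of the neighbor-search step only, finding all particles within a maximum radius via a k-d tree (the control-volume construction itself is not shown):
```python
import numpy as np
from scipy.spatial import cKDTree

pos = np.random.rand(5000, 3)                        # particle positions in a unit box
tree = cKDTree(pos)
neighbors = tree.query_ball_point(pos, r=0.05)       # neighbor indices for each particle
n_neighbors = np.array([len(n) - 1 for n in neighbors])  # exclude the particle itself
```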
## Gravitational treatments:
### Particle-Particle (PP)
Brute-force $O(N^2)$ particle-particle force computation between all pairs; integrate to update positions and velocities.
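A minimal direct-sum sketch with Plummer softening (illustrative units):
```python
import numpy as np

def direct_sum_accel(pos, mass, G=1.0, eps=1e-2):
    """Brute-force O(N^2) pairwise gravitational acceleration with Plummer softening."""
    dx = pos[None, :, :] - pos[:, None, :]            # (N, N, 3) displacements i -> j
    r2 = (dx ** 2).sum(axis=-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                     # no self-force
    return G * (mass[None, :, None] * dx * inv_r3[:, :, None]).sum(axis=1)

pos = np.random.randn(500, 3)
mass = np.full(500, 1.0 / 500)
acc = direct_sum_accel(pos, mass)
```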
### Particle-mesh (PM)

Particle masses are mapped onto a mesh grid via an interpolation (mass/charge-assignment) function. The force at each mesh point is computed from all other mesh points, then interpolated back to the particles, whose positions and velocities are updated from the resulting forces.
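A minimal periodic-box sketch of the mesh part, with nearest-grid-point mass assignment and an FFT Poisson solve (real codes use CIC/TSC assignment and interpolate the forces back to the particles):
```python
import numpy as np

def pm_potential(pos, mass, ngrid, boxsize, G=1.0):
    """NGP mass assignment plus FFT solve of grad^2 phi = 4 pi G rho on a periodic grid."""
    cell = boxsize / ngrid
    idx = np.floor(pos / cell).astype(int) % ngrid
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), mass)
    rho /= cell ** 3

    k = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=cell)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                                 # avoid division by zero
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                              # zero the mean mode
    return np.real(np.fft.ifftn(phi_k))

pos = np.random.rand(10000, 3)                        # unit periodic box
phi = pm_potential(pos, np.full(10000, 1.0 / 10000), ngrid=32, boxsize=1.0)
force = -np.array(np.gradient(phi, 1.0 / 32))         # force components at mesh points
```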
### Tree+Particle-mesh (TreePM)


Instead of a uniform grid, we use a hierarchical tree-based structure: starting with a uniform grid, we further divide until reaching some desired force resolution. The resulting structure is a tree (octree, k-d tree, etc.). We can then choose to consider only particles within a certain radius directly and approximate all others (e.g. Barnes-Hut), using the tree for quick access. Direct forces are computed for particles at the same level, while approximations (e.g. far-field center of mass) are used for particles at different levels.
### Particle-Particle Particle-Mesh (P3M)



Here we have two grids to deal with, the chaining mesh and the potential mesh, used for the PP and PM calculations respectively. The potential mesh is filled as before (charge densities, potentials, etc.), whereas the chaining mesh is filled only with the positions of the particles. The potential mesh (PM) is used to compute the indirect forces resulting from the charge distribution, whereas the chaining mesh (PP) is used to compute direct forces between nearby particles.
### Fast Multipole (FMM)


Hierarchical grid with upward- and downward-pass force computations. In the upward pass, multipole expansions are passed from individual particles/elements up to groups; forces between groups are then computed, and the approximations are passed back down to the individual-particle level in the downward pass.
## Common data structures:
### Force trees:


Force trees are hierarchical data structures that represent the spatial distribution of particles.
The most common form is an octree, where each node has eight children: the simulation volume is recursively split into eight cubic cells until some resolution level is reached (say, at most one particle per cell).
To calculate the force on a particle, we traverse the tree starting from the root and evaluate, for each node, the ratio of the node (cell) size to the distance between the node and the evaluation point.
This ratio is the opening angle. If the opening angle is less than some pre-defined threshold, we are in the long-distance regime and use a multipole approximation for that node's contribution. If it is larger, the node is too close to approximate, so we open it and traverse down to its children, repeating until the ratio falls below the threshold or we reach the leaf nodes.
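A minimal sketch of the traversal and opening-angle test, assuming the tree (node sizes, masses, centers of mass) has already been built; the `Node` class here is hypothetical, and real codes store this far more compactly:
```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    size: float                  # side length of the cubic cell
    mass: float                  # total mass inside the cell
    com: np.ndarray              # center of mass of the cell
    children: list = field(default_factory=list)   # empty list => leaf node

def tree_accel(node, pos, theta=0.5, G=1.0, eps=1e-2):
    """Acceleration at `pos` from `node`, opening cells with size/distance > theta."""
    d = node.com - pos
    r = np.sqrt((d ** 2).sum())
    if not node.children or node.size / max(r, 1e-12) < theta:
        # far enough away (or a leaf): monopole (center-of-mass) approximation
        return G * node.mass * d / (r ** 2 + eps ** 2) ** 1.5
    # too close: open the cell and recurse into its children
    return sum(tree_accel(child, pos, theta, G, eps) for child in node.children)
```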
### Smoothing Kernel:
### Softening Function:
The modification to the original equation for force. So let's say we originally had
$$
F = \frac{Gm_1m_2}{r^2}
$$
With a Plummer softening function, where $\epsilon$ is the softening parameter, we would instead have
$$
F = \frac{Gm_1m_2\,r}{(r^2+\epsilon^2)^{3/2}}
$$
which reduces to the Newtonian force for $r \gg \epsilon$ while remaining finite as $r \to 0$.
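A quick numerical check of the limits with illustrative values: the softened force stays finite as r approaches 0 and approaches the Newtonian value for r much larger than epsilon.
```python
import numpy as np

def newtonian_force(r, G=1.0, m1=1.0, m2=1.0):
    return G * m1 * m2 / r ** 2

def plummer_softened_force(r, eps, G=1.0, m1=1.0, m2=1.0):
    return G * m1 * m2 * r / (r ** 2 + eps ** 2) ** 1.5

r = np.logspace(-2, 1, 5)
eps = 0.1
# columns: r, Newtonian force, Plummer-softened force
print(np.c_[r, newtonian_force(r), plummer_softened_force(r, eps)])
```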
### Grids:
### Merger trees:
---
# References
Gizmo:
http://www.tapir.caltech.edu/~phopkins/Site/GIZMO_files/gizmo_documentation.html
ICs:
For generating non-cosmological ICs: https://github.com/maamari/gizmo_idm/blob/main/scripts/make_IC.py
For generating cosmological ICs: https://www-n.oca.eu/ohahn/MUSIC/
For redistributing particle types: https://bitbucket.org/yymao/helpers/src/master/helpers/distributeMUSICBndryPart.py
Softening:
https://academic.oup.com/mnras/article/324/2/273/1020633
https://ui.adsabs.harvard.edu/abs/2002MNRAS.333..378B/abstract
https://arxiv.org/abs/astro-ph/0201544
https://arxiv.org/abs/1301.4520
https://ui.adsabs.harvard.edu/abs/2013MNRAS.434.1756A/abstract
https://arxiv.org/abs/1810.07055
Progress prior to 9/21/22:
https://docs.google.com/document/d/1MQlPfD63bBrjWffB2GRRU9T1KPC8R0pvcSyzMxeSAlM/edit?usp=sharing
Validation:
https://sites.google.com/site/santacruzcomparisonproject/
SIDM:
https://arxiv.org/pdf/2205.03392.pdf (gravothermal),
https://arxiv.org/abs/1705.02358 (general review),
https://arxiv.org/abs/1201.5892, https://arxiv.org/abs/1612.03906, https://arxiv.org/abs/1906.12026, https://arxiv.org/abs/2012.10277, https://arxiv.org/abs/2203.06035 (SIDM implementations and code comparison)
https://arxiv.org/abs/1706.07514, https://arxiv.org/abs/2102.12480, https://arxiv.org/abs/2206.14830 (examples of SIDM + hydro simulations)
Other:
https://arxiv.org/abs/2301.03612
https://aip.scitation.org/doi/pdf/10.1063/1.4822978
HPC modules:
`module load intel/20.1 hdf5/1.10.1 gsl/2.4 fftw/3.3.7 openmpi/4.1.1_intel-20.1 python3/3.8.5`
`export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/kmaamari/local/lib/`
`ssh kmaamari@login.hpc.caltech.edu -L 2020:localhost:2020`
`jupyter-notebook --no-browser --port=2020 --ip=127.0.0.1`