# JAX-SPH
## Open TODOs
- as soon as the solver is stable enough, publish to [MLOSS](https://www.jmlr.org/mloss/)
- Clones to remove (1):
- [x] some scripts on Mac `code/jax-sph`
- [x] lundquist `git/jax-sph` has SitL script - archived
- [x] ??? mikoto `code/iclr_workshop_2024/jax-sph`
- [x] working dir: mikoto `code/tmp/jax-sph` - can be safely deleted
- [x] LagrangeBench dataset code on mikoto `code/sph-dataset-jax` - on github
## References
1. phiFlow
- [workshop paper](https://montrealrobotics.ca/diffcvgp/assets/papers/3.pdf)
- [main reference paper](https://arxiv.org/abs/2001.07457)
- hierarchical control seems almost too complicated as a toy example to show that gradients through jax-sph are useful. The example from Simulating Liquids with GNs (see below) is probably good enough as a demo.
2. [Farimani paper](https://www.sciencedirect.com/science/article/pii/S0097849322000206). Previously a [rejected](https://openreview.net/forum?id=7WwYBADS3E_) paper and a workshop [paper](https://simdl.github.io/files/34.pdf) + [poster](https://simdl.github.io/posters/34-supp_ICLR2021_workshop_poster%20(3).pdf)
3. [jax-cfd](https://www.pnas.org/doi/10.1073/pnas.2101784118)
4. [solver-in-the-loop](https://arxiv.org/abs/2007.00016)
5. [Simulating Liquids with Graph Networks](https://arxiv.org/abs/2203.07895) - demonstrates how to use gradients for a very simple inverse problem
6. [SPNet](https://arxiv.org/abs/1806.06094)
7. Other differentiable Lagrangian solvers:
- [proteins](https://openreview.net/forum?id=Byg3y3C9Km), [soft robotics](https://arxiv.org/abs/1810.01054), [cloth simulation](https://papers.nips.cc/paper_files/paper/2019/hash/28f0b864598a1291557bed248a998d4e-Abstract.html)
8. [Learning to simulate](https://arxiv.org/abs/2002.09405)
9. [Inverse Design for fluid-structure interactions](https://github.com/google-deepmind/inverse_design/tree/main)
10. NA mentioned: peridynamics, hyperviscosity with ML
## Experiments
- Combine particle redistribution (e.g. transport velocity) and GNS to improve long rollout stability.
- inverse problem like finding the initial particle distribution (see [Simulating liquids with GNs](https://arxiv.org/abs/2203.07895))
- Simulate two fluids and control one of them with a GNN to enforce specific behavior of the second one.
- Geometry optimization to enforce fluid behavior.
- This [practical example](https://physicsbaseddeeplearning.org/diffphys.html) is very helpful. We could let two kinds of particles interact, and then perhaps add a GNN in a second step?
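The inverse-problem experiments above boil down to differentiating a rollout with respect to its initial state. A minimal sketch in JAX, where `step` is a hypothetical stand-in for one jax-sph solver step (ballistic advection here, not the actual SPH update), and the initial particle positions are recovered by gradient descent through the unrolled simulation:

```python
import jax
import jax.numpy as jnp

def step(x, v, dt=0.1):
    # placeholder dynamics: gravity + advection, standing in for an SPH step
    v = v + dt * jnp.array([0.0, -9.81])
    x = x + dt * v
    return x, v

def rollout(x0, v0, n=10):
    # unrolled simulation; JAX differentiates straight through the loop
    x, v = x0, v0
    for _ in range(n):
        x, v = step(x, v)
    return x

def loss(x0, target):
    # match the final particle positions to a target configuration
    v0 = jnp.zeros_like(x0)
    return jnp.sum((rollout(x0, v0) - target) ** 2)

target = jnp.array([[1.0, 0.0], [2.0, 0.0]])
x0 = jnp.zeros((2, 2))
grad_fn = jax.jit(jax.grad(loss))
for _ in range(100):
    x0 = x0 - 0.05 * grad_fn(x0, target)  # gradient descent on initial positions
```

With the real solver one would swap `step` for a jax-sph step and likely use `jax.lax.scan` plus checkpointing for longer rollouts.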
## Idea dump
- Allegro Legato demonstrates that sharpness-aware minimization (SAM) can significantly improve long-rollout performance. That paper also builds on Allegro, which shows that local interactions are often sufficient and allow for parallelization.
- Example of presentation SE(3) transformer paper
- 1. problem - Lagrangian particle distribution, 2. how we solve it, 3. we "had" to write the library.
## Pseudo Abstract / Pseudo Intro
Smoothed particle hydrodynamics, i.e., Lagrangian tracking of computational fluid dynamics, is omnipresent in modern engineering and scientific disciplines. Due to the particle-like nature of the simulation, graph neural networks have emerged as appealing and successful surrogates. However, particle clustering, which is caused by [what causes particle clustering], is a challenging problem to overcome for both numerical and learned solvers. In this work, we introduce graph redistribution, which uses [concept]. Graph redistribution allows neural solvers to flexibly redistribute particles after each time step, achieving significantly longer rollouts and significantly better physics modeling for all tested architectures. Graph redistribution is based on [method], i.e., it requires a fully differentiable [what].
## TODOs
- [x] use the same JAX version for lagrangebench and jax-sph.
- [x] [Riemann SPH](https://github.com/arturtoshev/jax-sph-demo/tree/riemann_sph)
- [x] Inverse problem pipeline - we probably need to demonstrate the usefulness of the gradients, but are inverse problems compulsory? Sounds more like something for the appendix.
- [x] Solver-in-the-Loop - sounds like the first experiment in the ML paper. Would be a very good ML baseline, next to the actual SPH solver.
- [x] Particle redistribution trick
- [ ] transport velocity style
- [ ] pressure force due to density inequality
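A hedged sketch of what a transport-velocity-style redistribution term could look like (in the spirit of Adami, Hu & Adams 2013, one common variant among several; names and prefactors here are illustrative, not the jax-sph API). A constant background pressure `p_bg` drives an extra acceleration that pushes clustered particles toward a uniform distribution:

```python
import jax.numpy as jnp

def tv_acceleration(grad_w, mass, rho, p_bg=5.0):
    """Transport-velocity correction (illustrative variant).

    grad_w: (N, N, dim) kernel gradients grad_i W_ij, antisymmetric in i, j
    a_i = -(p_bg / rho_i) * sum_j (m_j / rho_j) grad_i W_ij
    """
    vol = mass / rho  # particle volumes V_j
    return -(p_bg / rho)[:, None] * jnp.einsum("j,ijd->id", vol, grad_w)

# two equal particles in 1D: the correction is momentum-conserving (a_0 == -a_1)
grad_w = jnp.array([[[0.0], [-1.0]],
                    [[1.0], [0.0]]])  # antisymmetric pairwise kernel gradients
a = tv_acceleration(grad_w, mass=jnp.ones(2), rho=jnp.ones(2))
```

The dense `(N, N, dim)` gradient matrix is only for illustration; in practice this would run over a neighbor list.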
## Not to do
- Hamiltonian Monte Carlo through the differentiable solver. This was suggested by someone from Thuerey's group.
- multi-phase functionality, e.g. Rayleigh-Taylor -> LagrangeBench 2.0
- surface tension, e.g. water drop in shear flow -> LagrangeBench 2.0
- energy spectrum computation
## Logs
How the transport velocity term (left) and the pressure term (right) distribute particles after 100 steps. Both lead to the same final rho_max, but the Sinkhorn distance is 0.000000620 vs 0.000001186, starting from 0.000001301.


Plots at sec 7.4 of different dam break simulations, varying how the density is computed. From top to bottom:
- 7.4 sec
    - density evolution
    - density evolution with reinitialization (see Generalized TVF by Zhang, Hu, Adams)
    - density evolution with reinitialization (threshold at 1.02 instead of 1.0)
    - rho=np.where(rho<1.0, 1, rho)
    - rho=np.where(rho<0.99, 1, rho)
    - rho=np.where(rho<0.98, 1, rho)
    - rho=np.where(rho<0.95, 1, rho)
    - generalized wall boundary condition paper
- 12 sec
    - density evolution
    - density evolution with reinitialization (see Generalized TVF by Zhang, Hu, Adams)
    - density evolution with reinitialization (threshold at 1.02 instead of 1.0)
    - rho=np.where(rho<1.0, 1, rho)
    - rho=np.where(rho<0.99, 1, rho)
    - rho=np.where(rho<0.98, 1, rho)
    - rho=np.where(rho<0.95, 1, rho)


To wrap up my experiments:
- simple density evolution (top left), `2D_DB_SPH_0_20240115-211247`
- evolution + reinitialization (top right), `2D_DB_SPH_0_20240116-083346`
- density summation + rho=np.where(rho<0.98, 1, rho) (bottom left), `2D_DB_SPH_0_20240115-211429`
- same as before + reinitialization at 1.05 (bottom right) `2D_DB_SPH_0_20240116-224123`
- Additional:
- density summation + rho=np.where(rho<1, 1, rho) + reinit at 1 is bad: with `rho=np.where(rho<1, 1, rho)` the wave front disperses.
- density summation + rho=np.where(rho<0.98, 1, rho) + reinit at 1.02 seems to do a similarly good job to 0.98 with reinit at 1.05
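The two density fixes compared above can be sketched as follows (illustrative helpers, not the jax-sph API): clipping the summed density from below so free-surface particles see the reference density, and a Shepard-style reinitialization `rho_i = sum_j m_j W_ij / sum_j (m_j / rho_j) W_ij` applied only above a threshold:

```python
import jax.numpy as jnp

def clip_density(rho, threshold=0.98, rho_ref=1.0):
    # e.g. rho = np.where(rho < 0.98, 1, rho) from the experiments above
    return jnp.where(rho < threshold, rho_ref, rho)

def shepard_reinit(rho, mass, w, threshold=1.02):
    # w: (N, N) dense kernel weight matrix W_ij (small N, for illustration;
    # a real solver would sum over neighbor lists instead)
    rho_new = (w @ mass) / (w @ (mass / rho))
    # only reinitialize particles whose density exceeds the threshold
    return jnp.where(rho > threshold, rho_new, rho)
```

A uniform density field is a fixed point of the Shepard filter, which is why the reinitialization mainly acts near the compressed wave front.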
## Meeting Notes
**15.01.2024**
- What we do:
- 1.5dx vs 3dx
- Fabian's notebook upload?
- last author