Papers:
Replica exchange, multisim, and others are excluded from modular simulator at this time. Ref: ModularSimulator::isInputCompatible. It's probably not too exciting just to enable it and see what breaks :-) Mark can do this properly in January 2023, but not right now.
Assumptions
Preconditions we must construct during setup
The pipeline that we want to implement in modular simulator looks like
A note on program flow during simulation launch:
All of the ModularSimulator set-up starts after https://gitlab.com/gromacs/gromacs/-/blob/main/src/gromacs/mdrun/runner.cpp#L2200
The actual computation performed for force calculation (by the existing ForceElement) during the MD step is extended by the contents of the MDModules container (not to be confused with the Modular Simulator framework!), which must be final before https://gitlab.com/gromacs/gromacs/-/blob/main/src/gromacs/mdrun/runner.cpp#L2218
See also https://manual.gromacs.org/current/doxygen/html-full/page_mdmodules.xhtml. However, Mark is proposing that you defer integrating with these frameworks for now, and instead use existing facilities and/or localized, tightly scoped changes for handling input parameters, new communication calls, and new blocks of computation.
A multi-sim implementation for modular simulator exists: https://gitlab.com/gromacs/gromacs/-/commit/84a5f710734aed6725ee15cae6565a8e5e2110ed. Mark will put that on the main branch to use as a starting point for this work.
Pre-conditions have been met; the integration of the path MD algorithm into GROMACS is underway.
We still need a way to output the effective potential energy of the system (e.g. in the .trr?).
For examples of MPI tests, see https://gitlab.com/gromacs/gromacs/-/blob/main/src/gromacs/domdec/tests/CMakeLists.txt and https://gitlab.com/gromacs/gromacs/-/blob/main/src/gromacs/domdec/tests/haloexchange_mpi.cpp
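Those files show how GROMACS registers and writes MPI tests; as a first sanity check before hooking into that harness, a standalone check along these lines can help. This is plain MPI only, not GROMACS test infrastructure, and all names here are illustrative.

```cpp
// Standalone MPI sanity check, built with mpicxx and run with e.g.
// "mpirun -np 4 ./mpi_sanity". Not part of the GROMACS test harness.
#include <mpi.h>

#include <cassert>
#include <cstdio>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Every rank contributes its rank number; the sum must be size*(size-1)/2.
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    assert(sum == size * (size - 1) / 2);

    if (rank == 0)
    {
        std::printf("MPI sanity check passed on %d ranks\n", size);
    }
    MPI_Finalize();
    return 0;
}
```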
Repartitioning happens in https://gitlab.com/gromacs/gromacs/-/blob/main/src/gromacs/modularsimulator/domdechelper.cpp; perhaps you want to register an IDomDecHelperClient to get called back each time repartitioning happens.
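If that route is taken, a client might look roughly like the sketch below. It assumes the interface on main is essentially "implement registerDomDecCallback() and return a std::function<void()>" (as in domdechelper.h at the time of writing); all other names here (PathMdRepartitioningClient, rebuildLocalIndexMaps) are invented for illustration, and the existing elements that already implement this interface are the best reference for how a client gets wired into the simulator builder.

```cpp
// Hypothetical client that refreshes path-MD bookkeeping after each
// repartitioning. All names except IDomDecHelperClient/DomDecCallback are
// made up; check domdechelper.h for the real interface before relying on this.
#include <functional>

#include "gromacs/modularsimulator/domdechelper.h"

class PathMdRepartitioningClient final : public gmx::IDomDecHelperClient
{
public:
    //! The DomDecHelper calls the returned callback after every repartitioning.
    gmx::DomDecCallback registerDomDecCallback() override
    {
        return [this]() { rebuildLocalIndexMaps(); };
    }

private:
    void rebuildLocalIndexMaps()
    {
        // Local atom ownership has changed: refresh cached global-to-local
        // index maps and re-plan any nearest-neighbour communication here.
    }
};
```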
For brevity, "nearest neighbours" here means the neighbouring domains plus the domain itself.
Having previously over-allocated the receive buffers for global atom indices during construction, each domain exchanges lists of global atom indices with its nearest neighbours without needing further allocation (sketched after these steps).
Now this domain knows which nearest neighbour has each particle that it cares about.
Now this domain knows which particles are expected by each of its nearest neighbours.
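A minimal sketch of that index exchange with plain MPI follows. It assumes each domain already knows its nearest-neighbour ranks and has over-allocated receive buffers; whether the lists sent are "indices I own" or "indices I need" is left open, since either direction fits the two statements above. None of these names exist in GROMACS.

```cpp
// Sketch: each domain sends a list of global atom indices to every nearest
// neighbour and receives the neighbours' lists into buffers that were
// over-allocated at construction time. Illustrative only.
#include <mpi.h>

#include <vector>

void exchangeGlobalIndices(const std::vector<int>&        neighbourRanks,
                           const std::vector<int>&        indicesToSend,
                           std::vector<std::vector<int>>* receiveBuffers, // over-allocated
                           std::vector<int>*              receiveCounts,
                           MPI_Comm                       comm)
{
    const int                numNeighbours = static_cast<int>(neighbourRanks.size());
    std::vector<MPI_Request> requests(2 * numNeighbours);
    std::vector<MPI_Status>  statuses(2 * numNeighbours);

    // Post the receives first, into the over-allocated buffers.
    for (int n = 0; n < numNeighbours; ++n)
    {
        std::vector<int>& buffer = (*receiveBuffers)[n];
        MPI_Irecv(buffer.data(), static_cast<int>(buffer.size()), MPI_INT,
                  neighbourRanks[n], 0, comm, &requests[n]);
    }
    // Send this domain's list of global atom indices to each nearest neighbour.
    for (int n = 0; n < numNeighbours; ++n)
    {
        MPI_Isend(indicesToSend.data(), static_cast<int>(indicesToSend.size()), MPI_INT,
                  neighbourRanks[n], 0, comm, &requests[numNeighbours + n]);
    }
    MPI_Waitall(2 * numNeighbours, requests.data(), statuses.data());

    // The buffers are over-allocated, so recover how many indices each
    // neighbour actually sent from the receive statuses.
    receiveCounts->resize(numNeighbours);
    for (int n = 0; n < numNeighbours; ++n)
    {
        MPI_Get_count(&statuses[n], MPI_INT, &(*receiveCounts)[n]);
    }
}
```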
Advanced technique (only do this when everything is working!): post the MPI_Irecv for the next step immediately after the MPI_Waitall. This helps the MPI library be efficient when some domains run faster than others (which they always will), but the bookkeeping is a bit trickier.
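A sketch of how that pre-posting might look, restructured from the exchange above. The trickier bookkeeping is exactly what the double buffering handles here: the newly posted receives must land in buffers that are not still being read for the current step. Everything below is illustrative rather than existing GROMACS code.

```cpp
// Sketch of posting next step's receives immediately after MPI_Waitall,
// with receive buffers double-buffered by step parity. Illustrative only.
#include <mpi.h>

#include <array>
#include <vector>

struct IndexExchangeState
{
    std::vector<int> neighbourRanks;
    // Over-allocated receive buffers, double-buffered by step parity.
    std::array<std::vector<std::vector<int>>, 2> receiveBuffers;
    std::vector<MPI_Request> receiveRequests; // one per neighbour
    MPI_Comm                 comm = MPI_COMM_NULL;
};

void waitAndPrepostReceives(IndexExchangeState* state, int step)
{
    const int numNeighbours = static_cast<int>(state->neighbourRanks.size());

    // Complete this step's receives (posted at the end of the previous step).
    MPI_Waitall(numNeighbours, state->receiveRequests.data(), MPI_STATUSES_IGNORE);

    // Immediately post next step's receives into the other half of the double
    // buffer, so the MPI library can make progress for faster neighbours.
    const int nextParity = (step + 1) % 2;
    for (int n = 0; n < numNeighbours; ++n)
    {
        std::vector<int>& buffer = state->receiveBuffers[nextParity][n];
        // Tagging by step parity keeps next step's sends from matching
        // this step's receives.
        MPI_Irecv(buffer.data(), static_cast<int>(buffer.size()), MPI_INT,
                  state->neighbourRanks[n], /* tag = */ nextParity, state->comm,
                  &state->receiveRequests[n]);
    }
    // Data for the current step can now be read from receiveBuffers[step % 2].
}
```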