# Subframes for Simulations and Baking
Blender typically evaluates the current scene at integer frames. However, evaluating at subframes is already supported: when the corresponding checkbox in the `Playback` menu in the timeline header is enabled, the current scene frame can be set to fractional values.
## Goal
When there is only animation, all of this already works well. In this context, "only animation" means that the scene can be evaluated at a frame without looking at other frames. Things work less well when simulations or baking are involved. In both cases, we might have to look at data from frames other than the current one. During simulation, we always have to look at the previous state, and when evaluating a fractional frame between two baked frames, we generally try to interpolate between the data of those frames.
The goal of this document is to find solutions to the following problems which are caused by the lack of features related to subframes:
* The quality of simulations in geometry nodes is not good, because the delta time in each time step is too large. One could use a repeat zone inside of the simulation to improve the simulation stability, but this is not perfect, because e.g. each substep gets the same particle emitter transforms as input. The result would be better if we could get access to the emitter transforms at subframes.
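To illustrate why per-substep emitter transforms matter, here is a minimal sketch (all names are hypothetical, not an actual geometry nodes API): when the emitter can be sampled at fractional frames, each substep sees it at the correct intermediate time instead of the frame start.

```python
def simulate_frame(state, emitter_at, num_substeps, frame):
    """Advance one frame in `num_substeps` substeps.

    `emitter_at(t)` returns the emitter position at fractional frame
    `t`. With subframe support it can be sampled per substep instead
    of once at the start of the frame.
    """
    dt = 1.0 / num_substeps
    for i in range(num_substeps):
        t = frame + i * dt
        # Each substep emits from the interpolated emitter position.
        state.append(emitter_at(t))
    return state

# A linearly moving emitter: with 4 substeps, the emission positions
# are spread along the emitter's path across the frame.
positions = simulate_frame([], lambda t: 10.0 * t, 4, frame=2)
```

Without subframe access, all four substeps would instead use the emitter transform from frame 2, which is the clumping problem described above.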

* Baking only integer frames can look bad when the final result also shows data from subframes. That happens when using motion blur (especially with long shutter times). When slowing down baked data (e.g. to retime a simulation), the result can look unnatural, because there is a noticeable linear interpolation between the baked frames.
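A minimal sketch of the interpolation in question, assuming baked values are stored per integer frame: sampling at a fractional frame linearly blends the two neighboring baked frames, which is why the result looks unnatural whenever the underlying motion is not linear.

```python
import math

def sample_bake(baked, frame):
    """Sample baked per-frame values at a possibly fractional frame
    by linearly interpolating the two neighboring baked frames."""
    lo = math.floor(frame)
    hi = math.ceil(frame)
    if lo == hi:
        return baked[lo]
    t = frame - lo
    return (1.0 - t) * baked[lo] + t * baked[hi]

# Quadratic motion baked at integer frames only: the sample at frame
# 1.5 lies on the straight chord between frames 1 and 2, not on the
# actual curve (2.5 instead of the true 2.25).
baked = {f: float(f * f) for f in range(4)}
mid_value = sample_bake(baked, 1.5)
```

Baking at subframes shrinks the intervals over which this linear blend happens, which is exactly why subframe baking helps with retiming and motion blur.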

## Initial Attempts
Implementation-wise, basic support for subframes is not too complicated. The trickier part is figuring out how the user specifies where subframes are used. This section presents a few simple approaches and explains what is good about them and why they might not work. We will probably end up with a mix of those approaches.
### Scene Wide Settings
The most straightforward approach is a single scene-wide setting that controls the number of subframes used by all simulations and bakes. The obvious advantage of this approach is its simplicity. If the quality of the simulations or motion blur is not good enough, one can simply increase the number of subframes and the problem is solved.
If we had unlimited compute and memory, this would likely be the best approach. Without that, this approach is somewhat wasteful because the number of subframes of all simulations/bakes is determined by the one that needs the most, even if some might not even need subframes at all.
Another problem is that geometry nodes simulations can be put into two categories:
* Continuous simulations take the delta time into account and their quality typically improves as the number of substeps increases.
* Discrete simulations don't care about the delta time. They update the simulation state in fixed steps (e.g. add a subdivision level in every step). Subframes are very unexpected for this kind of simulation, so we would somehow have to exempt these simulations from them. In theory, a discrete simulation could be seen as a special case of a continuous one if the simulation step checks whether enough time has passed to perform the next discrete step. However, that shouldn't be something users have to set up manually.
Even for continuous simulations, a very large number of subframes can become problematic due to limited numerical accuracy. Too many substeps could make the result unstable, although I'm not sure whether that is ever a problem in practice.
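The time-accumulation idea mentioned above can be sketched as follows (hypothetical names, not Blender code): a discrete simulation is wrapped so that it accepts arbitrary delta times but only performs its fixed step once a full step interval has accumulated, making the number of discrete steps independent of the substep count.

```python
class DiscreteAsContinuous:
    """Wrap a discrete simulation so it can be stepped with arbitrary
    delta times: elapsed time is accumulated and the discrete update
    only runs once a full step interval has passed."""

    def __init__(self, step_interval):
        self.step_interval = step_interval
        self.accumulated = 0.0
        self.steps_done = 0

    def step(self, delta_time):
        self.accumulated += delta_time
        while self.accumulated >= self.step_interval:
            self.accumulated -= self.step_interval
            # The actual discrete update would go here,
            # e.g. adding one subdivision level.
            self.steps_done += 1

# Four substeps of 0.25 frames trigger exactly one discrete step per
# frame, the same as a single full-frame step would.
sim = DiscreteAsContinuous(step_interval=1.0)
for _ in range(4):
    sim.step(0.25)
```

This is the bookkeeping that, per the argument above, users shouldn't have to build by hand inside their node trees.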
### Settings per Simulation and Bake
If we store the settings per simulation or bake, we get much more control as we can tune everything independently. That introduces a new problem though. Now it is likely that e.g. subframes of different simulations don't line up, which causes extra processing that should be avoided.
For example, imagine there are two simulations that use the same animated emitter mesh. The first uses three and the second uses four subframes. In this setup, the emitter has to be evaluated at seven subframes (`0.25, 0.5, 0.75 | 0.2, 0.4, 0.6, 0.8`).
Furthermore, if one of them depends on the other, they can't be computed in one pass anymore. That's because, to compute the first `0.25` step, we first have to compute the `0.2` and `0.4` step of the other simulation already, and then mix between those states to get the inputs to the `0.25` step.
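The example above can be made concrete with a small sketch (using exact fractions to avoid floating-point noise): each simulation implies a set of subframe positions, and a shared dependency like the emitter has to be evaluated at the union of those sets.

```python
from fractions import Fraction

def subframe_times(num_subframes):
    """Subframe positions within one frame for a given subframe count,
    e.g. 3 subframes -> {1/4, 2/4, 3/4}."""
    steps = num_subframes + 1
    return {Fraction(i, steps) for i in range(1, steps)}

# Two simulations, one with three and one with four subframes, share
# no evaluation points, so the common emitter must be evaluated at the
# union of both sets: seven distinct subframes per frame.
merged = sorted(subframe_times(3) | subframe_times(4))
```

If the two subframe counts shared a common divisor (e.g. two and four subframes), the sets would partially overlap and fewer extra evaluations would be needed, which is what "lining up" the subframes buys us.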
Generally, this approach works though. We might just have to make it easy for users to make sure that their subframes line up well enough to avoid too many unnecessary recomputations.
### Setting in the Bake Operator
Another approach is to specify the number of subframes only when actually baking. The user could then select e.g. multiple simulations and bake them all together using a single subframe setting.
This could work, but has the major downside that it does not consider normal playback or a future real-time mode.
----
TODO:
* Playback framerate
* Adaptive subframes?
* Match subframes for everything exactly or allow slight variations to avoid recomputations?
* Bake result should be the same regardless of whether each simulation is baked individually one by one, or all are baked together.
* simulation vs bake subframes (might also want to bake only every second frame)