# Status of The DaCe Optimizer
This list collects the most severe/important bugs/limitations/quirks that are currently present in the optimizer toolchain.
There are more of them, just grep for `TODO` in the code to see them.
Previously this list was maintained within the shaping document of the current cycle, but since the project spans several cycles it was moved into its own document.
It is important to note that this is not a list of tasks; it is more a reminder of things that could be done if there is spare time, or a list of particular issues that might surface while working with a stencil.
The problems are classified as either DaCe or GT4Py related, which indicates where a possible PR would go.
Within each category they are roughly ordered according to importance.
##### DaCe Related Issues
- DaCe's code generator has issues with CUDA streams, which were addressed using different approaches:
- The first approach was to generate the code using only one stream and then later set that stream to the default stream.
However, even in that mode _wrong_ synchronization code was generated, i.e. more than one stream was used, which resulted in a segmentation fault at best.
- Then we switched to generating the code directly for the default stream, which led to some trouble, see [here](https://github.com/spcl/dace/issues/2120) and [here](https://github.com/GridTools/gt4py/pull/2201).
This solution "works"® but is obviously wrong and potentially faulty (a minimal sketch of the setting is given below).
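  A minimal sketch of the current setting, assuming it boils down to the DaCe configuration below (in DaCe's `max_concurrent_streams` option the value `-1` selects the default stream):

  ```python
  import dace

  # Force the code generator to emit everything on the default CUDA stream.
  # This avoids the faulty multi-stream synchronization code, but as noted
  # above it is only a workaround, not a proper fix.
  dace.Config.set("compiler", "cuda", "max_concurrent_streams", value=-1)
  ```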
- In some cases `gt_substitute_compiletype_symbols()` (which is just `replace_dict()` and `ConstantPropagation`; see the sketch below) does not work.
There are two issues:
- If a value that should be replaced also has an AccessNode, then the replacement leads to an error.
The current workaround is to run simplify first, which is not super nice, but also not that bad.
This error is located in `replace_dict()`.
- If there are multiple states, then only _some_ symbols get replaced.
This error is associated with `ConstantPropagation`; there is no solution yet, but it has been scheduled.
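  A minimal sketch of what the helper essentially does, based on the description above (the actual GT4Py implementation may differ; the replacement mapping is illustrative):

  ```python
  import dace
  from dace.transformation.passes.constant_propagation import ConstantPropagation

  def substitute_compiletime_symbols(sdfg: dace.SDFG, repl: dict[str, str]) -> None:
      # Workaround for the AccessNode issue described above: run simplify first.
      sdfg.simplify()
      # Textually replace the symbols, e.g. `{"limited_area": "True"}` ...
      sdfg.replace_dict(repl)
      # ... and propagate the now-constant values through the SDFG.
      ConstantPropagation().apply_pass(sdfg, {})
  ```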
- DaCe has a pipeline system, such that a pass only has to specify on which analysis passes it depends and the system will run them automatically.
This is also possible for a `PatternTransformation`; however, the member that stores the results, `_pipeline_results`, is [only set during the `apply()` function and not when `can_be_applied()` is called](https://github.com/spcl/dace/issues/1911).
As discussed during a DaCe meeting, this is likely an oversight and should be simple to fix.
However, it was also noted that it might have some (undocumented) consequences.
The workaround for now is to pass the needed passes directly to the transformation objects.
- There is a bug in DaCe's GPU transformation (see [DaCe Issue#1773](https://github.com/spcl/dace/issues/1773) and [GT4Py PR#1741](https://github.com/GridTools/gt4py/pull/1741/files)).
However, it has only low priority, as it only affects a unit test that covers an edge case.
##### GT4Py Issues
- The splitting transformations and tools are an important cornerstone of the optimizer.
However, in general a more sophisticated logic is needed to drive them:
- Currently, there are some situations (Ioannis found them in `dy_{39, 41}_to_60`) where they are applied too often.
- In some cases they are also not applied although they should be.
- Currently, we do not touch global data, which blocks some operations.
This is a problem because inside a `program` it is not possible to allocate a temporary, thus external **global** memory is used for it, although it should be a transient.
- The functions that split AccessNodes have a special rule when the producer is a Map.
The reason for this is to ensure the invariant that everything that is computed is also used, which is otherwise sometimes violated.
However, this rule causes problems in `dy_{39, 41}_to_60` and should be removed or reworked if possible.
- There are [problems with `concat_where` expressions](https://hackmd.io/bd-hKFV1SZG84VLzS7zqKw).
This is probably a bug in the lowering.
It might not be urgent, because the ICON4Py user code was changed to create better trees.
However, the result should not depend on how it is written.
- The `CopyChainRemover` requires that the source node (the one that is eliminated) is fully copied into the destination,
i.e. that everything that is computed (written into the source node) is also used, or that the destination is large enough to absorb everything.
However, in some cases it is not possible to prove this: the canonical example is a source whose size is `max(horizontal_end - horizontal_start, 0)` from which the range `[0:(horizontal_end - horizontal_start)]` is transferred; while it is clear that the whole thing is read, it can not be proven symbolically.
This problem is currently avoided with specialization, but in the long term it should be solved properly (a small SymPy illustration follows below).
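  A small SymPy illustration of why the check fails without additional knowledge (the symbol names follow the example above):

  ```python
  import sympy as sp

  h_start, h_end = sp.symbols("horizontal_start horizontal_end", integer=True)
  source_size = sp.Max(h_end - h_start, 0)  # allocated size of the source node
  copied_size = h_end - h_start             # extent of the transferred range

  # Without extra knowledge the difference does not reduce to zero, so the
  # transformation must conservatively refuse to apply.
  print(sp.simplify(source_size - copied_size))

  # If the non-negativity of the extent is known (which is effectively what
  # specialization provides), the two expressions become equal.
  extent = sp.Symbol("extent", nonnegative=True)  # stands for horizontal_end - horizontal_start
  print(sp.simplify(sp.Max(extent, 0) - extent))  # -> 0
  ```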
- The GPU transformations have become very outdated and should be overhauled.
For example, in the beginning we had big problems with trivial Tasklets; however, these should be gone by now.
- The way the iteration order of Maps is set is neither very stable nor robust.
To make things "interesting" there are multiple aspects that have to be considered:
- For normal Maps, i.e. the ones created by the lowering, we assume that their order is good for CPU; when we have to go to GPU, we reorder them such that the horizontal dimension, which we identify by its name alone, ends up being used for the `x` direction of the block.
- The second kind of Maps appears only on GPU.
They are generated to replace Memlets which can not be represented by `cudaMemcpy()`.
This is quite ugly and quite complicated as we directly rely on implementation details of DaCe, see [here](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/gpu_utils.py#L128) for more.
Nevertheless, for these Maps we simply reverse their iteration order and hope that it is okay.
- Furthermore, there is no DaCe API to influence this, i.e. one that allows us to reliably control which iteration variable becomes associated with stride 1.
Currently we simply rely on an implementation detail.
What we should do instead is infer the right iteration order from the strides of the inputs/outputs of a Map and its access pattern (see the sketch below).
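  A minimal, DaCe-independent sketch of such an inference; the helper and its input format are purely illustrative. For every Map parameter we look at the strides it advances along in the accessed containers and make the parameter with the smallest stride the innermost one.

  ```python
  def infer_iteration_order(strides_per_param: dict[str, list[int]]) -> list[str]:
      """Return the Map parameters ordered from outermost to innermost.

      ``strides_per_param[p]`` lists, for every accessed container, the stride
      (in elements) by which the access moves when ``p`` is incremented by one.
      """
      # The parameter advancing along the smallest stride becomes the innermost
      # one, i.e. it gets associated with stride 1 (the `x` direction on GPU).
      return sorted(strides_per_param, key=lambda p: min(strides_per_param[p]), reverse=True)

  # Example: `horizontal` walks along stride 1 in both containers, `vertical`
  # along a large stride, so the inferred order is ["vertical", "horizontal"].
  print(infer_iteration_order({"horizontal": [1, 1], "vertical": [1000, 1300]}))
  ```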
- There is `MultiStateGlobalSelfCopyElimination2`, which has the same goal as `MultiStateGlobalSelfCopyElimination`, i.e. eliminating redundant copies spanning multiple states (the pattern is sketched below), but it does so in a slightly different way.
Unfortunately we need both and there was no time to merge them together.
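  A minimal sketch of the pattern both transformations target, built as a stand-alone SDFG (names are illustrative): a global `G` is copied into a transient `T` in one state and copied back into `G` in a later state, which taken together is redundant.

  ```python
  import dace

  sdfg = dace.SDFG("self_copy_pattern")
  sdfg.add_array("G", [10], dace.float64)
  sdfg.add_transient("T", [10], dace.float64)

  # State 1: copy the global into the transient.
  st1 = sdfg.add_state("copy_out")
  st1.add_nedge(st1.add_access("G"), st1.add_access("T"), dace.Memlet("G[0:10]"))

  # State 2: copy the transient back into the global; together with the first
  # state this is a redundant self-copy of `G` that the passes should remove.
  st2 = sdfg.add_state_after(st1, "copy_back")
  st2.add_nedge(st2.add_access("T"), st2.add_access("G"), dace.Memlet("T[0:10]"))
  ```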
- We also have to set the strides of transients.
Currently we build on the assumption that the lowering uses C-order, which is okay for CPU.
When we go to GPU, we assume that it is enough to switch to FORTRAN-order (both layouts are illustrated below).
The strides should actually be inferred from the access pattern given by the Map.
See [here](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/strides.py#L31) for the current implementation.
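  For illustration, a small stand-alone sketch of the two layouts that are currently assumed (strides in elements; the inference from the Map's access pattern is not shown):

  ```python
  def c_order_strides(shape: tuple[int, ...]) -> tuple[int, ...]:
      # Row-major: the last dimension is contiguous (current CPU assumption).
      strides = [1] * len(shape)
      for i in range(len(shape) - 2, -1, -1):
          strides[i] = strides[i + 1] * shape[i + 1]
      return tuple(strides)

  def fortran_order_strides(shape: tuple[int, ...]) -> tuple[int, ...]:
      # Column-major: the first dimension is contiguous (current GPU assumption).
      return tuple(reversed(c_order_strides(tuple(reversed(shape)))))

  shape = (10, 5, 3)
  print(c_order_strides(shape))        # -> (15, 3, 1)
  print(fortran_order_strides(shape))  # -> (1, 10, 50)
  ```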
- DaCe switched from the simple state-machine layout to hierarchical control flow regions.
This caused some issues, as the exploration of the states is no longer simple.
The problem is mostly confined to [`is_accessed_downstream()`](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/utils.py#L138), [`find_successor_state()`](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/utils.py#L576C5-L576C25) and other functions, which are mostly located in [`utils.py`](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/utils.py#L576C5-L576C25).
For this, collaboration with SPCL is needed, as we have not yet fully understood how it works.
- An issue is that we rely heavily on specialization, e.g. replacing `limited_area` with `True`.
This is fine for MCH, but we should improve on that, for example by pulling things inside `if` blocks.
As far as I know, Berke Ates (SPCL) is working on `ConditionFusion` along these lines.
Another idea was to make use of SymPy's assumption interface to improve simplification.
- There is a [transformation](https://github.com/GridTools/gt4py/blob/983e5287e68c56cc92b46d7683b33b2921ca0519/src/gt4py/next/program_processors/runners/dace/transformations/strides.py#L132C11-L132C12) that corrects the strides of containers in NestedSDFGs.
However, views are currently ignored; there is an experimental [PR](https://github.com/GridTools/gt4py/pull/1784) for this, which also shows that such a pass is needed.
This has rather low priority, since we try to eliminate any view anyway.
## `concat_where` Expressions
> A [project](https://hackmd.io/bd-hKFV1SZG84VLzS7zqKw) was shaped to solve this issue.
> The description was moved there and is removed here.