## Running the granule
I managed to run the granule again using the following PRs, which contain a few hacks and custom branches:
- https://github.com/C2SM/icon-exclaim/pull/338
- https://github.com/C2SM/icon4py/pull/731
for the following experiments:
- [x] exp.mch_ch_r04b09_dsl on 1 GPU
- [x] exp.mch_ch_r04b09_dsl on 4 GPUs
- [ ] mch_icon_ch2: broken, see below

I didn't try any other experiments.
Current problems:
### Skip values
GT4Py's unstructured extent analysis inserts the full horizontal domain instead of computing proper extents.
As a result, the generated field operations (`as_fieldop`) indicate computation on the full horizontal domain, including halo/boundary regions where the neighbor tables contain `-1` (because there is no neighbor on the current rank or in the domain). In ICON these values are never used, because the compute domain is always smaller.
This causes problems in GT4Py:
- for DaCe, because the domain is used in the map;
- for GTFN, because if temporaries are extracted, the domain is used to fill the temporary.
Agreed workaround:
#### Replace `-1`s
Replace **all** `-1`s with an existing index, e.g. one that is already used in the neighborhood (for better locality), or simply `0`. This is fine for local-area experiments: the resulting computations (close to the halo/boundary) are never used for output (the original problem is precisely that we overcompute), and there are no pentagon points.
For global experiments, we can apply this to all connectivities that are not affected by pentagon points.
Note: I tried this approach (replacing all `-1`s) and managed to run the multi-GPU setup for all experiments I tried (the mch dsl experiment, mch_icon_ch2, exclaim ape). See https://github.com/C2SM/icon4py/pull/728
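A minimal sketch of the replacement idea on a hypothetical neighbor table; `replace_skip_values` is an illustrative helper (not the icon4py implementation) that fills each `-1` with an index already present in the same row, for locality:

```python
import numpy as np

# Hypothetical cell-to-edge neighbor table; -1 marks missing neighbors
# (halo/boundary rows on the current rank).
c2e = np.array(
    [
        [0, 1, 2],
        [3, -1, -1],   # boundary cell with missing neighbors
        [-1, 4, 5],
    ]
)

def replace_skip_values(table: np.ndarray) -> np.ndarray:
    """Replace every -1 with an existing index from the same row
    (for better locality), falling back to 0 if the whole row is -1."""
    out = table.copy()
    for row in out:
        valid = row[row >= 0]
        fill = valid[0] if valid.size else 0
        row[row == -1] = fill
    return out

patched = replace_skip_values(c2e)
print(patched)  # rows become [[0, 1, 2], [3, 3, 3], [4, 4, 5]]
```

The overcomputed values at these patched positions are garbage but harmless, since ICON never reads them outside the compute domain.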
### vn_incr optional
In a recent PR, `vn_incr` was added to the granule interface. Here is a PR to make it optional in icon4py: https://github.com/C2SM/icon4py/pull/729.
Note that the Fortran code needs to be fixed, too! With old bindings you will get a weird error.
icon-exclaim changes: https://github.com/C2SM/icon-exclaim/pull/338
### ddt_w_adv_ntl1/2 not verifying
These fields show up as wrong in `build_xpu2py_verify` because they (intentionally) don't match in the halo. Daniel implemented a feature that allows deciding where to verify; for these fields the verification range needs to be fixed.
### bug in divdamp_type == 32
See
https://github.com/C2SM/icon4py/blob/242f29f285bfd5374b8aa257ee680172398feb0a/model/atmosphere/dycore/src/icon4py/model/atmosphere/dycore/solve_nonhydro.py#L546
and the following line. This bug can be triggered by `exp.mch_icon-ch2`.
- It should be `_backend`.
- The `where` call is missing 2 arguments: single-argument `np.where(condition)` is equivalent to `np.asarray(condition).nonzero()`, i.e. it returns a tuple of index arrays with the components of the indices where the condition is non-zero.
- The problem is probably that it returns a tuple of arrays, not just an array.
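The two `np.where` forms behave very differently, which can be reproduced directly with NumPy:

```python
import numpy as np

cond = np.array([[True, False], [False, True]])

# Single-argument form: equivalent to np.asarray(cond).nonzero();
# returns a TUPLE of index arrays, one per dimension.
idx = np.where(cond)
print(type(idx))  # <class 'tuple'>

# Three-argument form: elementwise selection, returns a single array.
sel = np.where(cond, 1.0, 0.0)
print(sel)  # [[1. 0.] [0. 1.]]
```

If the dycore code expects the elementwise-selection result but only passes the condition, it silently gets the tuple of index arrays instead.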
## Environment on balfrin and santis
Once the new ICON uenv (v2, or whatever number they end up using) is officially released, we should switch icon-exclaim to use it.
I have https://github.com/C2SM/icon-exclaim/pull/333 to switch balfrin to this uenv.
Building should also work on santis; however, running needs the run scripts from upstream. I lost track of the status: Will and Daniel are probably involved.
I would merge my PR first, as it moves the only supported system for granule runs (balfrin) to the new setup. Then, in a separate PR, make santis work as well.
## Running the benchmark
I used to compare Jablonowski-Williamson r2b5, r2b6, and r2b7 between the greenline, the blueline + granule, and OpenACC.
Chia Rui gave me the OpenACC and greenline setups. For the granule you need to build `build_gpu2py` and copy the runscript and grid files (see the absolute paths in the runscripts) into `build_gpu2py`.
Last time I tried R2B7 with the greenline JW driver, the Python overhead was relatively small. I concluded this because I used the `detailed_timers` from https://github.com/C2SM/icon4py/pull/731/files#diff-c5c9f29cbbd1ece39cd3a41411f035761a31618fcdde23233ad97b2d11873905R74, which sync after each dycore substep and after diffusion (the same syncs as in the granule), and the sum of 5 * dycore substep + diffusion was about the same as when measuring without `detailed_timers`.
As a first step to benchmark, I would try to reproduce these numbers https://docs.google.com/spreadsheets/d/1YkehDnIZP-BtRFMuYaphYiON08Dkf2U9ESJeSYnfU7Y/edit?gid=8172009#gid=8172009.
I.e. achieve something like `0.389463` (the better number is with static args for diffusion and velocity advection) to `0.407985` for R2B7 greenline JW with `detailed_timers=False`.
It is important to set `PYTHONOPTIMIZE=2` (or `1`) and `GT4PY_UNSTRUCTURED_HORIZONTAL_HAS_UNIT_STRIDE=1`.
Also probably set `GT4PY_BUILD_CACHE_LIFETIME=persistent` to make sure programs are cached.
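Collected in one place, the environment setup for a benchmark run would look like this (just the variables named above; add it to the runscript before launching ICON):

```shell
# Disable Python asserts and debug paths (2 also strips docstrings)
export PYTHONOPTIMIZE=2
# Tell GT4Py the horizontal dimension has unit stride
export GT4PY_UNSTRUCTURED_HORIZONTAL_HAS_UNIT_STRIDE=1
# Keep compiled programs cached across runs
export GT4PY_BUILD_CACHE_LIFETIME=persistent
```

Remember that `PYTHONOPTIMIZE>0` disables the `py2fgen` profiling output (see below), so drop it when investigating overhead.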
Note that icon4py main now has more combined programs, which have static args as of https://github.com/C2SM/icon4py/pull/731, but I didn't have time to rerun the greenline.
### Overhead investigation
Once https://github.com/GridTools/gt4py/pull/1978 is merged, GT4Py can report detailed timers. It would be interesting to see the GT4Py overhead from these timers. Additionally, the `py2fgen` bindings have timers which can be enabled with `PY2FGEN_PROFILING=1`. The timings are reported in the ICON slurm log via the Python logger. (There is also `PY2FGEN_LOG_LEVEL` to set the logging level for the bindings part.)
Note:
- with `PYTHONOPTIMIZE` > 0, the profiling output is disabled!
- check the bindings code to understand how the timers work, see e.g. https://github.com/C2SM/icon4py/blob/242f29f285bfd5374b8aa257ee680172398feb0a/tools/src/icon4py/tools/py2fgen/_export.py#L127
### Granule performance
With the fixes described in [running the granule](#Running-the-granule), it should be possible to compare OpenACC and the granule, e.g. multi-node mch_icon_ch2 or single-node JW r2b6.
Last time I tried, r2b6 with the granule was slower than OpenACC and slower than the greenline (the greenline was faster than OpenACC); see the Google sheet mentioned above.
### Blueline vs greenline performance for JW
Investigate whether the greenline has the cupy CUB compilation problem
```
/scratch/mch/vogtha/icon_granule_integrate4/icon-exclaim/externals/icon4py/.venv/lib/python3.10/site-packages/cupy/_core/include/cupy/_cccl/libcudacxx/cuda/std/detail/libcxx/include/limits(465): error: floating constant is out of range
return __FLT_DENORM_MIN__;
^
```
which can be worked around with `export CUPY_ACCELERATORS=""`.
If the greenline doesn't need this workaround, that could explain the performance difference.
We noticed that there is an expensive debug print in the blueline when the log level is > 7, so for performance runs use Fortran ICON log level 1.
## concat_where frontend
Handover to Enrique:
- Move the unchaining of comparisons from the Python AST preprocessing to `foast_to_gtir`: then we can run type inference on FOAST and reject the domain cases, while still allowing scalar comparisons.
- Disallow `0 < K < n` while allowing `0. < scalar_float < someval`.
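A rough sketch of what unchaining a comparison could look like at the plain Python AST level; `unchain` is a hypothetical helper for illustration, not the actual frontend code (which would do this on FOAST/GTIR nodes after type inference):

```python
import ast

def unchain(node: ast.Compare) -> ast.expr:
    """Split a chained comparison `a < b < c` into `a < b and b < c`."""
    operands = [node.left, *node.comparators]
    # One binary Compare node per adjacent operand pair.
    pairs = [
        ast.Compare(left=operands[i], ops=[node.ops[i]], comparators=[operands[i + 1]])
        for i in range(len(node.ops))
    ]
    if len(pairs) == 1:
        return pairs[0]
    return ast.BoolOp(op=ast.And(), values=pairs)

tree = ast.parse("0 < K < n", mode="eval")
result = ast.unparse(unchain(tree.body))
print(result)  # 0 < K and K < n
```

Doing this after type inference (rather than in AST preprocessing) is what makes it possible to accept `0. < scalar_float < someval` while rejecting the chained form for domain expressions like `0 < K < n`.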