# EXCLAIM Planning Meeting 24.3.2024
**Designing greenline for warm bubble**
GOAL: planning of the warm bubble experiment
- 3 cycles in next quarter
- resources?
### bottom line:
There is no way we can complete the entire warm bubble experiment in the next quarter; finishing it by the end of the year is realistic.
### discussion for Wednesday at QPM
- @Will, ~~Anurag~~: ICON-C components: what is happening there?
    - there are microphysics and libraries. They will probably focus on radiation next.
- @Anurag, @Chia Rui: turbulence needed?
    - No, there are enhanced warm bubble tests that include turbulence, so it can be added in a second version.
- @Will, @Mauro: should/can we use Fortran granules, and where does it make sense?
    - is ICON-C interested at all? How much do we need to take them into account?
    - how is the resource situation for us? We need more Python skills.
- @Will: grid: what is going on there?
    - DWD does not contribute any resources (FP). It is unclear whether ICON-C will adopt `libGridMan`. Will will write a Fortran diffusion granule that uses `libGridMan` in order to have a showcase. We (icon4py) should focus on our own work.
- @Will, @Anurag: advection: which schemes? For the warm bubble experiment the simple schemes should be enough.
    - It should be enough to first port only one simple scheme. [See below](#Advection)
# Discussion: what extra functionality do we need for warm bubble?
- initialization same as JW test case?
    - analytic initial conditions (see the sketch after this list)
- mesh:
- double periodic boundary conditions
- components:
- dycore + diffusion
- tracer advection (what schemes?, horizontal, vertical)
- microphysics
- turbulence (not necessarily)
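For the analytic initial conditions above, a minimal sketch of a generic warm-bubble potential-temperature field (a Gaussian perturbation on a constant background). The amplitude, bubble centre, radius, and background value are illustrative assumptions, not the parameters of the actual test case.

```python
import numpy as np

def warm_bubble_theta(x, z, theta_bg=300.0, amplitude=2.0,
                      x_c=0.0, z_c=2000.0, radius=1000.0):
    """Background potential temperature plus a Gaussian warm-bubble perturbation.

    x, z: coordinates [m]; all parameter values are illustrative placeholders.
    """
    r2 = (x - x_c) ** 2 + (z - z_c) ** 2
    return theta_bg + amplitude * np.exp(-r2 / radius**2)

# example: evaluate the perturbed field on a small 2D slice
x, z = np.meshgrid(np.linspace(-5000.0, 5000.0, 101), np.linspace(0.0, 10000.0, 101))
theta = warm_bubble_theta(x, z)
```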
# questions 24.3.2024
- advection:
- horizontal:
        - semi-Lagrangian
- vertical advection
        - tridiagonal solver (see the sketch after this list)
        - what exactly is ported? there was this issue with the semi-Lagrangian pattern
    - which schemes? what would be needed?
        - depends on pytree support in gt4py
- performance measuring/tooling? (greenline specific)
- turbulence (what is easier: a Fortran or a Python granule?)
- there are 3 cycles in the next quarter
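The vertical-advection item above mentions a tridiagonal solver. A minimal sketch of the standard Thomas algorithm for such systems, assuming the implicit vertical discretization yields a tridiagonal matrix; this is illustrative only, not the ICON implementation.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal   (a[0] is unused)
    b: main diagonal
    c: super-diagonal (c[-1] is unused)
    d: right-hand side
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against NumPy's dense solver
n = 5
a = np.full(n, -1.0)
b = np.full(n, 2.0)
c = np.full(n, -1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(solve_tridiagonal(a, b, c, d), np.linalg.solve(A, d))
```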
# next projects -> *need to be prioritized and more items added*
- (1, 0.5 PC) merge JW (initial conditions)
- (1, 0.5 PC) merge microphysics
- (1, 1 PC) finish IO prototype: add all fields needed for JW case
- (2, 1 PC) refactor the grid-domain interface: it is very much like ICON and confusing to everyone (see the sketch after this list).
```python
def get_start_index(domain):
    # return the first grid index of the given domain zone
    pass

def get_end_index(domain):
    # return the last grid index of the given domain zone
    pass
```
- (1, 2-3 PC ) port tracer advection granule
- (2, 1 PC) vertical grid: k levels read from file or calculated
- (3, 1 PC) Configuration: set up a configuration library for the entire model
- (3, 3 PC) work towards architecture (https://hackmd.io/Rl0fAO8bSHa-ltMHxbNyFA): adapt existing code to new concept
- (1, ) metric and interpolation fields
    - make initialization fields available for the model.
    - more fields are needed for the new components and for the torus
- turbulence - needed? NO
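A minimal sketch of what the refactored grid-domain interface mentioned above could look like, assuming a small `Domain` description object and an `IndexProvider` holding the index table; all names and index values are hypothetical, not the actual icon4py API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Zone(Enum):
    # hypothetical zones, loosely mirroring ICON's start/end index blocks
    LATERAL_BOUNDARY = auto()
    NUDGING = auto()
    INTERIOR = auto()
    HALO = auto()


@dataclass(frozen=True)
class Domain:
    """Hypothetical description of a horizontal domain region."""
    dimension: str   # e.g. "cell", "edge", "vertex"
    zone: Zone


class IndexProvider:
    """Holds a (dimension, zone) -> (start, end) lookup, e.g. read from the grid file."""

    def __init__(self, table: dict[tuple[str, Zone], tuple[int, int]]):
        self._table = table

    def get_start_index(self, domain: Domain) -> int:
        return self._table[(domain.dimension, domain.zone)][0]

    def get_end_index(self, domain: Domain) -> int:
        return self._table[(domain.dimension, domain.zone)][1]


# illustrative usage with made-up index values
provider = IndexProvider({("cell", Zone.INTERIOR): (428, 10_000)})
interior_cells = Domain("cell", Zone.INTERIOR)
print(provider.get_start_index(interior_cells), provider.get_end_index(interior_cells))
```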
## Advection
- Split into horizontal and vertical
`mo_advection_hflux.f90` and limiter `mo_advection_hlimit.f90`
`mo_advection_vflux.f90` and limiter `mo_advection_vlimit.f90`
- then there is an extra advection stepping that calls the advection on its own substeps (`n_advection_step = 2`). It inverts the vertical vs horizontal order on each substep.
-> We should start with one simple scheme for horizontal and one for vertical. Each scheme should go into its own granule anyway; parallelizing only makes sense for tracers running on the same scheme. (See the sketch at the end of this section.)
Horizontal: 2 = MIURA (`upwind_hflux_miura`) with limiter 4 = `ifluxl_sm` (`hflx_limiter_pd`)
Vertical: 3 = `ippm_v` (`upwind_vflux_ppm`) with limiter 1 = `islopel_vsm` (nothing is done in `upwind_vflux_ppm`)
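A minimal sketch of the substepping idea described above, assuming the horizontal and vertical operators are passed in as callables; this only illustrates alternating the operator order per substep and is not the ICON code.

```python
def advection_substepping(tracer, horizontal_step, vertical_step, n_substeps=2):
    """Apply horizontal and vertical advection over n_substeps,
    inverting the horizontal/vertical order on every other substep."""
    for substep in range(n_substeps):
        if substep % 2 == 0:
            tracer = vertical_step(horizontal_step(tracer))
        else:
            tracer = horizontal_step(vertical_step(tracer))
    return tracer


# illustrative use with trivial stand-ins for the two operators
q_new = advection_substepping(1.0, lambda q: q + 0.1, lambda q: 0.99 * q)
```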