# Repurposing BRIDGE as a Grid Manager
## BRIDGE Grid Reader
Florian Prill has packaged the BRIDGE grid reading and distribution functionality as a standalone code. This code has been included in the eth-cscs/ICON_kernels.git repository, in the `grid_manager` directory. Let's refer to the current version as GridMan0. It does the following:
* Reads a standard ICON grid file, and builds up a global version of the grid information;
* Distributes the grid onto an arbitrary number of processes;
* Creates the cell halo information, including the row indexing `refin_c_ctrl`.
As such, it could serve as the basis for a Grid Manager used in the context of the full ICON model, although this would require some additions:
* The current BRIDGE code does not fully support nesting;
* The decomposition uses a k-d tree algorithm and is thus different from the one used in ICON;
* GridMan0 could be extended to accept a predefined domain decomposition of the cells, either from ICON or, say, from MeTiS.
Florian is helping me make sense of his BRIDGE code, and I take the liberty of posting some of his answers here. In particular, he uses a fairly simple domain decomposition method based on k-d trees, which is sufficient for their DG purposes, but which yields a different decomposition from the one ICON would calculate.
> By construction, a k-d search tree evenly (and recursively) splits the set of objects into subsets of smaller complexity. This, and the knowledge that this bisection process is defined by geometrical criteria, led me to this short implementation. No doubt, I'm most probably in the midst of a long line of "inventors" of this algorithm. The algorithm will provide a reasonable load balancing also for rather large grids. However, it does not fulfil optimality criteria like METIS' minimum graph cut criterion. MPI communication can therefore be sub-optimal. One example: the k-d tree does not guarantee that a PE's set of cells is simply connected. There are (unlikely) cases where this could lead to sub-optimal communication patterns.
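The recursive geometric bisection Florian describes can be sketched as follows. This is an illustrative Python reimplementation of the idea, not the actual BRIDGE code; all names here are invented for the sketch:

```python
# Sketch of a k-d-tree-style domain decomposition: recursively split the
# set of cell centers at the median of the coordinate with the largest
# extent, until the requested number of parts is reached.
from typing import List, Tuple

def kd_partition(centers: List[Tuple[float, float]], n_parts: int) -> List[int]:
    """Return a part id (0 .. n_parts-1) for every cell center."""
    owner = [0] * len(centers)

    def split(idx: List[int], parts: int, first: int) -> None:
        if parts == 1:
            for i in idx:
                owner[i] = first
            return
        # choose the coordinate direction with the largest extent
        dims = len(centers[0])
        spans = [max(centers[i][d] for i in idx) - min(centers[i][d] for i in idx)
                 for d in range(dims)]
        d = spans.index(max(spans))
        idx.sort(key=lambda i: centers[i][d])
        # cut proportionally so odd part counts stay load-balanced
        left_parts = parts // 2
        cut = len(idx) * left_parts // parts
        split(idx[:cut], left_parts, first)
        split(idx[cut:], parts - left_parts, first + left_parts)

    split(list(range(len(centers))), n_parts, 0)
    return owner

# Example: a 4x4 arrangement of cell centers split into 4 parts of 4 cells each
owner = kd_partition([(float(x), float(y)) for x in range(4) for y in range(4)], 4)
```

Each bisection is purely geometric, which is exactly why the resulting parts are well balanced in size but carry no guarantee of minimal cut or even simple connectedness, as noted above.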
Given that the domain decomposition is different from the one we want (and is probably not optimal), I wondered how difficult it would be to create an interface for specifying the desired decomposition, which might be the existing ICON one, or one generated by MeTiS:
> Yes, this extension should be rather easy. The important point here in the code is where we "build an index list for generating the local sub-triangulation local_idxlist": A METIS-decomposed grid would define the index list mask_c differently and it would simply ignore all previous lines related to the k-d tree.
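The extension Florian sketches amounts to deriving the local index list from an externally supplied cell-ownership array rather than from the k-d tree. A minimal illustration, where `owner_c` and the function name are assumptions of this sketch and not BRIDGE's actual API:

```python
from typing import List

def build_local_idxlist(owner_c: List[int], my_rank: int) -> List[int]:
    """Global indices of the cells assigned to `my_rank` by a precomputed
    decomposition (e.g. from METIS or from ICON's own decomposition).

    This index list would play the role of `local_idxlist` / `mask_c`:
    the k-d tree step is skipped entirely and the ownership array alone
    determines the local sub-triangulation.
    """
    return [i for i, owner in enumerate(owner_c) if owner == my_rank]

# Example: 5 cells distributed over 3 ranks by some external partitioner
local_cells = build_local_idxlist([0, 1, 0, 2, 1], my_rank=0)  # → [0, 2]
```

The point of the design is that everything downstream of the index list (building the local sub-triangulation, halos, and so on) is unchanged; only the source of the ownership information differs.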
At the all-hands meeting there was some concern that the BRIDGE code could not support the identical `p_patch` grid layout which we have in ICON. That is currently true, but with the above addition for specifying the desired data decomposition, we can specify the one from ICON. Then there is the ordering of the cell and halo rows. This ordering is supported in BRIDGE:
> Strictly speaking, the DG discretization does not need halo cells at all, which is one of its major advantages. Nevertheless I decided to support the ordering of cell rows (refin_c_ctrl) and halo rows. When you take a closer look at the BRIDGE grid handling, you will realize that it is even more general than ICON's halo region definition: Of course, we can specify cell rows, but we can also add additional arbitrary cell indices. As far as I understand it, a nudging region would operate on top of the cell row ordering. It would not introduce a different ordering or enlarge the halo region.
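The row-wise ordering described above can be illustrated with a small sketch: owned (prognostic) cells come first, followed by halo cells grouped into rows by their topological distance from the owned region, in the spirit of `refin_c_ctrl`. The neighbour table and all names here are hypothetical:

```python
from typing import Dict, List, Set, Tuple

def order_cells_by_row(owned: Set[int],
                       neighbours: Dict[int, List[int]],
                       n_halo_rows: int) -> Tuple[List[int], List[int]]:
    """Return (ordered cell list, row id per cell): row 0 for owned cells,
    row r for halo cells reached after r neighbour hops."""
    row = {c: 0 for c in owned}
    frontier = list(owned)
    for r in range(1, n_halo_rows + 1):
        nxt = []
        for c in frontier:
            for nb in neighbours.get(c, ()):
                if nb not in row:   # first visit fixes the halo row
                    row[nb] = r
                    nxt.append(nb)
        frontier = nxt
    # sort by (row, global index): owned cells first, then halo row 1, 2, ...
    ordered = sorted(row, key=lambda c: (row[c], c))
    return ordered, [row[c] for c in ordered]

# Example: a 1-D chain of cells 0-1-2-3-4, with only cell 2 owned locally
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ordered, rows = order_cells_by_row({2}, neighbours, n_halo_rows=2)
```

BRIDGE's handling is more general than this, per the answer above, since arbitrary additional cell indices can be appended beyond the pure row structure.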
The `refin_c_ctrl` definition in BRIDGE seems to be an arbitrary choice; one could take over the definition from ICON.