# Field View discussions
###### tags: `discussion`
- Participants: Felix, Hannes, Enrique, Peter, Till
### StencilPy links:
- Slides: https://1drv.ms/p/s!AvQEMrXVXiLyg9x3IlEJtohxLLnJ6A?e=WWOgdE
- Frontend: https://github.com/petiaccja/StencilPy
- Backend: https://github.com/petiaccja/StencilIR
## Summary of issues
1. Missing features:
1. Embedded field view execution
2. Local view frontend
2. Design inconsistencies / contradictions:
1. Field operator != local view stencil: field operators cannot return tuples of independent fields
2. Clarify field concept: sized or not, origin or not?
3. Fields have different mutability semantics in different contexts
4. Function-like objects have context-dependent out-arg or return-value semantics
3. Potentially suboptimal design / architecture:
1. Self-contained IRs may be better (self-contained: all information present within the IR to compile into machine code -- types, closure variables, referenced functions)
2. Too many different function-like objects: programs, field operators, scan operators
3. Abstract shifts, field offsets, offset providers and string names: difficult for the user, difficult to handle/optimize, error-prone
4. Compilation pipeline streamlining: frontend IR and backend IR are dissimilar
5. Builtin functions vs dedicated AST nodes: builtin functions introduce special casing
4. Improvements of the implementation:
1. Using MLIR & the LLVM infrastructure
2. Performance of the compilation process
3. Performance of the optimized binaries
## Description of issues
### Missing features
#### Embedded field view execution
The code written by the user was planned to be regular Python code that can be run in the Python interpreter as is. This is what we call embedded execution.
Currently, this is not supported in GT4Py.
#### Local view frontend
We've been planning to enable users to describe stencil computations in the context of a single element of the output grid, not just using expressions that operate on fields.
Currently, this is not supported in GT4Py.
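To illustrate the difference between the two views, here is a plain-Python sketch (hypothetical, not GT4Py syntax): field view expresses the computation as an expression over whole fields, while local view describes the computation for a single element of the output grid, with a driver applying it over the domain.

```python=
# Hypothetical illustration of field view vs local view,
# modeled with plain Python lists (not GT4Py code).

def laplacian_field_view(f):
    # Field view: one expression over (shifted copies of) the whole field.
    inner = range(1, len(f) - 1)
    return [f[i - 1] - 2 * f[i] + f[i + 1] for i in inner]

def laplacian_local_view(f, i):
    # Local view: the computation for a single output element i.
    return f[i - 1] - 2 * f[i] + f[i + 1]

def apply_local(stencil, f):
    # Driver that applies the local-view stencil over the output grid.
    return [stencil(f, i) for i in range(1, len(f) - 1)]

f = [0.0, 1.0, 4.0, 9.0, 16.0]
assert laplacian_field_view(f) == apply_local(laplacian_local_view, f)
```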
### Design inconsistencies / contradictions
#### Field operator != local view stencils
In the current lowering pipeline, field operators of the frontend are lowered to a single local view stencil. Field operators have function-like semantics, meaning they can return arbitrary tuples of fields. Stencils have different semantics: they can only produce tuples of fields that have the exact same size and dimensions. As a consequence, field operators cannot always be lowered into a single stencil, and expected features of field operators, like tuple returns, remain unavailable to users.
Resolution: tuple semantics need to be defined
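A minimal model (assumed for illustration, not GT4Py code) of why the mismatch arises: a stencil yields one tuple per output point, so every field it returns necessarily shares the stencil's domain, whereas a field operator may return fields of different sizes.

```python=
# Toy model: stencils produce one tuple per output point, so all
# returned fields share one domain; field operators need not.

def run_stencil(stencil, domain):
    # Evaluate the stencil per point and transpose the per-point
    # tuples into a tuple of fields, all on the same domain.
    results = [stencil(i) for i in domain]
    return tuple(list(col) for col in zip(*results))

a = [1.0, 2.0, 3.0, 4.0]

# Fine: both outputs live on the same domain.
doubled, squared = run_stencil(lambda i: (2 * a[i], a[i] ** 2), range(len(a)))
assert len(doubled) == len(squared) == len(a)

# A field operator, by contrast, may return fields of different sizes,
# e.g. the field itself plus its (shorter) forward difference:
def field_op(a):
    diff = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    return a, diff  # domains differ -> not expressible as one stencil

full, diff = field_op(a)
assert len(full) != len(diff)
```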
#### Clarify field concept
Fields have several possible interpretations:
1. Fields are infinite, operations apply over the infinite domain
2. Fields are weakly sized and origined, operations apply on the domain intersection
3. Fields are strongly sized and origined, operations apply only when domains match exactly, otherwise result in error
4. Fields are strongly sized and originless, operations apply only when sizes match exactly, otherwise result in error
Note that weakly domained fields are the computer-based implementation of infinite fields; the two differ only conceptually, not in implementation. Additionally, strongly domained and strongly sized fields are very similar to each other.
We can therefore reduce the interpretation to two categories:
- Weakly sized fields:
- May be difficult to reason about
- Implementation is more involved
- Simpler syntax
- Correctness checked only on the top-level function
- Boundary treatment not straightforward
- Does not interact well with neighbor tables
- Strongly sized fields:
- Clear meaning
- Straightforward implementation
- More verbose syntax
- Correctness checked for every expression
- Boundary treatment straightforward
- Interacts well with neighbor tables
It would be good to clarify what exactly is the concept of a field.
Resolution: we decided on interpretation 2
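A small sketch of the chosen interpretation (weakly sized fields with an origin, binary operations applying on the intersection of the two domains); the `Field1D` class is a hypothetical toy model, not GT4Py's field implementation.

```python=
# Toy model (assumed semantics): weakly sized, origined 1D fields
# where binary operations apply on the domain intersection.

from dataclasses import dataclass

@dataclass
class Field1D:
    origin: int   # index of the first element
    data: list    # element values

    def __add__(self, other):
        # Intersect the two index ranges [origin, origin + len).
        lo = max(self.origin, other.origin)
        hi = min(self.origin + len(self.data),
                 other.origin + len(other.data))
        return Field1D(lo, [
            self.data[i - self.origin] + other.data[i - other.origin]
            for i in range(lo, hi)
        ])

a = Field1D(0, [1, 2, 3, 4])   # covers indices 0..3
b = Field1D(2, [10, 20, 30])   # covers indices 2..4
c = a + b                      # result covers the intersection 2..3
assert c.origin == 2 and c.data == [13, 24]
```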
#### Fields have different mutability semantics in different contexts
What is a modification?
```python=
a: Field = ...
b: Field = ...
# Reassignment IS NOT modification
# No side effects present that change the elements of a or b
c = a
c = b
f(a, b) # a and b stay the same
# Creating a new field from existing ones IS NOT modification
# Again, no side effects
d = insert_slice(a[XDim[1:-1]], b, XDim[1:-1])
f(a, b) # a and b stay the same
# Overwriting part of b IS modification.
# The side effect of this statement is that subsequent operations
# will see a different b compared to preceding ones.
b[XDim[1:-1]] = a[XDim[1:-1]]
f(a, b) # b is different!
```
Currently, programs have mutable fields, whereas field operators have immutable fields. Combined with programs having strongly sized fields while field operators have weakly sized fields, this makes the API more complicated than ideal, so it would probably be beneficial to have consistent mutability semantics.
Resolution:
- mutability is there for
- integrating with applications that provide the buffer (Fortran)
- timeloops (static allocation, memory pools makes things complicated)
- "having strongly sized fields and field operators having weakly sized fields":
- see apply field operator in https://hackmd.io/fCXnShnFR96kFau7lw36ew?view
#### Function-like objects have context-dependent out-arg or return-value semantics
The current semantics:
```python=
# 1. Field operators being called from field operators: return value
r = foo(...)
# 2. Field operators being called from Python: out-arg
foo(..., out=r)
# 3. Field operators being called from programs: out-arg
foo(..., out=r)
# 4. Programs being called from Python: out-arg
# `outs` arguments are mutated in place, but read-only arguments are not marked as such
bar(ins..., outs...)
```
Currently, the calling convention of function-like objects varies by context. It would be clearer if all function-like objects used the same semantics regardless of the context.
Resolution:
We acknowledge that different call semantics are not easy for the user, but we didn't find a better way that works nicely with all constraints (e.g. Fortran integration).
1. we want this in as many places as possible (and want to minimize field operators being called from programs); applies in a pure Python GT4Py (e.g. green-line ICON)
2. was introduced as a short-cut for testing ("implicit program")
3. "normal" mode, program serves as entry-point for integration in external applications and bindings
4.
### Suboptimal design / architecture
#### Self-contained IRs
What is a self-contained IR?
- Contains complete typing information
- Incorporates all referenced symbols
- Global constants
- Global variables
- Called functions
- Consequently: the IR alone is enough to produce machine code for any target
Our IRs are currently not self-contained:
- Frontend (PAST, FOAST):
- Called functions missing: the IRs contain only a single function, the referenced functions are accessed from data outside of the IR
- There are two independent IRs: FOAST and PAST
- Backend:
- Typing information is missing
- Abstract offsets need offset provider dictionary
This is an issue because it complicates the code paths: every step that processes the IR requires not only the IR as input, but also the missing information, such as typing or other complementary IRs. Each processing step must internally assemble this information from various sources to modify the IR, then discard it again when outputting the modified IR, turning every pass into three steps instead of one.
Resolution:
- we all agree that types and "offset_provider" should be part of the IR.
- a FieldIR would represent both program and field_operator
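As a sketch of what "self-contained" could mean concretely (hypothetical dataclasses, not the actual GT4Py or FieldIR design): a single module node that inlines every referenced function, full typing, and the offset providers, so a backend needs no side channels.

```python=
# Hypothetical sketch of a self-contained IR module: all typing,
# referenced functions, and offset providers live inside one node.

from dataclasses import dataclass, field

@dataclass
class FunctionDef:
    name: str
    param_types: dict   # parameter name -> type string
    return_type: str    # fully resolved return type
    body: list          # statements (elided in this sketch)

@dataclass
class Module:
    functions: dict = field(default_factory=dict)         # name -> FunctionDef
    offset_providers: dict = field(default_factory=dict)  # offset -> table

    def is_self_contained(self, entry):
        # Simplified check: the entry function must be present and
        # typed. A real check would also walk all referenced symbols.
        fn = self.functions.get(entry)
        return fn is not None and fn.return_type is not None

m = Module()
m.functions["lap"] = FunctionDef("lap", {"f": "Field[float]"},
                                 "Field[float]", [])
assert m.is_self_contained("lap")
assert not m.is_self_contained("missing")
```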
#### Too many different function-like objects
We need to implement several computational patterns. There are two main approaches to it in the API:
1. Add a new callable object for each pattern
2. Add a new way to call an existing object for each pattern
With new callable objects:
```python=
@field_operator
def foo(a):
return 2 * a
@scan_operator
def bar(_, a):
return 2 * a, 2 * a
@ewise_operator
def baz(a):
return 2 * a
r = foo(a) # Applies operation to entire field
r = bar(a) # Performs scan over entire field
r = baz(a) # Applies operation elementwise to entire field
```
With new ways to call objects:
```python=
@func
def foo(a):
return 2 * a
@func
def bar(_, a):
return 2 * a, 2 * a
r = foo(a)
r = scan(a, bar)
r = map(a, foo)
```
Adding new ways to call existing objects has the benefit that callables can potentially be reused in multiple contexts. The code is also more readable, as the syntax of the statement clarifies the intent; otherwise, one would have to look up the definition of the callable to understand the intent.
Another related issue is the existence of programs, which could potentially be replaced by field operators / functions. If an externally provided output field is a requirement from our users, we can make such calling convention possible for field operators, eliminating the need for separate programs with mutability semantics.
#### Abstract shifts, field offsets, offset providers and string names
There are several concerns about the tagged dimension and neighbor access infrastructure:
- **Dimension classes**: There are three types of dimensions: horizontal, vertical, and local. The limitations that apply:
- unstructured neighbors are only accessible for horizontal dimensions,
- reductions can only be applied to local dimensions,
- there must be exactly one horizontal dimension and there can be at most one vertical dimension for unstructured grids,
- and vertical dimensions are not applicable to structured domains.
Removing dimension classes and allowing all operations for all dimensions may greatly simplify the implementation as well as the user code.
- **Abstract shifts**: Virtually every IR pass and every user needs to know the nature of the neighbor access, therefore making it abstract does not give much benefit but complicates the code. Removing abstract shifts may simplify IR passes, and might also mean that the equivalent frontend concepts of field offsets and offset providers disappear altogether.
- **Offset providers**: The current way offset providers are handled can be rather confusing, especially for structured grids. One has to create a FieldOffset object that's used in the syntax to access structured neighbors, and one has to create a mapping of FieldOffsets to Dimensions. Why not simply use the Dimension in the syntax to access neighbors? Removing the indirection by the offset provider dictionary would also simplify unstructured neighbor access. If grouping connectivity tables is a requirement, we can introduce named tuples into the language.
- **String names**: It is not clear whether two Dimension objects or two FieldOffset objects with the same name are equivalent and interchangeable. It is also confusing that one has to use strings as keys in the offset provider dictionary instead of using actual FieldOffset objects. Removing strings and relying solely on objects would eliminate these issues.
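The string-name ambiguity can be made concrete with a small sketch (the `Dim` and `DimById` classes below are hypothetical, not GT4Py's): whether same-named dimensions are interchangeable depends on the equality semantics chosen, and string-keyed dictionaries silently pick value semantics regardless.

```python=
# Hypothetical illustration of the string-name ambiguity.

from dataclasses import dataclass

@dataclass(frozen=True)
class Dim:
    name: str

# With value-based equality, same-named dimensions are interchangeable...
k1, k2 = Dim("K"), Dim("K")
assert k1 == k2

class DimById:
    def __init__(self, name):
        self.name = name

# ...with identity-based equality they are not...
j1, j2 = DimById("K"), DimById("K")
assert j1 != j2

# ...but a string-keyed offset provider conflates them either way:
offset_provider = {j1.name: "table_a", j2.name: "table_b"}
assert len(offset_provider) == 1  # the second entry overwrote the first
```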
#### Compilation pipeline streamlining: frontend IR and backend IR are dissimilar
| Feature | Frontend | Backend |
| -------- | -------- | -------- |
| Functions | Program<br>Field operator<br>Scan operator | Program<br>Stencil<br>`scan` builtin |
| Data access | Field | Iterator |
| Typing | Typed | Partially typed |
The frontend and backend IRs use different structures to organize code, different data access models, and different typing systems. The conversion between them is heavily focused on translating one model into the other, while very little progressive lowering of the level of abstraction happens. If the two IRs were more similar, converting between them would take less effort, and more focus could go into progressively getting closer to machine code, for example by eliminating named dimensions or higher-order functions that don't help with optimization.
#### Builtin functions vs dedicated AST nodes
The current approach primarily seen in iterator IR is that there are very few dedicated AST nodes, and most concepts in the IR are represented by builtin functions that have a special meaning.
Arguably, the two methods are conceptually equivalent or very similar, because both represent an operation of the IR as a string plus some arguments.
However, for either to be usable, one needs a proper framework. For dedicated AST nodes, the framework is essentially the OOP facilities of the programming language, where the identifier of operations is a class name and the specification for the arguments are the names and types of the class' fields. In the case of builtin functions, we have to implement the framework ourselves, which we haven't done. This results in patterns where we use if-statements based on the builtin's name or auxiliary visit methods like `_visit_shift`, essentially creating a "virtual" IR within the IR. This is also coupled with excessive verification of constraints like the types of the arguments to a function. This kind of code is substantially more verbose and complicated than leveraging OOP to do the same.
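The contrast can be sketched schematically (this is not the real iterator IR, just an illustration of the two encodings): with builtin functions, the operation's identity lives in a string and argument constraints must be checked by hand; with a dedicated node, the class name is the operation and the typed fields are the argument specification.

```python=
# Schematic contrast: builtin-function encoding vs dedicated AST node.

from dataclasses import dataclass

# Builtin-function style: one generic node, dispatch on a string.
@dataclass
class FunCall:
    fun: str
    args: list

def visit_funcall(node):
    if node.fun == "shift":  # "virtual" IR: meaning lives in strings
        offset, it = node.args
        if not isinstance(offset, int):  # manual argument verification
            raise TypeError("shift expects an int offset")
        return f"shift({offset})"
    raise NotImplementedError(node.fun)

# Dedicated-node style: the class name identifies the operation and
# the typed fields specify the arguments.
@dataclass
class Shift:
    offset: int

def visit_shift(node: Shift):
    return f"shift({node.offset})"

assert visit_funcall(FunCall("shift", [1, "it"])) == visit_shift(Shift(1))
```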
### Improvements of the implementation
#### Using MLIR & the LLVM infrastructure
MLIR provides a powerful infrastructure to implement modern compilers:
- It's possible to define *dialects* that are basically domain specific languages, and mix the dialects in the same IR
- Progressive lowering is done by converting one dialect at a time to a lower level dialect
- Conversion and optimization passes can be defined in a straightforward and structured manner
- The dependency graph of operations is exposed directly to help implement IR analysis
- Common transforms are generalized across dialects: inlining, canonicalization, debug checks, CSE, bufferization, ...
- Compilation uses multi-core CPUs
- Straightforward way to produce LLVM IR and SPIR-V from MLIR, therefore binaries for CPU and GPU
Leveraging this infrastructure can help with focusing on the domain specific problems instead of building the infrastructure in-house.
#### Performance of the compilation process
The current compilation pipeline in GT4Py is rather slow, with stencils taking several seconds to compile. MLIR is about 50 times faster.
#### Performance of the optimized binaries
I'm not sure if Felix referred to the performance of the code or the performance of the compiler