# Integral
`Integral` aims to provide a framework for provenance tracking, automation, and extensibility in the context of analysis using `xarray.Dataset` objects.
## Objectives
Here we attempt to clearly articulate the problems we are trying to solve.
### Enable common interface to different types of Datasets
It is often not possible or computationally efficient to merge data into a single Dataset. For instance,
- collections of models on different grids;
- collections of variables from different numerical experiments;
- combinations of gridded observations and models.
Yet as a matter of convenience, and to enable automation of workflows, it is desirable to have a clean interface to collections of Datasets, as illustrated by the sketch below.
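To make the problem concrete, here is a minimal sketch of the status quo, using toy data and illustrative names: datasets on different grids end up in a plain dictionary, and every downstream step has to loop over that dictionary by hand.
```python=
import numpy as np
import xarray as xr

# two toy "model" datasets on different horizontal grids; merging them
# into a single Dataset would force an awkward union of coordinates
ds_coarse = xr.Dataset(
    {'sst': (('lat', 'lon'), np.zeros((3, 4)))},
    coords={'lat': [-60.0, 0.0, 60.0], 'lon': [0.0, 90.0, 180.0, 270.0]},
)
ds_fine = xr.Dataset(
    {'sst': (('lat', 'lon'), np.zeros((6, 8)))},
    coords={'lat': np.linspace(-75.0, 75.0, 6), 'lon': np.linspace(0.0, 315.0, 8)},
)

# the status quo: a plain dict keyed by model name; every analysis step
# must iterate over it explicitly
dsets = {'model_coarse': ds_coarse, 'model_fine': ds_fine}
means = {key: ds.mean(dim=('lat', 'lon')) for key, ds in dsets.items()}
```
The `Collection` object described under Design is intended to replace this ad hoc pattern with a uniform interface.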
### Provide automated caching of intermediate results
While Dask enables powerful parallelism and thus rapid execution of large computations, in practice (at least in my experience) it is often not possible to reduce execution time below the thresholds required to sustain interactive analysis. Caching the results of expensive computations is an easy way to dramatically enhance interactive analysis.
Moreover, in the context of a modular workflow, where multiple processes may want to use the results of the same computation, caching can dramatically reduce total computational cost.
Finally, cached dimensionally-reduced data products have the potential to enable interactive visualization tools.
Building caching in as an intrinsic component of the workflow can yield a seamless end-to-end framework, wherein computation is done only if necessary, relying on cached products when they are available, but preserving the entire provenance chain *in situ*.
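As a rough illustration of the idea (not the `xpersist`-based mechanism sketched under Design), a compute-or-read wrapper might look like the following; the cache directory, hashing scheme, and the assumption that results are `xarray.Dataset`s are all placeholders.
```python=
import hashlib
import os

import xarray as xr

CACHE_DIR = 'intermediate-cache'  # placeholder cache location

def cached(compute_func):
    """Wrap `compute_func` so its result is cached to (and reused from) Zarr."""
    def wrapper(ds, **kwargs):
        # key the cache on the operation name and its arguments
        token = hashlib.sha256(
            f'{compute_func.__name__}-{sorted(kwargs.items())}'.encode()
        ).hexdigest()[:12]
        path = os.path.join(CACHE_DIR, f'{compute_func.__name__}-{token}.zarr')
        if os.path.exists(path):
            # computation already done: rely on the cached product
            return xr.open_zarr(path)
        result = compute_func(ds, **kwargs)  # assumed to return an xarray.Dataset
        os.makedirs(CACHE_DIR, exist_ok=True)
        result.to_zarr(path)
        return result
    return wrapper
```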
### Formalize extensibility
Developing codes that handle Big Data in the context of complex workflows is challenging. We require a framework that is inherently *scalable* in the sense that codes can be rapidly developed and tested on small problems---and then deployed at scale on large ones. Development and testing are often best done outside the context of comprehensive frameworks; thus we want to enable development outside these frameworks and provide specific means of plugging in.
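One lightweight way to support this, anticipating the method-injection mechanism described under Design, is a registry: analysis routines are written and tested as plain `Dataset -> Dataset` functions, then registered for later attachment to the framework. The names below are hypothetical.
```python=
# hypothetical operator registry
OPERATOR_REGISTRY = {}

def register_operator(func):
    """Record a plain Dataset -> Dataset function for later injection."""
    OPERATOR_REGISTRY[func.__name__] = func
    return func

@register_operator
def annual_mean(ds, **kwargs):
    """Resample a Dataset to annual means."""
    return ds.resample(time='YS').mean()
```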
### Provide an API for data access
An API wraps messy details behind a standardized interface. Using APIs for data access has several advantages (see the sketch following this list).
- Data access based on hard-coded paths is not portable;
- The *type* of data being read can be part of the API, triggering appropriate access to metadata or overloaded methods;
- The desired product sometimes involves rote computation:
    - e.g., [`pop_tools.get_grid(...)`](https://pop-tools.readthedocs.io/en/latest/examples/get-model-grid.html) reads `CESM INPUTDATA` files via web protocol;
- Access details are messy:
    - An arbitrary number of files,
    - Standardization steps to be applied en route;
- An API can be parameterized (i.e., it can accept arguments, enabling control and automation).
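For instance, the intake-esm interface referenced in the `Component` pseudo-code below exposes exactly this kind of parameterized access; the catalog file and query values here are hypothetical.
```python=
import intake

# a query-based interface hides file paths and access details
catalog = intake.open_esm_datastore('cesm-catalog.json')  # hypothetical catalog
subset = catalog.search(experiment='historical', variable='FG_CO2')
dsets = subset.to_dataset_dict()  # dict of xarray.Datasets keyed by grouping attributes
```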
### Provide infrastructure to automate interoperability, dataset-specific settings and preferences
Standards and conventions can help make data interoperable, but in reality, many of our datasets have peculiarities that require special treatment. The desired units of a particular quantity, for instance, may need standardization.
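A minimal sketch of what such dataset-specific settings could look like, assuming a simple preference table keyed by variable name (the variable, units, and conversion are illustrative):
```python=
# hypothetical per-variable preferences: desired units and the
# transformation needed to get there from the raw data
UNIT_PREFERENCES = {
    'sst': {'units': 'degC', 'convert': lambda da: da - 273.15},  # raw data assumed in K
}

def standardize_units(ds):
    """Apply preferred units wherever a preference is registered."""
    for name, pref in UNIT_PREFERENCES.items():
        if name in ds and ds[name].attrs.get('units') != pref['units']:
            ds[name] = pref['convert'](ds[name])
            ds[name].attrs['units'] = pref['units']
    return ds
```
Routines of this kind would be applied as part of the dataset-specific "data cleaning" step in `Component.ds` described below.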
## Design
### Objects
There are two primary objects: `Component` and `Collection`.
A `Component` is to a `Collection` as an `xarray.DataArray` is to an `xarray.Dataset`. The `Collection` object provides a powerful means of loading multiple `Components` and operating on them en masse, acknowledging that they may require different handling.
Here is some pseudo-code defining the `Component`.
```python=
class Component:
    """A container for an `xarray.Dataset` that facilitates:
    - loading data via the intake-esm API
    - applying methods with automated caching
    - attaching dataset-specific methods for
      particular computations
    """

    def __init__(self, **kwargs):
        """Instantiate an instance of this object:
        1. compose object from intake catalog and query
        2. determine the dataset type?
        3. apply query
        4. set provenance-tracking attributes for utilized data assets
        """
        self._assets = ...  # apply query
        self._ds = None
        self._grid_ds = None

    @property
    def ds(self):
        """Lazy but persistent access to the xarray.Dataset
        for this Component.
        """
        if self._ds is None:
            # open dataset from assets
            # apply dataset-specific "data cleaning" operations
            # self._ds = ds
            pass
        return self._ds

    @property
    def _grid(self):
        """Return an xarray.Dataset of the asset's grid."""
        if self._grid_ds is None:
            # use attributes of "self" to determine how to get the grid data
            # for example: self._grid_ds = pop_tools.get_grid(self.grid_name)
            pass
        return self._grid_ds

    @staticmethod
    def _persist_ds(compute_func, dataset_name):
        """Call operator within a wrapper that persists the resulting dataset;
        `dataset_name` would parameterize the cache location.
        """

        # define the wrapper function
        def compute_wrapper(self, **kwargs):
            # pop variables meant for local control
            persist = kwargs.pop('persist', True)
            clobber = kwargs.pop('clobber', False)
            if persist:
                # generate xpersist partial: xp_persist_ds
                return xp_persist_ds(compute_func)(self.ds, **kwargs)
            else:
                return compute_func(self.ds, **kwargs)

        return compute_wrapper
```
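A `Collection`, in the same pseudo-code spirit, might be little more than a mapping of `Component` objects keyed by query values, broadcasting method calls across its members; the constructor details are elided and all names are illustrative.
```python=
class Collection:
    """A dict-like container of `Component` objects, keyed by query
    values (e.g., (experiment, variable) tuples).
    """

    def __init__(self, **query):
        # build one Component per combination of query values;
        # catalog handling is elided in this sketch
        self._components = {}

    def __getitem__(self, key):
        return self._components[key]

    def __iter__(self):
        return iter(self._components)

    def timeseries(self, **kwargs):
        """Apply the (injected) `timeseries` method of every Component,
        returning a dict of results keyed like the Collection."""
        return {key: c.timeseries(**kwargs) for key, c in self._components.items()}
```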
### Inject methods
We want to do something like `xarray` does for datasets (see [here](https://github.com/pydata/xarray/blob/master/xarray/core/ops.py)).
```python=
def inject_methods(cls):
    for f in INJECTION_LIST:
        name = f.__name__
        # provenance_key = query "method registry"
        func = cls._persist_ds(f, provenance_key)
        func.__name__ = name
        func.__doc__ = f.__doc__
        setattr(cls, name, func)
```
This will be called following the declaration of `Component` etc.
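For example (hypothetical wiring; `timeseries` is a placeholder operator and the provenance key would come from the method registry):
```python=
def timeseries(ds, normalize=False, freq='ann', **kwargs):
    """Placeholder operator: reduce a Dataset to a time series."""
    ...

INJECTION_LIST = [timeseries]
provenance_key = 'timeseries'  # placeholder for the method-registry lookup

inject_methods(Component)
# Component instances now expose a cached `timeseries` method
```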
### Integral projects
The `integral` package will be very general, providing key functionality for core tasks and supporting anything available in `xarray`, but will rely on extensions developed elsewhere to support application-specific implementations.
For example, the CESM project may have an `integral-cesm` repository that implements specific methods and configurations relevant to CESM components and the observational datasets used to validate them.
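A CESM-specific operator in such a repository might, for example, lean on `pop_tools` for grid information; the function name and grid name below are illustrative.
```python=
# hypothetical module in an `integral-cesm` repository
import pop_tools

def global_integral(ds, variable, grid_name='POP_gx1v7'):
    """Area-weighted global sum of `variable` on the POP grid."""
    grid = pop_tools.get_grid(grid_name)
    return (ds[variable] * grid.TAREA).sum(dim=('nlat', 'nlon'))
```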
## Example applications
Here is an example of what the user API might look like. Imagine someone wants to create a plot showing the globally integrated air-sea CO2 flux (`FG_CO2`) time series at annual resolution.
```python=
# load a collection for a list of query values (experiments, variables)
dsets = integral.Collection(
    experiment=experiments,
    variable=variables,
)

# compute the time series
dsets_ts = dsets.timeseries(
    normalize=False,
    freq='ann',
    persist=True,
)

# make a plot
dsets_ts['historical', 'FG_CO2'].plot()
```