---
title: 'Intake-esm Design Document'
---
## Table of Contents
[TOC]
## Goal
Enable users to _search_, _discover_, and _ingest_ climate datasets from major model intercomparison projects such as CMIP into xarray, eliminating the need to know the specific locations (file paths) of the datasets of interest.
## Introduction
[`Intake-esm`](https://github.com/NCAR/intake-esm) provides an intake plugin for creating file-based [intake](https://intake.readthedocs.io/en/latest/) catalogs for climate data from project efforts such as the Coupled Model Intercomparison Project (CMIP) and the Community Earth System Model (CESM) Large Ensemble Project. These projects produce a huge amount of climate data, persisted on tape and disk storage components across a large number (on the order of ~300,000) of netCDF files. Finding, investigating, and loading these files into data array containers such as xarray can be a daunting task due to the sheer number of files a user may be interested in.
### Related Work
- [ESGF Search API](https://github.com/ESGF/esgf.github.io/wiki/ESGF_Search_REST_API) - They have already thought about many of these problems
- [STAC Datacube Extension](https://github.com/radiantearth/stac-spec/tree/master/extensions/datacube)
- [AOSpy](https://aospy.readthedocs.io/en/latest/) - aospy enables firing off multiple calculations in parallel using permutations of an arbitrary number of climate models, simulations, variables to be computed, date ranges, sub-annual sampling frequencies, and many other parameters.
## Data Holdings
### Supported
So far, `intake-esm` supports the following data holdings:
- **CMIP5** Data Holdings stored on NCAR's GLADE Storage system
- Directory structure: `<activity>/<product>/<institute>/<model>/<experiment>/<frequency>/<modeling realm>/<MIP table>/<ensemble member>/<version number>/<variable name>/<CMOR filename>.nc`
- CMOR filename: `<variable name>_<MIP table>_<model>_<experiment>_<ensemble member>[_<temporal subset>][_<geographical info>].nc`
- Format: netCDF
- **CMIP6** Data Holdings stored on NCAR's GLADE Storage system
- Directory structure: `<mip_era>/<activity_id>/<institution_id>/<source_id>/<experiment_id>/<member_id>/<table_id>/<variable_id>/<grid_label>/<version>/<CMOR filename>.nc`
- CMOR filename: `<variable_id>_<table_id>_<source_id>_<experiment_id>_<member_id>_<grid_label>[_<time_range>].nc`
- Note: for time-invariant fields, the final segment (`time_range`) is omitted.
- Format: netCDF
- **CESM1-LENS** data holdings stored on both NCAR's GLADE Storage system and HPSS Tapes
- Filename template:
- Format: netCDF
- **CESM1-LENS** data publicly available on Amazon S3 (us-west-2 region) at <https://ncar-cesm-lens.s3.amazonaws.com/>; see the access sketch after this list
- Bucket structure: `<component>/<frequency>/cesmLE-<experiment>-<variable>.zarr`, where:
  - `component` = `atm` (atmosphere), `lnd` (land), `ocn` (ocean), or `ice_nh`/`ice_sh` (ice, northern and southern hemispheres)
  - `frequency` = `monthly`, `daily`, or `hourly6-<startYear>-<endYear>` (6-hourly data are available for distinct periods)
- Format: zarr
- **ECMWF ERA5** data holdings stored on NCAR's GLADE Storage system
- Filename template:
- Format: netCDF
- **NA-CORDEX** data holdings stored on NCAR's GLADE Storage system
- Filename template: `<variable>.<experiment>.<global_climate_model>.<regional_climate_model>.<frequency>.<grid>.<bias_corrected_or_raw>.nc`
- Format: netCDF
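For convenience, here is a minimal sketch of opening one of the CESM1-LENS zarr stores on S3 listed above with xarray. Anonymous access and the specific `atm/monthly/cesmLE-RCP85-TREFHT.zarr` store name are assumptions for illustration; any store following the bucket template works the same way.

```python
# Minimal sketch: open one CESM1-LENS zarr store from the public S3 bucket.
# The store path follows <component>/<frequency>/cesmLE-<experiment>-<variable>.zarr;
# this particular combination is illustrative.
import s3fs
import xarray as xr

fs = s3fs.S3FileSystem(anon=True)  # public bucket, no credentials needed
store = fs.get_mapper("ncar-cesm-lens/atm/monthly/cesmLE-RCP85-TREFHT.zarr")
ds = xr.open_zarr(store)
```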
### Needs Support
- **CMIP6** data holdings in zarr format, including Pangeo's Google Cloud Storage bucket, LDEO's Linux machines, and elsewhere.
- Example: <https://console.cloud.google.com/storage/browser/pangeo-cmip6>
- Directory structure: `<mip_era>/<activity_id>/<institution_id>/<source_id>/<experiment_id>/<member_id>/<table_id>/<variable_id>/<grid_label>`
- Zarr store structure:
- At the `<grid_label>` level, the directory is a standard zarr store, with the files `.zattrs`, `.zgroup`, and `.zmetadata` and a subdirectory named for each coordinate and variable
- Example: [`pangeo-cmip6/AR6_WG1/CMIP/NASA-GISS/GISS-E2-1-G/piControl/r1i1p1f1/Amon/va/gn/`](https://console.cloud.google.com/storage/browser/pangeo-cmip6/AR6_WG1/CMIP/NASA-GISS/GISS-E2-1-G/piControl/r1i1p1f1/Amon/va/gn/)
- It might be nice if the zarr store had the suffix `.zarr`, i.e. `gn.zarr`.
- Format: zarr
## Design
`intake-esm` provides an interface that permits searching and ingesting datasets.
### Using collections
It is important to distinguish between *generating* `intake-esm` collections and *using* those collections. `intake-esm` currently has code to crawl directories and parse filenames in order to build collections. The functionality for building collections has no inherent dependence on the functionality for using them, other than that the build must yield a `collection` that conforms to the expected schema.
#### `collection`
Collections are defined in a YAML file.
##### Dataset-specific attributes
A `collection` is defined by a list of attributes that identify a particular dataset. For instance, for CMIP6 we have:
```yaml
collection_columns:
- activity_id
- institution_id
- source_id
- experiment_id
- member_id
- table_id
- variable_id
- grid_label
- path
```
##### Control of returned datasets
A set of attributes determines how datasets are returned to the user. `intake-esm` returns a dictionary of datasets, where the keys identify compatible groups and the values are datasets that have been concatenated and merged according to this specification.
```yaml
# define the set of attributes for which common values indicate
# datasets that can be merged or concatenated. `intake-esm` applies
# `df.groupby(col_compatible_dset_attrs)` to determine the unique groups
col_compatible_dset_attrs:
- institution_id
- source_id
- experiment_id
- table_id
- grid_label
# define a set of new dimensions across which to
# concatenate and construct a return dataset.
col_concat_newdim_attrs:
- member_id
# define a set of collection attributes across which to
# merge returned datasets
col_merge_attrs:
- variable_id
```
`intake-esm` always concatenates across `time`.
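To make the aggregation behavior concrete, here is a minimal sketch of how these attribute groups could drive dataset construction with pandas and xarray. This is illustrative, not the actual `intake-esm` internals; the `path` column and the key-naming scheme are assumptions.

```python
# Sketch of the grouping/aggregation logic described above (illustrative).
import pandas as pd
import xarray as xr

compatible_attrs = ["institution_id", "source_id", "experiment_id",
                    "table_id", "grid_label"]

def build_datasets(df: pd.DataFrame) -> dict:
    """Return a dict mapping each compatible group to an aggregated dataset."""
    dsets = {}
    for key, group in df.groupby(compatible_attrs):
        members = []
        for _, member_df in group.groupby("member_id"):
            # merge the variables belonging to this ensemble member; each
            # variable's files are concatenated along `time` by open_mfdataset
            variables = [
                xr.open_mfdataset(sorted(var_df["path"]), combine="by_coords")
                for _, var_df in member_df.groupby("variable_id")
            ]
            members.append(xr.merge(variables))
        # concatenate ensemble members along a new `member_id` dimension
        dsets[".".join(key)] = xr.concat(members, dim="member_id")
    return dsets
```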
##### Private attributes
Additional attributes are "private"; these enable `intake-esm` to appropriately access the data. For instance:
- `resource_type`: string, [posix, HPSS, gs, s3,...]
- `direct_access`: boolean, can the data be read directly?
- `data_format`: string, netCDF, Zarr, etc.
These attributes need to be set by the builder of the catalog.
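For illustration only, a single catalog entry carrying these private attributes might look like this (hypothetical values, not a fixed schema):

```python
# Hypothetical catalog entry; keys mirror the private attributes above.
entry = {
    "resource_type": "posix",   # where/how the data is stored
    "direct_access": True,      # readable in place, no staging from tape
    "data_format": "netCDF",    # tells intake-esm which reader to use
}
```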
### Building collections
Building collections could happen in a variety of ways. At present, `intake-esm` relies on a YAML file to control building.
#### `storage_resource`
A `storage_resource` object is defined by the following attributes:
- `name`: string
- `resource_type`: string, [posix, HPSS, gs, s3,...]
- `direct_access`: boolean, can the data be read directly?
- `data_format`: string, netCDF or Zarr
- `urlpath`: string, path to data tree
- `exclude_dirs`: list of string, glob patterns to omit
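As a sketch of what a builder might do with these attributes for a posix resource, the following walks `urlpath`, prunes `exclude_dirs`, and parses each file path against the CMIP6 directory template shown earlier. The function name, the `.nc` filter, and the resulting column set are assumptions for illustration.

```python
# Illustrative collection builder for a posix storage_resource.
import fnmatch
import os
import pandas as pd

CMIP6_COLUMNS = ["mip_era", "activity_id", "institution_id", "source_id",
                 "experiment_id", "member_id", "table_id", "variable_id",
                 "grid_label", "version"]

def build_collection(urlpath, exclude_dirs=()):
    rows = []
    for root, dirs, files in os.walk(urlpath):
        # prune directories matching any exclude pattern
        dirs[:] = [d for d in dirs
                   if not any(fnmatch.fnmatch(d, pat) for pat in exclude_dirs)]
        for fname in files:
            if not fname.endswith(".nc"):
                continue
            parts = os.path.relpath(root, urlpath).split(os.sep)
            if len(parts) != len(CMIP6_COLUMNS):
                continue  # path does not conform to the CMIP6 template
            rows.append(dict(zip(CMIP6_COLUMNS, parts),
                             path=os.path.join(root, fname)))
    return pd.DataFrame(rows, columns=CMIP6_COLUMNS + ["path"])
```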
## Application Programming Interface (API)
- Loading a dataset collection catalog
```python
import intake
col = intake.open_esm_metadatastore(collection_name="GLADE-CMIP6")
```
- Search and Discovery
```python
cat = col.search(
    variable_id=['hfls'],
    table_id=['Amon'],
    experiment_id=['1pctCO2', 'histSST'],
    source_id=['CanESM5', 'IPSL-CM6A-LR'],
)
```
- Data Loading
```python
dsets = cat.to_xarray(chunks={'time': 100})
```
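The returned `dsets` object is the dictionary described in the design section above: keys identify compatible groups, values are the aggregated datasets. A quick way to inspect it (the key format is illustrative):

```python
# inspect the returned dictionary of aggregated datasets
for key, ds in dsets.items():
    print(key, list(ds.data_vars))
```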
## CSV Based Approach
At Lamont we have had success with a CSV-file catalog that looks like this:
```csv
activity_id,institution_id,source_id,experiment_id,member_id,table_id,variable_id,grid_label,zstore
AerChemMIP,BCC,BCC-ESM1,ssp370,r1i1p1f1,Amon,pr,gn,gs://pangeo-cmip6/AR6_WG1/AerChemMIP/BCC/BCC-ESM1/ssp370/r1i1p1f1/Amon/pr/gn/
AerChemMIP,BCC,BCC-ESM1,ssp370,r1i1p1f1,Amon,ts,gn,gs://pangeo-cmip6/AR6_WG1/AerChemMIP/BCC/BCC-ESM1/ssp370/r1i1p1f1/Amon/ts/gn/
AerChemMIP,BCC,BCC-ESM1,ssp370,r1i1p1f1,Amon,ua,gn,gs://pangeo-cmip6/AR6_WG1/AerChemMIP/BCC/BCC-ESM1/ssp370/r1i1p1f1/Amon/ua/gn/
```
This format is basically ad hoc, but it really fits the way people want to use the data. You can load it in pandas, a familiar tool for most, and execute complex queries that way. Together with [qgrid](https://github.com/quantopian/qgrid), one can very quickly make a fast, interactive, browsable table that can be used to sort and filter the data. Naomi showed examples of this working well on files with 20,000+ rows.
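A minimal sketch of that workflow, assuming the catalog has been saved as a (hypothetical) `pangeo-cmip6.csv` and that the `gcsfs` package provides anonymous access to the `gs://` stores:

```python
# Load the CSV catalog with pandas, query it, and open one zarr store.
import gcsfs
import pandas as pd
import xarray as xr

df = pd.read_csv("pangeo-cmip6.csv")  # hypothetical catalog file
subset = df.query("variable_id == 'pr' and experiment_id == 'ssp370'")

fs = gcsfs.GCSFileSystem(token="anon")
ds = xr.open_zarr(fs.get_mapper(subset.zstore.iloc[0]))

# in a notebook, qgrid turns the dataframe into an interactive, filterable table:
# import qgrid; qgrid.show_grid(df)
```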
We are convinced that a CSV catalog is a useful way to index this data. CSV is
- simple: everyone can read and write it (even non-python people)
- interoperable with many different tools (not locked into one package)
- capable of describing both cloud and file-based holdings
- flexible enough to encompass the diversity of ESM datasets; the choice of columns is totally arbitrary and can be tailored to the specific dataset
_So where does intake fit in?_ We opened an [issue](https://github.com/intake/intake/issues/417) to discuss this. Basically, intake provides the ability to load the data into Python, via its `.read()` and `.to_dask()` methods. In that issue, Martin Durant very quickly whipped up an example of intake loading such a CSV catalog, providing a qgrid widget, and loading the selected data into xarray.
We think this could be the right path for our project here. Specifically:
- All intake-esm datasets are cataloged into a simple CSV file.
- Intake can read this file and provide tools for loading the data.
- Or users can bypass intake and just look at the csv file (useful for other languages).
Some of the major open questions are:
- What is the specification for the CSV file? Naomi chose `zstore` as the name of the data path column. I suggested the more generic `path`. In general, we might want to distinguish between:
- columns related to data attributes (e.g. `activity_id`, `source_id`)
- columns related to file details (e.g. `path`, maybe also `format={zarr|netcdf}`)
- Do we want to support aggregation of multiple files into a single dataset? This is highly complex. Intake-esm does this now, in a fairly ad-hoc way. This is emerging as one of the major barriers to cloud vs. HPC interoperability. As things stand now, we would need to re-implement all of that same logic in a cloud-based catalog. A big practical difference is that Naomi has already concatenated all the time ranges into a single zarr store before uploading, so we have lost the 1:1 correspondence between netCDF files and zarr stores. Ideally, on Cheyenne, this concatenation could happen on the fly when loading a dataset. If the answer here is **yes**, we need to think about the best place for such code to live. I think that specifying complex merge options for xarray datasets is a very general problem that would be useful in many different contexts, so perhaps it should not live in an intake- or ESM-specific package.
We should meet ASAP to discuss these challenges and agree on a strategy going forward.
## Proposal (2019-10-02)
- We define an **ESM catalog spec**, ideally as a CSV file. This would contain arbitrary metadata columns (e.g. `activity_id`) plus perhaps some required ones (`filetype`, `path`). The paths can be file paths or web endpoints; it doesn't matter. This lives in a standalone repo with a simple spec validator script (see the sketch after this list). Generating catalogs is the responsibility of the data provider. It can be as simple as walking a directory tree, or something else.
- https://github.com/di/vladiate
- Maybe we also need a `meta.yaml` file for column definitions, etc.
- We refactor intake-esm around the new catalog spec. Now its job is to parse the catalog and provide an intake interface for loading the data. No special cases. If the catalog matches the spec, intake-esm can handle it, period.
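As a sketch of how lightweight the validator could be (the required column names here are placeholders, not a settled spec):

```python
# Minimal catalog validator sketch; `path` and `format` as required
# columns are placeholders for whatever the spec ends up mandating.
import sys
import pandas as pd

REQUIRED = {"path", "format"}

def validate(csv_path):
    df = pd.read_csv(csv_path)
    missing = REQUIRED - set(df.columns)
    if missing:
        sys.exit(f"{csv_path}: missing required columns {sorted(missing)}")
    if df["path"].isnull().any():
        sys.exit(f"{csv_path}: found rows with an empty path")
    print(f"{csv_path}: OK ({len(df)} rows)")

if __name__ == "__main__":
    validate(sys.argv[1])
```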
## Appendix and FAQ
:::info
**TODO**!
:::
###### tags: `ncar` `pangeo`