# 2021-11-10 DataONE Community Call
## Approaches to cross-repository dataset replication and linking
[Hackmd.io Link](https://hackmd.io/EKi9azkVTzW2FzmsZPjIgw)
**Time:**
```
2021-Nov-10 17:00 UTC
2021-Nov-10 12:00 America/New_York
2021-Nov-10 10:00 America/Denver
2021-Nov-10 09:00 US/Pacific
```
:::spoiler Click for more times
```
2021-Nov-10 17:00 UTC
2021-Nov-10 12:00 America/New_York
2021-Nov-10 11:00 America/Chicago
2021-Nov-10 10:00 America/Denver
2021-Nov-10 10:00 America/Phoenix
2021-Nov-10 09:00 US/Pacific
2021-Nov-10 08:00 US/Alaska
2021-Nov-10 07:00 Pacific/Tahiti
2021-Nov-11 06:00 Pacific/Auckland
2021-Nov-11 04:00 Australia/Sydney
2021-Nov-11 03:30 Australia/Adelaide
2021-Nov-11 02:00 Asia/Tokyo
2021-Nov-11 01:00 Australia/Perth
2021-Nov-11 01:00 Asia/Hong_Kong
2021-Nov-10 22:45 Asia/Kathmandu
2021-Nov-10 19:00 Europe/Riga
2021-Nov-10 18:00 Europe/Paris
2021-Nov-10 17:00 Europe/London
```
:::
### Description
Repositories frequently need both to replicate datasets held in other repositories (for policy, availability, and other reasons) and to link to external datasets that are sometimes the same and sometimes related. For example, the Arctic Data Center frequently needs to replicate datasets from EDI to meet NSF policy guidelines, and so has developed a streamlined workflow to ensure that researchers do not have to double-enter their metadata or data. Generalizing these capabilities across the network could increase efficiency and reduce duplication.
### Invited Speakers
* Natasha Haycock-Chavez - Arctic Data Center
* Joan Damerow - Lawrence Berkeley National Laboratory
* Mark Servilla - Environmental Data Initiative
### Participants
(Please add your name, affiliation and email)
* Amber Budden, DataONE/NCEAS/Arctic Data Center, aebudden@nceas.ucsb.edu
* Karl Benedict, University of New Mexico, DataONE Community Board Co-Chair, kbene@unm.edu
* Mike Conway, Nat'l Institute of Environmental Health Sciences (NIEHS/NIH) mike.conway@nih.gov
* Valerie Hendrix, Lawrence Berkeley National Lab (ESS-DIVE/NGEE Tropics/Watershed Function SFA) vchendrix@lbl.gov
* Matt Jones, DataONE/NCEAS/Arctic Data Center, jones@nceas.ucsb.edu
* Nancy Voorhis, FEMC, UVM, nancy.voorhis@uvm.edu
* Bryan Westra
* Bryce Mecum
* Chad Trabant
* Chris Turner
* Gary Motz
* Greg Maurer
* Jeanine McGann
* Kim Ely, Brookhaven National Lab, kely@bnl.gov
* Kristin Vanderbilt
* Kyle Andrew Zollo-Venecek
* Liz
* Margaret O'Brien, UCSB, EDI, mobrien@ucsb.edu
* Marisa Conte
* Martin Seul
* Mary Martin
* Meghan Eyler
* Suzanne Remillard, Andrews Forest LTER, Oregon State University, suzanne.remillard@oregonstate.edu
* Stevan Earl
* Susan Borda
* Dave Vieglais
* Natasha Haycock-Chavez, Arctic Data Center haycock-chavez@nceas.ucsb.edu
### Agenda
* Welcome and logistics (5 minutes)
* Introductions (2 minutes)
* Panelist comments (15 minutes)
* Community Discussion (35 minutes)
### Notes
* Arctic Data Center - Natasha Haycock-Chavez
* ADC supports registration (through metadata addition) of datasets into their system while hosted in other repositories
* About 95% of data are submitted to the ADC, and the remainder to other repositories
* Metadata can be synchronized between ADC and external repositories either in an automated fashion through the DataONE API (if supported by the host repository), or through a manual process that may include identical or non-identical metadata, depending on metadata compatibility.
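For the automated path, the DataONE Member Node REST API (v2) exposes each object (such as an EML metadata document) at `GET {baseURL}/v2/object/{pid}`. A minimal sketch of constructing that URL in Python; the PID is a made-up example, and the base URL shown is assumed to be the ADC member node:

```python
from urllib.parse import quote

def mn_object_url(base_url: str, pid: str) -> str:
    """Build the DataONE Member Node MNRead.get() URL for a persistent
    identifier (PID). PIDs must be percent-encoded because they often
    contain characters like ':' and '/' (e.g. DOIs)."""
    return f"{base_url.rstrip('/')}/v2/object/{quote(pid, safe='')}"

# Hypothetical PID against an (assumed) ADC member node base URL:
url = mn_object_url("https://arcticdata.io/metacat/d1/mn", "doi:10.18739/EXAMPLE")
print(url)  # ends with /v2/object/doi%3A10.18739%2FEXAMPLE
```

The same URL pattern works against any DataONE member node that supports the v2 API, which is what makes the automated synchronization generalizable.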
* ESS-DIVE - Joan Damerow, Val Hendrix
* Focus on new approach to external linking
* ESS-DIVE repository for DOE Environmental System Science. Observational, experimental, ... data
* External linking approach using schema.org - EML, JSON-LD. Links to copies or part of the dataset elsewhere.
* Next step will extend the reference model to use related identifiers for external links. Based on DataCite Related Identifiers. May require some extensions of the DataCite model.
* Need to support linkages across seven different systems. Need to determine linking needs.
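A rough sketch of what the schema.org/JSON-LD external-linking approach described above can look like; all names, DOIs, and URLs here are placeholders, not actual ESS-DIVE records:

```json
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "Dataset",
  "name": "Hypothetical dataset with an externally held copy",
  "sameAs": "https://doi.org/10.xxxx/copy-in-another-repository",
  "distribution": {
    "@type": "DataDownload",
    "contentUrl": "https://example.org/data/part-held-elsewhere.csv"
  }
}
```

Here `sameAs` points at a copy of the whole dataset in another repository, while a `distribution` entry can point at a part of the dataset held elsewhere.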
* EDI - Mark Servilla
* 45k unique data packages, over 80k when including versions
* Data packages = metadata + data. Package subject to quality checks - producing a related quality report.
* DataCite DOIs provided for each publicly accessible data package
* Scenario - need to publish data packages that relate to data published in another repository.
* How should replicas be represented in the repository?
* Should a replica be assigned a new DOI?
* Should a replica store the full data package (data + metadata) that is also stored in the source repository?
* Potential solution - use of the EDI - PASTA provenance model
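As a sketch, PASTA-style provenance is recorded in the derived package's EML methods section, pointing back at the source package. The fragment below is abridged (a fully valid `dataSource` needs additional elements such as creator and contact), and the title and package URL are placeholders:

```xml
<methods>
  <methodStep>
    <description>
      <para>This data package was derived in part from the source
      data package referenced in the dataSource element below.</para>
    </description>
    <dataSource>
      <title>Source data package (placeholder title)</title>
      <distribution>
        <online>
          <url>https://pasta.example.org/package/eml/placeholder-scope/1/1</url>
        </online>
      </distribution>
    </dataSource>
  </methodStep>
</methods>
```

Embedding the link in the metadata itself means a replica carries its own record of where the authoritative copy lives.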
#### Q&A and Discussion
Please write any questions here as they are raised...
* MJ: One challenge with replicating data is that the replicas can be accessed from multiple repositories, and access metrics are important to repos, which might make replication less attractive to a repository. How might DataONE be utilized in this area (with respect to reporting and metrics)? Thoughts?
* MS: Haven't thought about that yet
* VH: Allow contributor to decide if they want to replicate or point to original repository. Policies on what is linked out are not yet formalized. Would want to ensure that they also have long-term stewardship. Size considerations. Hence case-by-case basis. Allow use of original minted DOI.
* MS: Same concern about replicating large-volume data. Do need to be concerned about longevity/integrity of the authoritative dataset. EDI runs checks and the local copy makes that possible. Would want to retain those checks.
* AEB: What is the purpose of replication? Discovery or preservation? And does responsibility lie with the contributor or the repository? What is driving this?
* MS: Not necessarily preservation, mostly discovery; hurdle: a project maintaining a local data catalog via their API would be missing items for data that are not in their repo
* KV: wants ability to create data catalog from EDI and have everything in it; could rearchive in their catalog, but would prefer to pull it in from an external repo
* VH: ESS-DIVE mandates discussed
* KB: What models have you been looking at for metadata harvesting/sync? Thinking back to the web-accessible folder model for geospatial data... and are the repos you need to pull from using standards for metadata that work?
* JD: Many of them are working with genomics data and have similar standards there, but metadata harmonization is a challenge. Even using the same identifiers causes challenges, e.g. sample data. Recognizing a common identifier. Starting with identifiers: finding common identifiers and promoting PIDs
* MJ: Not harvesting all of the data sets at a given repository; done on an individual basis. If wanting to harvest both the metadata and data, that's more of a challenge. Harmonization is a challenge. Frequently get great metadata that is immediately compliant. Other times get metadata that is comprehensive and complete but in another standard. If we convert this to an aligned standard, do we issue a new DOI? Use a 'derived from' indicator, but this is a bit opaque and may not show in a Google search, for example.
* KB: Could we use versioning and shared version control for metadata, where an identifier for the metadata remains stable, and within that there is a reference to a particular version within a (for example) git-based system?
* MS: Still not a clear path forward. Nowhere near automating this at this point. Note these things in the metadata but no structured way; reliance is upon the curator / information manager. With the quality checking and the inclusion of this report, we would assign a new DOI. But how do we capture the relevant links back to the original source?
* MJ: It would be great if we could all agree on a vocabulary to support this. Can be difficult to parse out provenance relationships when there is variation in vocabulary. The provenance approach in DataONE is also used in science-on-schema.org. Can we get community agreement?
* Here's the provenance proposal for science-on-schema.org: https://github.com/ESIPFed/science-on-schema.org/blob/master/guides/Dataset.md#provenance-relationships
* MS: schema.org, annotation capability are options.
* VH: Have a REST API for users; shows data package in JSON-LD. Metadata stored in EML (via Metacat). schema.org incorporated as annotations
* MJ: challenge of DataCite relationship predicates, which are numerous and somewhat overlapping
* VH: Found the schema.org relationship terms more usable
* AEB: When these replicas are portrayed on the repository, how is it shown to the data contributors and data users? Should our focus be on education in the community about these relationships, or slick UI displays?
* VH: Wide range of users, from those capable of and interested in hand-crafting their packages to those needing heavy UI support.
* JD: Recognition that the UI needs to clarify the complexities inherent in replication.
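As a point of reference for the shared-vocabulary question above, the science-on-schema.org provenance proposal expresses derivation with PROV terms alongside schema.org. A minimal sketch, with placeholder identifiers:

```json
{
  "@context": {
    "@vocab": "https://schema.org/",
    "prov": "http://www.w3.org/ns/prov#"
  },
  "@type": "Dataset",
  "@id": "https://doi.org/10.xxxx/derived-dataset",
  "prov:wasDerivedFrom": {"@id": "https://doi.org/10.xxxx/source-dataset"}
}
```

Because the source is referenced by its own persistent identifier, a replica or derived package can point unambiguously back to the authoritative record regardless of which repository hosts it.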