# ICF Meeting 2
## Agenda: https://github.com/psychoinformatics-de/org/issues/155
1) DICOM data is hosted by the ICF in a single uncompressed TAR per visit. Where in the tree of a DataLad dataset branch is this placed?
**NOTES**
* Visit = BIDS session = DICOM study = 1 file in the ICF store
* Where do we represent the file in the tree of a DataLad dataset?
* We want to index the contents of the tar archive for further datalad benefits
* the content of the tarball starts with several layers of empty directories, as it maps the internal organization of the ICF. If we clean this up, we make ourselves incompatible and impose a probably wrong heuristic. Laura and Alex also add that the ICF file organization is highly volatile and can't be relied upon.
* Instead of cleaning it up, we could go for UKB-like parallel representations in branches
* **BUT**: Whatever we add/change imposes a cost. Largest dataset so far: 100k files; every new representation adds this number of files on top, which can result in sluggishness
* Alex adds in chat: "less magic on our side also makes it easier for ICF to "control". Don't want those dirs? Stop generating .tars with those dirs. No "magic" code in the middle to tickle."
* DICOM Sorting: Mih asks "Why would we do it if they don't have an interest in this?", "Would the consumers actually be interested in DICOMs if they could get a converted NifTI instead?"
* mslw asks "Can we expect that niftis will be provided? If they exist, the need to do anything with DICOM reduces indeed"
* Mih says "NIfTIs are the natural outcome of DICOMs for all use cases I'm aware of. Who provides them? No clue".
* "Would not sorting the DICOMs be a problem for someone like e.g., Felix - no, I don't think so" - mslw adds that sorting would make selective DICOM access possible
* Aqw reminds us that there is little competence in the ICF; they likely don't know that things like DICOM2Nifti exist. They treat software we provide them with as black boxes
* mslw cautions that the final DataLad dataset will be judged for nonsensical organization, even if it is only mirroring the ICF structure
* mih is in favor of treating the ICF data as input datasets (YODA style) to a standardization pipeline/workflow that we understand. Other standardization workflows are possible.
* Aqw: There is a desire to clean things up, but all paths of cleaning lead to madness
**CONSENSUS**: Go beyond the single tarball by exposing whatever the ICF is doing inside of the tarball. Mirror the mess that the ICF provides, and expose the granularity of individual DICOMs to consumers and the DataLad machinery. Additional clean-up (standardization, sorting, etc.) is a possible and desired second step, e.g. in the form of a parallel representation.
* Where do we put the tarballs in the dataset hierarchy? Options put forward are:
* ``DICOM_archive.tar`` at toplevel
* ``inputs/filename_archive.tar``
* ``.datalad/archive.tar``
* ``sourcedata/archive.tar``
* ``icf/archive.tar``
* ``icf/<original-filename>.tar`` <- **CONSENSUS**
* ``icf_archive.tar`` at toplevel
* aqw asks for clarification: Given the individual file access from archives, would downloading all DICOMs in an archive be 100k individual requests, or 1 request for the entire archive?
* What do we do with the MD5 sums that the ICF provides?
    * We pull the MD5 sum and use it to register the download under an MD5-based key. Once the download happens, it is verified against the MD5 sum (this means we don't need to perform downloads when indexing) **CONSENSUS**
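
As a minimal sketch of this idea (not project code): git-annex can address content by an MD5-based key of the form `MD5E-s<size>--<md5><extension>`, so a key can be constructed from the size and MD5 sum the ICF reports without touching the file itself. The file name and values below are hypothetical; associating such a key with its download URL (e.g., via `git annex registerurl` or `datalad addurls`) is a separate step.

```python
# Minimal sketch: build a git-annex MD5E key from ICF-provided metadata,
# so content can be registered without downloading it first. git-annex
# verifies the MD5 sum when the content is eventually retrieved.
from pathlib import PurePosixPath

def md5e_key(md5hex: str, size: int, filename: str) -> str:
    # MD5E keys carry the file extension: MD5E-s<size>--<md5><extension>
    ext = "".join(PurePosixPath(filename).suffixes)
    return f"MD5E-s{size}--{md5hex}{ext}"

# hypothetical values for one ICF visit tarball
print(md5e_key("d41d8cd98f00b204e9800998ecf8427e", 123456789, "visit_01.tar"))
# -> MD5E-s123456789--d41d8cd98f00b204e9800998ecf8427e.tar
```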
2) Confirm that using uncurl as a remote to register file content availability is a sensible approach
**NOTES**:
* mih explains: the uncurl special remote provides a templating framework for registered location URLs that allows flexible updating of location information of annex contents. It's a thin front end for datalad-next's URL operations framework (a conceptual sketch of the templating idea follows below)
* the only viable alternative is an ICF-specific special remote. More flexible, but requires extra software
**CONSENSUS**: there is no counter-proposal or alternatives, so this is what we do
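
To make the templating idea concrete, here is a purely conceptual Python sketch. It only illustrates why a URL *template* plus per-file properties is easier to update than thousands of literal URLs; the placeholder names and the ICF host are made up and are not uncurl's actual configuration syntax.

```python
# Conceptual illustration only -- NOT uncurl's actual API or configuration.
# uncurl (datalad-next) resolves a stored URL template against per-content
# properties, so when the ICF changes its hosting layout, one template is
# updated instead of rewriting every registered URL.
template = "https://icf.example.org/{project}/{visit}.tar"  # hypothetical layout
properties = {"project": "some_project", "visit": "visit_01"}  # hypothetical values
print(template.format(**properties))
# -> https://icf.example.org/some_project/visit_01.tar
```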
3) Any data access (incl. metadata extraction) requires extraction of the TAR archive. Do we index the (extracted) content in the dataset, or go a different path?
- Metadata extractor to report files from compressed archives datalad/datalad-metalad#59
- support DICOM tarballs datalad/datalad-neuroimaging#32
**CONSENSUS**
We do extract, and use whatever organization is used in the Tarballs (as a first step, see notes to Q1)
4) If we decide to not index TAR content, how can we ensure that such an addition is backward-compatible with the organization we have gone for instead?
**CONSENSUS** Not applicable anymore
If we decide to index content...
- 4a) Do we use the add-archive-content + datalad-archives combo, despite their age and open issues, or fix them, or replace them?
- Implement alternative to add-archive-content datalad/datalad-next#183
- Rough sketch of an archivist annex remote datalad/datalad-next#223
**NOTES**
mih explains: add-archive-content goes through the list of files in an archive, checksums the files, registers a key for each archive member, and registers a URL for each file pointing inside of the archive. To interpret this URL, the datalad-archives special remote needs to be enabled.
The command is really not necessary as long as you can turn a tarball into a table similar to that posted in the snippet - then you can use addurls to add it to the dataset.
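
As a rough illustration of the "tarball into a table" idea (a sketch under assumptions, not the agreed implementation): given a locally available, uncompressed ICF tarball, a few lines of standard-library Python produce a CSV of member path, size, and MD5 sum that a tool like addurls could consume. The output columns are illustrative, not a fixed addurls schema.

```python
# Sketch: turn an ICF visit tarball into a (path, size, md5) table.
# Assumes a locally available, uncompressed .tar archive.
import csv
import hashlib
import sys
import tarfile

def tar_to_table(tar_path: str, out_csv: str) -> None:
    with tarfile.open(tar_path) as tar, open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "size", "md5"])
        for member in tar.getmembers():
            if not member.isfile():
                continue  # skip the layers of empty directories
            data = tar.extractfile(member).read()
            writer.writerow([member.name, member.size, hashlib.md5(data).hexdigest()])

if __name__ == "__main__":
    tar_to_table(sys.argv[1], sys.argv[2])
```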
A reimplementation along the lines of the draft PRs with fsspec operations would allow transparent member access without the need to download the entire archive. It also adds caching capabilities. Concern: throughput is dependent on certain parameters, but this is not yet fully understood (see the first comment in https://github.com/datalad/datalad-next/pull/215/). An additional advantage is that this feature would be reusable and is needed for OpenNeuroPET
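
For illustration, a hedged sketch of what fsspec-style member access could look like, using fsspec's URL chaining to read one archive member without fetching the whole tarball. Host, archive name, and member path are made up, and actual throughput depends on the parameters discussed in datalad/datalad-next#215.

```python
# Hedged sketch: read a single member of a remote, uncompressed tar archive
# via fsspec URL chaining (tar:: over https), instead of downloading the
# entire archive. All names below are hypothetical.
import fsspec

archive = "https://icf.example.org/visit_01.tar"   # hypothetical archive URL
member = "raw/dicoms/series_001/image_0001.dcm"    # hypothetical member path
with fsspec.open(f"tar://{member}::{archive}", "rb") as f:
    data = f.read()
print(f"read {len(data)} bytes")
```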
**CONSENSUS** (from comments to the issue, and discussion in the call): replace
**Who does it**: Christian would be interested, Adina can help a bit, Michael is on board
**When**: must be done by the end of next week
**action item**: separate meeting to plan this, schedule in chat
5) Can we support partial archive access to avoid unconditional full downloads? (A DICOM archive tends to be in the several-GB range, and the present implementation in datalad-core has a 200% overhead cost on storage.)
- Draft of FsspecUrlOperations datalad/datalad-next#215
- Rough sketch of an archivist annex remote datalad/datalad-next#223
**NOTES**: answered in the notes to Q4a
6) Which parts of the inner structure of a TAR archive do we expose in the dataset? Examples indicate several layers of empty directories
**NOTES**: answered in the notes to Q1
Issue 156