# Next steps for Zarr integration
## Current functionality
Acquire can write straight to Zarr on disk. As of [#80](https://github.com/calliphlox/cpx/pull/80), we write files in chunks no larger than 256 MiB. Chunking is done only along the append (time) dimension. External metadata is written directly to the `.zattrs` file.
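For orientation, here is a minimal zarr-python sketch of inspecting that output. It assumes the frames land in a single array at the root of the store; the path and the example shapes are placeholders.

```python
import zarr

# Hypothetical output path; point this at wherever Acquire wrote the dataset.
data = zarr.open("acquisition.zarr", mode="r")

# Chunks are split only along the append (time) dimension and capped at 256 MiB.
print(data.shape)   # e.g. (n_frames, height, width)
print(data.chunks)  # e.g. (frames_per_chunk, height, width)

# External metadata supplied at acquisition time ends up in .zattrs.
print(dict(data.attrs))
```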
## Current limitations
- Storage is not configurable. Among the things we could open up to user input (see the sketch after this list):
  - Chunk size (not only along the append dimension, but along other dimensions as well).
  - Dimension separator (`.` or `/`). Only `/` is currently supported.
  - Compression (see below).
- Missing acquisition metadata
  - We throw away frame timestamps.
  - Camera data, e.g., transformations.
- Group support
  - What are the relevant use cases for group support? (Question for Kyle?)
  - Timestamp information might be one (see above).
- We could probably go faster.
  - We need benchmarking to confirm this.
- Zarr v3 spec
- NGFF / OME-Zarr
  - This is less clear to me (acl)
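To make the storage options and the group-support question above concrete, here is a rough zarr-python sketch (not Acquire's API, and not its current on-disk layout): a group holding the frame data alongside a per-frame timestamp array, with user-chosen chunking, dimension separator, and compressor. All names and values are illustrative.

```python
import numpy as np
import zarr
from numcodecs import Blosc

# Illustrative only; none of this reflects Acquire's current API or layout.
root = zarr.open_group("acquisition.zarr", mode="w")

frames = root.create_dataset(
    "frames",
    shape=(0, 1080, 1920),      # append (time) dimension grows as frames arrive
    chunks=(32, 540, 960),      # user-chosen chunking along every dimension
    dtype="uint16",
    dimension_separator="/",    # "." would be the other option
    compressor=Blosc(cname="zstd", clevel=1, shuffle=Blosc.SHUFFLE),
)

# Per-frame timestamps kept alongside the image data rather than discarded.
timestamps = root.create_dataset("timestamps", shape=(0,), chunks=(4096,), dtype="float64")

# Append a batch of frames together with their timestamps.
batch = np.zeros((32, 1080, 1920), dtype="uint16")
frames.append(batch)
timestamps.append(np.arange(32, dtype="float64"))
```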
## Requests from Adam's lab
- Compression
  - Adam notes that Cameron (?) has gone through all the supported compression codecs and found one that is very promising for the kind of data they collect.
  - Per Adam:
    > The scheme is reading Imaris HDF5 files in off of our VAST NAS system using ... what looks like 64 cores and 192 GB of RAM, then writing the data back out to Zarr with various compression schemes directly to S3... blosc-zstd level 1 with shuffle was the best option in both respects. ... it appears that with 64 cores Cameron is able to get write/compression speeds to Zarr approaching 2 GB/sec.
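For reference, the codec they landed on maps onto zarr-python/numcodecs as below. This is only a local sketch with placeholder data and paths; their benchmark read Imaris HDF5 from a VAST NAS and wrote to S3, so the throughput numbers will not transfer.

```python
import numpy as np
import zarr
from numcodecs import Blosc

# blosc-zstd, compression level 1, with shuffle: the codec Cameron's sweep
# found best for both speed and compression ratio on their data.
compressor = Blosc(cname="zstd", clevel=1, shuffle=Blosc.SHUFFLE)

# Placeholder frame stack standing in for real microscope data.
frames = np.random.poisson(100, size=(16, 1080, 1920)).astype(np.uint16)

z = zarr.open(
    "compressed.zarr",
    mode="w",
    shape=frames.shape,
    chunks=(8, 1080, 1920),  # chunk along the append (time) dimension
    dtype=frames.dtype,
    compressor=compressor,
)
z[:] = frames
print(f"compression ratio: {frames.nbytes / z.nbytes_stored:.2f}x")
```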