---
tags: NGFF
---
# NGFF-vizarr-Notes
**Zulip braindump from Trevor (2020-10-01)**
Love the idea for open-with-url. I haven't shared much about using vizarr to
open OME-Zarr URLs directly, because the primary use we highlighted in our
paper is via the imjoy-rpc with a Python API.
Something you will understand the nuance of, but that I haven't articulated
well on my own, is the difference between Avivator and vizarr from our lab
(and why we have two viewers at all).
Avivator is a purely client-side web viewer that showcases nearly all the
features the viv library provides. It is designed to view OME-TIFFs and the
output of bioformats2raw --file-type=zarr (which is technically not OME-Zarr
due to the lack of metadata in data.zarr/0/.zattrs).
Avivator works by parsing the OME-XML from within the OME-TIFF (or METADATA.xml
for the zarr variant). It then selects the first N channels in an image,
assigns colors, and computes reasonable default contrast settings based on the
histogram of the lowest-resolution level of the pyramid.
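As a rough sketch of that last step (illustrative only, not Avivator's actual code; the percentile cutoffs and function name are my own):

```typescript
// Illustrative only: derive default contrast limits from the pixels of the
// lowest-resolution level of a pyramid (the cheapest level to read).
function defaultContrastLimits(
  pixels: Uint16Array, // flattened lowest-resolution level
  lower = 0.01,
  upper = 0.99
): [number, number] {
  // Sort a copy so we can read off approximate percentiles directly.
  const sorted = pixels.slice().sort();
  const at = (q: number) => sorted[Math.floor(q * (sorted.length - 1))];
  return [at(lower), at(upper)];
}
```

Sampling the lowest-resolution level keeps this cheap, since it's the smallest array in the pyramid.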
vizarr, in contrast, focuses only on the zarr component of viv. If provided a
URL, it expects the data at that endpoint to be the root of an OME-Zarr.
Therefore it doesn't guess which colors to use, which channels to show, or
what contrast limits to set -- it should just fail loudly.
If using vizarr as a standalone viewer (e.g. opening up
https://hms-dbmi.github.io/vizarr in a browser tab), you must provide a source
query parameter in the URL that points to an OME-Zarr. In this mode, no code
for imjoy is loaded. Instead, imjoy is only used when loading vizarr in a
Jupyter notebook.
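For example (the OME-Zarr URL here is just a placeholder, not a real dataset):

```
https://hms-dbmi.github.io/vizarr/?source=https://example.com/image.ome.zarr
```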
When using vizarr _within_ a Jupyter notebook, the
https://hms-dbmi.github.io/vizarr webpage is loaded within an iframe by imjoy,
and a Python API is exposed that allows users to programmatically interact
with the vizarr web app.
The core API that we expose simply creates a custom zarr.js store that wraps
_any_ zarr-python store. This means that your data don't need to be available
via HTTP; instead, the imjoy-rpc takes care of sending chunks securely from
the Python kernel (which could be a remote server) to the Jupyter notebook
front end on your machine.
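The mechanism looks roughly like this (a minimal sketch assuming a zarr.js-style async store interface; the `RemoteStore` shape and names are hypothetical, not vizarr's actual API):

```typescript
// Sketch of a zarr.js-style async store whose reads are proxied over
// imjoy-rpc to a zarr-python store running in the kernel. `RemoteStore`
// models the RPC proxy; its shape is hypothetical, not vizarr's actual API.
interface RemoteStore {
  getItem(key: string): Promise<ArrayBuffer>;
  containsItem(key: string): Promise<boolean>;
}

class ImjoyRpcStore {
  constructor(private remote: RemoteStore) {}

  // zarr.js asks for metadata keys (.zattrs, .zarray) and chunk keys alike;
  // every request crosses the RPC bridge instead of going over HTTP.
  getItem(key: string): Promise<ArrayBuffer> {
    return this.remote.getItem(key);
  }

  containsItem(key: string): Promise<boolean> {
    return this.remote.containsItem(key);
  }
}
```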
Right now the imjoy API is very minimal; it mainly exists to allow one of our
collaborators to run a registration workflow and view the results within the
same notebook (without having to spin up a public server and open another
web page).
...
The approach in omero-openwith-url seems extremely reasonable to me. The
support for OME-Zarr via URL in vizarr is mostly a side effect of building
a viewer around zarr in general. Once I had things working in imjoy, I just
exposed the source query parameter so you could use the viewer without imjoy.
In one case the store is a zarr.js HTTPStore; in the other it's a custom imjoy-rpc store.
It's kind of like neuroglancer, except just for OME-Zarr :)
When you navigate to https://hms-dbmi.github.io/vizarr you download all the JS
needed to view an image. You could turn off the internet, start a local HTTP
server, and view an OME-Zarr.
...
Adding the ability to share a view state should be somewhat straightforward.
We implemented viv's rendering component as custom deck.gl layers, where each
layer additively blends the active channels. So we have one deck.gl canvas
that keeps track of its own notion of global viewer state.
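Since that view state is a plain object (a center target and a zoom, deck.gl-style), sharing it could be as simple as round-tripping it through query params. A minimal sketch, with made-up param names (vizarr doesn't implement this yet):

```typescript
// Sketch: round-trip a deck.gl-style view state through URL query params so
// a view could be shared as a link. Param names are made up for illustration.
interface ViewState {
  target: [number, number]; // world-space center of the viewport
  zoom: number;
}

function viewStateToParams(vs: ViewState): string {
  return new URLSearchParams({
    x: String(vs.target[0]),
    y: String(vs.target[1]),
    zoom: String(vs.zoom),
  }).toString();
}

function paramsToViewState(search: string): ViewState | undefined {
  const params = new URLSearchParams(search);
  const x = params.get('x');
  const y = params.get('y');
  const zoom = params.get('zoom');
  if (x === null || y === null || zoom === null) return undefined;
  return { target: [Number(x), Number(y)], zoom: Number(zoom) };
}
```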
Not to throw JS at you, but here is essentially the outward API for vizarr:
https://github.com/hms-dbmi/vizarr/blob/933d97cf4b19ab4c8aa83a0662ee5dca47707103/src/pages/index.tsx#L19-L55
There are two useEffect React hooks: one that checks the URL params, and another that executes once, checking whether vizarr is loaded in a Jupyter notebook.
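In outline, the two hooks look something like this (a simplified paraphrase of the linked code; `addImage` and `setupImjoyApi` are stand-ins for vizarr's actual functions):

```tsx
import { useEffect } from 'react';

// Stand-ins for vizarr's actual functions; this is a paraphrase, not the
// real source.
declare function addImage(config: { source: string }): void;
declare function setupImjoyApi(): void;

function App() {
  // Hook 1: read the ?source= query param and load it as an image if present.
  useEffect(() => {
    const source = new URLSearchParams(window.location.search).get('source');
    if (source) addImage({ source });
  }, []);

  // Hook 2: runs once; if vizarr is inside an iframe (e.g. embedded in a
  // Jupyter notebook by imjoy), expose the programmatic API over imjoy-rpc.
  useEffect(() => {
    if (window.self !== window.top) setupImjoyApi();
  }, []);

  return null; // the deck.gl canvas etc. are omitted here
}
```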
Once a config is created for a source, it's just a matter of drawing that as a layer in the canvas.
At its core, viv provides utilities for requesting chunks of data (from OME-TIFF or zarr-based images) and then rendering those chunks as deck.gl layers. So there isn't really a UI for viv so much as a set of building blocks for developing higher-level applications.
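In the abstract, that division of labor looks something like this (the loader interface and layer factory below are placeholders, not viv's real exports; only the deck.gl calls are the library's actual API):

```typescript
import { Deck, OrthographicView } from '@deck.gl/core';
import type { Layer } from '@deck.gl/core';

// Placeholders for viv's building blocks: a loader that fetches chunks of
// pixel data, and a factory producing a deck.gl layer that renders them.
// Neither name is viv's real export; this is just the shape of the idea.
interface ChunkLoader {
  getChunk(sel: { x: number; y: number; level: number }): Promise<Uint16Array>;
}
declare function createImageLayer(loader: ChunkLoader): Layer;

// Compose the image layer into a deck.gl canvas with a 2D orthographic view;
// deck.gl then owns the pan/zoom viewer state mentioned above.
function renderImage(loader: ChunkLoader) {
  return new Deck({
    views: [new OrthographicView({ id: 'ortho' })],
    initialViewState: { target: [0, 0, 0], zoom: 0 },
    controller: true,
    layers: [createImageLayer(loader)],
  });
}
```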
deck.gl has many other layer types as well (points, vectors, etc.), and in the long term it would be great to have a less feature-complete napari (built on deck.gl) that is essentially a UI for composing these layers.
...
So for me, it's clear what viv needs to maintain: this bridge between dense array formats (OME-TIFF / zarr) and image layers in deck.gl. This includes maintenance of both zarr.js and geotiff.js. I think our apps, Avivator and vizarr, have a much smaller scope because they currently target a small set of use cases we feel we can maintain.