Author: Isaac Virshup (isaac.virshup@helmholtz-muenchen.de)
Status: Draft
Type: Specification
Created: <date created on, in dd-mmm-yyyy format>
Discussion: <url> (link to zarr-developers post for discussion)
Resolution: <url> (required for Accepted | Rejected | Withdrawn)
This ZEP defines string and variable-length binary data types, and describes the binary format for the chunks of these arrays.
This section describes the need for the proposed change. It should describe the existing problem, who it affects, what it is trying to solve, and why.
This section should explicitly address the scope of and key requirements for the proposed change.
Strings are a commonly used data type in many analytic fields. They are frequently used as labels of array dimensions or for categorical variables.
Zarr v3's lack of a string encoding is a loss of features from zarr v2 (even if those features were only defined by convention). Without support for strings, zarr v3 lacks features required by downstream projects like netcdf, xarray, and anndata.
The encoding for strings in this specification is UTF-8, as that is the most commonly used encoding.
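Note that UTF-8 is a variable width encoding, so the byte length of a string can differ from its character length; the offsets described in this document are always byte offsets. A quick illustration:

s = "naïve"
assert len(s) == 5                   # five characters
assert len(s.encode("utf-8")) == 6   # six bytes: "ï" encodes to two bytes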
Also discussed are:

* `LargeString` and `LargeBinary` data types, where the offsets are 64 bit
* `List[T]` and `LargeList[T]` (where `T` is the inner dtype, e.g. `List[Int32]`) data types
List type
One could imagine having:

List[BufferDtype, ValueDtype]

where `String` is a specialized case of `List[Int32, UInt8]`. The buffer dtype and value type could be expressed as configuration values of the `data_type`.
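For illustration only, the metadata for such a parameterized type might look something like the following sketch (hypothetical; neither the `list` name nor these configuration keys are being proposed here):

{
    "data_type": {
        "name": "list",
        "configuration": {
            "buffer_type": "int32",
            "value_type": "uint8"
        }
    }
}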
However, it may be easier to just specialize on the two most common cases and leave this for further discussion. Cases which make this more complex:

* `List[List[data_type]]`
* `List[extension_type]`

These cases make it complex enough that I think this is a non-goal.
This proposal explicitly excludes complex nested array structures (e.g. awkward array, sparse arrays).
While these have similar structures (offsets + data buffers), they are more complex and more domain specific. Text and variable length byte data are extremely common, and are supported by similar formats such as n5, hdf5, tiledb, and arrow.
Additionally, it can be useful to access the component arrays of these more complex structures directly, e.g. retrieving the structure of a sparse array, or accessing a single field from a ragged collection of structs. This suggests an approach where the offset and data arrays are not bundled into a single buffer, but are stored separately. That would require each chunk of a zarr Array to be composed of multiple buffers in a store – which is not compatible with the current zarr data model.
This section describes how users of Zarr will use the new features, spec changes or a new process described in this ZEP. It should be comprised mainly of code examples that wouldn’t be possible
without acceptance and implementation of this ZEP, as well as the impact the proposed changes would have on the ecosystem. This section should be written from
the perspective of the users of Zarr, and the benefit it will provide them; as such, it should include implementation details only if necessary to explain the
functionality.
import zarr

g = zarr.group()
g.create_array("string_array", data=["the", "quick", "brown", "fox"])
g["string_array"][:]
# array(["the", "quick", "brown", "fox"], dtype=object)  # The return type is flexible
This section describes how the ZEP breaks backward compatibility.
Its purpose is to provide a high-level summary to users who are not interested in detailed technical discussion, but may have opinions around, e.g., usage and
impact.
This does not break any backwards compatibility with zarr v3.
One concern may be that it will break particular implementations of zarr v3, since it introduces a new concept of variable length types. However, this is an extension, so it can be adopted gradually.
There may be issues with codecs that accept or return multidimensional arrays, since the format of string arrays may not be consistent across languages.
This section should provide a detailed description of the proposed change. It should include examples of how the new functionality would be used, intended use-cases and pseudo-code illustrating its use.
The proposed formats borrow heavily from arrow's variable length string and binary data types.
This ZEP interprets the definition of `data_type` as a schema for each chunk after all codecs have been applied. This means a `data_type` could be thought of as the "final codec".
The data types being proposed here are:

{
    "data_type": {"name": "string"}
}

and

{
    "data_type": {"name": "binary"}
}
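For context, a full array metadata document using one of these data types might look something like the following sketch (the surrounding fields follow the zarr v3 core spec; the values shown here are illustrative only):

{
    "zarr_format": 3,
    "node_type": "array",
    "shape": [4],
    "data_type": {"name": "string"},
    "chunk_grid": {"name": "regular", "configuration": {"chunk_shape": [4]}},
    "chunk_key_encoding": {"name": "default"},
    "codecs": [{"name": "gzip", "configuration": {"level": 5}}],
    "fill_value": ""
}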
Both data types have a similar buffer layout. Each chunk is composed of:

* `chunk_size + 1` int32 offsets (int64 for the `LargeString` and `LargeBinary` types)
* padding, so that the start of the data buffer is aligned to 64 bytes
* the variable length data itself

The fallback data type for `string` would be `binary`.
from functools import reduce
from operator import mul

chunk_buffer: memoryview = ...
chunk_shape: tuple[int, ...] = ...
chunk_size: int = reduce(mul, chunk_shape)

# Buffer start points are aligned to 64 bytes
def round_up_to_multiple(x, base=64):
    n, rem = divmod(x, base)
    if rem > 0:
        n += 1
    return n * base

# chunk_size + 1 int32 offsets, at 4 bytes each
offset_end_idx = (chunk_size + 1) * 4
data_start_idx = round_up_to_multiple(offset_end_idx)

offset_buffer = chunk_buffer[:offset_end_idx]
data_buffer = chunk_buffer[data_start_idx:]
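As a concrete illustration, for a chunk holding the strings from the usage example above:

import numpy as np

strings = ["the", "quick", "brown", "fox"]             # chunk_size == 4
data = b"".join(s.encode("utf-8") for s in strings)    # b"thequickbrownfox"
offsets = np.array([0, 3, 8, 13, 16], dtype=np.int32)  # chunk_size + 1 entries
# offsets[i]:offsets[i + 1] slices element i out of the data buffer
assert data[offsets[1]:offsets[2]] == b"quick"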
Interpreting these chunks is a little more complicated, since it depends on the kind of array we are returning.
Numpy is going to be the worst case, since it doesn't support variable length data:
import numpy as np

offsets = np.frombuffer(offset_buffer, dtype=np.int32)
result = np.empty(chunk_size, dtype=object)
for i in range(len(offsets) - 1):
    # For binary:
    result[i] = bytes(data_buffer[offsets[i]:offsets[i + 1]])
    # For string:
    result[i] = str(data_buffer[offsets[i]:offsets[i + 1]], "utf-8")
However, this is zero copy using pyarrow:
import pyarrow as pa

# Buffers are (validity bitmap, offsets, data); a validity of None means all valid
# For binary
pa.Array.from_buffers(
    pa.binary(),
    chunk_size,
    [None, pa.py_buffer(offset_buffer), pa.py_buffer(data_buffer)],
)
# For string
pa.Array.from_buffers(
    pa.string(),
    chunk_size,
    [None, pa.py_buffer(offset_buffer), pa.py_buffer(data_buffer)],
)
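A small usage sketch (assuming the chunk holds the example strings from the usage section above):

arr = pa.Array.from_buffers(
    pa.string(),
    chunk_size,
    [None, pa.py_buffer(offset_buffer), pa.py_buffer(data_buffer)],
)
assert arr.to_pylist() == ["the", "quick", "brown", "fox"]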
And also for awkward array:

import awkward as ak
import numpy as np

offset_array = np.frombuffer(chunk_buffer[:offset_end_idx], dtype=np.int32)
data_array = np.frombuffer(data_buffer, dtype=np.uint8)
# For binary
ak.Array(
    ak.contents.ListOffsetArray(
        ak.index.Index(offset_array),
        ak.contents.NumpyArray(data_array, parameters={"__array__": "byte"}),
        parameters={"__array__": "bytestring"},
    )
)
# For strings
ak.Array(
    ak.contents.ListOffsetArray(
        ak.index.Index(offset_array),
        ak.contents.NumpyArray(data_array, parameters={"__array__": "char"}),
        parameters={"__array__": "string"},
    )
)
Encoding a chunk from an in-memory array works in reverse:

import numpy as np

# For bytes, elements can be used as-is:
elements = list(array)
# For strings, encode to UTF-8 first, so that the offsets are byte offsets
# rather than character offsets:
elements = [s.encode("utf-8") for s in array]

offsets = np.zeros(array.size + 1, dtype=np.int32)
offsets[1:] = np.cumsum([len(e) for e in elements])
data_buffer = memoryview(b"".join(elements))
offset_buffer = offsets.view(np.uint8)

data_start_idx = round_up_to_multiple(len(offset_buffer))
chunk_buffer = np.zeros(data_start_idx + len(data_buffer), dtype=np.uint8)
chunk_buffer[:len(offset_buffer)] = offset_buffer
chunk_buffer[data_start_idx:] = data_buffer
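As a sanity check, the encode and decode sketches above should round-trip. A self-contained example for the string case:

import numpy as np

values = ["the", "quick", "brown", "fox"]
encoded = [v.encode("utf-8") for v in values]
offsets = np.zeros(len(values) + 1, dtype=np.int32)
offsets[1:] = np.cumsum([len(e) for e in encoded])
data = b"".join(encoded)
decoded = [str(data[offsets[i]:offsets[i + 1]], "utf-8") for i in range(len(values))]
assert decoded == values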
This section should list relevant and/or similar technologies, possibly in other libraries. It does not need to be comprehensive, just list the major examples
of prior and relevant art.
This section lists the major steps required to implement the ZEP. Where possible, it should be noted where one step is dependent on another, and which steps may
be optionally omitted. Where it makes sense, each step should include a link to related pull requests as the implementation progresses.
Any pull requests or development branches containing work on this ZEP should be linked to from here. (A ZEP does not need to be implemented in a single pull request if it makes sense to implement it in discrete phases).
If there were any alternative solutions to solving the same problem, they should be discussed here, along with a justification for the chosen approach.
Pros

Cons

Storing the offset and data arrays as separate buffers (e.g. as produced by `awkward.to_buffers`)

Pros

Cons
This is still being investigated, but it is a new type being introduced to arrow for representing strings.
PR #0 [3] adds Utf8View and BinaryView types to the C++ implementation and to the IPC format. For the purposes of IPC raw pointers are not used. Instead, each view contains a pair of 32 bit unsigned integers which encode the index of a character buffer (string view arrays may consist of a variable number of such buffers) and the offset of a view's data within that buffer respectively. Benefits of this substitution include:

* This makes explicit the guarantee that the lifetime of all character data is equal to that of the array which views it, which is critical for confident consumption across an interface boundary.
* As with other types in the arrow format, such arrays are serializable and venue agnostic; directly usable in shared memory without modification.
* Indices and offsets are easily validated.
The offset array is now larger per-record and there are a variable number of data buffers (with 0 being possible). From the PR:
* Short strings, length <= 12
| Bytes 0-3 | Bytes 4-15 |
|------------|---------------------------------------|
| length | data (padded with 0) |
* Long strings, length > 12
| Bytes 0-3 | Bytes 4-7 | Bytes 8-11 | Bytes 12-15 |
|------------|------------|------------|-------------|
| length | prefix | buf. index | offset |
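A minimal sketch of packing these 16 byte views in Python (illustrative only, not part of this proposal):

import struct

def pack_view(data: bytes, buffer_index: int, offset: int) -> bytes:
    """Pack one 16 byte view following the layout tables above."""
    if len(data) <= 12:
        # Short string: length, then inline data (struct zero-pads to 12 bytes)
        return struct.pack("<i12s", len(data), data)
    # Long string: length, 4 byte prefix, buffer index, offset into that buffer
    return struct.pack("<i4sII", len(data), data[:4], buffer_index, offset)

assert len(pack_view(b"fox", 0, 0)) == 16
assert len(pack_view(b"a string longer than twelve bytes", 0, 0)) == 16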
Big benefits from this approach are:
In theory, one could use unsigned integers for offsets. However, I would propose we use signed integers, matching arrow, whose variable length layouts use signed 32 bit or 64 bit offsets. Where int32 offsets are too small, the `LargeString` and `LargeBinary` types with int64 offsets can be used.

This section should have links related to any discussion regarding the ZEP. It could be GitHub issues and/or discussions. (The links to discussions in the past, if any, go in this section.)
Each ZEP must either be explicitly labelled as placed in the public domain (see this ZEP as an example) or licensed under the
Open Publication License.
This document has been placed in the public domain.