
PEP Draft: Buffer protocol support for custom dtypes

Authors:

  • Sebastian Berg
  • Nathan Goldbaum

Abstract

The Python buffer protocol is widely adopted across the ecosystem to allow zero-copy data sharing between packages and across language boundaries: examples include Cython's typed memoryviews, NumPy's ndarray and similar data structures provided by machine learning libraries, and PyO3's PyBuffer bindings for Rust crates. Despite these successes, a recurring limitation of the buffer protocol is its restricted set of allowed data types. The current format does allow a wide representation of struct-like dtypes similar to what is directly representable in C, but it does not allow the safe exchange of data types not explicitly supported by the struct module. This includes buffers that contain:

  • Numerical types currently undefined in the struct module.
  • Variable-width strings or other data types defined in terms of a reference or offset.
  • Datetimes.
  • Categoricals.
  • Custom, arbitrary user-defined data types such as NumPy's user dtypes or other objects.

To address this, this PEP proposes extending the format string to allow arbitrary custom data representations.

Motivation

The buffer protocol has been very useful for exchanging data between various projects. While NumPy has its own exchange mechanism, and other exchange protocols like the Arrow C data interface have arisen to make up for the limitations of the buffer protocol, the buffer protocol remains the most accessible and widely used method for sharing array data between NumPy, Cython, and other packages building on NumPy and Cython. Additionally, many Rust crates make use of the PyBuffer type, which allows zero-copy sharing of byte buffers between Python and Rust.

While NumPy can export almost all data via the buffer protocol, the new StringDType cannot be safely shared, as it has custom memory handling and references that cannot be described with the current format string. Additionally, NumPy's datetime, float16, and float128 data types cannot be represented using the buffer protocol because these types are not supported by the struct module.

A further example is the rise of specialized numerical types, such as bfloat16, requiring the use of other exchange protocols. The DLPack protocol was promulgated in part due to this limitation.

With this extension, the buffer protocol would allow users of NumPy to write native accelerator code that generically accepts arbitrary data types, in the same way they can already write code accepting the data types the buffer protocol supports today.

This will allow zero-copy sharing of new kinds of data among packages throughout the ecosystem and prevent future developers from writing workarounds for limitations of the buffer protocol.[1]

While the proliferation of new protocols allows experimentation without changing Python, standardization in Python also enables broad, ecosystem-wide adoption after a slow rollout as projects update their minimum required Python version. We believe that the success of the buffer protocol and its wide use across the ecosystem demonstrate how a protocol defined by Python can create a rich ecosystem of exciting use-cases. We also believe that extending the buffer protocol as argued here will provide benefits that justify the slow rollout inherent to modifications of Python, making it possible to share increasingly common data types between libraries without unnecessary copying or casting.

Rationale

The goal of this PEP is to allow specifying arbitrary, custom data types within the buffer protocol format string. While for some data types (such as bfloat16) inclusion in the struct format specification may make sense, this can never cover the depth and breadth of types useful in practice:

  • NumPy data types that may be user-defined
  • Arrow categorical, list, etc.
  • Variable-width string arrays, ragged arrays, or any data type defined in terms of a buffer of offsets or references.
  • Specialized numerical compute types.

Python should neither have to specify how to describe these data types nor itself implement support for them. That said, we do think that Python may wish to specify a family of well-defined binary types, e.g. those specified in the C++ standard.[2]

Instead, we propose to extend the format specifier in a way that is currently always invalid and allows safely specifying any of these data types.

Specification

We propose to use [] as an indicator for a "custom data type" within the buffer protocol format string. The characters [ and ] are currently undefined as buffer protocol format characters, so any well-written consumer should raise an error when seeing the first [. All characters within the [] must be printable ASCII characters excluding ];$, and specifications must use e.g. base64 encoding for arbitrary data. This ensures that memoryview(obj).format is human readable and that consumers are not confused while parsing.

The [ must be followed by a unique identifier ending with $, e.g. [numpy$...]. As this identifier must be unique, it should be chosen to match an importable name available on PyPI to avoid conflicts (e.g. numpy or numpy.<something>).[3] We chose $ to allow the use of : for modules in the description (see the NumPy example below). While we have decided to paint this particular shed by using $ to demarcate the unique ID from the rest of the format string, we are open to alternative suggestions.

The interpretation of all characters following the $ up to the next ; or ] character is specified and documented by the defining library and is outside Python's control.

For example, NumPy may use a format of [numpy$module:DTypeClass:<dtype specific>]; specifically, its new string dtype may be encoded as [numpy$numpy.dtypes:StringDType:<hex-encoded-pointer-to-dtype-instance>].[4]

If present, a ; denotes the start of an alternative spelling for the same data type, for example [numpy$...;torch$...] in case both numpy and torch provide competing standards. This is not a union: consumers can, and should, stop as soon as they understand the first spelling. In general, libraries are strongly encouraged to avoid this and may consider any but the first spelling deprecated.
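As a sketch, a consumer might split such a custom format group into its ;-separated alternatives as follows. The helper name and error handling here are purely illustrative, not part of this proposal:

```python
def parse_custom_format(fmt):
    """Split one ``[...]`` custom-format group into (identifier, payload)
    pairs, one per ``;``-separated alternative spelling.

    Illustrative sketch only; real consumers may parse differently.
    """
    if not (fmt.startswith("[") and fmt.endswith("]")):
        raise ValueError("not a custom format group: %r" % fmt)
    alternatives = []
    # ';' and '$' are excluded from payloads by the specification,
    # so naive splitting is safe here.
    for spelling in fmt[1:-1].split(";"):
        identifier, sep, payload = spelling.partition("$")
        if not sep or not identifier:
            raise ValueError("missing unique identifier in %r" % spelling)
        alternatives.append((identifier, payload))
    return alternatives

# The first understood spelling wins; consumers stop there.
alts = parse_custom_format("[numpy$numpy.dtypes:StringDType:0xabc;torch$string]")
```

A consumer would walk the returned list in order and use the first identifier it recognizes.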


No struct module extension

The buffer protocol format string is already a superset of what the struct module supports and we do not propose to extend struct (if such an extension seems desirable, it should be a separate proposal).

Reserved identifiers

We propose to already reserve struct$..., where ... is an arbitrary format string as used by struct.Struct, and buffer$..., for a string conforming to the buffer protocol (not including the new extended syntax itself).

This choice enables cases where it is useful to have a more precise type but also expose the data as a simple structured type. For example, a module that has custom coordinates may wish to share that information but also expose them as a simple structure: [mymodule$coords2d;buffer$T{d:X:d:Y:}]. This exposes the type as a coords2d struct defined by mymodule, and that structure can be used directly if that is supported. Otherwise, consumers can still work with a struct containing two doubles named X and Y.
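A consumer-side sketch of this fallback logic might look as follows. For simplicity, the example uses a hypothetical struct$ alternative with a plain struct format ("dd", two doubles), since Python's struct module itself cannot parse the buffer-protocol T{...} syntax shown above; all names here are illustrative:

```python
import struct

def choose_spelling(fmt, known):
    """Return the first ';'-separated alternative in a ``[...]`` custom
    format group whose identifier the consumer understands (sketch only)."""
    for spelling in fmt[1:-1].split(";"):
        identifier, _, payload = spelling.partition("$")
        if identifier in known:
            return identifier, payload
    raise TypeError("no understood spelling in %r" % fmt)

# Hypothetical export: a precise 'mymodule' type with a plain struct fallback.
fmt = "[mymodule$coords2d;struct$dd]"
ident, payload = choose_spelling(fmt, known={"struct"})
# This consumer does not know 'mymodule', so it falls back to the
# reserved struct$ spelling and unpacks two doubles.
x, y = struct.unpack(payload, struct.pack("dd", 1.0, 2.0))
```

A consumer that does understand mymodule would instead stop at the first alternative and use the richer coords2d interpretation.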

We can imagine defining further identifiers, such as cpp$, within Python, but do not yet propose this in this PEP. C++, for example, is very difficult ABI-wise in practice, so this would only work safely for trivial types.

Byte-order and size

Since the current format string explicitly supports byte-order specification (the leading @><=! characters), adopters are encouraged to honor this byte-order and size state where it makes sense. For example, NumPy would use >[numpy$numpy.dtypes:TimeDelta64:...] to encode a non-native byte-order timedelta64. Similarly, complex types may be defined via Z[...]. In practice, we expect most consumers and exporters to support only native byte order (no modifier, or a leading @). Consumers must make sure to honor the modifier characters; this may simply mean rejecting the buffer as unsupported.
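A minimal sketch of honoring the byte-order modifier on the consumer side, where rejecting non-native buffers is a valid way to honor it (helper names are illustrative):

```python
import sys

_MODIFIERS = "@<>=!"

def split_byte_order(fmt):
    """Split a format string into (byte_order, rest).
    '@' (native order, native size) is the default when no
    modifier character is present.  Sketch only."""
    if fmt and fmt[0] in _MODIFIERS:
        return fmt[0], fmt[1:]
    return "@", fmt

def require_native(fmt):
    """Return the format without its modifier, or reject the buffer
    if its byte order is not native to this machine."""
    order, rest = split_byte_order(fmt)
    native = "@=" + ("<" if sys.byteorder == "little" else ">")
    if order not in native:
        # Honoring the modifier can simply mean rejecting the buffer.
        raise TypeError("non-native byte order %r not supported" % order)
    return rest

try:
    require_native(">[numpy$numpy.dtypes:TimeDelta64:ns]")
except TypeError:
    pass  # rejected on little-endian machines
```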

Note for consumers

Unless an object merely wraps another buffer without accessing it (as memoryview does), it may be best to stop immediately upon finding the first [ or an unknown name. However, it is guaranteed safe to continue scanning for the next ] even if the unique identifier is unknown.

When a unique identifier is unknown, the size and content of the data type are unknown, so using it is never safe. Instead of generating an error that simply states a data type is unknown or unsupported by the buffer protocol, each package can now tell users more gracefully whether it supports data with custom formats created by other packages, or has not yet added support for a new format, with detailed information gleaned from the format string.
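For instance, a consumer could extract the identifiers from an unsupported custom format and point the user at the packages that define them. The helper below is a sketch; its name and message wording are not part of this proposal:

```python
def describe_unknown(fmt):
    """Build a helpful error message for an unsupported custom format.
    Identifiers are chosen to match importable names, so the message
    can point users at the defining package (sketch only)."""
    inner = fmt[fmt.index("[") + 1 : fmt.index("]")]
    identifiers = [s.partition("$")[0] for s in inner.split(";")]
    return (
        "buffer format %r uses custom data type(s) defined by %s; "
        "this consumer does not support them"
        % (fmt, " or ".join(repr(i) for i in identifiers))
    )

msg = describe_unknown("[numpy$numpy.dtypes:StringDType:0xabc]")
```

This is a clear improvement over the opaque "unsupported format" errors consumers must raise today.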


Note on multiple definitions

Users of such user-defined data types should be aware of existing definitions, and we propose that the Python documentation link existing ones (such as the numpy prefix linking to the corresponding NumPy documentation). Alternatively, this registry could live outside the documentation proper (e.g. in a discussion thread).

A general issue is that duplicate, equivalent definitions could cause divergence in the ecosystem. If one convention is meant to replace another, niche one, this could also cause difficulties for libraries trying to move from exporting the old convention to the newer one.

Because of this, we included the ; mechanism for a producer to export a buffer with multiple alternative spellings. This way, both producers and consumers can roll out a new preferred spelling at any time.

Consumers can also issue a deprecation warning when they wish to remove an old spelling. Producers, unfortunately, cannot issue a warning; they should order their preferred spelling first and eventually remove the old alternative. At least during testing, consumers may choose to warn if the first definition is not the one being used.

Exporting multiple definitions makes the consumer parsing slightly more difficult. However, it frees up consumers to drop (or never add) support for alternative spellings by asking a producer to (slowly) migrate.

Safety notes

Allowing arbitrary storage, including pointers, in the data type field requires the downstream libraries defining formats to be conscious of security concerns and bugs.

In general, buffers exposed via the buffer protocol cannot be serialized and sent to a different thread or program (e.g. you cannot pickle a memoryview). So the interpretation need not be valid in any context other than the one that calls PyObject_GetBuffer. Additionally, the Python buffer protocol cannot protect against simultaneous reads and writes from multiple threads, and mutating a buffer via the buffer protocol should only be done with care and with full control over the runtime environment. This proposal does not change this situation.

Backwards Compatibility

This PEP is fully backwards compatible. It is even fully forward compatible as long as consumers of the buffer protocol who do not wish to support extended formats make sure to raise an error when the first [ character is found in the format string. This is the case for all known users, but adopters should be aware that this extension was not specified on older Python versions.
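The forward-compatibility requirement for a legacy consumer amounts to a single check, sketched below with an illustrative helper name:

```python
def check_legacy_format(fmt):
    """A consumer that predates this proposal stays safe simply by
    rejecting any format string containing '[' (sketch only)."""
    if "[" in fmt:
        raise NotImplementedError(
            "extended custom formats are not supported: %r" % fmt
        )
    return fmt

# Plain buffer-protocol formats pass through unchanged.
check_legacy_format("T{d:X:d:Y:}")
```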

Reference Implementation

Basic, but working, PoC implementations for Cython/NumPy interoperability can be found in https://github.com/seberg/numpy/pull/50 and https://github.com/da-woods/cython/pull/5.

The NumPy changes build on the Cython support and require the Cython PR (at this time even to build NumPy; we could require a from numpy cimport string_memview to solve this).

This PEP requires no implementation in CPython itself. The buffer protocol documentation will have to be extended to document this specification. Further, it would be good for Python to provide a defined place to share custom definitions, either in documentation or via a discussion forum or code repository.

Python's memoryview should grow support for the struct$ and/or buffer$ unique prefixes if they are reserved as proposed. Memoryview has only limited support for this syntax at the moment, and we would be happy to implement support.

Security Implications

Libraries defining custom data types will be responsible for keeping their protocols safe. In general, untrusted users must never be allowed to define a format string; however, this has always been the case.

Since we expect buffers containing new format strings to be consumed by code running on older Python versions, there is a small possibility that a consumer does not fail gracefully when finding the first [, and that exporting a buffer following this PEP leads to heretofore unseen crashes or issues in low-level string-parsing code.

Some libraries may have no support for buffer types and reinterpret buffers directly. However, this is not a new issue: the buffer protocol has always allowed such unsafe use, and libraries that would do unsafe things with the new buffers already do so for buffers containing Python objects or pointers.

Rejected Ideas

Adding new data types piecemeal to the struct module is not tenable, since the rollout of new Python features is slow and all consumers of the buffer protocol would then need to add explicit support for each new format code. While downstream consumers would still need to add support for buffers advertised by a package, they can do so without any updates to Python, as long as they are using a version of Python that supports this PEP.

The main alternative to extending the buffer protocol is fully relying on custom new exchange protocols. We believe that wide adoption, centralized definition, and low-level integration in Python are large benefits. Since known consumers fail gracefully when seeing an unknown format string, there should be few downsides to allowing custom extensions.

Another approach would be to specify more closely how the content of [] should be interpreted. We rejected this, although it may be helpful to link to early adopters or provide some design guidelines.

Open Discussions

Using a request flag

We believe that buffer protocol consumers can be expected to raise an error when encountering the first [ character. However, if there is concern on this point, we are happy to propose introducing a new PyBUF_EXTENDED_FORMAT flag. If the flag is not passed, the exporter must raise a BufferError when the export would require the extended syntax.
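The exporter-side behavior under such a flag could be sketched as follows. Note that PyBUF_EXTENDED_FORMAT is hypothetical (it does not exist in CPython) and its value below is invented; PyBUF_FORMAT is the existing CPython request flag:

```python
# PyBUF_FORMAT is a real CPython buffer-request flag; the value of
# PyBUF_EXTENDED_FORMAT is a pure assumption for this sketch.
PyBUF_FORMAT = 0x0004
PyBUF_EXTENDED_FORMAT = 0x10000  # hypothetical, not a real CPython flag

def export_format(fmt, flags):
    """Exporter-side sketch: refuse extended formats unless the
    consumer explicitly opted in via the (hypothetical) flag."""
    if "[" in fmt and not (flags & PyBUF_EXTENDED_FORMAT):
        raise BufferError(
            "export of %r requires PyBUF_EXTENDED_FORMAT" % fmt
        )
    return fmt
```

Under this scheme, a consumer that never passes the flag behaves exactly as today and can never observe an extended format string.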

This extension has the downside that e.g. memoryview(obj) will fail on older versions of Python if obj requires the extended format flag. This will hinder adoption while older Python versions remain supported, because wrapping a buffer in a memoryview is a common pattern; NumPy, for example, uses it for ownership tracking, since memoryview calls PyBuffer_Release as needed.

Encoding the bit size

As it stands, consumers trying to interpret the data must always raise an error. A niche use-case could see exporting a structured dtype of which only one field is custom. If we encoded the bit size, the buffer could be used in part. This seems a plausible extension, although we doubt there will be much need for it in practice. This proposal also does not preclude adding support for it in the future.

Variations to the format string

Using [name$...] is one of many possible ways to encode custom formats.

Omit alternative spellings via ;

We decided to allow alternative spellings via ; even though this makes parsing a little more difficult. We are not sure how much it will be used, but adding it later would unfortunately be hard, so we opted to include it.

We could opt not to do this, or strongly encourage consumers to consider all but the first spelling deprecated. In practice, we believe this mechanism helps transitions, because it allows producers to add a new definition without requiring all consumers to accept it first.


  1. For example, pandas uses the buffer protocol (via Cython typed memoryviews) for NumPy datetimes. However, since these are unsupported, it has to cast the NumPy array to int64 early and keep track of this cast. If Cython had even minimal support, these casts could be local to the Cython code directly working with the buffer. ↩︎

  2. We could imagine [cpp$std::bfloat16_t] for example to allow general well defined C++ types. Unfortunately, attempting to share general C++ types is fraught with ABI issues and may at least need a list of types that are unproblematic. ↩︎

  3. Should, because Python cannot enforce this, unfortunately. ↩︎

  4. In general, we prefer not including any encoded pointers. However, the storage details of StringDType are complex enough that this makes sense as this dtype can currently only be used via NumPy C-API. ↩︎