---
tags: NumPy, NumPy2
---
# NumPy2 lightning talk: Change `NPY_MAXDIMS` to 64 (from 32)
Originally proposed in [issue 5744](https://github.com/numpy/numpy/issues/5744) and on the [mailing list](https://mail.python.org/archives/list/numpy-discussion@python.org/message/U6WZP7ZIEKF3X4OIEQEO2UBYVMALNPCZ/). The motivation seems a bit unclear at first, but there is a clear use case in simulating quantum systems, where an *n*-qubit state is naturally a rank-*n* tensor. While 64 is better than 32, it is not clear that this will be the end of the discussion; in the issue there is a suggestion to make the limit dynamic instead.

In `PyArrayObject` the value is not used, but it is used heavily in the structs behind `PyArrayIterObject`, `PyArrayMultiIterObject`, `PyArrayMapIterObject`, and `PyArrayNeighborhoodIterObject`. It is also used as a shortcut in code like
```c
npy_intp dimensions[NPY_MAXDIMS];
for (i = 0; i < PyArray_NDIM(arr); i++) {
    dimensions[i] = PyArray_DIM(arr, i);
}
/* Some code that broadcasts or otherwise manipulates "dimensions" */
ret = new_array(..., dimensions, ...);
```
SciPy has about 30 such uses.
This pattern could be rewritten with a dynamic allocation (both inside the NumPy iterators and in SciPy), which would also save some stack space, but the extra allocation might show up in benchmarks.
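A minimal sketch of what such a rewrite could look like, assuming a plain-C stand-in for `npy_intp` and a hypothetical helper name (`copy_dimensions` is not a real NumPy function):

```c
#include <stdlib.h>
#include <string.h>

typedef long npy_intp;  /* stand-in for NumPy's npy_intp */

/* Hypothetical sketch, not NumPy's actual code: copy an array's shape
 * into a heap-allocated buffer instead of a fixed NPY_MAXDIMS stack
 * array, so arrays of any rank are supported. */
static npy_intp *copy_dimensions(const npy_intp *src, int ndim)
{
    npy_intp *dimensions = malloc((size_t)ndim * sizeof(npy_intp));
    if (dimensions == NULL) {
        return NULL;  /* the caller would raise MemoryError */
    }
    memcpy(dimensions, src, (size_t)ndim * sizeof(npy_intp));
    return dimensions;
}
```

The caller then has to `free()` the buffer on every exit path, which is exactly the extra bookkeeping (and potential allocator overhead) the fixed-size stack array avoids.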
The proposal does not discuss `NPY_MAXARGS` (the maximum number of operands in a single ufunc or `nditer` call), but it too is limited to 32.
## Implications for Array API compatibility
The Array API spec [says](https://data-apis.org/array-api/latest/API_specification/array_object.html):
> Furthermore, a conforming implementation of the array API standard must support array objects of arbitrary rank N (i.e., number of dimensions), where N is greater than or equal to zero.
There is no discussion around an upper limit to "arbitrary".
## Who is in favor?
There have been a handful of users requesting this feature. As far as I can tell they come from quantum simulations and combinatorial statistics. None have been too vocal about the limit.
## What will break and how could we work around the breakage?
The constant is part of the public NumPy headers, and code like the snippet above is found in many other libraries like SciPy and Numba.
One workaround would be to detect, when `import_array()` (or `import numpy`) runs, which value of the constant an extension was compiled against. This seems really complicated and error-prone.
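A hypothetical sketch of that detection idea, under heavy assumptions: none of this exists in NumPy today, and `runtime_npy_maxdims` stands in for a runtime query NumPy would have to export.

```c
#include <stdio.h>

/* Value of NPY_MAXDIMS baked into the extension at build time. */
#define COMPILED_NPY_MAXDIMS 32

/* Stand-in for a (nonexistent) runtime query of the running NumPy. */
static int runtime_npy_maxdims(void)
{
    return 64;
}

/* Returns 0 when the compiled and runtime values match, -1 otherwise.
 * The real check would run inside import_array() and raise ImportError. */
static int check_maxdims_abi(void)
{
    if (runtime_npy_maxdims() != COMPILED_NPY_MAXDIMS) {
        fprintf(stderr,
                "extension compiled with NPY_MAXDIMS=%d, but the "
                "running NumPy uses %d\n",
                COMPILED_NPY_MAXDIMS, runtime_npy_maxdims());
        return -1;
    }
    return 0;
}
```

Even with such a check, a mismatch can only be reported, not repaired: the struct layouts in the extension's compiled code are already wrong.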
## Why not make the change?
The constant is very pervasive.
## Is it worth it?
<Discussion></Discussion>
## Decision
<Discussion></Discussion>