Video call link: https://meet.google.com/jrp-qgcf-nxh
If not working: https://colgate.zoom.us/j/276854695
axis to tuple in validateaxis
min/max(explicit=True) because of a different codepath
axis=None: use sputils._todata().sum() instead of self @ ones
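A rough illustration of the axis=None point, using public attributes only (the notes' sputils._todata() is a private helper; .data stands in for it here, and random_array is used just to build a test operand):

import numpy as np
from scipy import sparse

A = sparse.random_array((6, 4), density=0.3, format="csr",
                        random_state=np.random.default_rng(0))
total_from_data = A.data.sum()                        # reduce the stored entries directly
total_from_matvec = (A @ np.ones(A.shape[1])).sum()   # the self @ ones route
assert np.isclose(total_from_data, total_from_matvec)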
out instead of res.sum(axis=(), out=out). (Feels closer to the intent of out…??)
get_index_dtype and tests that the list was correct
multiply/_divide
maximum/minimum
_convert_to_2d tools
op in comparison/bool funcs: use op.name when calling _binopt; use op(0, other) to decide whether to warn.
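A minimal sketch of the op(0, other) idea, with operator-module functions standing in for whatever callables the code passes around (illustrative only):

import operator

def warn_needed(op, other):
    # If the comparison is True for an implicit zero (e.g. 0 <= 3), every
    # unstored position becomes True and the sparse result densifies.
    return bool(op(0, other))

for op in (operator.eq, operator.ne, operator.lt, operator.le, operator.gt, operator.ge):
    print(op.__name__, warn_needed(op, 3))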
argminmax, argminmax_axis
has_canonical_format: in the test suite _binopt is called ### times with ### attribute; check both True and ### both False.
Right now eq and ne return True/False if shapes do not match!?!? Shouldn't we raise? Can we deprecate/remove this for sparray?
Many astype w/o copy kwarg could be copy=False. I changed a couple. More in a later PR.
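The copies in question are presumably ndarray.astype calls on index arrays; a minimal numpy-level illustration of what copy=False saves (names here are just for the example):

import numpy as np

idx = np.arange(5, dtype=np.int32)
print(idx.astype(np.int32, copy=False) is idx)   # True: dtype matches, no new allocation
print(idx.astype(np.int64, copy=False) is idx)   # False: conversion still makes a copy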
21925: broadcasting of _binop at the C++ level
int ndim;       // number of dimensions
int nnz;        // number of stored entries
int shape[];    // length ndim
int strides[];  // length ndim (or compute these)
I indptr[];     // length shape[0] + 1
I indices[];    // length nnz
T data[];       // length nnz
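For reference, a 2-D Python-side illustration of those fields using today's csr_array attributes (strides has no direct counterpart here; the n-D generalization is what the C++ layout above is for):

import numpy as np
from scipy import sparse

A = sparse.csr_array(np.array([[0, 2, 0],
                               [1, 0, 3]]))
print(A.ndim)     # int ndim
print(A.nnz)      # int nnz
print(A.shape)    # int shape[]  -- length ndim
print(A.indptr)   # I indptr[]   -- length shape[0] + 1
print(A.indices)  # I indices[]  -- length nnz
print(A.data)     # T data[]     -- length nnz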
Ready for review/merge (approved already):
construct.py function
Read through (easy?):
Think about:
expanddims
nD features: _matmul_sparse
binop general/canonical discussion
A[1,:,None] index for spmatrix #22472
coo copies python code for binopt & minmax from _compressed.py and _data.py
binop sparsetools?
general vs sort & canonical: general is predictably slower for some parameter regions (most of the time it is faster). general went back in due to an incorrect timing reported on the issue. The conditions where sort & canonical wins seem to be large M with small nnz, but large M suggests we should switch to CSC format anyway.
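A hedged way to probe that regime from Python (parameter values are illustrative, not from the notes, and this only exercises the CSR-vs-CSC point, not the internal general/canonical kernels directly):

import numpy as np
from scipy import sparse
from timeit import timeit

rng = np.random.default_rng(0)
M, N, nnz = 1_000_000, 50, 5_000            # "large M, small nnz" region
density = nnz / (M * N)
A = sparse.random_array((M, N), density=density, format="csr", random_state=rng)
B = sparse.random_array((M, N), density=density, format="csr", random_state=rng)
print("csr + csr:", timeit(lambda: A + B, number=10))
Ac, Bc = A.tocsc(), B.tocsc()
print("csc + csc:", timeit(lambda: Ac + Bc, number=10))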
general & canonical: indptr can be easily included; indices could be a separate func or separated in an if clause.
_swap
Timing? (no difference? but how to check that)
coo_todense_nd performance? (… _sputils). Is that a problem?
coo copies binopt & minmax from _compressed
Conversion of SciPy subpackages to sparray
idx_dtype handling
scipy.io and optimize to sparray.
arange, np.concatenate, etc. Also fixes Issue 20389: maintain int64 idx_dtype in vstack (a quick check sketch follows below).
msg kwarg? Allows e.g. "for SuperLU" in the errmsg "indptr values too large for SuperLU". See @stefanv and @perimosocordiae comments.
scipy.io migration to sparray requires a new keyword arg to indicate sparray/spmatrix. Can we go over the deprecation plan to see if it looks OK?
mmread(file, sparray=True)
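A quick local check for the Issue 20389 point above (upcasting the index arrays by hand; exact behavior depends on the SciPy version, so treat this as a probe rather than a test):

import numpy as np
from scipy import sparse

A = sparse.csr_array(np.eye(3))
A.indices = A.indices.astype(np.int64)
A.indptr = A.indptr.astype(np.int64)
B = sparse.vstack([A, A], format="csr")
print(B.indices.dtype, B.indptr.dtype)   # should stay int64 once the fix is in

For the mmread question, sparse.coo_array(scipy.io.mmread(path)) is one way to get a sparray today while the sparray keyword is still only a proposal.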
slow: move sparray test64bit tests out of slow. Separate test_base.py from test_64bit.py.
csr.h changes
broadcast_shapes(): new function broadcast_shapes needed to avoid numpy restrictions on dense array size (see the np.broadcast_shapes sketch below).
extend_dims, nd-kron, swap_axes.
rng: After the deprecation period, only Generators will be used. Implemented for rand, random, and random_array.
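On the broadcast_shapes() item above: the point of a shape-level helper is to combine shapes without materializing dense operands. numpy's own np.broadcast_shapes shows the idea (the new SciPy-internal function itself is not shown here):

import numpy as np

print(np.broadcast_shapes((1_000_000, 1), (1, 1_000_000)))   # (1000000, 1000000), no array allocated
print(np.broadcast_shapes((3, 1, 4), (7, 1), ()))            # (3, 7, 4)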
Conversion of SciPy subpackages to sparray
linprog still not removed.