Proposal: strive for a consistent handling of missing values across data types in pandas through:
- a new NA scalar to consistently represent missing values in the user-facing interfaces
- a mask-based approach to store the missing value information (as the nullable integer dtype currently does)
This document mainly focuses on the first item (the new NA scalar) and its consequences, as this has the most user-facing impact and potential API changes to discuss. The mask-based approach to store the missing value information (like the nullable integers currently do) is also important, but is more of an implementation detail (although an important one) that can be discussed separately.
This is being discussed at:
A summary of the proposal is to introduce a new NA value (singleton) for representing scalar missing values (instead of np.nan) that can be used consistently across all data types.
Motivation for this change:
- The value you get back for a missing scalar (e.g. from s[idx]) depends on the data type (np.nan for many, but pd.NaT for datetime-likes). Some types support missing values, others don't. This proposal would ensure you get back pd.NA regardless of the dtype.
- Consistent behaviour of the missing-value related methods (isna, dropna, fillna, skipna, …).

Currently, we use np.nan as missing value indicator in floating point data, np.nan or None in object dtype data (e.g. strings, or booleans with missing values, which are cast to object), and pd.NaT in datetimelike data. For categorical, interval or nullable integer data, the physical storage is different for each of them, but they return np.nan on access.
This situation has some drawbacks:
- The value you get back for a missing scalar (e.g. s[idx]) depends on the dtype. Some dtypes have their own custom "not a value" (float, datetime-likes), others re-use np.nan. Some types support missing values, some types don't.
- Which values should missing value functions (like pd.isna) recognize as missing, certainly in the case of externally defined ExtensionArrays? (See for example the discussion in GH-27825.)
- The notion of "missing" gets conflated with the specific np.nan floating point value.
- Other languages and systems have a dedicated missing value (SQL has NULL, R has NA, Julia has missing).
- Code written for np.nan (e.g. np.isnan) does not work generally with data from pandas that holds NaNs. It is also strange to get back a value of type float from non-float data types, as this gives misleading dtype information.

Therefore, this proposal puts forward a single pd.NA value that is used as missing value indicator across data types in the user-facing interfaces (under the hood, it could still be implemented in the array using a mask-based or sentinel-based approach).
A simplified definition (for illustration purposes) of such a NA
singleton would look like:
class _NAType:
    def __eq__(self, other):
        return False  # or return NA (see below for discussion)

    def __str__(self):
        return "NA"

    ...

NA = _NAType()
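With that simplified definition, basic interaction would look roughly like this (a sketch based on the illustrative class above, not necessarily the final pandas behaviour):
>>> print(NA)
NA
>>> NA == 1
False  # with the definition above; see the comparison discussion below
>>> NA is NA
True   # there is only one shared singleton instance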
Advantages:
- A consistent missing value indicator across data types, with consistent behaviour of the missing-value related methods (isna, dropna, fillna, skipna, …).

This object would be returned when accessing a scalar missing element:
>>> s = pd.Series([1, 2, pd.NA], dtype='Int64')
>>> s
0 1
1 2
2 NA
dtype: Int64
>>> s[2]
NA
The dtype-specific "not a value" objects can have an API similar to the normal scalars of that data type. For example, pd.NaT
has the same methods and attributes as a pd.Timestamp
. This makes it easier to write robust code that works with scalars accessed from a timestamp series, whether they are missing or not.
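For example, attribute access that works on a pd.Timestamp also works on pd.NaT (an illustration; the exact value returned for the missing case is an implementation detail):
>>> pd.Timestamp('2019-01-01').year
2019
>>> pd.NaT.year
nan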
Having a "not a value" object for each data type could make that:
>>> isinstance(s[scalar], s.dtype.type)
always being true, regardless of whether the scalar is NA or valid?
However, this is currently only true for datetimelike data (using pd.NaT
), and not generally the case anyway for other data types (unless we would also introduce a "not an integer", "not an interval", …).
And even for datetimelike data, the above isinstance
check does not hold true as pd.NaT
is only an instance of datetime.datetime
(but see the discussion below regarding adding multiple NaTs).
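To illustrate with the standard library datetime module (current pandas behaviour):
>>> import datetime
>>> isinstance(pd.NaT, datetime.datetime)
True
>>> isinstance(pd.NaT, datetime.timedelta)
False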
With a non-dtype-specific NA value, the resulting dtype of certain operations will be undefined.
For example, consider a timedelta64[ns]
Series. Depending on the type that is added, the type of the resulting Series differs:
>>> s = pd.Series([pd.Timedelta('1 days'), pd.Timedelta('2 days')])
>>> s + pd.Timedelta('1 days')
0 2 days
1 3 days
dtype: timedelta64[ns]
>>> s + pd.Timestamp('2019-01-01')
0 2019-01-02
1 2019-01-03
dtype: datetime64[ns]
>>> s + pd.NaT
0 NaT
1 NaT
dtype: timedelta64[ns]
In this case, pd.NaT
is considered as another (but missing) timedelta, and not as a timestamp. Exactly for this reason, it was proposed to add versions of "NaT" specific to timedelta and period to be able to distinguish these cases (GH-24983).
So we currently already have this issue to some extent. It could be solved by introducing a separate "not a value" scalar for each dtype, whereas introducing a single scalar pd.NA would make it worse.
Related to this is the introduction of an "NA dtype", which could be used in such "undefined" cases (to at least ensure the result does not get an incorrect dtype). See more on that below.
In Python, and in the pandas-based data analysis ecosystem specifically, people have used np.nan
generally as the missing value indicator in the absence of a "true" missing value.
And a lot of code is built upon that assumption (e.g. scikit-learn also uses it as default missing value indicator in imputation methods).
This proposal does not change anything for numpy-based libraries. Algorithmic code, when the pandas data gets converted to numpy arrays, currently works with float data and np.nan if there are missing values present. This would not change (at least initially, we would still output float with np.nan for missing values on conversion to numpy).
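For illustration, the conversion path that downstream numpy-based code relies on today, and which would be kept (at least initially):
>>> s = pd.Series([1.0, np.nan, 3.0])
>>> s.to_numpy()
array([ 1., nan,  3.])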
A change to output pd.NA
instead of np.nan
or pd.NaT
on access for the existing data types would certainly be a backwards incompatible change. Therefore, we will need to introduce this gradually and, in the beginning, optionally. But personally, I think the consistent interface is worth the breakage, even for the default data types (in the longer term, in a major new pandas version). See more below.
This is of course not backwards compatible. It can therefore only be implemented gradually. Possible steps:
On the short term (ideally for 1.0):
- Add the pd.NA scalar, and recognize it in the appropriate places as a missing value (e.g. in pd.isna). This way, it can already be used in external ExtensionArrays (a small example follows below).
- Add a BooleanArray with support for missing values and appropriate NA behaviour. To start, we can just use a numpy masked array approach (similar to the existing IntegerArray), not involving any pyarrow memory optimizations.
- Add a dedicated StringArray that uses pd.NA as missing value indicator.
- Use pd.NA as the missing value indicator for Integer/String/BooleanArray (breaking change for nullable integers).

Although a breaking change, I think it is important to start using pd.NA and BooleanArray retroactively for the experimental nullable IntegerArray, since this was only introduced recently and it is the perfect use case for it.
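For example, the very first step above would already make the following work, so that external ExtensionArrays can rely on it:
>>> pd.isna(pd.NA)
True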
On the intermediate term
On the long term (2.0?)
What is the dtype of the result, given the dtypes of the input? For example:
- What is pd.Series([pd.NA]).dtype? (with np.nan, this would be float64)
- What is the dtype of pd.Series([pd.Timestamp('2000-01-01')]) - pd.NA? (could be either Timestamp or Timedelta)
- What is the dtype of pd.Series([1], dtype='Int64') + pd.NA? (could be either int or float)

For those operations with NA where the resulting dtype is not obvious, a NADtype result could be returned:
>>> pd.Series([0, 1]) + pd.NA
Series([NA, NA], dtype=NADtype)
# or
>>> pd.Series([0, 1]) + pd.NA
Series([NA, NA], dtype=int/float)
This would mainly occur with scalar NA values, because when doing operations with arrays (even if all NA), the dtype of the result can be known from the input dtypes:
>>> pd.Series([0, 1]) + pd.Series([pd.NA, pd.NA], dtype=int)
Series([NA, NA], dtype=int)
>>> pd.Series([0, 1]) + pd.Series([pd.NA, pd.NA], dtype=float)
Series([NA, NA], dtype=float)
Different options to deal with this:
For those cases with a potentially undefined behaviour, make a choice.
This way, we still have consistent defined behaviour, although it is somewhat arbitrary. For example, we could choose to preserve the original type if possible: int + NA = int
, Timedelta + NA = Timedelta
, Timestamp - NA = Timestamp
. Or, we could choose to interpret the NA as having the same type as the other operand, if possible: with this logic, the third example would become Timestamp - NA = Timedelta
.
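A hypothetical sketch of what the "preserve the original dtype" choice could look like (not existing behaviour):
>>> pd.Series([1, 2], dtype='Int64') + pd.NA
0    NA
1    NA
dtype: Int64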
Introduce an "NA dtype".
This way, we don't need to make this arbitrary choice, and preserve the fact that the resulting dtype is actually unknown. However, this gives a new dtype that needs to be handled. See item below on that.
Have dtype-specific realizations of "NA".
Instead of a single NA singleton, we could have NA variants for each dtype (so it can carry information on its originating type), while to the user it would all still look like "NA". However, this seems a lot more work to implement (would e.g. this "NA_period" have all the methods of a Period?) and also potentially harder to code against. And we would also still need to define the behaviour for the generic NA anyway.
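A rough sketch of what such dtype-specific variants could look like (the names and structure here are invented purely for illustration):
class _TypedNA:
    def __init__(self, dtype=None):
        # optionally remember the dtype this NA originated from
        self.dtype = dtype

    def __repr__(self):
        # the user always sees a plain "NA", whatever the originating dtype
        return "NA"

NA = _TypedNA()                  # the generic NA
NA_int64 = _TypedNA("Int64")     # an NA that knows it came from integer data
NA_period = _TypedNA("Period")   # would this one need all the methods of a Period?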
An NA dtype?
For the all-NA cases with undefined dtype, an "NA dtype" could be used (see above).
Additionally, it could also be used as the dtype of pd.Series([pd.NA])
upon creation (currently, pd.Series([np.nan])
gives a float dtype, as well as pd.Series([])
, although we want to change the latter to object dtype).
For this, we could introduce an "NA dtype" (and a NAArray which only needs to store a length):
>>> pd.Series([pd.NA, pd.NA])
Series([NA, NA], dtype=NADtype)
In element-wise operations it would propagate NAs, and it could be upcast to any other dtype (e.g. in concat).
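For example, concatenation could then upcast the NA dtype to the other dtype involved (hypothetical behaviour):
>>> pd.concat([pd.Series([pd.NA, pd.NA]), pd.Series([1, 2], dtype='Int64')])
0    NA
1    NA
0     1
1     2
dtype: Int64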
Alternatively, we could also choose a default, existing dtype to store all-NA arrays, for example object dtype.
Some open questions: how should it behave in dtype-specific methods (e.g. the .dt or .str methods)? Should those work (as if it were all NA of that dtype), or raise an error?
In numerical operations, NA propagates (see also above). But for boolean operations the situation is less clear. Currently, we use the behaviour of np.nan
for missing values in pandas. This means:
>>> np.nan == 1
False
>>> np.nan < 1
False
>>> np.nan != 1
True
However, a missing value could also propagate:
>>> pd.NA == 1
NA
>>> pd.NA < 1
NA
>>> pd.NA != 1
NA
This is for example what Julia and R do.
If we propagate NA in comparison operations (see above), the consequence is that you end up with boolean masks with missing values. This means that we need to support a boolean dtype with NA support, and define the behaviour in logical operations and indexing.
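A hypothetical example of the situation that would then need to be defined (using an assumed nullable "boolean" dtype for the mask):
>>> s = pd.Series([1, 2, 3])
>>> mask = pd.Series([True, False, pd.NA], dtype="boolean")
>>> s[mask]  # should the NA entry be treated as False (row dropped), or raise an error?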
Logical operations (e.g. True & NA)
Currently, the logical operations are not very consistently defined. On Series/DataFrame, the result is mostly False, and for scalars it is not defined:
>>> pd.Series([True, False, np.nan]) & True
0 True
1 False
2 False
dtype: bool
>>> pd.Series([True, False, np.nan]) | True
0 True
1 True
2 False
dtype: bool
>>> np.nan & True
TypeError: unsupported operand type(s) for &: 'float' and 'bool'
For those logical operations, Julia, R and SQL choose "three-valued logic" (only propagating missing values when it is logically required). See https://docs.julialang.org/en/v1/manual/missing/index.html for a good explanation. This would give:
>>> pd.Series([True, False, pd.NA]) & True
0 True
1 False
2 NA
dtype: bool
>>> pd.Series([True, False, pd.NA]) | True
0 True
1 True
2 True
dtype: bool
>>> pd.NA & True
NA
>>> pd.NA & False
False
>>> pd.NA | True
True
>>> pd.NA | False
NA
Sidenote: we regard None as a missing value, which means that the behaviour of None inside a Series differs from the scalar behaviour of None (another reason not to use None as missing value indicator).
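A small illustration of that difference (with the current default behaviour):
>>> None == None  # as a Python scalar, None equals itself
True
>>> pd.Series([1, None]).isna()  # inside a Series, None is treated as a missing value
0    False
1     True
dtype: bool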