The old solver does not support the concept of normalizes-to; it always falls back to equating the term with a rigid projection. It should be safe to keep that behavior in the new solver as well and maybe change it at some point in the future.
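As a minimal illustration (trait and function names invented here), a projection on a generic parameter has no impl to project through, so it stays rigid and is simply equated with the expected type:

```rust
trait Iterate {
    type Item;
}

// `<T as Iterate>::Item` cannot be normalized further: `T` is a generic
// parameter, so there is no impl candidate to project through. The
// projection stays rigid and is simply equated with the return type.
fn first<T: Iterate>(item: <T as Iterate>::Item) -> T::Item {
    item
}

fn main() {}
```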
One-step normalization can either assemble no candidates, leave the term unconstrained, or actually constrain it in some way. If we fail to project, we equate the term with a rigid projection. In case of ambiguity, we stall. We make inference progress if selection results in a single candidate. If there is only a single candidate, we do not evaluate nested goals during selection; the new solver behaves differently here.
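A rough sketch of the ambiguity and single-candidate cases, using a made-up `Parse` trait with two impls:

```rust
trait Parse {
    type Output;
}

struct Json;
struct Toml;

impl Parse for Json {
    type Output = u32;
}
impl Parse for Toml {
    type Output = String;
}

fn parse<P: Parse>(_input: P) -> P::Output {
    todo!()
}

fn main() {
    // While the self type is an inference variable, normalizing
    // `<?P as Parse>::Output` is ambiguous: both impls are candidates,
    // so the solver stalls.
    // let _x = parse(todo!()); // ambiguous: `P` could be `Json` or `Toml`

    // With the self type known, exactly one impl candidate applies and
    // normalization constrains the output type to `u32`.
    let _n: u32 = parse(Json);
}
```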
If selection succeeds ignoring nested goals, but fails once we evaluate them, this means that there was a single impl candidate and no candidates from the ParamEnv. This case would error in the old solver and result in a 'rigid projection' in the new one. Given that such a projection is not WF, we error either way.
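A hedged sketch of that situation with invented names; the commented-out function is the one that would be rejected:

```rust
trait Encode {
    type Wire;
}

struct Packet<T>(T);

// The only candidate. During selection its nested goal `T: Copy` is not
// evaluated by the old solver; it only surfaces once we check the nested
// goals of the chosen impl.
impl<T: Copy> Encode for Packet<T> {
    type Wire = Vec<T>;
}

// `<Packet<String> as Encode>::Wire` has a single impl candidate and no
// ParamEnv candidates, but the nested goal `String: Copy` does not hold,
// so the projection is not well-formed and is rejected either way.
// fn send(_wire: <Packet<String> as Encode>::Wire) {}

fn main() {}
```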
AMBIG with constraints -> ERROR: we do not want to emit an error like "failed to normalize <partially inferred assoc ty> to <rigid projection>"; this should only happen if the projection is not well-formed.
The `Projection` unconstrained-term hack: how can the parent goal differentiate between "normalizes to itself" and "failed to normalize"? One should result in overflow, the other one should not.
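As a concrete, made-up example of the "normalizes to itself" case, the impl below is expected to be rejected with an overflow-style error, whereas a projection that merely fails to normalize (e.g. `<T as Fixpoint>::Next` for a generic `T`) should just stay rigid:

```rust
trait Fixpoint {
    type Next;
}

struct Repeat;

// Normalizing `<Repeat as Fixpoint>::Next` yields `<Repeat as Fixpoint>::Next`
// again, so repeated normalization never makes progress. This is the case
// that should surface as overflow, unlike a projection that simply has no
// applicable candidate and therefore stays rigid.
impl Fixpoint for Repeat {
    type Next = <Repeat as Fixpoint>::Next;
}

fn main() {}
```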
Idea: a normalizes-to goal which only expects an infer var as its term.
With `Projection`, it should be possible to use `structurally_normalize_ty(lhs) == structurally_normalize_ty(rhs)`. This has good perf :3
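A self-contained toy model of those two ideas (not rustc internals; all types and helpers below are invented): normalizes-to only ever fills in a fresh, unconstrained term, and `Projection` is phrased as "structurally normalize both sides, then equate":

```rust
/// Toy stand-in for a type: either something rigid or an unnormalized alias.
#[derive(Debug, PartialEq)]
enum Ty {
    Rigid(&'static str),
    Alias(&'static str),
}

/// Stand-in for a normalizes-to goal whose term is always a fresh infer var:
/// it either produces the normalized type or reports "no progress", without
/// ever having to compare against a caller-supplied expected term.
fn normalizes_to(alias: &'static str) -> Option<Ty> {
    match alias {
        // Pretend this alias has a single applicable impl candidate.
        "<Json as Parse>::Output" => Some(Ty::Rigid("u32")),
        // No candidate: the caller decides to treat the alias as rigid.
        _ => None,
    }
}

/// Stand-in for `structurally_normalize_ty`: normalize one step if possible,
/// otherwise fall back to the rigid alias.
fn structurally_normalize_ty(ty: Ty) -> Ty {
    match ty {
        Ty::Alias(name) => normalizes_to(name).unwrap_or(Ty::Rigid(name)),
        other => other,
    }
}

/// `Projection(lhs, rhs)` phrased as in the note above: normalize both sides
/// structurally, then check that the results agree.
fn projection_holds(lhs: Ty, rhs: Ty) -> bool {
    structurally_normalize_ty(lhs) == structurally_normalize_ty(rhs)
}

fn main() {
    // The alias normalizes via its single impl candidate.
    assert!(projection_holds(
        Ty::Alias("<Json as Parse>::Output"),
        Ty::Rigid("u32"),
    ));
    // Neither side normalizes; both stay rigid and still compare equal.
    assert!(projection_holds(
        Ty::Alias("<T as Parse>::Output"),
        Ty::Alias("<T as Parse>::Output"),
    ));
}
```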