paperclip-core breakage

The following example fails coherence checking with the new solver due to an interaction between two separate design decisions.

struct W<T: ?Sized>(T);
trait NoImpl {}
trait Unconstrained {
    type Assoc;
}
impl<T: Unconstrained<Assoc = U>, U: NoImpl> Unconstrained for W<T> {
    type Assoc = U;
}

trait Overlap {}
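// For these two impls not to overlap, coherence has to prove that
// W<_>: Unconstrained<Assoc = ()> can never hold.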
impl<T: Unconstrained<Assoc = ()>> Overlap for T {}
impl<U> Overlap for W<U> {}

Nested goals are evaluated eagerly

In the new solver, evaluating a goal does not return nested goals. We instead handle them eagerly and recompute the whole goal later if any nested goal remains ambiguous.

This is different from the old implementation, where evaluation selects a unique candidate and then pushes all of its nested obligations to the caller.
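
As a rough, purely illustrative sketch of this difference; none of the type or function names below are rustc's, they only model the shape of the two approaches:

struct Goal;

enum Certainty {
    Yes,
    Maybe,
}

// Old solver: select a unique candidate, then push its nested obligations
// back to the caller, which has to prove them separately later on.
fn evaluate_old(goal: &Goal) -> (Certainty, Vec<Goal>) {
    let nested = nested_goals_of_selected_candidate(goal);
    (Certainty::Yes, nested)
}

// New solver: prove the nested goals eagerly as part of evaluation. If any
// of them remains ambiguous, the whole goal ends up Maybe and gets
// recomputed later; the nested goals themselves are never returned.
fn evaluate_new(goal: &Goal) -> Certainty {
    let mut certainty = Certainty::Yes;
    for nested in nested_goals_of_selected_candidate(goal) {
        if let Certainty::Maybe = evaluate_new(&nested) {
            certainty = Certainty::Maybe;
        }
    }
    certainty
}

// Placeholder for matching an impl header and collecting its where-clauses.
fn nested_goals_of_selected_candidate(_goal: &Goal) -> Vec<Goal> {
    Vec::new()
}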

normalizes-to is a function

When proving that an associated item normalizes to a given term, we do so in two steps. We first normalize the associated item, returning the normalized-to term, and then equate that term with the expected term.

This is implemented by always replacing the expected term with an unconstrained inference variable when evaluating a NormalizesTo goal: source.
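
A minimal sketch of this two-step structure, again using made-up placeholder types rather than the actual solver API:

// Placeholder model: Term, Solver, and the methods below are not rustc APIs.
#[derive(Clone)]
enum Term {
    Rigid(&'static str),
    Infer(usize),
}

struct Solver {
    next_infer_var: usize,
}

impl Solver {
    fn fresh_term_var(&mut self) -> Term {
        let var = Term::Infer(self.next_infer_var);
        self.next_infer_var += 1;
        var
    }

    fn prove_projection(&mut self, alias: &'static str, expected: Term) {
        // Step 1: normalize the alias to a fresh, unconstrained inference
        // variable, so the expected term cannot influence normalization.
        let unconstrained = self.fresh_term_var();
        self.prove_normalizes_to(alias, unconstrained.clone());
        // Step 2: equate the normalized-to term with the expected term.
        self.equate(unconstrained, expected);
    }

    // Placeholder for evaluating the NormalizesTo goal in its own query.
    fn prove_normalizes_to(&mut self, _alias: &'static str, _term: Term) {}

    // Placeholder for unifying the normalized-to term with the expected term.
    fn equate(&mut self, _lhs: Term, _rhs: Term) {}
}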

A walkthrough of the concrete regression

The old solver errors when proving W<?0>: Unconstrained<Assoc = ()>, while the new solver stalls with ambiguity.

In the old solver, we prove Projection(<W<?0> as Unconstrained>::Assoc, ()). This first normalizes <W<?0> as Unconstrained>::Assoc to ?1, returning the nested goals [Projection(<?0 as Unconstrained>::Assoc, ?1), Trait(?1: NoImpl)]. We then equate ?1 with () and attempt to prove the nested goals. Proving (): NoImpl then errors.

In the new solver, we prove Projection(<W<?0> as Unconstrained>::Assoc, ()). This first normalizes <W<?0> as Unconstrained>::Assoc to ?1. Normalization tries to eagerly prove [Projection(<?0 as Unconstrained>::Assoc, ?1), Trait(?1: NoImpl)]. These goals are still ambiguous at this point. Normalization therefore returns ?1 as the normalized-to term and discards the nested goals, forcing the certainty to Maybe. We then unify ?1 with (). However, we never get to prove ?1: NoImpl, as reattempting the normalization still does not use the expected term. As a simplified proof tree:

  • Projection(<W<?0> as Unconstrained>::Assoc, ())
    • NormalizesTo(<W<?0> as Unconstrained>::Assoc, ?1) replace expected term with unconstrained infer
      • Trait(?1: NoImpl) ~> ambig
    • Eq(?1, ())
    • NormalizesTo(<W<?0> as Unconstrained>::Assoc, ?2) we would replace expected term with a new unconstrained infer ~> still ambig

Why is the new behavior desirable

Nested goals are evaluated eagerly

Always returning nested goals is clearly undesirable. We need to evaluate nested goals during selection anyway if there are multiple candidates whose impl header matches, and having to reevaluate these nested goals later on causes duplicate work. This is what happens in the old solver.

Returning nested goals likely also causes issues if we ever decide to get better at merging candidates. This is currently not an issue as we only prefer trivial candidates over others and never merge them.

If we already have to evaluate nested goals sometimes, it seems easier to just attempt to do so whenever possible. We previously decided to not return nested goals at all, reducing complexity by only supporting eager evaluation.

normalizes-to is a function

This improves performance by allowing us to cache the normalization of each alias once and reuse it for every expected term.

More importantly, it's a very easy way to prevent the expected term from impacting normalization, cc https://github.com/rust-lang/trait-system-refactor-initiative/issues/22

Can users avoid this regression

My impression is that the above pattern cannot be expressed with the new solver. This breakage is a fundamental restriction on the kinds of patterns which can be expressed: if we have a where-bound T: Trait<Assoc = U>, there is no way to have the expected associated type interact with the nested goals of the normalization/trait goal. I am unable to prove it, but I believe there is no good way to fix paperclip-core without removing one of the impls, as sketched below.
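
For illustration, one such change, removing the blanket Overlap impl for W<U>, makes the example from above pass coherence again; whether an analogous restructuring is acceptable for paperclip-core itself is a separate question:

struct W<T: ?Sized>(T);
trait NoImpl {}
trait Unconstrained {
    type Assoc;
}
impl<T: Unconstrained<Assoc = U>, U: NoImpl> Unconstrained for W<T> {
    type Assoc = U;
}

trait Overlap {}
impl<T: Unconstrained<Assoc = ()>> Overlap for T {}
// The blanket impl<U> Overlap for W<U> is gone, so coherence no longer
// has to prove that the two Overlap impls are disjoint for W<_>.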

Can we avoid this issue in the new solver

Pushing knowledge about the expected term into normalizes-to

The idea here would be that when creating a new inference variable for U in <?t as Unconstrained>::Assoc = ?u, we put <?t as Unconstrained>::Assoc normalizes-to ?u into the environment, provided that ?u is created inside of a query and not constrained by one of its inputs. We discard this info if any of the inference variables involved in the alias get constrained. We add this info to the QueryInput of all queries whose goal references all of the involved inference variables.

With this, Projection(<W<?0> as Unconstrained>::Assoc, ()) would compute NormalizesTo(<W<?0> as Unconstrained>::Assoc, ?1), which adds <?0 as Unconstrained>::Assoc normalizes-to ?1 to the environment. We then unify ?1 with (). When recomputing NormalizesTo(<W<?0> as Unconstrained>::Assoc, ?1), we now have <?0 as Unconstrained>::Assoc normalizes-to () in the environment, which we keep around as only the expected term, but not the alias, has changed. When we now try to normalize <?0 as Unconstrained>::Assoc inside of this nested goal, we immediately return () instead of using a nested AliasRelate goal.

This is non-trivial, likely has issues with region uniquification and probably has a significant perf impact.
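
A toy model of the environment lookup this would enable during normalization; none of these types exist in the compiler, they only illustrate how a recorded entry could short-circuit normalizing the alias:

use std::collections::HashMap;

// Toy placeholders: the key would be the (uniquified) alias and the value
// the recorded normalizes-to term, e.g. <?0 as Unconstrained>::Assoc -> ().
#[derive(Clone, PartialEq, Eq, Hash)]
struct Alias(&'static str);

#[derive(Clone)]
enum Term {
    Unit,
    Infer(usize),
}

struct Env {
    normalizes_to: HashMap<Alias, Term>,
}

impl Env {
    fn normalize(&self, alias: &Alias) -> Option<Term> {
        // A recorded entry is used directly instead of emitting a nested
        // AliasRelate goal; without one we would fall back to ordinary
        // candidate-based normalization.
        self.normalizes_to.get(alias).cloned()
    }
}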

Propagate nested goals out of normalizes-to

Put the nested goals of NormalizesTo into its Response and then handle them inside of the AliasRelate call. Implemented in https://github.com/rust-lang/rust/pull/122687.
