leak check impacting candidate selection

For a general background regarding higher ranked region solving, see https://hackmd.io/qd9Wp03cQVy06yOLnro2Kg.

This document is about the leak check. You can think of it as a "quick and dirty" approximation of the region check, which runs later. The leak check detects some kinds of errors early, essentially deciding between "this set of outlives constraints is guaranteed to result in an error eventually" and "this set of outlives constraints may be solvable".
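As a minimal sketch of the kind of error it catches early (the Leak trait here mirrors the examples further below): the only available impl is for 'static, so proving the higher-ranked goal forces the placeholder 'a to equal 'static, which the leak check rejects eagerly.

trait Leak<'a> {}
impl Leak<'static> for u32 {}

fn impls_leak<T: for<'a> Leak<'a>>() {}

fn main() {
    // Proving `for<'a> u32: Leak<'a>` with only the `'static` impl
    // requires the placeholder `'a` to equal `'static`, so this is
    // rejected.
    impls_leak::<u32>();
}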

The leak check is currently used in two places.

implicit negative overlap check

https://github.com/rust-lang/rust/blob/8b94152af68a0ed6d6af0b5ba57491e40481008e/compiler/rustc_trait_selection/src/traits/coherence.rs#L235-L238

The leak check is used at the end of coherence checking to detect any region errors. This use feels clearly acceptable to me.
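One illustration of the leak check's role in coherence (this is the standard example from the coherence_leak_check future-compatibility lint, not taken from the link above): the two impls below are only considered disjoint because equating for<'a> fn(&'a u8) with fn(&'static u8) results in a higher-ranked region error.

trait SomeTrait {}

// Accepted today, but warns via the `coherence_leak_check`
// future-compatibility lint: the impls only fail to overlap
// because of the leak check.
impl SomeTrait for for<'a> fn(&'a u8) {}
impl SomeTrait for fn(&'static u8) {}

fn main() {}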

after evaluation_probe

https://github.com/rust-lang/rust/blob/8b94152af68a0ed6d6af0b5ba57491e40481008e/compiler/rustc_trait_selection/src/traits/select/mod.rs#L607-L610

This function is used during candidate assembly for trait goals. Most notably, we use it inside of evaluate_candidate during winnowing: https://github.com/rust-lang/rust/blob/0e4243538b9119654c22dce688f8a63c81864de9/compiler/rustc_trait_selection/src/traits/select/mod.rs#L491-L502

evaluate_candidate applies a candidate to a potentially higher-ranked trait goal, so the placeholders from that higher-ranked goal are considered by its leak check.

This allows us to discard more candidates, which can

guide inference

trait Leak<'a> {}
impl Leak<'_>      for Box<u32> {}
impl Leak<'static> for Box<u16> {}

fn impl_trait<T: for<'a> Leak<'a>>() {}

fn main() {
    impl_trait::<Box<_>>();
    // The `Box<u16>` impl fails the leak check,
    // meaning that we apply the `Box<u32>` impl.
}

and

trait Trait<T, U> {}

impl<'a> Trait<&'a str, &'a str> for () {}
impl<'a> Trait<&'a str, String> for () {}

fn impls_trait<U>()
where
    for<'a> (): Trait<&'a str, U>,
{
}

fn main() {
    impls_trait::<_>();
    // Similar to the above. Applying the first impl for a
    // `for<'a> (): Trait<&'a str, ?U>` goal results in a leak
    // check failure as `?U` cannot name `'a`.
}

and

trait Trait<T> {}
impl<T> Trait<T> for T {}
impl<T> Trait<T> for &T {}

fn get<S>(_: &S)
where
    for<'a> &'a u32: Trait<S>,
{}

fn render(template: &Box<u32>) {
    get(template);
    // By using the leak check, there's only one valid
    // candidate for `for<'a> &'a u32: Trait<?S>`, constraining
    // `S` to `u32`. This then allows coercion to deref `&Box<u32>`.
}

fn main() {}

However, even in the new solver we instantiate higher-ranked goals before proving their nested goals, meaning that the following does not compile:

trait Leak<'a> {}
impl Leak<'_>      for Box<u32> {}
impl Leak<'static> for Box<u16> {}

trait RequiresLeak<'a> {}
impl<'a, T: Leak<'a>> RequiresLeak<'a> for T {}

fn impl_trait<T: for<'a> RequiresLeak<'a>>() {}

fn main() {
    impl_trait::<Box<_>>();
    //~^ ERROR type annotations needed
}

avoid the incomplete preference for ParamEnv candidates

In case there is both a ParamEnv candidate and an impl candidate, we always use the ParamEnv candidate, even if both candidates would apply. However, if the ParamEnv candidate fails the leak check, we fall back to the impl candidates.

trait Trait<'a> {}

trait OtherTrait {}
impl<'a, T: OtherTrait> Trait<'a> for T {}

fn impl_hr<T: for<'a> Trait<'a>>() {}

fn not_hr<'a, T: Trait<'a> + OtherTrait>() {
    impl_hr::<T>();
    // Using the `Trait<'a>` bound results in a universe
    // error as `'a` is not higher-ranked. We currently
    // use the `impl` candidate instead, relying on the
    // `OtherTrait` bound.
}

This can only happen if the goal has non-region inference variables, or if we have a trivial where-clause which is implied by other bounds or by an impl by itself.
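A sketch of the "implied by an impl by itself" case (hypothetical code, analogous to the example above): the T: Trait<'b> where-clause is trivially satisfied by the blanket impl, so after the ParamEnv candidate fails the leak check, falling back to that impl still succeeds.

trait Trait<'a> {}
impl<'a, T> Trait<'a> for T {}

fn impl_hr<T: for<'a> Trait<'a>>() {}

fn not_hr<'b, T: Trait<'b>>() {
    // The `T: Trait<'b>` ParamEnv candidate fails the leak check
    // for `for<'a> T: Trait<'a>`, so we fall back to the blanket
    // impl, which applies without any further bounds.
    impl_hr::<T>();
}

fn main() {}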

We do not use the leak check when considering ParamEnv candidates for Projection goals:

trait Trait<'a> {
    type Assoc;
}

trait TraitBound {}
impl<T: for<'a> Trait<'a>> TraitBound for T {}

trait ProjectionBound {}
impl<T: for<'a> Trait<'a, Assoc = usize>> ProjectionBound for T {}

impl<'a, T> Trait<'a> for T {
    type Assoc = usize;
}

fn trait_bound<T: TraitBound>() {}
fn projection_bound<T: ProjectionBound>() {}

// ok
fn satisfies_trait_bound<T: Trait<'static>>() {
    trait_bound::<T>();
    // We drop the `ParamEnv` candidate due to the leak check.
}

// higher ranked region error
fn satisfies_projection_bound<T: Trait<'static, Assoc = usize>>() {
    projection_bound::<T>();
    // We do not use the leak check when assembling
    // projection candidates from the `ParamEnv`, so we
    // prefer that candidate over the impl, resulting in an
    // error.
}

Possible final behavior here

The status quo

We could keep the status quo, using the leak check during candidate selection for Trait goals, but not for Projection/NormalizesTo goals. This is fairly straightforward to implement in the new solver and avoids any breakage. However, the leak check would still not apply to candidate selection for nested goals.

Use the leak check in candidate selection for projection goals

This has multiple issues.

We cannot use the leak check when normalizing, so the following would still break:

trait Trait<'a> {
    type Assoc;
}
impl<'a, T> Trait<'a> for T {
    type Assoc = usize;
}

fn projection_bound<T: for<'a> Trait<'a, Assoc = usize>>() {}
fn function<T: Trait<'static, Assoc = usize>>() {
    projection_bound::<T>();
    
    let _higher_ranked_norm: for<'a> fn(<T as Trait<'a>>::Assoc) = |_| ();
    // We only get to `<T as Trait<'a>>::Assoc` when `'a` is already
    // instantiated. So getting this to also use the impl due to a
    // leak check failure is pretty much impossible.
}

It adds non-trivial complexity to the new trait solver: to handle https://github.com/rust-lang/trait-system-refactor-initiative/issues/1, Projection goals are now implemented via AliasRelate. This again means that we instantiate the binder before ever normalizing any alias.

Even if we were to avoid this, we lose the ability to cache normalization by itself, ignoring the expected term. We cannot replace the term with an inference variable before instantiating the binder, as otherwise for<'a> Alias<'a> = &'a () breaks. If we only replace the term after instantiating the binder, we cannot easily evaluate the goal in a separate context, as we'd then lose the information necessary for the leak check. Adding this information to the canonical input also seems non-trivial.
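For reference, a minimal sketch (not one of the affected crates) of a bound that results in a for<'a> Alias<'a> = &'a () goal; the expected term names the bound region, which is why it cannot simply be replaced by an inference variable outside of the binder.

trait Trait<'a> {
    type Assoc;
}
impl<'a, T> Trait<'a> for T {
    type Assoc = &'a ();
}

// This bound requires proving `for<'a> <T as Trait<'a>>::Assoc = &'a ()`.
// The expected term `&'a ()` mentions the bound region `'a`, so it has
// to stay inside the binder.
fn requires_hr_projection<T: for<'a> Trait<'a, Assoc = &'a ()>>() {}

fn main() {}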

Like the status quo, this option also does not apply to candidate selection for nested goals.

Move leak check out of candidate selection

This is the current behavior of the new solver and has been implemented in https://github.com/rust-lang/rust/pull/119820. This results in 23 crater regressions. See https://github.com/rust-lang/rust/commit/2f8aa7055eae76c27e518e374773b8903aaa37ac for minimal examples of what will break. Weaker type inference can prevent coercion from occurring, potentially resulting in weird errors.

lcnr preference

I think that the inconsistent status quo, where Projection and Trait goals behave differently, should not be the final state of our trait system. I do not believe that it is possible to consistently apply the leak check during candidate selection when normalizing. Doing so would also have a significant perf impact.

While unhappy about it, I am currently slightly in favor of moving the leak check out of candidate selection, even if it breaks some crates.
