ITE meeting agenda

  • Meeting date: 2024-01-05

Attendance

  • People: TC, nikomatsakis, CE, tmandry

Meeting roles

  • Minutes: TC

"May define implies must define" and trait solving

At the October T-types meetup, as part of the proposal for Mini-TAIT, the types team proposed a "may define implies must define" restriction.

The purpose and effect of this restriction is two-fold.

First, and principally, it was designed to avoid changes in inference when moving from the old solver to the new solver.

Second, the restriction avoids producing cycle errors on the old solver.

At the meetup, it seemed to be assumed that we could likely later lift this restriction after the new trait solver lands.

We need to work out how likely or not that is, as it may affect various design decisions for TAIT.

New trait solver will cause inference differences in RPIT

One thing that was maybe not understood at the time of the T-types meetup was that moving to the new solver will involve breaking changes to type inference with stable RPIT. Consider:

// edition:2021
// revisions: new old
// [new]compile-flags: -Znext-solver
// [new]check-pass
// [old]known-bug: unknown

fn test(n: bool) -> impl Sized {
    let true = n else { panic!() };
    // The new trait solver treats this as a defining use while the
    // old one does not.  This results in the concrete type being
    // witnessed here in the new solver, which results in the inherent
    // impl being selected rather than the trait impl.
    let _: WShow = W(test(!n)).show(); //~ New solver behavior.
    // The old solver thinks the type is `OnWShow`.
}

struct W<T>(T);
struct WShow;
impl W<()> {
    pub fn show(&self) -> WShow {
        WShow
    }
}

struct OnWShow;
trait OnW {
    fn show(&self) -> OnWShow {
        OnWShow
    }
}
impl<T> OnW for W<T> {}

fn main() {}

Godbolt link

Niko: Alternate RPIT example

// edition:2021
// revisions: new old
// [new]compile-flags: -Znext-solver
// [new]check-pass
// [old]known-bug: unknown

fn other_fn(n: bool) {
    let _: WShow = W(test(!n)).show();
    // ERROR^ `show` resolved to `OnWShow`
}

fn test(n: bool) -> impl Sized {
    let true = n else { panic!() };
    // The new trait solver treats this as a defining use while the
    // old one does not.  This results in the concrete type being
    // witnessed here in the new solver, which results in the inherent
    // impl being selected rather than the trait impl.
    let _: WShow = W(test(!n)).show(); //~ New solver behavior.
    // The old solver thinks the type is `OnWShow`.
}

struct W<T>(T);
struct WShow;
impl W<()> {
    pub fn show(&self) -> WShow {
        WShow
    }
}

struct OnWShow;
trait OnW {
    fn show(&self) -> OnWShow {
        OnWShow
    }
}
impl<T> OnW for W<T> {}

fn main() {}

Test suite

To work through and think through the hard cases, I've assembled a draft test suite:

https://github.com/rust-lang/rust/pull/118717

(I need to rebase it and re-bless the tests, as well as add some comments)

It contains a few dozen tests like the one above, exercising both the old trait solver and the new one and comparing their behavior against what is proposed as the correct behavior.

Hard case: the id2 pattern

Consider this pattern:

// edition:2021
// revisions: new old
// [new]compile-flags: -Znext-solver
// known-bug: unknown

fn id2<T>(_: T, x: T) -> T {
    x
}

fn test(n: bool) -> impl OnI {
    let true = n else { panic!() };
    // The question is, "does this return the opaque type or the
    // concrete type, and does that matter?"
    let _: IShow = id2(test(!n), I).show();
    I
}

struct I;
struct IShow;
impl I {
    pub fn show(&self) -> IShow {
        IShow
    }
}

struct OnIShow;
trait OnI {
    fn show(&self) -> OnIShow {
        OnIShow
    }
}
impl OnI for I {}

fn main() {}

Godbolt link

As implemented today, in both the new and the old trait solver, whether the opaque type is the left or the right argument to id2 matters in terms of whether the opaque type gets returned. But thanks to lazy normalization, in the new trait solver it doesn't have to matter: under the new solver, the concrete type will be witnessed either way (though that is not yet implemented).

Hard case: revealing auto traits

Consider:

// edition:2021
// revisions: new old
// [new]compile-flags: -Znext-solver
// [new]check-pass
// [old]known-bug: unknown

fn test(n: bool) -> impl Sized {
    let true = n else { panic!() };
    // We need to witness the auto traits of the hidden type here.
    let _: OnWSendShow = (&&W(test(!n))).show();
}

struct W<T>(T);

struct OnWShow;
trait OnW {
    fn show(&self) -> OnWShow {
        OnWShow
    }
}

struct OnWSendShow;
trait OnWSend {
    fn show(&self) -> OnWSendShow {
        OnWSendShow
    }
}

impl<T> OnW for W<T> {}
impl<T: Send> OnWSend for &W<T> {}

fn main() {}

Godbolt link

This is an example of where spurious cycle errors happen in the old solver (even with RPIT) but where the new solver can work things out.

Effect of "may define implies must define"

The "may define implies must define" rule means that when, during type inference, we see the opaque type, we know immediately whether we can use that type concretely, because:

  • If we're in a scope that's allowed to define, then we know that the hidden type will be registered somewhere in this function.
  • Otherwise, we know that it won't be, so we must treat it opaquely.
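
For concreteness, here is a minimal sketch of how the rule reads for a TAIT, assuming the nightly type_alias_impl_trait feature; the precise definition of "allowed to define" is exactly what is under discussion here, and the exact diagnostics vary by nightly. The second function is rejected under the rule, which is the point of the sketch:

#![feature(type_alias_impl_trait)]

type Tait = impl Sized;

// Allowed to define the hidden type of `Tait`, and it does so: the
// body registers `Tait := u32`, satisfying the rule.
fn defines() -> Tait {
    0u32
}

// Also allowed to define `Tait`, but it never constrains it. Under
// "may define implies must define" this is an error outright, rather
// than treating `Tait` opaquely here, so inference never has to guess
// which way to go.
fn may_define_but_does_not(x: Tait) {
    let _ = &x;
}

fn main() {}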

To lift the restriction, we need to consider:

type Foo = impl Trait;

fn test(x: Foo) {
    // We may need to know whether to treat `Foo` concretely here.
    let _ = (&&W(x)).show();
    .. //~ But we don't know yet whether it will be defined here.
}

The question is, can we craft a set of rules for type inference that allow this to work reasonably in the context of a new more complete trait solver with lazy normalization?

"Must define before use"

One way we could perhaps do that (or save space to do that) is with a "must define before use" rule.

https://github.com/rust-lang/rust/issues/117866

If the body of an item that may define the hidden type of some opaque does define that hidden type, it must do so syntactically before using the opaque type in a non-defining way.

pub type Tait = impl Trait + Copy;

// This item is allowed to define the hidden type of `Tait`.
pub fn foo(x: Tait) {
    Tait::show(); // This is a non-defining use.
    identity::<X>(x).method(); // This is a defining use.
    //~^ ERROR cannot define hidden type after non-defining use of opaque type
    x.method();
}

The idea is that we'd enforce that defining uses, if they exist, must precede non-defining uses.

TC: Do we need this to save space, e.g. for doing passthrough later with ATPIT?

CE: I'm not worried about it. I think we can do it anyway.

Consensus: We won't do this.

Discussion

Various examples

Niko:

impl SomeThing for MyType {
    type Foo = impl Display;
    
    #[defines(Self::Foo)] // niko really doesn't want to write this
    fn get(&self) -> Vec<Self::Foo> {
        
    }
}

Niko:

impl SomeThing for MyType {
    // what I (maybe?) really want? --niko
    fn get(&self) -> Vec<impl Display> {
        
    }
}

TC: impl Trait Everywhere:

//type CatSayReturn = impl Future<Output = ()>;

#[derive(Default)]
struct Cat {
    fut: Option<impl Future<Output = ()>>,
}

impl Say for Cat {
    fn say(&mut self) -> RefFuture {
        let fut: CatSayReturn = async {
            println!("meow");
        };
        RefFuture::new(fut, &mut self.fut)
    }
}

Niko:

//type CatSayReturn = impl Future<Output = ()>;

#[derive(Default)]
struct Cat {
    fut: Option<impl Future<Output = ()>>,
}

impl Say for Cat {
    #[defines(Cat)]
    fn say(&mut self) -> RefFuture {
        let fut: CatSayReturn = async {
            println!("meow");
        };
        RefFuture::new(fut, &mut self.fut)
    }
}

Niko:

trait MyIterator {
    type Iter: Iterator<Item = u32>;
    
    fn items(&self) -> Self::Iter;
    
    fn combinator();
}

impl MyIterator for MyType {
    type Iter = impl Iterator<Item = u32>;
    
    fn items(&self) -> Self::Iter {
        
    }
    
    fn combinator() {
        ...
    }
}

Niko:

trait MyIntoIterator {
    type Iter: Iterator<Item = Self::Item>;
    type Item;
    
    // can this define Self::Item?
    fn items(&self) -> Self::Iter;
    
    fn combinator();
}

impl MyIntoIterator for MyType {
    type Iter = vec::IntoIter<Self::Item>;
    type Item = impl Debug;
    
    fn items(&self) -> Self::Iter {
        
    }
    
    // ..also this..
    // -- yes, this would work --niko
    fn items(&self) -> vec::IntoIter<Self::Item> {
    }
    
    // ..what about..
    fn items(&self) -> vec::IntoIter<impl Debug> {
        // this would not work, nothing constrains `Self::Item` --niko
    }
    
    fn combinator() {
        ...
    }
}

type Tait = impl Debug;
impl MyIntoIterator for MyType {
    type Iter = vec::IntoIter<Self::Item>;
    type Item = Tait;
    
    #[defines(Tait)]
    fn items(&self) -> vec::IntoIter<Tait> {
        // this would work
    }

    #[defines(Tait)]
    fn items(&self) -> vec::IntoIter<()> {
        // this would not actually work unless we...did stuff to make it work
        //
        // but the caller would not be able to see that it's `()` anyway, so that's kind of consistent; it's a refinement
    }

    fn combinator() {
        ...
    }
}

trait TwoItems {
    type Item1;
    type Item2;

    // can this define Self::Item2?
    fn item1(&self) -> Self::Item1;
    fn item2(&self) -> Self::Item2;

    fn combinator();
}

impl TwoItems for MyType {
    type Item1 = Vec<Self::Item2>;
    type Item2 = impl Debug;

    fn item1(&self) -> Vec<Self::Item2> {
        vec![()] // defines Self::Item2
    }

    fn item2(&self) -> () {
        ()
    }
}

Niko:

trait Foo {
    type FooItem;
    
    fn define() -> Self::FooItem;
}

trait Bar: Foo {
    type BarItem;
}

impl<T: Foo + ?Sized> Bar for T {
    type BarItem = T::FooItem;
}

impl Foo for Something {
    type FooItem = impl Debug;
    
    fn define() -> <Self as Bar>::BarItem {
        // we are not smart enough!
    }
}

Niko: Design axioms

We believe that

  • impl Trait syntax (outside of type aliases) is for convenience and should have minimal overhead to use.
  • It should be easy to tell whether a function is a defining use of a TAIT (or ATPIT, etc.) or not. This affects things like method dispatch in edge cases. You shouldn't have to traverse other definitions or module-level items to figure it out.
  • Behavior of impl Trait should be very crisp: if it is a defining use, it should act exactly like an inference variable (existential quantification). If it is NOT a defining use, it should act like a generic type (universal quantification). (A rough model is sketched after this list.)
  • Module-level items are generally independent from one another and links between them should be explicit.
  • impl Trait is referenced by "name" (i.e., you can't expect to copy/paste impl Trait, you have to give it a name).
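
As a rough model of the third axiom (plain stable Rust; the function names are ours, purely for illustration): a non-defining use should behave like a universally quantified type parameter, while a defining use should behave like an inference variable that the body pins to exactly one concrete type.

use std::fmt::Display;

// Model of a NON-defining use: the opaque acts like a generic parameter
// `T`, so only what the `Display` bound provides is available and the
// hidden type stays invisible.
fn non_defining_model<T: Display>(x: T) {
    println!("{x}");
}

// Model of a defining use: the opaque acts like an inference variable
// constrained by the body; here it resolves to exactly `u32`.
fn defining_model() -> impl Display {
    0u32
}

fn main() {
    non_defining_model(defining_model());
}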

Stabilization of ATPIT

Consensus: Let's stabilize ATPIT essentially as it is implemented today. Specifically (a sketch of the resulting pattern follows this list):

  • You can use impl Trait in the value of an associated type.
  • To find the defining uses for that:
    • Skim the impl signature as written (no normalization) to find mentions of Self::AssocType.
    • Then look at the value of each such associated type, repeating until a fixed point is reached.
    • Any impl Trait you encounter in this process may and must be defined.
  • We'll save space to have a more semantic normalization rule.
    • by normalizing and reporting an error if a discovered impl Trait opaque type doesn't otherwise appear in the signature.
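
A minimal sketch of the pattern being stabilized, assuming the nightly impl_trait_in_assoc_type feature (trait and type names are illustrative):

#![feature(impl_trait_in_assoc_type)]

use std::fmt::Debug;

trait Factory {
    type Output;
    fn make(&self) -> Self::Output;
}

struct MyFactory;

impl Factory for MyFactory {
    // impl Trait in the value of an associated type (ATPIT).
    type Output = impl Debug;

    // Skimming this signature as written finds `Self::Output`, whose
    // value is the opaque above, so this method may (and therefore
    // must) define the hidden type. It does: `Output := u32`.
    fn make(&self) -> Self::Output {
        42u32
    }
}

fn main() {
    let _ = MyFactory.make();
}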

We'll keep the other rules from the Mini-TAIT proposal, in particular:

"May define implies must define"

https://github.com/rust-lang/rust/issues/117861

At least until the new trait solver is stabilized, any item that is allowed to define the hidden type of some opaque type must define the hidden type of that opaque type.

"May not define may guide inference"

https://github.com/rust-lang/rust/issues/117865

The compiler is allowed to rely on whether or not an item is allowed to define the hidden type of an opaque type to guide inference.
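
A minimal sketch of what this allows, assuming the nightly type_alias_impl_trait feature and the module-based defining scopes discussed above (the precise scoping rules have shifted across nightlies; names are illustrative):

#![feature(type_alias_impl_trait)]

mod defining {
    pub type Tait = impl std::fmt::Display;

    // In the defining scope: registers the hidden type `Tait := u32`.
    pub fn make() -> Tait {
        0u32
    }
}

// This item is NOT allowed to define the hidden type of `Tait`, and the
// compiler may rely on that fact during inference: `Tait` is treated as
// rigid here, so only operations implied by the `Display` bound are
// considered; the inherent `u32` methods of the hidden type never are.
fn use_it(x: defining::Tait) {
    println!("{x}");
}

fn main() {
    use_it(defining::make());
}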