Type Alias Impl Trait defining scope design

TLDR

  • Defining scope can be explicit (defines) or implicit (parent mod + signature restriction).
  • Explicit has some advantages.
  • We don't have enough data to prove the current implicit design is right; the only way to get it is to stabilize explicit first.
  • So, the safest thing to do would be stabilizing explicit first.

Background

The core of the type alias impl Trait feature is "opaque types with cross-function type inference".

For a given TAIT, code in the crate is split into two regions: it's either in the defining scope, or outside it.

Outside the defining scope, the TAIT is treated as opaque, i.e. code can't rely on anything about the type other than that it implements the listed trait bounds (and the auto traits).

Inside the defining scope, the TAIT is not treated as an opaque type, but as an inference variable (i.e. as an "unknown" type, similar to let bindings inside functions). When typechecking the defining scope, the inference variable must be constrained to a concrete type.

Example (with syntax missing to decide which fn is inside the defining scope; just assume the first one is, and the second isn't):

type Tait = impl Sized;

fn in_defining_scope() {
    // we're taking a value of type u32 and assigning it to a variable of type Tait
    // This lets the compiler know the hidden type of `Tait` is `u32`.
    let x: Tait = 2u32;

    // The TAIT behaves "transparently" here; we're allowed to rely on the fact
    // that the hidden type is u32.
    let y: u32 = x;
}

fn not_in_defining_scope() {
    // Fails, because the TAIT behaves opaquely outside the defining scope.
    let x: Tait = 2u32;
}

Note: To avoid confusion, in this document I'll be using the term "defining scope" to mean "the set of code that is allowed to constrain the hidden type".

Some documents in the past have used "defining scope" to mean "the parent module", as in "a function must be in the defining scope and pass the signature restriction to constrain the hidden type", while this document considers the signature restriction part of what determines the defining scope.

How is defining scope determined?

We need to decide how the compiler determines whether a given piece of code (say, a function) is considered defining scope. That is, conceptually, a function like fn is_defining_scope(LocationInTheCode, Tait) -> bool.

In the design space, there are three main categories to explore:

  • Implicit: the compiler looks at clues in the code and uses some heuristic to guess.
  • Explicit: the user explicitly annotates which functions are the defining scope.
  • Implicit and explicit: the compiler uses a heuristic to guess, the user can optionally annotate explicitly if the compiler didn't guess right.

Implicit

Possible designs in the Implicit category are:

  • "any fn in the parent mod" - the design proposed in the original RFC 2071.
  • "any fn in the parent mod that mentions the TAIT in the signature (args, return value, or where clauses)"
  • "any fn in the parent mod that mentions the TAIT or a type containing the TAIT in the signature (args, return value, or where clauses)" - constraining through encapsulation, the design currently being proposed in #10745.
  • Some combination of the above (allowing encapsulation or not, allowing/disallowing args, return value, where clauses)
  • Some other heuristic with a different principle (like, not based on modules or signatures?)
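
For illustration, here's a sketch of the two signature-based variants (assuming everything below is in the TAIT's parent mod):

type Tait = impl Sized;

// Defining under "mentions the TAIT in the signature": `Tait` appears in the
// return type.
fn direct() -> Tait {
    2u32
}

struct Wrapper {
    inner: Tait,
}

// Defining only under the encapsulation variant (#10745): the signature
// mentions `Wrapper`, which contains `Tait`, but never names `Tait` itself.
fn encapsulated() -> Wrapper {
    Wrapper { inner: 2u32 }
}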

Explicit

Under the "explicit" design, the user would add explicit annotations t the code to tell the compiler what the defining scope is.

Annotations are mandatory. Each TAIT must have at least one defining scope annotation, since an empty defining scope makes no sense. TAITs with no annotations get rejected at compile time:

type Tait = impl Sized;
          // ^ ERROR: undefined `impl Trait`
          //   Note: `impl Trait` in a type alias requires marking at least one function with `#[defines(Tait)]`. 
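
With at least one annotation, the same TAIT is accepted (a sketch using the attribute syntax from the list below):

#[defines(Tait)]
fn defining() {
    let x: Tait = 2u32; // constrains the hidden type to u32
}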

Possible designs in the Explicit category are:

  • Annotations on the defining scope, specifying which TAIT it defines.
    • Attribute: #[defines(Tait)] fn foo() {..}
    • New defines clause: fn foo() defines Tait {..}
    • New kind of where clause: fn foo() where defines(Tait) {..}
    • Magic marker type or trait in the where clause: fn foo() where (): Defines<Tait> {..}, fn foo() where Tait: Defined {..}, fn foo() where IsDefined<Tait>: {..} or a variation thereof.
  • Annotations on the TAIT, specifying which code it's defined by.
    • Attribute: #[defined_by(foo)] type Tait = impl Sized;
    • New defined_by clause: type Tait = impl Sized defined_by foo;
    • New kind of where clause: type Tait = impl Sized where defined_by(foo);
    • Magic marker type or trait in the where clause.
    • Modifier on the impl keyword: type Tait = impl(in foo) Sized;

Implicit and explicit

The possible designs for Implicit and explicit are a combination of the above: pick one design for the "implicit" part and another for the "explicit" part. There are a few ways of combining them, though:

  • Override: if there's explicit annotations, the implicit heuristic is not used at all. The defining scope is what's explicitly annotated by the user.
  • Expand: The defining scope is the union of the functions matched by the heuristic and the ones explicitly annotated by the user. The annotations can only "expand" the defining scope, never shrink it.
  • Expand-and-shrink: Same as above, but there's more kinds of annotations to indicate "this function is NOT defining scope even if the heuristic says it is".

Analyzing pros and cons

First we'll analyze the pros and cons within each category, to pick the best designs from each. Then, we'll compare them all.

Implicit

The design space for implicit defining scope has been extensively discussed in earlier design meetings, so we won't analyze it again in this document.

The preferred design is "any fn in the parent mod that mentions the TAIT or a type containing the TAIT in the signature (args, return value, or where clauses)". This is the design currently being proposed in #10745. We'll take that as the chosen implicit design in the rest of this document, but most pros and cons discussed apply to all implicit designs, not just this one.

Explicit

Annotating the defining scope (defines) is better than annotating the TAIT (defined_by). The defining scope always mentions the TAIT by name in order to constrain it, so the TAIT has to be visible from there; therefore it is always possible to name it in the defines attribute/annotation. On the other hand, defined_by would add the restriction that the defining function must be visible from the TAIT:

#[defined_by(foo)] // foo is not visible here!
type Tait = impl Sized;

mod bar {
    // not pub!
    fn foo() {
        let x: Tait = 2u32; // defining use
    }
}

Out of the syntax options for defines, the ones based on where clauses should be ruled out. This includes "New kind of where clause" and "Magic marker type or trait". Reasons:

  • It doesn't make sense semantically. The where clause is a list of obligations the caller has to prove in order to call this function. Defining a TAIT has nothing to do with conditions for the caller, it is a completely different concept.
  • The where clause shows up in rustdoc, IDE tooltips, etc. However, it is of no interest to the caller of the function (especially from another crate) whether that function is defining a TAIT or not. The where defines() clauses could be hidden from rustdoc, but then it's inconsistent to have some clauses hidden and some not.
  • Naming private types in the where clauses of public functions triggers a warning. However, it should be perfectly OK to constrain a private TAIT from a public function. Again, the warning could be special-cased, but it's inconsistent.

This leaves us with two options:

  • Attribute: #[defines(Tait)] fn foo() {..}
  • New defines clause: fn foo() defines Tait {..}

The tradeoffs between them are subtle.

  • The attribute is using existing established syntax instead of inventing new one, which is arguably good for a Rust feature such as TAIT that will be used relatively rarely.
  • There's no infrastructure in the compiler for parsing and name-resolving paths in attributes; this would have to be added, and apparently the technical complexity required is higher than for a defines clause.

In the rest of the document we'll consider these as the chosen "Explicit" design, interchangeably. The semantics are the same, so the tradeoffs for/against them are the same.

Implicit and explicit

Out of the 3 options for combining "implicit and explicit", the "Override" option makes the most sense.

  • "Override" is the simplest. "Expand" adds extra complexity, "Expand and shrink" adds a ton of extra complexity since it requires two kinds of annotations.
  • Most of the time, only one defining use is needed. If the user has annotated one function with defines, it's very likely that is the only defining use they want, so making all the other code not defining scope is most likely to match the user's intent. This is what "Override" does.
  • "Expand" is not ideal because there is no way to make a function not be defining scope, which is sometimes needed (see below)

Therefore the chosen "Implicit and explicit" design is "Override".
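
A sketch of the "Override" semantics, assuming the #[defines(Tait)] attribute syntax and the #10745 heuristic:

type Tait = impl Sized;

// The heuristic alone would consider this fn defining scope (same mod, `Tait`
// in the signature). Because `Tait` has an explicit annotation elsewhere, the
// heuristic is not used: this fn is NOT defining scope, treats `Tait`
// opaquely, and causes no cycle or "MUST constrain" errors.
fn passthrough(x: Tait) -> Tait {
    x
}

#[defines(Tait)]
fn defining() {
    let x: Tait = 2u32;
}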

Implicit vs Explicit

Getting the defining scope right is essential for the TAIT feature to be ergonomic and intuitive.

A too small defining scope (i.e. the function where the user was planning to
constrain the TAIT ends up not being considered defining scope) is an obvious
problem: the user is not able to express their intent.

However, a too large defining scope (i.e. a function where the user didn't intend to
constrain the TAIT ends up being considered defining scope) is also a problem, for a few reasons:

  • It hurts performance of the compiler and IDEs: getting the hidden type requires typechecking the entire defining scope. If it is too big, it will hurt compiler parallelism, incremental compilation, or the responsiveness of an IDE to code edits.
  • Worse compiler error messages. Within the defining scope, a type mismatch and a defining use are fundamentally indistinguishable. This makes diagnostics inside the defining scope worse. The issue would be solved if the function wasn't considered defining scope. See Appendix B.
  • Cycle errors when a function in the defining scope doesn't constrain the hidden type but inspects its auto traits (a minimal sketch follows this list). See (playground). The issue would be solved if the function wasn't considered defining scope.
    • For forward-compatibility with the new solver, it has been proposed to mandate that functions in the defining scope MUST constrain the hidden type instead of MAY. This eliminates cycle errors, but replaces them with a different error, so the user still can't write the code they wanted.
    • It's been brought up that when the new solver lands, the "MUST constrain" restriction can be lifted. However, will the new solver throw cycle errors too, or can it make all such code Just Work? How sure are we about that to bet the TAIT design on it?
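
A minimal sketch of the cycle-error shape from the last bullet:

type Tait = impl Sized;

fn defining() -> Tait {
    2u32
}

// In the defining scope (same mod, `Tait` in the signature), but it never
// constrains the hidden type. Checking `Tait: Send` requires knowing the
// hidden type, which requires typechecking this very function first: a cycle.
fn inspects_auto_traits(x: Tait) {
    fn requires_send<T: Send>(_: T) {}
    requires_send(x);
}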

Therefore, it's essential that the mechanism used to decide whether a function is in the defining scope or not matches the user's intent as closely as possible. Every time a function the user intended to be defining scope ends up not being so, or vice versa, we incur the above downsides.

This brings us to the main advantage of Explicit over Implicit: Explicit always matches user intent, by definition, because the user explicitly tells the compiler what their intent is. The above issues never happen.

With implicit defining scope, the above issues do happen. Designing a heuristic has to walk the line between being too liberal, causing issues due to a too-large defining scope, and being too restrictive, causing issues due to a too-small defining scope. It is unclear whether a heuristic that works for 80%+ of cases exists; the currently proposed one arguably doesn't.

Note that in many cases, the issues caused by defining scope intent mismatch can be worked around by the user, by reshaping the code to fit the expectations of the heuristic. See Appendix A for a breakdown of issues with the currently proposed rules and their workarounds. This means we're not giving up expressiveness by going the "Implicit" route: most use cases of TAIT can indeed be expressed after applying workarounds. However, the fact that workarounds are needed is in and of itself a downside, independently of how onerous/invasive they are. The design goal of a language feature is not to be merely "expressive"; it is to be intuitive, simple, ergonomic, learnable.

Advantages of Implicit:

  • Less typing required.
  • Simpler syntax. No new syntax is required beyond impl Trait.
  • Syntax is more similar to dyn Trait. (Note that with the current rules, you can't convert most code from dyn Trait to impl Trait without workarounds, so similarities end at the syntax.)
  • Already implemented, so would allow stabilizing earlier.

Advantages of Explicit:

  • Always matches user intent, by definition, as discussed above.

  • Simpler semantics, easier to teach. Just compare the amount of words required to explain both.

    Implicit:

    The hidden type may be constrained only within the scope of the item (e.g. module) in which it was introduced, and within any sub-scopes thereof, except that:

    • Functions and methods must have the hidden type that they intend to constrain, or a type that (possibly transitively) contains the hidden type, within their signature – within the type of their return value, within the type of one or more of their arguments, or within a type in a bound, or
    • Nested functions may not constrain a hidden type from an outer scope unless the outer function also includes the hidden type in its signature.

    Explicit:

    The hidden type may be constrained only within items annotated with #[defines(MyTait)]. Each TAIT must have at least one #[defines(MyTait)], and it can appear anywhere in the crate.

  • Semantics of code don't change if you move it to another module.

    Currently in Rust you can always move some piece of code to a different mod, and the type system doesn't care. All code within a crate is the same for the type system.

    There's precedent for the type system caring about which crate things are in:

    • You can write inherent impls only for types from the current crate.
    • Coherence: you can write trait impls only if the type or the trait are from the current crate.

    However, there is nothing in today's Rust type system that cares about which mod things are in. There's no precedent for this. TAIT would be the first feature that does this.

    (Yes, there is visibility, but that only affects whether you can name something or not. It is independent of the type system.)

    With #[defines] we can just say "must be in the same crate", just like with impls or coherence.
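
    For example, under the implicit rules (sketch):

    mod tait_mod {
        pub type Tait = impl Sized;

        // Defining scope: same mod as `Tait`, with `Tait` in the signature.
        pub fn defining() -> Tait {
            2u32
        }
    }

    mod elsewhere {
        // The exact same function, moved here, is no longer in `Tait`'s parent
        // mod, so it stops being defining scope and the crate stops compiling.
        //
        // pub fn defining() -> crate::tait_mod::Tait { 2u32 } // ERROR
    }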

  • It lets the user keep their preferred module structure.

    Modules are used to control code organization, according to whatever criteria the programmer finds most appropriate: for example, by feature, or by abstraction layer.

    Implicit TAIT requires the user to move things between modules to get the heuristic's guess of defining scope to match their intent. This includes moving a TAIT or functions up/down the hierarchy to make their defining scope bigger/smaller, thus forcing the user to place things in modules where they shouldn't be according to the user's preferred module structure. (See Appendix A for examples)

    With explicit #[defines] the user can directly control the defining scope, and is free to place everything in the right module that would make most sense.

  • It preserves locality of reasoning, code searchability.

    With explicit defines there's an explicit link between a TAIT and its defining function. This means:

    • When reading the code for the function, the defines in the signature gives a visual indicator that the TAIT is treated specially in the function.
    • To know where a TAIT is defined, searching the crate for its name will directly lead you to the defining function.

    With Implicit, there are many more factors influencing whether a given function is defining. It's no longer possible to tell whether a function is defining by looking at it, or to find the defining scope by searching for the TAIT's name. Especially due to the "signature must contain a type that transitively contains the TAIT" rule, you now need to recursively search through structs and enums to answer these questions. These structs and enums can be anywhere in the crate.
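
    For example, with the encapsulation rule, a function can be defining without mentioning the TAIT at all (sketch):

    type Tait = impl Sized;

    struct Inner {
        x: Tait,
    }

    struct Outer {
        inner: Inner,
    }

    fn defining() -> Tait {
        2u32
    }

    // Also defining scope under the encapsulation rule: `Outer` transitively
    // contains `Tait`, even though the name `Tait` appears nowhere here.
    // Searching the crate for "Tait" will not find this function.
    fn hidden_definer(o: Outer) {}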

"Implicit and Explicit, Override" vs just Implicit or Explicit

Implementing both implicit rules and an explicit override has the potential to be the best of both worlds. It keeps the advantages of Implicit (less typing, simpler syntax, the Just Works experience when it does work), and mitigates the downsides because the user always has the escape hatch of explicitly overriding when it does fail.

The only downside is complexity, really. The complexity is the "sum" of Implicit and Explicit, which makes it more complex than either. Users will have to learn both ways, they will encounter code in the wild written in both ways.

Forward compatibility

We don't have to decide the full final design now. We can stabilize part of the design now, and later extend it. Let's explore which designs are forward-compatible with which future extensions.

As a general rule:

  • Any change that shrinks the defining scope is obviously breaking.
  • Any change that grows the defining scope is breaking, due to the current "if defining, MUST constrain" rules.
    • This would stop being the case after landing the new solver, lifting the "MUST constrain" restriction, and only if the new solver is guaranteed to never throw any cycle errors.

After stabilizing any implicit rules, there's little wiggle room to adjust the heuristic. All cases where the heuristic emits a "defining" or "not defining" verdict are unchangeable. Only the cases where the heuristic emits "compilation error" can be later changed to compile and be either defining or not. Currently some cases are reserved as "compilation error" for this reason, mostly code that could result in a cycle error.

After stabilizing explicit defines, there's no wiggle room to adjust its semantics. Functions marked defines are defining, functions not marked aren't, and that's it. We could add new defines syntax variants to express new things, but I see little need for this.

"Implicit" is forward-compatible with "Implicit and Explicit": the explicit defines is new syntax that old code couldn't be using. This is the case for all 3 variants of "Implicit and Explicit" (Override, Expand, Expand-and-Shrink).

"Explicit" is forward-compatible with "Implicit and Explicit, Override variant":

  • "Explicit" specifies TAITs must have at least one annotation. TAITs with no explicit defines annotations are always rejected at compile time.
  • "Implicit and Explicit, Override variant" specifies the implicit heuristic is only used for TAITs with no explicit defines annotations. It does not affect TAITs with annotations.
  • Therefore, the change can't affect the behavior of any existing TAIT in previously-compiling code.

("Explicit" is, however, not forward-compatible with the other (worse, discarded above) "Implicit and Explicit" variants (Expand, and Expand-and-Shrink), since the change might cause the defining scope for an existing TAIT to grow, causing breakage)

Data

We have a few alternatives (implicit, explicit, both), each with their pros and cons. How do we decide? We try to collect data from which to make objective measurements.

The "Implicit" design has been implemented in Nightly. It had only the "parent mod" restriction for years, and since July 2023 it has had "parent mod + signature restriction" in place (for 4 months at the time of writing). This is enough time for people out there to start using the feature (only in projects using nightly, of course).

So, one question we can ask is: for what percentage of TAIT use cases do the current rules work?

Let's search GitHub for code using TAIT:

Seven projects were found using TAIT for which the current rules work, with no signs of workarounds:

Two projects showed signs of having to do workarounds because the TAIT implicit heuristic didn't match the author's intent:

It seems it works for more cases than not. However, here's a collection of links to chat logs or issues or projects where the author tried to use TAIT and ran into issues with the heuristic:

It is hard to draw conclusions from this data. First, the sample size is small. Second, if we count only actual committed code using TAIT, we have a 77% success rate (7 out of 9). If we also count the evidence of attempts at using TAIT from the latter list, the outcome looks different: a 43% success rate (7 out of 16).

One conclusion we can draw is that this data displays a very strong survivorship bias. People only commit code that works, so by searching code we only find uses of TAIT that ended up working (i.e. "survived"). This means we cannot get a complete picture by searching code: we'll only get samples of TAIT working, or of TAIT requiring workarounds mild enough that the author decided TAIT was still worth it. We won't get samples of TAIT requiring onerous workarounds and the author falling back to Box<dyn Trait> or writing out long types by hand.

We can try to compensate for the survivorship bias by counting issues and chat logs of people trying to use TAIT and failing. This is not very statistically sound either, since we're now comparing apples and oranges, but it can be useful as a counterpoint to the survivorship-biased data.

Proposal

Given the info we have:

  • We have some data proving there are use cases where the current Implicit design doesn't work.
  • We don't have enough data to say in what percentage of cases the current Implicit design works.
  • Stabilizing Implicit and waiting will not get us that data because of survivorship bias.
  • There is some (?) consensus that even if we stabilize Implicit now, we'll want to add Explicit later.

My proposal is:

  • Stabilize explicit defines first (either the attribute or clause syntax).
  • Wait a bit, collect data.
  • Evaluate going "Implicit and Explicit, Override" later. With data, try to find a heuristic that allows removing more than X% of explicit defines; if successful, stabilize it.

Why?

  • The advantages of "explicit" listed above are stronger than those of "implicit", in my opinion.
  • It'll allow collecting data. defines will allow everyone who wants to use TAIT to use it. It removes the survivorship bias by allowing all attempts at using TAIT to "survive", at the expense of being a bit more verbose. Once we have a corpus of code using TAIT, we'll be able to do unbiased statistics on it. For example, we'll be able to evaluate heuristics to see what percentage of defines they would allow eliminating. That percentage will allow making informed decisions about whether we want a heuristic at all, and which one to pick. The current "parent mod + signature restriction" heuristic is based on blind guesses about how we believe people will use TAIT, with very little data to back it up.
  • We're likely to end up with explicit defines no matter what. If we stabilize implicit first, the end scenario will be "explicit + implicit". If we stabilize explicit first, the end scenario will also be "explicit + implicit", except the "implicit" part will have benefited from collected data, which is strictly better. (Or the data will have shown we don't need/want implicit, which is also strictly better)
  • Lifetime elision is a strong precedent of "implicit vs explicit" that showed the power of doing explicit first and collecting data. See Appendix C.

Appendix A: Issues with the current implicit rules

Define a TAIT from outside its module

This is the most obvious limitation of the current rules, but IMO it deserves a mention because it comes up often and showcases how the current TAIT forces non-ideal module structure. playground.

mod tait {
    pub type Tait = impl Sized;
    //              ^ error: unconstrained opaque type
}

mod foo {
    use crate::tait::Tait;

    fn defining() {
        boom();
        let x: Tait = 2u32; // defining use
        //            ^ error[E0308]: mismatched types, expected opaque type, found `u32`
    }

    fn boom() {}
}
  • Workaround: move defining to inside mod tait, reexport it through the module structure to where it should be (sketched after this list).
    • Problem: defining can't use private things from mod foo anymore.
      • Workaround: make them public towards mod tait
      • Workaround: move them too, and reexport them back.
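
A sketch of the first workaround and its sub-workarounds (with a dummy where clause to also satisfy the signature restriction):

mod tait {
    pub type Tait = impl Sized;

    // Moved in from `mod foo` so it lands in the defining scope.
    pub(crate) fn defining() where Tait: {
        crate::foo::boom(); // `boom` had to be made visible outside `mod foo`
        let x: Tait = 2u32; // defining use
    }
}

mod foo {
    pub(crate) use crate::tait::defining; // reexported back where it "belongs"
    pub(crate) fn boom() {}
}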

Accidentally-defining functions.

It's easy to end up with functions being defining scope when you don't want them to. Currently this can cause cycle errors, but with the proposed "defining functions MUST constrain" rule it'll fail to compile.

type Tait = impl Sized;

fn defining() where Tait: { // dummy where clause to make this defining scope.
    let x: Tait = 3u32;
}

// accidentally defining!
fn passthrough(x: Tait) -> Tait {
    x
}
  • Workaround: move Tait and defining to a dummy module, reexport them out.

Constrain a TAIT from one function, pass it to another.

example, example, example. playground

type Tait = impl Sized;

fn takes_tait(t: Tait) {}

fn main() {
    takes_tait(32u32); // defining use
    //          ^ error: item constrains opaque type that is not in its signature
}

Workarounds:

  • Workaround: Add a dummy where clause: fn main() where Tait: { .. }.
    • Problem: main cannot have where clauses
      • Solution: make an "inner" fn that's not main, so it can have the where clause.
    • Problem: pub fns warn if their where clause mentions a private type.
      • Solution: #[allow(private_interfaces)]
      • Solution: make an "inner" fn that's private, so it can have the where clause.
  • Workaround: wrap/unwrap pattern
    • Problem: it needs typing out the type, which is a problem when the type is long or unnameable (which is probably why the user was trying to use TAIT to begin with).

struct MyType;
type Tait = impl Sized;
fn wrap(x: MyType) -> Tait { x }
fn unwrap(x: Tait) -> MyType { x }
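
As referenced above, the "inner fn" variant of the dummy where clause workaround, sketched:

type Tait = impl Sized;

fn takes_tait(t: Tait) {}

fn main() {
    inner();
}

// Private, so no `private_interfaces` warning for naming `Tait`, and unlike
// `main` it is allowed to have a where clause.
fn inner() where Tait: {
    takes_tait(32u32); // defining use
}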

Type inference for statics

example, example, playground.

use std::future::Future;

struct TaskStorage<F: Future> {
    f: Option<F>,
}

impl<F: Future> TaskStorage<F> {
    pub fn init(&mut self, f: F) {
        self.f = Some(f);
    }
    pub fn poke(&mut self) {}
}

type Fut = impl Future;
static mut TASK: TaskStorage<Fut> = TaskStorage { f: None }; // using static mut for brevity

pub fn start() {
    unsafe { TASK.init(async {}) };
    //                 ^ error: item constrains opaque type that is not in its signature
}

pub fn poke() {
    unsafe { TASK.poke() }
}

This is a very common need in embedded Rust; it has many use cases:

  • Building executors that don't need alloc.

  • Using &'static mut dyn Trait as a substitute for Box<dyn Trait>.

  • Statically-allocating data so it can be shared between threads/tasks with no alloc (as a substitute for Rc, Arc).

Workarounds:

  • Workaround: move Fut and TASK inside start.

    • Problem: doesn't work if multiple functions need access to the static, like poke here. (for example, it worked for Embassy, not for RTIC)
  • Workaround: Add a dummy where clause: pub fn start() where Fut: { .. }.

    • Problem: pub fns warn if their where clause mentions a private type.
      • Solution: #[allow(private_interfaces)]
      • Solution: make an "inner" fn that's private, so it can have the where clause.

Appendix B: Impact of defining scope on diagnostic quality.

Take this code (playground):

mod dummy { // dummy module to prevent `foo` from becoming defining scope.
    pub type Tait = impl Sized;

    fn defining() where Tait: {
        let x: Tait = 2u32;
    }
}
use dummy::Tait;

fn foo(x: Tait) {
    // user made a mistake here, passed a Tait instead of a String.
    bar(x)
}

fn bar(x: String) {}

This results in the usual "mismatched types" diagnostic:

error[E0308]: mismatched types
  --> src/lib.rs:14:9
   |
4  |     pub type Tait = impl Sized;
   |                     ---------- the found opaque type
...
14 |     bar(x)
   |     --- ^ expected `String`, found opaque type
   |     |
   |     arguments to this function are incorrect
   |
   = note: expected struct `String`
           found opaque type `Tait`

If, instead, fn foo was inside the defining scope (which is what happens by default if
they're in the same module with the current "parent mod + signature restriction" rules), we
get a much worse diagnostic (playground):

pub type Tait = impl Sized;

fn defining() where Tait: {
    let x: Tait = 2u32;
}

fn foo(x: Tait) {
    // user made a mistake here, passed a Tait instead of a String.
    bar(x)
}

fn bar(x: String) {}

error: concrete type differs from previous defining opaque type use
  --> src/lib.rs:11:5
   |
11 |     bar(x)
   |     ^^^^^^ expected `u32`, got `String`
   |
note: previous use here
  --> src/lib.rs:6:19
   |
6  |     let x: Tait = 2u32;
   |                   ^^^^

What's worse, if foo is put before defining, the diagnostic points to the wrong
place (playground):

error: concrete type differs from previous defining opaque type use
  --> src/lib.rs:11:19
   |
11 |     let x: Tait = 2u32;
   |                   ^^^^ expected `String`, got `u32`
   |
note: previous use here
  --> src/lib.rs:7:5
   |
7  |     bar(x)
   |     ^^^^^^

This is fundamentally unfixable, since to the compiler's eyes a constraining use and a type mismatch
error are indistinguishable. The best the compiler can do is point at one of the two places randomly
if it sees two mutually-inconsistent constraining uses.

If the user was able to communicate their intent on which functions are defining scope to the
compiler, the diagnostic would be good every time:

pub type Tait = impl Sized;

fn foo(x: Tait) {
    // user made a mistake here, passed a Tait instead of a String.
    bar(x)
}

#[defines(Tait)]
fn defining() {
    let x: Tait = 2u32;
}

fn bar(x: String) {}

Appendix C: Case study - Lifetime Elision

Another Rust feature that had a similar "implicit vs explicit" design space is lifetime elision. It has a strong parallel to the tradeoffs with TAIT: implicit is terse and intuitive when it does work, but it doesn't always work, so you need explicit.

Rust had explicit lifetimes first and lifetime elision second. This allowed collecting data from real-world code to pick the best heuristic and back it up in the RFC. Lifetime elision allowed removing 87% of manual lifetime annotations.
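
For illustration, the kind of annotation elision removes:

// Pre-elision: the lifetime must be written out.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

// With elision, the common case is implicit; the explicit form remains
// available for the cases the heuristic doesn't cover.
fn first_word_elided(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}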

Compare this with an alternate universe where Rust had only implicit lifetimes using the elision heuristic, with no way to explicitly write lifetimes. How could we have collected data to decide whether "explicit lifetimes" were worth doing?
