HIR 2.0 for rust-analyzer

This document is a collaboratively edited RFC to design the end-game API for writing IDE analysis of the Rust programming language. The immediate goal is to use the API in rust-analyzer. The lodestar is to make this API stable and publicly available, to let others build on top of it.

Zulip discussion thread: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/Tree.20Based.20HIR

Domain

We want to build an API for programs written in the Rust programming language. The primary use of the API is getting semantic information about Rust programs. Crucially, this is an API for consumers of the analysis. Internally, the compiler will use a different set of data structures to perform type inference and such.

Semantic Constraints

  • The API describes Rust code as written, and not some intermediate representation.
  • Full fidelity: there is a precise correspondence between HIR nodes and the original source code (precise doesn't mean one-to-one, though).
  • Full error tolerance: no constraints are imposed on the source code beyond "valid UTF-8". Combined with full fidelity, this means that arbitrary text can be round-tripped through the API.
  • Fully analyzed view of the code. In particular:
    • The code is fully macro-expanded (but fidelity requires that the original, non-expanded code is also available).
    • The code is fully resolved. Getting from an impl block to the corresponding trait is one step (but fidelity requires that the original path designating the trait is also available, and that each component of that path can be resolved).
  • The API presents a static view of the code. The API to change the input or to apply refactors is specified elsewhere.
  • The API allows for speculative analysis: "what would the result be if the text were like this?" (see the sketch after this list). Reasons for speculation:
    • Completion: we want to insert a fake ident to repair the syntax tree, and ask about the type of the fake ident to better sort suggestions.
    • Doc comments: we want to resolve and highlight code snippets from comments as if they were regular code.
    • Some refactors: in "remove semicolon", we want to speculatively remove the ; to check whether the result type-checks.
  • Traversability: it should be easy to navigate through the heterogeneous entities comprising a Rust program.
  • Whole world: the API should not be constrained to a single crate, and should represent an arbitrary set of crates with cross-references between them.
  • rmeta-transparent: source code might not be available for some crates; the API should support pre-compiled rmeta files as inputs.
  • Single-file-mode ready? Do we need special support for files which are not part of the crate graph?
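
To make speculation concrete, here is a minimal sketch of what a speculative entry point could look like. All names here (Analysis, FileId, speculate) are illustrative assumptions, not part of any existing API:

struct Analysis;
struct FileId(u32);
struct TextRange { start: u32, end: u32 }

impl Analysis {
    /// Runs `f` against a world where `range` in `file` is replaced
    /// by `replacement`, without committing the edit.
    fn speculate<T>(
        &self,
        file: FileId,
        range: TextRange,
        replacement: &str,
        f: impl FnOnce(&Analysis) -> T,
    ) -> T {
        // A real implementation would fork the analysis inputs,
        // apply the edit, and run `f` against the forked state.
        unimplemented!()
    }
}

Completion, for example, would pass the fake ident as the replacement and ask for types inside the closure.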

Implementation Constraints

  • Laziness: although the API presents a static view of the code, the implementation should be able to lazily compute only the minimally required bits of info. As a litmus test, when using the API to compute syntax highlighting for a single file, bodies of functions in other files should remain unprocessed.
  • Cancellability: analysis can take a long time to complete and, given that the user constantly types into the editor, we need the ability to cancel in-progress work (see the sketch after this list).
  • In general, analyzing Rust source code can take an unbounded amount of time; Rust is full of compilation bombs. Nonetheless, the API should guarantee reasonable response times (<16ms ideally, <100ms OK, >100ms needs improvement). So, every bit of compiler functionality that can get stuck should be limited by some kind of fuel.
  • The API doesn't need to be thread-safe: it's a light-weight view into the data. It's OK to use Cells & Rcs in the API. Internally, it's OK to transparently multithread to improve performance.
  • VFS: do we require knowing the super-set of all source files up-front? Can we dynamically discover additional files?
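
A minimal sketch of how cancellation and fuel could fit together, assuming a global revision counter that is bumped on every edit; all names are illustrative:

use std::cell::Cell;
use std::sync::atomic::{AtomicU64, Ordering};

struct Cancelled;

struct Db {
    revision: AtomicU64, // bumped on every user edit
    fuel: Cell<u64>,     // per-query budget for unbounded work
}

impl Db {
    /// Long-running queries call this periodically.
    fn check_cancelled(&self, started_at: u64) -> Result<(), Cancelled> {
        // If the user typed since the query started, unwind.
        if self.revision.load(Ordering::Relaxed) != started_at {
            return Err(Cancelled);
        }
        Ok(())
    }

    /// Anything that can get stuck (const eval, trait solving, ...)
    /// pays for its work; running out means we hit a compilation bomb.
    fn burn_fuel(&self, amount: u64) -> Result<(), Cancelled> {
        let left = self.fuel.get();
        if left < amount {
            return Err(Cancelled);
        }
        self.fuel.set(left - amount);
        Ok(())
    }
}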

Guide-level explanation

EDIT: the below probably won't work; it seems that we want to separate layers:

  • expanded syntax layer
  • symbols (with substitution)

The code is generally tree-shaped, so the idea is to make a tree-shaped API work. That means that each entity in the API will (conceptually) have .parent() and .children() methods. The root of the tree is a crate; all crates form a DAG. The single child of a crate is its root module. A module contains all the items defined within it. Items like functions or consts contain bodies, which are trees of expressions, etc. The leaves of the tree are tokens. So, it is possible to get a reference to a particular ident token, and then get the type, resolution, lifetimes, etc. out of it.
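
A minimal sketch of this shape, assuming a single Element enum over all entity kinds (the names are illustrative):

enum Element {
    Crate(Crate),
    Module(Module),
    Function(Function),
    // ... other items, expressions, and so on, down to:
    Token(Token),
}

struct Crate;
struct Module;
struct Function;
struct Token;

impl Element {
    fn parent(&self) -> Option<Element> { unimplemented!() }
    fn children(&self) -> Vec<Element> { unimplemented!() }
}

/// Depth-first walk from any element down to the tokens.
fn walk(element: &Element, visit: &mut dyn FnMut(&Element)) {
    visit(element);
    for child in element.children() {
        walk(&child, visit);
    }
}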

Macro calls will have both the original argument and an expansion as children. Let's see a small example:

macro_rules! m {
  ($i:expr) => { $i.abs() }
}

fn main() {
  let x = m!(-92);
}

Here, we'll have a MacroCall node for the m!(-92) fragment. It will have an .arg() property, returning the (-92) token tree, as well as an .expansion() property, returning (-92).abs(). Correlating the -92 from the call-site with the -92 from the expansion is a surprisingly hard problem with no precise solution. Rust's macro expansion model has no notion of token identity: we can't say that a source token gets turned into a particular expanded token. Rather, some source and expanded tokens happen to share identical source spans (but potentially different hygiene info). For this reason, only a TextRange-based API is available. An expansion can map an .arg() text range to a set of ranges in .expansion(). Each token has a source_span method, returning the original source location.
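
A sketch of what that mapping could look like; the method names here are assumptions, not a settled API:

struct TextRange { start: u32, end: u32 }
struct TokenTree;
struct Expansion;
struct MacroCall;

impl MacroCall {
    fn arg(&self) -> TokenTree { unimplemented!() }
    fn expansion(&self) -> Expansion { unimplemented!() }
}

impl Expansion {
    /// Maps a text range inside `.arg()` to the set of ranges in the
    /// expansion whose tokens share the same source span. A set,
    /// because a `$i` fragment can be repeated on the right-hand side.
    fn map_arg_range(&self, arg_range: TextRange) -> Vec<TextRange> {
        unimplemented!()
    }
}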

For attributes, the annotated item will have an .expansion() property:


#[tokio::main]
async fn main() {

}

Here, the async main hir::Fn will have .expansion() pointing to the, well, expansion.
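
Illustrative usage, assuming .expansion() returns None for items without an active attribute macro:

mod hir {
    pub struct Fn;
    pub struct Expansion;
    impl Fn {
        pub fn expansion(&self) -> Option<Expansion> { unimplemented!() }
    }
}

fn show(func: hir::Fn) {
    if let Some(_expansion) = func.expansion() {
        // For `#[tokio::main] async fn main`, this is the generated
        // non-async `fn main` that blocks on the original body.
    }
}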

API Sketch

Note the "Context Passing" unresolved question below!

//* Elements *//

/// All there is!
struct Universe;

/// Source file as written by the user.
struct SourceFile;

/// `SourceFile` which is a part of a specific `Crate`
/// via `mod ...;` or `include!()`. Aliased `SourceFile`s
/// create distinct `CrateFile`s.
struct PhysicalFile; // NQ: MountedFile?

/// Either a physical `PhysicalFile` or a macro expansion.
/// Contains tokens rather than strings?
struct LogicalFile; // NQ: TokenFile?

struct Crate;
struct Module;

enum ModuleKind {
    Root { file: ModuleFile },
    File { decl: ModuleDecl, file: ModuleFile },
    Inline { body: ModuleBody },
}

struct ModuleFile { file: PhysicalFile }

struct Token;

//* Utility Types *//

struct CrateDep { krate: Crate, name: String }

/// Strongly-typed (u32, u32).
type TextRange = text_size::TextRange;

/// Location in the source code written by the user.
/// Corresponds to the `lo, hi` bits of `SpanData` from `rustc`.
struct SourceLocation { file: SourceFile, text_range: TextRange }
struct PhysicalLocation { file: PhysicalFile, text_range: TextRange }
struct LogicalLocation { file: LogicalFile, text_range: TextRange }

//* API *//

impl Universe {
    fn crates(self) -> Vec<Crate>;

    /// For a given source location, the list of
    /// `PhysicalLocation`s it can be viewed as.
    ///
    /// For `#[test]` and other conditional compilation situations,
    /// this will return PLs for both versions of the crate
    /// (with and without `--cfg test`). It's up to the caller to
    /// figure out that one of them is not interesting (expands to void).
    ///
    /// Invariant:
    ///
    ///     result.all(|pl| {
    ///         pl.file.source_file() == sl.file
    ///             && pl.text_range == sl.text_range
    ///     })
    ///
    /// Q: should this group by crate, i.e. return
    /// `HashMap<Crate, Vec<PhysicalLocation>>`?
    /// Alternatively, just sort by crate?
    fn source_to_physical_loc(self, sl: SourceLocation) -> Vec<PhysicalLocation>;
}

impl Crate {
    fn deps(self) -> Vec<CrateDep>;
    fn rev_deps(self) -> Vec<CrateDep>;
    fn root_module(self) -> Module;

    /// For a given source location anchored to a crate,
    /// the list of logical locations it expands to.
    ///
    /// Q: if the PL corresponds to cfg-ed out code, should
    /// this return an empty Vec?
    ///
    /// Precondition:
    ///
    ///     pl.file.crate() == self
    ///
    /// Invariant:
    ///     Q: a logical location doesn't necessarily correspond
    ///     to a physical location: physical locations are
    ///     properties of whole tokens. How do we express
    ///     the necessary constraint here?
    fn physical_to_logical_loc(self, pl: PhysicalLocation) -> Vec<LogicalLocation>;

    /// Finds the token for the corresponding logical location.
    ///
    /// Invariant: the logical location belongs to this crate.
    fn find_token(self, ll: LogicalLocation) -> Option<Token>;
    fn find_element<E: Element>(self, ll: LogicalLocation) -> Option<E>;
}

impl Module {
    fn physical_file(self) -> PhysicalFile;
}

impl PhysicalFile {
    fn crate(self) -> Crate;

    /// `SourceFile` corresponding to this `CrateFile`.
    /// Q: should this return an Option<>?
    /// Can we manufacture a file out of thin air?
    fn source_file(self) -> SourceFile;
}

impl Token {
    /// Source-location bits of rustc's `.span()`.
    fn physical_location(self) -> PhysicalLocation;
    /// Position of the token in a specific macro expansion.
    fn logical_location(self) -> LogicalLocation;
}
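
Illustrative usage of the sketch above: going from a cursor position in the editor down to the concrete tokens it may correspond to. This assumes the handle types are cheap Copy values (the by-value self in the sketch suggests as much); note also that crate is a keyword in real Rust, so that method would need a different spelling:

fn tokens_at_cursor(universe: Universe, sl: SourceLocation) -> Vec<Token> {
    let mut result = Vec::new();
    for pl in universe.source_to_physical_loc(sl) {
        // One PL per conditional-compilation variant of the crate.
        let krate = pl.file.crate();
        for ll in krate.physical_to_logical_loc(pl) {
            // One LL per macro expansion the token ends up in.
            if let Some(token) = krate.find_token(ll) {
                result.push(token);
            }
        }
    }
    result
}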

Unresolved Questions

Naming

hir is a decidedly bad name here. Better versions:

  • sem for semantics
  • fir for front-end IR, homage to Kotlin
  • rst for rs tree

Context Passing

Should nodes contain an Rc/& to the global context? Or should we thread the context through as an argument?
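
The two styles under consideration, sketched with illustrative names:

use std::rc::Rc;

struct Ctx;
struct Ty;

// Style A: every node carries an Rc to the global context.
struct FnA { ctx: Rc<Ctx> }
impl FnA {
    fn ty(&self) -> Ty { unimplemented!() } // uses self.ctx internally
}

// Style B: nodes are plain data; the context is threaded through.
struct FnB;
impl FnB {
    fn ty(&self, ctx: &Ctx) -> Ty { unimplemented!() }
}

Style A is more ergonomic for consumers; style B keeps nodes as plain data and makes the dependency on the analysis state explicit.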

Symbols

Consider the following field declaration: field: module::Thing<String>. If this is let f: hir::Field, then f.ty() can reasonably mean three things:

  • the module::Thing<String> path
  • the hir::Struct which module::Thing resolves to
  • the Ty { ctor: hir::Struct, subst: [String] } type

All three APIs are useful; should they be split into syntax/symbols/ty layers, or kept in one?
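
One possible shape for the split, with illustrative names:

struct Path;   // syntax: `module::Thing<String>` as written
struct Struct; // symbols: the item `module::Thing` resolves to
struct Ty { ctor: Struct, subst: Vec<Ty> } // types

struct Field;

impl Field {
    /// Syntax layer: the type annotation as written.
    fn type_path(&self) -> Path { unimplemented!() }
    /// Symbol layer: what the path resolves to.
    fn type_ctor(&self) -> Struct { unimplemented!() }
    /// Type layer: the fully substituted type.
    fn ty(&self) -> Ty { unimplemented!() }
}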

Deep/shallow expansion

Should intermediate macro expansion steps be visible in the API? If yes, should .expansion() be deep or shallow by default?
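
As a concrete example of the difference (this is plain Rust and runs as-is):

macro_rules! m2 { () => { 92 } }
macro_rules! m1 { () => { m2!() } }

fn main() {
    let x = m1!();
    // Shallow `.expansion()` of `m1!()` would yield `m2!()`, itself a
    // `MacroCall` with its own `.expansion()`; deep expansion would
    // yield `92` directly.
    assert_eq!(x, 92);
}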

Eager macro expansion?

"Find usages" can't be implemented precisely without eagerly expanding all macros. So perhaps our model should be that all macros are expanded eagerly, even those inside function bodies? This is more of an implementation question. We also typically don't look for usages in libs, so being lazy & correct might be possible?

Select a repo