# High-quality Lean 3 → 4 translation via AST generation: project proposal
Because of the extensibility of the Lean 3 language, it is quite difficult for an external program to parse Lean 3 files completely correctly, which limits external approaches to largely heuristic ones. Moreover, Lean 3 parsers interact with the VM, so they need access to the VM state as well as a compatible execution environment.
To get around this, we implement the needed additions as an add-on to the parser in (a fork of) Lean 3 itself. These additions will produce a data-rich export object, in an easily parsed format, containing the relations between the text, AST, tactics, and elaborated expressions; this data can then be mined by external tools.
In particular, we intend to use it as a source for Lean 4 importing. With the syntax, the notations, the result of name resolution, and the elaborated term all available, it is possible to generate Lean 4 source code that uses analogous notations while ensuring that it elaborates correctly. It also gives us several fallback options, depending on where we want to fall on the spectrum between staying faithful to the source and leaving `sorry` in the term, such as adding more disambiguation to the generated text to ensure that the elaborated term is correct.
## Implementation plan
The implementation steps are grouped into "levels" based on the quality of the result and approximate implementation order. After each level, the result can be evaluated to assess whether it is good enough for manual porting to make up the difference, or whether further fidelity improvements in the tool are desirable.
### Level 1: Basic AST extraction
* A new data structure, `ast`, is added to the Lean 3 parser state. In Rust-ish pseudocode:
```rust
type ast_id = u32;

struct parser {
    ...
    next_ast_id: ast_id,   // used to generate fresh IDs
    ast: Vec<ast_node>,    // mapping ast_id -> ast_node
    commands: Vec<ast_id>, // the top-level commands in the file
}

struct ast_node {
    start: pos_info,          // span start
    end: pos_info,            // span end (not currently tracked by Lean 3; optional)
    node_type: name,          // which kind of node this is
    children: Vec<ast_id>,    // for nodes with children
    value: name,              // for terminal nodes
    pexpr: Option<pexpr>,     // the main pexpr associated to this AST node, if applicable
    expr: Option<task<expr>>, // a task for blocking on the elaboration of the pexpr
}
```
* Every function that progresses the parser will add a node to the `ast` once the parsed value is clear. For example, once an `inductive` definition is parsed, and before the inductive command elaborator itself is called, a `command.inductive` node is added to the AST with references to the constructors, binders, and other AST components that were parsed.
* Every `ast` node has a file-local `ast_id` index so it can be referred to by other stages.
* Every `notation`, `infix`, and other notation-like command will get a `name` identifier, which is added to the Pratt parser table (`notation::accepting`) along with the expansion.
  * For stability across mathlib versions, the identifiers should be derived from the notation itself, as in Lean 4 (see the sketch after this list).
* The notation identifier will be available when the Pratt parser takes a reduction step and commits to a pexpr. An `ast` node will be added to indicate that a given `pexpr` was produced from that notation declaration (so we can tell the difference between the source texts `has_add.add x y` and `x + y`).
* At the end of the file, the `ast` data structure is flushed to disk in a stable and documented format (JSON? s-exprs?).
* The format will need explicit backreferences for exprs, although the AST itself should be diamond-free.
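For comparison, Lean 4 already derives notation names this way: the parser kind of a notation is named after its tokens, so the name is stable as long as the notation text is. A minimal Lean 4 illustration (the derived kind names shown are my understanding of what current Lean 4 generates for these core declarations):

```lean
-- Lean 4 derives a parser-kind name from the notation's tokens, so the
-- same notation text always yields the same name across versions:
infixl:65 " + " => HAdd.hAdd   -- derived kind name: «term_+_»
infixl:70 " * " => HMul.hMul   -- derived kind name: «term_*_»
```

The Lean 3 fork would use the same scheme when assigning `name` identifiers to `notation`/`infix` commands.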
Here is an example of the AST export data for `def foo := 1` in JSON format (details subject to change):
```json
{
  "version": 1,
  "ast": {
    // 0 is reserved for missing AST
    "1": {
      "start": [0, 4],
      "end": [0, 7],
      "type": "ident",
      "value": ["foo"],
    },
    "2": {
      "start": [0, 11],
      "end": [0, 12],
      "type": "num_lit",
      "pexpr": 1,
      "expr": 5,
    },
    "3": {
      "start": [0, 0],
      "end": [0, 12],
      "type": "def",
      "children": [1, 0, 0, 2], // name, universes, type, value
    },
  },
  "commands": [3],
  "pexpr": {
    "1": {
      "type": "num_lit",
      "value": 1, // the actual number 1
      "tag": 2,   // points to the corresponding AST node
    },
  },
  "expr": {
    "1": { "type": "const", "name": ["has_one", "one"] },
    "2": { "type": "const", "name": ["nat"] },
    "3": { "type": "app", "children": [1, 2] },
    "4": { "type": "const", "name": ["nat", "has_one"] },
    "5": { "type": "app", "children": [3, 4], "tag": 2 },
  },
}
```
* The `ast` structure is loaded into Lean 4 data structures by a basic parser (see the sketch after this list).
* The data structure is traversed to produce a `Syntax` object, mapping each constructor to the corresponding Lean 4 syntax.
* The `Syntax` object is printed to a Lean 4 text file using Lean 4's full-file syntax printing capabilities.
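As a sketch of the Lean 4 side (all names here are illustrative, not part of the export format), the decoded data might land in structures like these:

```lean
import Lean

/-- Hypothetical Lean 4 mirror of the export format described above. -/
abbrev AstId := Nat  -- 0 denotes a missing node, as in the export

structure AstNode where
  start    : Lean.Position
  stop     : Option Lean.Position := none  -- `end` is a keyword in Lean 4
  kind     : Lean.Name                     -- "def", "ident", "num_lit", ...
  children : Array AstId := #[]
  value    : Lean.Name := .anonymous       -- for terminal nodes
  pexprRef : Option Nat := none            -- backreference into the pexpr table
  exprRef  : Option Nat := none            -- backreference into the expr table

structure AstData where
  ast      : Array AstNode  -- indexed by AstId
  commands : Array AstId    -- top-level commands in the file
```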
### Level 2: AST data for exprs
* The parser currently maintains a mapping from `pexpr` tags to line/col information; this will be modified to point to the source `ast` node instead (which itself includes line/col information). This is needed to retain the `ast` mapping even through VM evaluation.
* Most code handling `pos_info` will be modified to instead pass around `ast` node indexes.
* Elaborating a `pexpr` into an `expr` (when the source `ast` node is known) will, as a side effect, add the `expr` to the `ast` node's data.
In the example above, the `expr` field of AST node `2` is filled in by this process.
### Level 3: Tweaks
Beyond the basic porting process, a number of alignments must be specified, some algorithmic (like "put `Mathlib.` on all names" or "camel-case types") and some specific (like "use `Nat.add` instead of `Mathlib.Nat.add`"). To support this, the porting tool requires substantial configuration, which we encode as a Lean 4 data structure.
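As a rough illustration (field names hypothetical; the real set would grow as tweaks are implemented):

```lean
import Lean

/-- A specific alignment, e.g. Lean 3 `nat.add` to Lean 4 `Nat.add`. -/
structure RenameRule where
  lean3 : Lean.Name
  lean4 : Lean.Name

/-- Sketch of the porting configuration. -/
structure PortConfig where
  namespacePrefix : Lean.Name := `Mathlib   -- algorithmic: prefix all names
  camelCaseTypes  : Bool := true            -- algorithmic: camel-case types
  renames         : Array RenameRule := #[] -- specific overrides
```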
The main function of the Lean 4 importer is `buildSyntax : Lean3AST -> M Syntax`. The monad `M` maintains the imported AST table for lookups, and can also elaborate terms and speculatively call tactics. (For the basic version it can be a pure reader monad over the import data.) Decoded names will be passed through a filter `mangle : Name -> M Name`, which can append extra name segments, perform transformations, etc. The name unexpander can be used on the result to remove unnecessary prefixes or add disambiguation like `_root_`.
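A minimal sketch of the basic version, reusing the hypothetical `PortConfig` and `AstData` structures sketched earlier (a fuller `M` would be built on `TermElabM` so it can elaborate terms and run tactics):

```lean
import Lean
open Lean

structure ImportData where
  config : PortConfig  -- hypothetical, from the earlier sketch
  ast    : AstData     -- hypothetical, from the earlier sketch

-- The basic version: a pure reader monad over the import data.
abbrev M := ReaderM ImportData

/-- Apply specific renames first, then fall back to the algorithmic rules. -/
def mangle (n : Name) : M Name := do
  let cfg := (← read).config
  match cfg.renames.find? (·.lean3 == n) with
  | some r => return r.lean4
  | none   => return cfg.namespacePrefix ++ n
```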
Similarly, for notations we can see which notation was used in the import data and select an analogous one, for example mapping the `+` notation for `has_add.add` to the Lean 4 `+` notation for `HAdd.hAdd`. Since we have access to the elaborated expression, we can elaborate the generated term with the right type hints and check that the resulting expression is defeq to the original, even if it is spelled differently.
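A sketch of that check, using standard elaborator entry points (the real tool would need to handle postponed metavariables, universes, and error recovery):

```lean
import Lean
open Lean Elab Term Meta

/-- Elaborate the generated syntax against the original expression's type
    and check that the result is defeq to the original. -/
def checksOut (stx : Syntax) (orig : Expr) : TermElabM Bool := do
  let ty ← inferType orig
  let e ← elabTermEnsuringType stx (some ty)
  synthesizeSyntheticMVarsNoPostponing
  isDefEq (← instantiateMVars e) orig
```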
Tweaks represent a potentially unbounded amount of work, but they can be implemented largely piecemeal, so we can implement only those needed to make most files mostly typecheck, and clean up the rest manually.