# Lean@Google Summary
## Wishlists
**Son's wishlist**:
1. custom `grind` attributes
2. do not replay a tactic when modifying the code after it
3. thread the `grind` state
4. do not replay proofs in (mutually) recursive definitions
5. E-matching modulo AC

Andres also wants 1, 2 and 3!
**Andres' wishlist**:
1. a faster variant of `apply` (one that does as little as possible)
2. use that `apply` to implement a `simp`-like tactic (a bottom-up rewriter), and benchmark it
3. a minimal `intros` as well (same story: minimal implementation, benchmarks, perf work, use in the rewriter, etc.)
## Performance Analysis
* Containers: the Google copy of Lean 4 has replaced `unordered_map` (C++ stdlib) with `flat_hash_map` (from abseil, Google's C++ container library); at Google we only do this in `src/kernel/replace_fn.cpp` for now. Lean can either:
  * look into integrating [`absl::flat_hash_map`](https://abseil.io/docs/cpp/guides/container), or
  * adopt the equivalent C++23 container, [`flat_map`](https://en.cppreference.com/w/cpp/container/flat_map.html)
    * not a hash map, so not viable for larger collections
* Call stacks: use ``[[clang::musttail]]`` (and GCC/MSVC equivalents) to avoid really long call stacks (TODO JP: share Max's presentation once we make sure there's no Google-confidential info) -- notably in `sharecommon_quick_fn::visit` in `sharecommon.cpp`
  * note that the semantics of `musttail` are that clang must emit a tail call, and compilation fails if it can't
* Should trace nodes be `@[specialize]`d?
PULL-REQUESTS:
- using abseil: https://github.com/leanprover/lean4/pull/11743
### Perf analysis of `apply`
Found a bunch of places where `apply` does more work than it should: metavariable handling and `is_defeq` checks slow down the tactic.
Three benchmarks are superlinear; it is unclear whether these benchmarks accurately capture what needs to be optimized.
ACTION ITEM:
- investigate more because the perf is not spent in the same parts of Lean depending on the benchmark
- see what it would take to cut out `is_defeq` and/or the `mvar` check for improved performance (if that's what the problem is)
## Usability improvements
### `match_goal`
Having functionality similar to Coq's [`match goal with`](https://rocq-prover.org/doc/v8.10/refman/proof-engine/ltac.html) would allow writing `progress*` much more concisely.
This may be provided by [Qq](https://github.com/leanprover-community/quote4) with `~q()` matching.
What Andres does in Coq: https://github.com/mit-plv/bedrock2/blob/master/bedrock2/src/bedrock2/ProgramLogic.v#L254
A Lean tactic using Qq for [a similar match](https://github.com/leanprover-community/mathlib4/blob/75a49ea56542e985ff2af5b517aa84aef2ae4182/Mathlib/Tactic/NormNum/NormNumI.lean#L160).
Problem: Qq error messages are poor.
Problem: Qq performs defeq checks, which Andres and Son aren't enthused about.
Sebastian says there are `match_expr` (and `let_expr`), which are much simpler -- but they do not support deeply nested patterns. Need to clarify the spec of matching deeply on `mvars`.
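A sketch of the `match_expr` approach Sebastian mentioned (the `is_exists?` elab is a made-up name for illustration; note the shallow pattern, with no nested matching on the predicate):

```lean
import Lean
open Lean Elab Tactic

-- `match_expr` matches on the head constant and its arguments,
-- but the arguments are plain `Expr`s: no deep patterns.
elab "is_exists?" : tactic => do
  let tgt ← getMainTarget
  match_expr tgt with
  | Exists _α p => logInfo m!"existential with predicate {p}"
  | _ => logInfo m!"not an existential: {tgt}"

example : ∃ n : Nat, n = 0 := by
  is_exists?
  exact ⟨0, rfl⟩
```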
### `Qq` example
```lean
import Lean
import Mathlib
open Qq
set_option linter.constructorNameAsVariable false
theorem foo {P Q : Prop} {f : Nat → Nat → Prop} : Exists (fun x => f 37 x) := by
  run_tac do
    let g ← Lean.Elab.Tactic.getMainGoal
    let ⟨u, α, g⟩ ← inferTypeQ (.mvar g)
    match u, α, g with
    | 0, ~q(∃ n, $f n), g => Lean.logInfo m!"was an existential: {f}"
    | _, _, _ => Lean.logInfo m!"not an existential: {α}"
  sorry
```
`run_tacq goal => do s` is also interesting, but the match is static.
## Incremental Proof Mode
### Tactics replay when doing a modification below
Inserting a line break or modifying the line after a tactic often causes the tactic to be replayed. This is problematic for expensive tactics like `progress*`.
According to Sebastian, the problem is not trivial to solve because of the cdot syntax. We discussed a temporary workaround, which would be to mark some tactics as not involving nested tactic elaboration where that could be an issue.
ACTION ITEMS:
- Son to investigate whether there's more to it than whitespace
- Sebastian will take a look otherwise
### Well-founded recursion
Modifying the `termination_by` and `decreasing_by` clauses of a theorem should not cause the proof of the theorem to be replayed.
In particular:
- *adding* a `termination_by` clause to a theorem should not cause its proof to be replayed
- when modifying the `termination_by` clause of a theorem, the goals in the `decreasing_by` clause should be instantly updated (we sometimes have to wait several seconds today)
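As a concrete (hypothetical) shape of the problem: in a definition like the sketch below, editing only the two trailing clauses should ideally not re-elaborate the body.

```lean
-- A standard well-founded recursive definition: the wish is that editing
-- only `termination_by`/`decreasing_by` leaves the body's elaboration cached.
def fib (n : Nat) : Nat :=
  if h : n < 2 then n
  else fib (n - 1) + fib (n - 2)
termination_by n
decreasing_by
  all_goals omega
```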
ACTION ITEM:
- "should be manageable, the code will have to tell me" (Sebastian)
### Mutually recursive theorems
When working on mutually recursive theorems:
- modifying the body of a theorem should not cause the proofs of the other theorems to be replayed
- in relation to the previous section: modifying the `termination_by`/`decreasing_by` clause of a theorem should not cause the proofs of the other theorems to be replayed
ACTION ITEMS:
- same as above (Sebastian)
### Incremental display of subgoals for tactics like `progress*`
When working with a proof script like the one below:
```lean
progress*
· ... -- failing precondition 1
· ... -- failing precondition 2
```
It would be good to display the subgoal for failing precondition 1 and to start working on it without having to wait for `progress*` to terminate completely.
ACTION ITEMS:
- Leo says he will make `progress*` so fast that this will not be necessary. Thanks, Leo!!
  (Note: most likely a custom mode of `progress*` that is made explicit in the syntax.)
## Verification microbenchmarks
### Andres' paper benchmark
* benchmark TODO: each VC ought to be constant-size, but it doesn't appear to be right now
* lean4 TODO: redefine `repeat` (as `repeat'` below) to avoid stack overflows
```lean
import Lean.Elab.Tactic
namespace Eval
-- variable (String : Type)
-- variable (a b c : String)
section
inductive byte where
inductive word where
def word_of_Int : Int → word := sorry
def word_add : Option word → Option word → word := sorry
def word_sub : Option word → Option word → word := sorry
structure partial_map (A B : Type) where
  (fn : A -> Option B)
def partial_map_empty (A B : Type) : partial_map A B :=
  { fn := fun _ => none }
def partial_map_get {A B : Type} (m : partial_map A B) (k : A) : Option B :=
  m.fn k
def partial_map_put {A B : Type} [DecidableEq A] (m : partial_map A B) (k : A) (v : B) : partial_map A B :=
  { fn := fun x => if x = k then some v else m.fn x }
def partial_map_remove {A B : Type} [DecidableEq A] (m : partial_map A B) (k : A) : partial_map A B :=
  { fn := fun x => if x = k then none else m.fn x }
theorem partial_map_get_put_same {A B : Type} [DecidableEq A] (m : partial_map A B) (k : A) (v : B) :
    partial_map_get (partial_map_put m k v) k = some v := by
  simp [partial_map_get, partial_map_put]
end
theorem partial_map_get_put_diff {A B : Type} [DecidableEq A] (m : partial_map A B) (k k' : A) (v : B)
    (h : k ≠ k') :
    partial_map_get (partial_map_put m k' v) k = partial_map_get m k :=
  if_neg h
theorem partial_map_put_put_same {A B : Type} [DecidableEq A] (m : partial_map A B) (k : A) (v1 v2 : B) :
    partial_map_put (partial_map_put m k v1) k v2 = partial_map_put m k v2 := by
  sorry
variable (word_add_sub_cancel :
  forall v : word,
    (word_sub (@some word (word_add (@some word v) (@some word v))) (@some word v))
    = v)
inductive Bopname : Type where
  | add
  | sub
def interp_binop : Bopname → Option word → Option word → word := sorry
theorem binop_add_to_word_add :
    forall w1 w2, interp_binop Bopname.add w1 w2 = word_add w1 w2 := sorry
theorem binop_add_to_word_sub :
    forall w1 w2, interp_binop Bopname.sub w1 w2 = word_sub w1 w2 := sorry
inductive Expr : Type where
  | literal (v : Int)
  | var (x : String)
  | op (bop : Bopname) (e1 e2 : Expr)
def eval_expr (me : partial_map word byte) (le : partial_map String word) (e : Expr) : Option word :=
  match e with
  | Expr.literal v => @some word (word_of_Int v)
  | Expr.var x => @partial_map_get String word le x
  | Expr.op bop e1 e2 => some (interp_binop bop (eval_expr me le e1) (eval_expr me le e2))
inductive cmd : Type where
  | skip : cmd
  | set : String -> Expr -> cmd
  | unset : String -> cmd
  | cond : Expr -> cmd -> cmd -> cmd
  | seq : cmd -> cmd -> cmd
  | input : String -> cmd
  | output : String -> cmd
inductive IOEvent : Type
  | IN : word → IOEvent
  | OUT : word → IOEvent
inductive exec : cmd -> List IOEvent -> partial_map word byte -> partial_map String word ->
    (List IOEvent -> partial_map word byte -> partial_map String word -> Prop) -> Prop
  | skip :
    forall t m l post,
      post t m l ->
      exec cmd.skip t m l post
  | set :
    forall t x e m l post v,
      eval_expr m l e = some v ->
      post t m (partial_map_put l x v) ->
      exec (cmd.set x e) t m l post
  | input :
    forall t lhs m l post [DecidableEq String],
      (forall v, post (IOEvent.IN v :: t) m (partial_map_put l lhs v)) ->
      exec (cmd.input lhs) t m l post
  | seq : forall c1 c2 t m l post mid,
      exec c1 t m l mid ->
      (forall t' m' l', mid t' m' l' -> exec c2 t' m' l' post) ->
      exec (cmd.seq c1 c2) t m l post
theorem exec_seq_cps :
    ∀ (t : List IOEvent) (c1 c2 : cmd) (m : partial_map word byte) (l : partial_map String word)
      (post : List IOEvent → partial_map word byte → partial_map String word → Prop),
      exec c1 t m l
        (λ (t' : List IOEvent) (m' : partial_map word byte) (l' : partial_map String word) =>
          exec c2 t' m' l' post) →
      exec (cmd.seq c1 c2) t m l post := sorry
variable (word_add_0_r : forall x : word, word_add (some x) (some (word_of_Int 0)) = x)
variable (ex : forall A : Type, (A -> Prop) -> Prop)
variable (ex_intro : forall A (P : A -> Prop) (x : A), P x -> ex A P)
def repeated_cmds : Nat -> String -> String -> cmd
  | 0, _, _ => cmd.skip
  | (n + 1), t, x =>
    cmd.seq
      (cmd.set t (Expr.op Bopname.add (Expr.var t) (Expr.var t)))
      (cmd.seq
        (cmd.set t (Expr.op Bopname.sub (Expr.var t) (Expr.var x)))
        (repeated_cmds n t x))
def generated_cmd (n : Nat) (t x : String) : cmd :=
  cmd.seq (cmd.input x) (cmd.seq (cmd.set t (Expr.var x))
    (@repeated_cmds n t x))
open Lean Elab Tactic
elab "repeat'" t:tactic : tactic => do
  while true do
    try
      evalTactic t
    catch _ => return
macro "solve_eq_by_rewriting" : tactic =>
  `(tactic|
    (simp [partial_map_get_put_diff, partial_map_get_put_same, partial_map_put_put_same,
           binop_add_to_word_add, binop_add_to_word_sub, word_add_sub_cancel]; try rfl))
-- n >= 7 causes us to hit maxHeartbeats unless we disable the limit
set_option maxHeartbeats 0
elab "solve_repeated" : tactic => do
  evalTactic (← `(tactic| apply exec_seq_cps))
  evalTactic (← `(tactic| apply exec.set))
  evalTactic (← `(tactic| simp [eval_expr]))
  evalTactic (← `(tactic| solve_eq_by_rewriting))
set_option maxRecDepth 10000
--set_option pp.exprSizes true
--set_option pp.oneline true
include word_add_sub_cancel in
theorem generated_cmd_correct :
    let n := 100;
    let t := @List.nil IOEvent;
    forall m l,
      exec (generated_cmd n "a" "b") t m l
        (fun t' m' l' =>
          And (m' = m)
            (ex word (fun v =>
              And (t' = List.cons (IOEvent.IN v) (@List.nil IOEvent))
                (partial_map_get l' "a" = some v)))) := by
  simp
  intros m l
  dsimp [generated_cmd, repeated_cmds]
  apply exec_seq_cps
  apply exec.input
  intros v
  repeat' (
    apply exec_seq_cps
    apply exec.set
    --trace_state
    simp [eval_expr]
    solve_eq_by_rewriting)
  apply exec.skip
  solve_eq_by_rewriting
  ---- don't care about solving post-condition for now
  sorry
```
PULL-REQUESTS:
- https://github.com/leanprover/lean4/pull/12051
- https://github.com/leanprover/lean4/pull/12100
- https://github.com/leanprover/lean4/pull/12101
- https://github.com/leanprover/lean4/pull/12134
### Andres' binder example
```lean
set_option maxRecDepth 10000
macro "gen_term" n:num : term => do
  let mut stx ← `(True)
  for _ in 0...n.getNat do
    stx := ← `(let z : Nat := x + y; forall x : Nat, exists y : Nat, y = z+x /\ $stx)
  `(let z : Nat := 0 ; forall x : Nat, exists y : Nat, y = z+x ∧ $stx)
example : gen_term 500 := by
  repeat (intros; apply Exists.intro; apply And.intro; try apply rfl); try apply True.intro
```
Run with `lean --tstack=1000000`
* 5s for n=500; 3.6s kernel
* 22s for n=1000; 19s kernel
Proof terms do grow linearly!
With `debug.skipKernelTC`, mvar context comparison in `checkAssignment` quickly becomes a superlinear bottleneck: https://profiler.firefox.com/public/7d9zfyvjedfs9apq30qdkzxt372fapfb3q6z4b0/flame-graph/?globalTrackOrder=3201&hiddenGlobalTracks=0w2&hiddenLocalTracksByPid=2527111.3-0w2~2527111.2-02367&symbolServer=http%3A%2F%2F127.0.0.1%3A34787%2F322mvrszyb2sjg6w28zvb4cilfg497ma6h43c98&thread=c&transforms=ff-158~rec-158&v=12
### Son's `apply` example
Example studied during the hackathon, kept here for reference:
```lean
set_option maxRecDepth 200000
axiom Q : List Nat → Prop
axiom Q.spec : ∀ x ls, Q ls → Q (x :: ls)
axiom Q.spec' : Q []
set_option debug.skipKernelTC true
set_option linter.all false
example :
    Q (List.range 1000) := by
  simp [List.range, List.range.loop]
  iterate 1000 (have h : 0 ≤ 1 := by sorry)
  --set_option trace.profiler true in
  repeat apply Q.spec
  apply Q.spec'
```
## Performance debugging for `grind`
Looking at the performance of `grind` as an alternative to `scalar_tac` for solving goals: `grind` triggers a fair number of theorems from the standard library (related to `Nat` and `mul`). Even with precise guards (including `is_value` and `not_value`), too much gets triggered.
ACTION ITEMS:
- `grind` to gain builtin support for associative-commutative e-matching
- Aeneas to devise its own `grind` set with *only* the relevant lemmas, which would in turn replicate the behavior of `scalar_tac`
## Custom `grind` attributes/`grind` sets
1. In `progress*`, I want to restrict the theorems used by the calls to `grind` to a specific collection of theorems.
2. SymCrypt often requires doing things like this:
```lean
section
-- Ad-hoc scalar_tac theorem which will generally hurt performance
@[local scalar_tac]
theorem ...

-- Theorem proven with progress*, and for which we need the theorem above
theorem foo_spec ... := by progress*
end
```
I don't see how to make this work with namespaces:
1. if using namespaces, `open ... in grind` allows adding theorems to `grind`, but I don't see a way of restricting to a particular collection of theorems (in particular, I want to ignore the `grind` theorems which do not have a scope)
2. the section containing `foo_spec` may itself appear inside a namespace, which is not the one in which we would put the theorems to be used by `progress*`
A solution would be to provide a way of declaring new `grind` attributes, similar to what `registerSimpAttr` and `registerSimprocAttr` do for `simp`.
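For comparison, here is the `simp` side of the analogy, which exists today (the `progress_simp` name and the toy theorem are made up; the wish is a `grind` analogue of this mechanism):

```lean
import Lean

-- `register_simp_attr` declares a new, separate collection of simp theorems.
register_simp_attr progress_simp

@[progress_simp] theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- `simp only [progress_simp]` then restricts simp to exactly that collection.
example (n : Nat) : n + 0 = n := by simp only [progress_simp]
```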
PULL-REQUESTS:
- https://github.com/leanprover/lean4/pull/11765
- https://github.com/leanprover/lean4/pull/11770
## Non-linear arithmetic for `grind`
The example studied during the hackathon:
```lean
theorem le_mul_le_le (x y a b : ℕ) (h0 : x ≤ a) (h1 : y ≤ b) :
    x * y ≤ a * b := by exact Nat.mul_le_mul h0 h1
grind_pattern le_mul_le_le => x * y, x ≤ a, y ≤ b
  where
    --guard x ≤ a
    --guard y ≤ b
    is_value a
    is_value b
    not_value x
    not_value y
    gen < 2
```
Another important theorem:
```lean
theorem mod_lt (x y : ℕ) (h : 0 < y) : x % y < y := by exact Nat.mod_lt x h
grind_pattern mod_lt => x % y
```
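With the pattern above registered, the intended effect is that e-matching instantiates `mod_lt` whenever a term `x % y` occurs, so a goal like the following should close (sketch, assuming `grind` picks up the `0 < y` hypothesis):

```lean
example (x y : ℕ) (h : 0 < y) : x % y < y := by grind
```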
Possibility to case-split on `0 <= i <= N` where `N` is small (especially N = 1).
ACTION ITEMS:
- Son will send Leo a summary of what `scalar_tac` does for non-linear arithmetic
- Son will collect some examples that go through `scalar_tac` and would be good benchmarks for `grind`
**Examples supported by `scalar_tac`**:
```lean
example
    (B0 : ℕ)
    (hb0 : B0 + 3328 * 3328 + 3328 * 3498 ≤ 2^32 - 1)
    (i3 : Nat) (i3_post_2 : i3 < 3329)
    (i5 : Nat) (i5_post_2 : i5 < 3329)
    (i6 : Nat) (i6_post_2 : i6 < 3329)
    (i8 : Nat) (i8_post_2 : i8 < 3329)
    (c1 : Nat) (c1_post_2 : c1 ≤ B0)
    (a0b1 : Nat) (a0b1_post_1 : a0b1 = i3 * i8)
    (a1b0 : Nat) (a1b0_post_1 : a1b0 = i5 * i6)
    (a0b11 : Nat) (a0b11_post_1 : a0b11 = a0b1 + a1b0) :
    c1 + a0b11 ≤ 2^32 - 1 :=
  by scalar_tac
```
## mvcgen integration
We assessed whether Aeneas' custom `progress` tactic and Bedrock2 could be implemented with `mvcgen`. The general agreement was that it is too soon: `progress` still does more than `mvcgen` (notably solving goals automatically).
Things that `progress` does and `mvcgen` does not do yet:
- picking sensible names for variables
- `progress*?`, which generates a proof script
Further reasons to hold off:
- `mvcgen` will be overhauled to better support Loom in the upcoming quarter -- do not take a dependency on something that is going to change
- we need to understand basic support for separation-logic-ish things before depending on `mvcgen`
ACTION ITEMS:
- capture the rationale above and put it in the README of Aeneas or something (so that we can point people to it)
## Threading the `grind` state through the symbolic evaluator
The performance bottleneck of `progress*` is the time it spends discharging preconditions. A potential improvement is to exploit the fact that `progress*` manipulates a monotonic context by threading a `grind` state through the symbolic evaluation. This would allow factoring out preprocessing time, some e-matching, etc.
Here is a snippet of code which captures what `progress*` does:
```lean
inductive Result α where
  | ok (x : α)
  | fail
open Result
def bind {α : Type u} {β : Type v} (x : Result α) (f : α → Result β) : Result β :=
  match x with
  | ok v => f v
  | fail => fail
instance : Bind Result where
  bind := bind
def spec (x : Result α) (p : α → Prop) :=
  match x with
  | .ok x => p x
  | .fail => False
@[simp]
theorem spec_ok (x : α) : spec (.ok x) p ↔ p x := by simp [spec]
theorem spec_bind {k : α -> Result β} {Pₖ : β → Prop} {m : Result α} {Pₘ : α → Prop} :
    spec m Pₘ →
    (forall x, Pₘ x → spec (k x) Pₖ) →
    spec (Bind.bind m k) Pₖ := by
  intro Hm Hk
  cases m
  · apply Hk
    apply Hm
  · apply Hm
theorem spec_mono {P₁ : α → Prop} {m : Result α} {P₀ : α → Prop} (h : spec m P₀) :
    (∀ x, P₀ x → P₁ x) → spec m P₁ := by
  cases m <;> grind [spec]
structure U32 where
  val : Nat
  hLt : val < 2^32
attribute [grind! .] U32.hLt
example (x y : U32) : x.val + y.val < 2 * 2^32 := by grind
-- # Simple example:
def add (x y : U32) : Result U32 :=
  if h : x.val + y.val < 2^32 then ok ⟨ x.val + y.val, by omega ⟩
  else fail
def add_spec (x y : U32) (h : x.val + y.val < 2^32) :
    spec (add x y) (fun z => z.val = x.val + y.val) := by
  grind [spec, add]
def add_many (x : U32) (b : Bool) : Result U32 := do
  let y ← add x x
  --
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  --
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  --
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  --
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  let y ← add y x
  --
  if b then
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    ok y
  else
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    let y ← add y x
    ok y
theorem add_many_spec (x : U32) (b : Bool) (h : 50 * x.val < 2^32) :
    spec (add_many x b) (fun z => z.val = 50 * x.val) := by
  unfold add_many
  repeat all_goals (first | apply spec_bind; apply add_spec; grind; intros | split)
  all_goals simp <;> grind
-- # Example with a more complex post-condition:
def add_to_tuple (t : U32 × U32) (a : U32) : Result (U32 × U32) := do
  let (x, y) := t
  let x ← add x a
  let y ← add y a
  ok (x, y)
theorem add_to_tuple_spec (t : U32 × U32) (a : U32) :
    spec (add_to_tuple t a) (fun t' =>
      /- We wouldn't write the post-condition this way in practice:
         we do it here for the purpose of introducing an ∃ and a ∧ -/
      ∃ (x y : U32), t' = (x, y) ∧ x.val = t.fst.val + a.val ∧ y.val = t.snd.val + a.val
    ) := by sorry
example (t : U32 × U32) (a : U32) :
    @spec (U32 × U32)
      (do let t' ← add_to_tuple t a; ok t')
      (fun _ => True) := by
  -- `progress` needs to do something like this:
  -- 1. look up and apply the spec theorem
  apply spec_bind; apply add_to_tuple_spec
  intros tmp h
  -- 2. we sometimes need to simplify the post-condition (here this does nothing):
  simp -failIfUnchanged at h
  -- 3. decompose the post-condition:
  have ⟨ x, y, h0, h1, h2 ⟩ := h
  clear h
  -- end of the proof after the call to progress:
  simp
```
ACTION ITEMS:
- This is something Leo wants, and will be implemented next quarter
PULL-REQUESTS (many of them, this is a selection):
- https://github.com/leanprover/lean4/pull/11787 (incremental grind processing)
- https://github.com/leanprover/lean4/pull/11788 (initial PR for SymM monad)
- https://github.com/leanprover/lean4/pull/11838 (benchmark for intro/apply)
- https://github.com/leanprover/lean4/pull/11870 (benchmark for simplifier)
- https://github.com/leanprover/lean4/pull/11898 (benchmark for lambdas / funext)
## Lean in google3 (monorepo)
Hacks of interest:
* Lean's aggressive symlink resolution with `realpath` is at odds with Google's content-addressed filesystem. Patching internally is easy enough.
### Storing the `.lake` config outside the source tree
Putting the `.lake` folder in the working directory is hostile to that filesystem too. We solve this with:
```diff
--- a/src/lake/Lake/Load/Lean/Elab.lean
+++ b/src/lake/Lake/Load/Lean/Elab.lean
@@ -10,6 +10,7 @@ import Lake.DSL.Attributes
 import Lake.Load.Config
 import Lake.Build.Trace
 import Lake.Util.Log
+import Lake.Google
 
 /-! # Lean Configuration Elaborator
@@ -252,10 +253,11 @@ def importConfigFile (cfg : LoadConfig)
     return env
   else
     elabConfig (← acquireTrace h) trace.options
+  -- always recreate the symlink, in case we changed our mind about where it should point
+  Google.mkLakeSymlink cfg.lakeDir
   if (← traceFile.pathExists) then
     validateTrace <| ← IO.FS.Handle.mk traceFile .read
   else
-    IO.FS.createDirAll cfg.lakeDir
   match (← IO.FS.Handle.mk traceFile .writeNew |>.toBaseIO) with
   | .ok h =>
     h.lock; elabConfig h cfg.lakeOpts
```
where `Google.mkLakeSymlink` runs some commands to determine the "workspace root", perhaps analogous to `git rev-parse --show-toplevel`. Ultimately this creates a symlink from `/workspaces/foo/mathlib4/.lake` to `/var/cache/lean-workspaces/foo/mathlib4`.
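A minimal sketch of what such a helper could look like (everything here is an assumption: the real helper is internal, and the use of `git` and `ln` below is purely illustrative):

```lean
-- Hypothetical sketch of a `mkLakeSymlink`-style helper.
def mkLakeSymlink (lakeDir : System.FilePath) : IO Unit := do
  -- assumption: use git to find the workspace root (the real helper
  -- presumably queries Google's own tooling instead)
  let root ← IO.Process.run { cmd := "git", args := #["rev-parse", "--show-toplevel"] }
  let name := (System.FilePath.mk root.trim).fileName.getD "unknown"
  let target := (System.FilePath.mk "/var/cache/lean-workspaces").join (System.FilePath.mk name)
  IO.FS.createDirAll target
  -- core `IO.FS` exposes no symlink API, so shell out to `ln` (assumption)
  let _ ← IO.Process.run { cmd := "ln", args := #["-sfn", target.toString, lakeDir.toString] }
```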
### Patching third-party packages (e.g. Aesop)
When importing third-party packages, we patch them along the lines of
```diff
--- lakefile.toml
+++ lakefile.toml
 [[require]]
 name = "batteries"
+path = "../batteries"
 rev = "v4.22.0-rc4"
```
and similarly for `lakefile.lean`. It would be great if there were a global config somewhere to supply this indirection from Reservoir identifiers to _relative_ local paths.
One possible implementation would be to have a hierarchy of configs, perhaps:
* In parent directories of the current lake project
* In a per-toolchain config managed by elan
* In a global system-wide config
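For instance, the per-toolchain level could be a small override file (the file name and schema below are entirely hypothetical):

```toml
# Hypothetical: a lake-overrides.toml managed by elan per toolchain
[overrides.batteries]
path = "../batteries"
```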
### Sanitizer and -fwrapv
clang [documents](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html#:~:text=Note%20that%20checks%20are%20still%20added%20even%20when%20%2Dfwrapv%20is%20enabled) that `-fwrapv` does not automatically turn off the integer-overflow sanitizer, and AlphaProof's build linking Lean does not turn off that sanitizer anyway. Let's patch Lean to not need `-fwrapv`
(done in https://github.com/leanprover/lean4/pull/12098)
### GMP allocation api
Allow setting a maximum `Nat` size via [`mp_set_memory_functions`](https://gmplib.org/manual/Custom-Allocation), to avoid `Nat` "hand grenades".
AlphaProof already does this to the Lean-internal GMP.
### Lean resource limits
Andres would like a documented way to disable *all* resource limits in Lean. Hacky would be fine, e.g. a bunch of lines to copy-paste in. But it would be good if somebody who maintains the relevant code also maintained this mechanism, so he doesn't have to learn about every new limit and its associated knob.
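For reference, a non-exhaustive sketch of the knobs that exist today (all of them appear elsewhere in these notes; the full, maintained list is exactly what is being requested):

```lean
-- Per-file limits one currently has to know about (non-exhaustive):
set_option maxHeartbeats 0      -- disable the elaboration "heartbeat" budget
set_option maxRecDepth 10000    -- raise the elaborator recursion depth
-- plus process-level limits, e.g. thread stack size via `lean --tstack=1000000`
```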