# Thoughts on prototyping polonius
One of my goals for the year is that we have a prototype of polonius working on nightly. [We've made good progress](https://blog.rust-lang.org/inside-rust/2023/10/06/polonius-update.html), and in a recent meeting Remy, Amanda Stjerna, and I discussed what the next steps might be. One of the ideas was to try to implement the [polonius revisited][pp] analysis in a reasonably naive way, just to try it out and gain experience. This blog post is me braindumping how we might do that.
[pp]: https://smallcultfollowing.com/babysteps/blog/2023/09/29/polonius-part-2/
## Types of variables
In the location-insensitive Polonius, there is a single set of variables associated with the current MIR, and each of those variables has exactly one type. We'll just write `Ty(X)` to mean that type.
## "Characteristic" origins and loans
When we talk about a *loan* L, we mean some borrow expression `&<place>` in the MIR. We typically talk about origins as a *set of loans*. However, in the implementation, it's convenient to say that each loan has a *characteristic origin* `O_L`, which we typically write as part of the expression `&'O_L <place>`. We then know that if `O_L : O` holds for some origin `O`, then `O` contains the loan `L`. Another way to look at this is that loans are just a designated subset of origins, and the "value" of an origin `O` is the set `{O_L}` of loan origins where `O_L : O` (transitively) in the subset graph.
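To make this concrete, here is a toy Rust sketch (purely illustrative; `loans_in_origin` and this graph representation are hypothetical, not compiler internals). It computes the "value" of an origin `O` as the set of loan origins `O_L` that reach `O` transitively in the subset graph:

```rust
use std::collections::{HashMap, HashSet};

/// Origins are small integer indices; a designated subset of them
/// are the characteristic origins `O_L` of loans.
pub type Origin = u32;

/// The "value" of an origin: the set of loan origins `O_L` with
/// `O_L : O` transitively. `subsets` maps each origin to the origins
/// it flows into (an edge `O1 -> O2` means `O1 : O2`).
pub fn loans_in_origin(
    subsets: &HashMap<Origin, Vec<Origin>>,
    loan_origins: &HashSet<Origin>,
    target: Origin,
) -> HashSet<Origin> {
    let mut result = HashSet::new();
    // `target` contains a loan `L` iff `O_L` reaches `target`.
    for &ol in loan_origins {
        let mut stack = vec![ol];
        let mut seen = HashSet::new();
        while let Some(o) = stack.pop() {
            if !seen.insert(o) {
                continue;
            }
            if o == target {
                result.insert(ol);
                break;
            }
            if let Some(succs) = subsets.get(&o) {
                stack.extend(succs.iter().copied());
            }
        }
    }
    result
}
```

In other words, membership of a loan in an origin is just reachability in the subset graph.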
## Analysis today
Our existing, location-insensitive polonius integration has roughly this shape:
```mermaid
flowchart TD
MIR --> TypeCheck
MIR --> Liveness
TypeCheck -- emits --> RegionConstraints
Liveness -- emits --> LiveLoans
RegionConstraints --> SubsetGraphConstruction --> SubsetGraph
SubsetGraph --> Liveness
LiveLoans --> Dataflow
Dataflow --> ActiveLoans
ActiveLoans --> DetectErrors
MIR[(MIR)]
RegionConstraints[("Region constraints:\nRC = a set {O1:O2 @ P}")]
SubsetGraph[("Subset graph:\nSG = a set {O1:O2}")]
LiveLoans[("Live loans:\nLL = a set {L @ P}")]
ActiveLoans[("Active loans:\nAL = a set {L @ P}")]
SubsetGraphConstruction["
Derive SG by removing points from region constraints.
SG = { (O1 : O2) | (O1:O2 @ P) ∈ RC }
"]
Liveness["
Liveness analysis where a loan L is live at P
if there is a live variable X at P
whose type Ty(X) contains an origin O
and (L : O) in the subset graph SG.
"]
Dataflow["
Dataflow analysis where a loan is
GEN when it is issued and
KILL when it is no longer live or
the place that was borrowed is reassigned
"]
DetectErrors["
Traverse the MIR and check the places
accessed by a statement to see if they
are constrained by any active loan.
"]
```
## TL;DR: Location-sensitive version of the analysis
This is how the new analysis will look:
```mermaid
flowchart TD
MIR --> TypeCheck
MIR --> Liveness
TypeCheck -- emits --> RegionConstraints
MIR --> ComputeLiveVars
ComputeLiveVars -- emits --> LiveVariables
Liveness -- emits --> LiveLoans
RegionConstraints --> SubsetGraphConstruction --> SubsetGraph
SubsetGraph --> Liveness
LiveLoans --> Dataflow
Dataflow --> ActiveLoans
LiveVariables --> Liveness
LiveVariables --> SubsetGraphConstruction
ActiveLoans --> DetectErrors
MIR[(MIR)]
RegionConstraints[("Region constraints:\nRC = a set {O1:O2 @ P}")]
SubsetGraph[("Subset graph:\nSG = a set {O1@P1 : O2@P2}")]
LiveLoans[("Live loans:\nLL = a set {L @ P}")]
ActiveLoans[("Active loans:\nAL = a set {L @ P}")]
LiveVariables[("Live variables:\nLV = a set {X @ P}")]
Dataflow["
Dataflow analysis where a loan is
GEN when it is issued and
KILL when it is no longer live or
the place that was borrowed is reassigned
"]
Liveness["
Liveness analysis where a loan L is live at P if
there is a live variable X at P ((X, P) ∈ LV)
whose type Ty(X) contains an origin O
and O contains L at P ((L@P : O@P) ∈ SG)
"]
DetectErrors["
Traverse the MIR and check the places
accessed by a statement to see if they
are constrained by any active loan.
"]
```
## Naming origins
In the [blog post][pp], I described having a distinct type for each variable at each program point. In practice, we'll implement that a bit differently. Just as in the location-insensitive analysis, there will be one environment `{X:T}` where each variable `X` has a type `T` that references (location-independent) origins. But when we construct the subset graph, we will consider the nodes of the graph to be an *origin at a point*, i.e., `O@P`, where a *point* `P` is either a statement or a terminator. Points can be uniquely identified by `BB/N`, i.e., the name of a basic block and an index.
What this means in practice is that we can reuse the existing MIR type check machinery. We can also continue to write `Ty(X)` to refer to the type `T` of `X`, since there is only a single environment. But whenever we add things into the subset graph, we will have to *localize* that type `T` to a particular point. We write `T @ P` to mean "the type `T` but with every origin `O` in that type replaced with `O @ P`".
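Localization itself is mechanical. As a sketch (hypothetical names; a type is modeled as just the list of origins it mentions):

```rust
/// A point `P` is a basic block plus a statement index (`BB/N`).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct Point {
    pub block: u32,
    pub index: u32,
}

/// Location-independent origins, as they appear in `Ty(X)`.
pub type Origin = u32;

/// An origin localized to a point: `O @ P`.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct OriginAtPoint {
    pub origin: Origin,
    pub point: Point,
}

/// `T @ P`: replace every origin `O` appearing in the type `T`
/// with `O @ P`.
pub fn localize(origins_of_ty: &[Origin], point: Point) -> Vec<OriginAtPoint> {
    origins_of_ty
        .iter()
        .map(|&origin| OriginAtPoint { origin, point })
        .collect()
}
```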
**Efficiency note:** This does mean that the number of origins is now (in principle) a function of the number of statements. In real life, this will be the biggest source of overhead. I believe we can do a lot to reduce this, but that's not the focus of this post.
## Computing and representing the subset graph
The subset graph in the location-sensitive analysis is much richer. Instead of just relating two origins, it relates origins at particular points. This means that a node in the graph is effectively a 64-bit quantity, with 32 bits for the origin index and 32 bits for the point. This also means that the graph will be conceptually much larger, now being of size `|O| * |CFG|` rather than just `|O|`. I believe we can reduce the number of nodes we actually need in practice with some optimizations, but that's not the subject of this post.
Even though the graph has a lot of nodes, we never need to represent the set of nodes explicitly. We only care about the edges. So we can represent the raw graph as a set of edges `O1@P1 : O2@P2` which is basically a `Map<u64, Vec<u64>>` (a sorted vec of tuples might even be good in practice).
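A minimal sketch of that representation (hypothetical names; the packing layout is just the one described above):

```rust
use std::collections::BTreeMap;

/// A node `O@P` packed into 64 bits: the high 32 bits are the
/// origin index, the low 32 bits are the point index.
pub type Node = u64;

pub fn pack(origin: u32, point: u32) -> Node {
    ((origin as u64) << 32) | point as u64
}

pub fn unpack(node: Node) -> (u32, u32) {
    ((node >> 32) as u32, node as u32)
}

/// The raw subset graph as a set of edges `O1@P1 : O2@P2`;
/// the set of nodes is never materialized explicitly.
#[derive(Default)]
pub struct SubsetGraph {
    edges: BTreeMap<Node, Vec<Node>>,
}

impl SubsetGraph {
    pub fn add_edge(&mut self, from: Node, to: Node) {
        self.edges.entry(from).or_default().push(to);
    }

    pub fn successors(&self, from: Node) -> &[Node] {
        self.edges.get(&from).map(|v| v.as_slice()).unwrap_or(&[])
    }
}
```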
### Strongly connected components
In practice we don't use raw origins in the compiler. Rather, we take that naive graph and then detect [strongly connected components][sccs] (SCCs). This makes for a more compact graph since all cycles become a single node. The [SCC code exists already][scc.rs] and is quite generic, though it may need some slight tweaks.
[sccs]: https://en.wikipedia.org/wiki/Strongly_connected_component
[scc.rs]: https://github.com/rust-lang/rust/blob/3d0e99d632f4bbeaab979d32bd8700f170ddb6b1/compiler/rustc_data_structures/src/graph/scc/mod.rs
Given this SG representation, for two origins `O1` and `O2` and two points `P1` and `P2`, we can determine whether `O1@P1 : O2@P2` by:
* computing the SCC node `S1 = SCC(O1@P1)` for `O1@P1`
* computing the SCC node `S2 = SCC(O2@P2)` for `O2@P2`
* checking for a path `S1 ~~> S2` in the SCC graph
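Assuming the SCCs and the condensed DAG have already been computed (e.g., by the existing SCC code), the check itself might look like this sketch (hypothetical names; a naive DFS rather than anything optimized):

```rust
use std::collections::{HashMap, HashSet};

pub type Node = u64; // packed O@P
pub type Scc = u32;

/// Check whether `O1@P1 : O2@P2` holds, given a precomputed map
/// from nodes to their SCC and the edges of the condensed SCC DAG.
/// Panics if a node has no SCC assigned (fine for a sketch).
pub fn subset_holds(
    scc_of: &HashMap<Node, Scc>,
    dag: &HashMap<Scc, Vec<Scc>>,
    n1: Node,
    n2: Node,
) -> bool {
    let (s1, s2) = (scc_of[&n1], scc_of[&n2]);
    // Depth-first search for a path S1 ~~> S2 in the SCC DAG.
    let mut stack = vec![s1];
    let mut seen = HashSet::new();
    while let Some(s) = stack.pop() {
        if s == s2 {
            return true;
        }
        if !seen.insert(s) {
            continue;
        }
        if let Some(succs) = dag.get(&s) {
            stack.extend(succs.iter().copied());
        }
    }
    false
}
```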
### Edges in the subset graph due to variable liveness
There are two kinds of edges we have to include in the subset graph. The first kind are the edges due to variable liveness.
Note that, in the old analysis, the subset graph constructed was based purely on the output of MIR type check. In the new analysis, it also needs to know the set `{X}` of live variables at each point `P`. If there is a control-flow edge `P1 -> P2` between two points, and a variable `X` is live at both `P1` and `P2`, then we need to relate `Ty(X) @ P1` and `Ty(X) @ P2`.
In fact, because the types have the same "spine", all we really need to know is the variance of `Ty(X)` with respect to each contained origin `O`. We can then add edges into the graph between `O@P1` and `O@P2` based on that:
* Invariant: add both edges
* Covariant: add `O@P1 : O@P2`
* Contravariant: add `O@P2 : O@P1`
* Bivariant: no edges
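The variance rules above amount to a small table. As a sketch (hypothetical names; `Node` is a packed `O@P` as described earlier):

```rust
#[derive(Clone, Copy)]
pub enum Variance {
    Invariant,
    Covariant,
    Contravariant,
    Bivariant,
}

pub type Node = u64; // packed O@P

/// Edges to add between `O@P1` and `O@P2` when a variable `X` whose
/// type mentions `O` is live across the control-flow edge P1 -> P2.
pub fn liveness_edges(o_at_p1: Node, o_at_p2: Node, variance: Variance) -> Vec<(Node, Node)> {
    match variance {
        // Invariant: add both edges.
        Variance::Invariant => vec![(o_at_p1, o_at_p2), (o_at_p2, o_at_p1)],
        // Covariant: add `O@P1 : O@P2`.
        Variance::Covariant => vec![(o_at_p1, o_at_p2)],
        // Contravariant: add `O@P2 : O@P1`.
        Variance::Contravariant => vec![(o_at_p2, o_at_p1)],
        // Bivariant: no edges.
        Variance::Bivariant => vec![],
    }
}
```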
### Edges due to type check
Type check emits constraints of the form `O1 : O2 @ P`. We can basically translate these to an edge `(O1 @ P) : (O2 @ P)` in the subset graph, but there is one subtlety. As discussed in the [previous polonius post][pp], when we are doing an *assignment*, we have to relate the *types on entry* to the *type of the place on output*. Distinguishing this properly will probably require some tweaks to the compiler; ideally, I think we would change the region constraints to actually track both points, so that we can represent this difference explicitly.
In MIR, there are actually two places where assignments occur. One is in statements; the other is in terminators -- the final part of a basic block, which can have multiple successors. This sounds like it might present a challenge, but in fact, each terminator that performs an assignment only does so along one of its branches (in the case of a call, the "success" branch that indicates no panic occurred). So when relating the return type of a function to its target place, we can use the type of the target place in the "success" branch.
## Implementing the subset graph and computing SCCs
The ultimate representation of the graph that we use is based on the idea of a [strongly connected component (SCC)][sccs],
which is basically a cycle.
The idea is that you can reduce any graph to a [DAG][] where the nodes are SCCs.
[DAG]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
To start, I imagine building up a fairly naive representation of the full subset graph.
Note that the set of nodes in the graph does not need to be represented explicitly.
As described earlier, we can represent a node (`type Node = u64`) as a 64-bit integer, where 32 bits are the `O` and 32 bits are the `P`,
and the graph itself as just a list of edges (`Map<Node, Vec<Node>>`).
## Computing live loans at a given point
The process for computing live loans is very similar to what we did before. If a variable `X` is live on entry to a program point `P` we can look at each origin `O` that appears in its type `Ty(X)`. We can walk backwards through the subset graph for `O@P` to compute the set of loan predecessors. Those are the live loans at that point.
We can compute this set with a single walk of the SCC DAG. We track for each SCC a (bit)set of loans. We start walking at the SCC node for `O@P` and visit each of its children, returning back a set of loans that get unioned together; we also include the loans directly contained in the SCC itself.
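A sketch of that walk (hypothetical names; the DAG edges here are assumed to be oriented toward loan predecessors, and `memo` caches the per-SCC result so that each SCC is only processed once across the whole analysis):

```rust
use std::collections::{HashMap, HashSet};

pub type Scc = u32;
pub type Loan = u32;

/// Compute the set of loans reachable from `start` in the SCC DAG:
/// the loans directly contained in each SCC, unioned over all SCCs
/// reachable from `start`.
pub fn loans_reachable(
    dag: &HashMap<Scc, Vec<Scc>>,
    direct_loans: &HashMap<Scc, Vec<Loan>>,
    memo: &mut HashMap<Scc, HashSet<Loan>>,
    start: Scc,
) -> HashSet<Loan> {
    if let Some(cached) = memo.get(&start) {
        return cached.clone();
    }
    // Loans directly contained in this SCC...
    let mut result: HashSet<Loan> =
        direct_loans.get(&start).into_iter().flatten().copied().collect();
    // ...unioned with the loans from each child.
    for &child in dag.get(&start).into_iter().flatten() {
        result.extend(loans_reachable(dag, direct_loans, memo, child));
    }
    memo.insert(start, result.clone());
    result
}
```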
## Computing active loans
Computing active loans works the same way it did before.