---
breaks: false
---
# Split entity history
For some kinds of entities, like Uniswap tokens, we have a large number of
versions for each entity as each entity gets updated very frequently. At
the same time, such entities are usually only queried close to the subgraph
head, and very old versions of these entities are rarely needed, even
though we promise users that we will keep that history available and serve
queries for historical data.
Over time, the tables for such entities consist almost entirely of
historical entity versions, and queries get slowed down by the presence of
these historical entity versions that are not needed to form the result of
a query. It's common to see tens of thousands of entity versions per entity
in such tables. In one example, the `token` table contains about 20M rows
(versions) for 1000 distinct tokens and therefore 20,000 versions per
entity. These ratios will only get worse as time passes.
The basic idea of the special treatment for such entities is to separate
the versions that are close to the subgraph head from historical versions,
so that most queries can be executed against a much smaller table and will
therefore be much faster. While this will mostly help queries, with the
right setup it will also help indexing, especially reverts in such tables,
as we'll see.
The overriding concern for split entity history is to make it possible to
decide quickly whether a query should be run against the large table
containing historical data, or the much smaller table with recent data by
organizing entity versions in such a manner that, when running a query at
block b, all entity versions that are needed to form the result are in one
of the two tables. Unfortunately, that means that the tables do not
partition the data: there are entity versions that need to be present in
both tables. As an example, think of an entity that is created at block 0
and then never modified; such an entity is needed to respond to a query at
_any_ block.
### Setup
For a version of entity e that is valid from blocks l to u, we will write
e[l,u). Here, l is the block at which the entity version was created and u
the block at which it was updated or deleted. The current version of an
entity always has u = ∞.
We will store versions of an entity type in two tables instead of one: a
table H for historical versions and a table R for recent versions close to
the subgraph head. Both H and R have exactly the same structure, and in
particular the exact same columns.
We pick a distance d as a parameter and set the _cut point_ c = h - d where
h is the current subgraph head. A query at block b will then either be run
against H (if b < c) or against R (if b ≥ c). The distance d should be
large enough that we can run most queries against R, but still small enough
that R will be a relatively small table. The table H is therefore
responsible for entity versions overlapping [0, c) whereas R is responsible
for entity versions overlapping [c, ∞).
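As a minimal sketch (in Python, with hypothetical names), the routing rule is just a comparison against the cut point:

```python
def table_for_query(b: int, c: int) -> str:
    """Route a query at block b: table H serves b < c, table R serves b >= c."""
    return "H" if b < c else "R"
```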
To enable this query strategy, the table H needs to contain all entity
versions that are visible before block c, in other words, an entity version
e[l,u) must be in H if [l, u) overlaps [0, c), i.e., if l ≤ c. Similarly,
table R must contain all entity versions that are visible after block c. An
entity version e[l, u) must be in R if [l,u) overlaps [c, ∞), i.e., when c
< u. These conditions also mean that some entity versions need to be in
both H and R: exactly those versions e[l, u) for which c∊[l, u). For the
next section, it is useful to also think about when an entity version e[l,
u) does not belong into one of these tables: if l > c, it should not be in
H, and if c ≥ u it should not be in R.
This table summarizes these conditions. Since this can get very confusing,
[this
diagram](https://whimsical.com/entity-versions-CpqZ6xXCKyc5L8UV7RQfRp)
shows some examples of how entities are organized.
| e[l,u) | membership |
|-----------|------------|
| l ≤ c | ∊ H |
| c < u | ∊ R |
| l ≤ c < u | ∊ H ∩ R |
| l > c | ∉ H |
| c ≥ u | ∉ R |
These formulas also make intuitive sense: an entity version e[l,u) is
purely historical if it is in H but not in R, which translates to l < u ≤
c. Similarly, an entity version is entirely recent if it is in R but not in
H, i.e., if c < l < u.
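The membership conditions can be written down directly as predicates. This is an illustrative Python sketch, not graph-node code; `in_H` and `in_R` are made-up names:

```python
INF = float("inf")

def in_H(l, u, c):
    # e[l,u) must be in the historical table H when l <= c
    return l <= c

def in_R(l, u, c):
    # e[l,u) must be in the recent table R when c < u
    return c < u
```

For the entity created at block 0 and never modified, i.e. e[0, ∞), both predicates hold, which is exactly the case where a version must live in both tables.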
### Operations
We perform a number of operations on subgraph data. Here is how they affect
H and R:
#### Insert
When inserting an entity, we insert a version e[h, ∞) at the subgraph head
h. We insert it into R but since c < h, that version does not need to go
into H.
#### Delete
When deleting an entity, we find its old version e[l, ∞) and change its
range to e[l, h). Here it can happen that the old version is in both H and
R, and we need to perform the update of the block range in both tables.
#### Update
An update is always a delete followed by an insert and follows the above
two steps.
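To make the bookkeeping concrete, here is a toy in-memory model of the three operations. All names are hypothetical, and block ranges are stored as mutable `[l, u)` pairs:

```python
INF = float("inf")

class SplitStore:
    """Toy model of one split entity table; not the actual implementation."""

    def __init__(self):
        self.H = {}  # entity id -> list of [l, u) block ranges
        self.R = {}

    def insert(self, eid, h):
        # A new version e[h, inf) only ever goes into R, since h > c.
        self.R.setdefault(eid, []).append([h, INF])

    def delete(self, eid, h):
        # Clamp the current version e[l, inf) to e[l, h); that version may
        # live in both tables, so the clamp is applied to both.
        for table in (self.H, self.R):
            for rng in table.get(eid, []):
                if rng[1] == INF:
                    rng[1] = h

    def update(self, eid, h):
        # An update is a delete followed by an insert.
        self.delete(eid, h)
        self.insert(eid, h)
```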
#### Advancing the subgraph head
When we advance the subgraph head forward from h' to h, i.e. h' < h, we
also advance the cut point from c' to c. Nothing changes for entity
versions e[l, u) that were already in H since l ≤ c' < c. Entity versions
e[l, u) that were in R but not in H before need to be copied from R to
H. That is the case for entity versions with c' < l ≤ c.
There might also be entity versions e[l,u) that were in R but now do not
belong there any longer: that's the case when c' < u ≤ c. Those entity
versions should be deleted from R. Whether they are deleted though does not
affect correctness: we only require that all entity versions e[l,u) with c
< u are in R, having versions with c ≥ u in R does not make queries
incorrect.
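The copy-then-prune step when the cut point advances can be sketched like this, using the same toy `[l, u)` representation (hypothetical names):

```python
def advance_cut(H, R, c_old, c_new):
    """Move the cut point forward from c_old to c_new (c_old < c_new)."""
    for eid, ranges in R.items():
        for l, u in ranges:
            # Versions that were in R but not in H now also belong in H.
            if c_old < l <= c_new:
                H.setdefault(eid, []).append([l, u])
        # Drop versions R is no longer responsible for (u <= c_new);
        # keeping them would not be incorrect, only wasteful.
        ranges[:] = [r for r in ranges if r[1] > c_new]
```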
#### Rewinding the subgraph head
When we move the subgraph head backwards from h' to h, where h < h', we
need to do what we already do: delete all entity versions e[l,u) for which
l > h and 'unclamp' all entity versions where l ≤ h < u < ∞ which amounts
to undeleting that version. This can affect entity versions in both the H
and the R table.
We also move the cut point from c' back to c, c' > c. To maintain the
property that c = h - d, we have to copy entities from H to R, namely the
ones that should now be in R but were not previously, i.e., when c < u ≤
c'.
If we choose d to be much larger than the reorg threshold, and we accept
that the distance between the subgraph head h and the cut point c will not
always be exactly d, we can avoid copying from H to R by _not_ moving the
cut point on a revert, as long as h - c remains larger than the reorg
threshold.
In that situation, we can also use the result of reverting in R to avoid
reverting in H: as we delete and unclamp in R, we can look at the block
ranges affected by these operations, and decide based on the block ranges
if they are in H and therefore need to be reverted there, too. When we
delete or unclamp an entity version e[l, u) in R, we only need to delete or
unclamp it from H if l < c. If we delete an entity version with l = c in R,
we need to unclamp it in H.
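A sketch of the revert, under the simplifying assumptions that the cut point does not move (c < h) and that unclamps are simply applied to both tables; since deleted versions have l > h > c, the deletion pass can only ever match rows in R:

```python
INF = float("inf")

def revert(H, R, h):
    """Rewind to head h without moving the cut point."""
    for table in (R, H):
        for eid, ranges in table.items():
            # Delete versions created after the new head.
            ranges[:] = [r for r in ranges if r[0] <= h]
            # Undelete (unclamp) versions that were clamped after block h.
            for r in ranges:
                if r[0] <= h < r[1] < INF:
                    r[1] = INF
```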
#### Queries
The discussion so far assumed that all changes are made atomically and in
isolation. Since the data will be queried at the same time that these
changes are made by the subgraph writer, we can only guarantee that queries
are executed against the right table if none of these operations happen
while a SQL query is executing. Guaranteeing that would require a stronger
transaction isolation level than `READ COMMITTED`, or some manual locking,
neither of which is desirable from a performance perspective.
We already have a mechanism in place that detects that a revert happened
while a GraphQL query was executing and returns an error in that case. But
we also need to consider the effect of moving the block pointer, and with
it the cut point, forward. If that happens while a query executes, an
entity version from R that is needed to form the query result might vanish
from R. We can exploit the fact that we do not have to delete 'invisible'
entity versions from R right away to avoid locking: if we wait with the
deletion for at least as long as the slowest query can take, we can
guarantee that queries always see all the data they need even as the
subgraph head moves forward.
We can therefore pick another parameter s that corresponds to the number of
blocks that might come in while a query is executing, and only remove
entity versions from R once u ≤ c - s, rather than cleaning up right away
when u ≤ c.
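The delayed cleanup then amounts to pruning with a safety margin (hypothetical sketch, same `[l, u)` representation):

```python
def prune_recent(R, c, s):
    """Remove versions from R only once u <= c - s; the safety margin s
    keeps versions visible to queries that started before the head moved."""
    for eid, ranges in R.items():
        ranges[:] = [r for r in ranges if r[1] > c - s]
```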
#### Syncing
There is no point in maintaining the H and the R table while a subgraph is
still syncing since we know that everything we write during syncing will
eventually wind up in table H. Syncing should therefore always write to
table H, and table R should only be populated once the subgraph is
synced.
While the subgraph is syncing, we set the cut point to a value that will
send all queries to table H. When the subgraph is synced, we set the cut
point to the block at which it synced, and start populating table R.
### Metadata and configuration
There are two parameters that need to be set through configuration that
influence the behavior of split history: the distance d from the subgraph
head that is used to set the cut point, and the safety distance s that
keeps entity versions in R around for a little longer than strictly needed
to ensure queries execute correctly when the subgraph head moves at the
same time.
The metadata that needs to be maintained is (1) whether split history
should be enabled for a certain entity type, which for now will be toggled
through a `graphman` command and (2) the cut point c. Because we want to
avoid moving the cut point for reverts or shallow rewinds, we need to store
it explicitly and only advance it when h - d becomes bigger than c. In
other words, when we switch from storing all entities in table H to storing
them in tables H and R when the subgraph becomes synced, we set c to the
subgraph head at that point. It remains at that value until the subgraph
has progressed sufficiently so that the subgraph head is more than d blocks
ahead of that value.
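The advancement rule for the stored cut point is essentially a one-liner (sketch; names are made up):

```python
def stored_cut_point(c: int, h: int, d: int) -> int:
    """The stored cut point only moves forward, and only once h - d has
    overtaken it; reverts and shallow rewinds leave it unchanged."""
    return max(c, h - d)
```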
If a subgraph has multiple entity types that store their history in split
form, they can all use the same cut point and the cut point can therefore
be stored as a property of the subgraph and does not need to be specific
to an entity type/table.
### Future enhancements
#### Splitting into more than two tables
There is actually nothing special about splitting history into just two
tables H and R in the above. What we've described could be used to split
history into more than two tables T<sub>1</sub>, T<sub>2</sub>, ..,
T<sub>n+1</sub> with cut points c<sub>0</sub> = 0 < c<sub>1</sub> <
c<sub>2</sub> < .. < c<sub>n</sub> < c<sub>n+1</sub> = ∞ such that we use
table T<sub>i</sub> for a query at block b if c<sub>i-1</sub> ≤ b <
c<sub>i</sub>, matching the two-table rule that H serves b < c and R serves
b ≥ c. To maintain our sanity, and to avoid a lot of data copying,
we would only ever change the last cut point c<sub>n</sub> as described
above and set the other cut points as fixed values. For example, as history
grows we would at some point decide to add a new table and one more cut
point according to the state of the subgraph at that time.
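Generalizing the routing rule to n cut points is a binary search. This sketch uses half-open ranges c<sub>i-1</sub> ≤ b < c<sub>i</sub>, consistent with the earlier rule that a query at block b goes to H when b < c:

```python
from bisect import bisect_right

def table_index(cuts, b):
    """cuts = [c1, .., cn] in ascending order; returns i in 0..n, meaning
    table T(i+1) serves blocks with c_i <= b < c_(i+1), where c_0 = 0 and
    c_(n+1) = infinity."""
    return bisect_right(cuts, b)
```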
#### Automatically determining which tables to split
We can tell from Postgres statistics if a table contains a large number of
versions per entity. Currently, tables that should have a split history
will exhibit that, but unfortunately tables that maintain aggregations also
have a large number of versions per entity. Once subgraphs move from
manually maintained aggregations to aggregations that are known to and
maintained by `graph-node`, we should be able to detect which tables would
benefit from split history. Until then, split history needs to be enabled
manually.