---
breaks: false
---
# Aggregation
This document describes a new feature to make calculating aggregations during indexing easier, and to help speed up queries retrieving these aggregates. This is achieved by making `graph-node` aware that certain pieces of data are time series, so that it can both take tedious coding work off a subgraph author's hands and handle that data more intelligently.
## Example
As an example, consider a subgraph that has a token entity and wants to track various statistics about tokens such as volume traded and last USD price. To calculate hourly and daily aggregates of these statistics, the subgraph schema can define a time series:
```graphql
type Token @entity {
  id: ID!
  symbol: String!
  volume: BigDecimal!
}

type TokenData @timeseries(intervals: ["hour", "day"], args: ["priceUSD", "amount"]) {
  id: Int8!
  timestamp: Int8!
  token: Token!
  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
  volume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```
We mark `TokenData` as a `@timeseries` rather than an `@entity`; the `intervals` argument indicates for which time intervals to precompute aggregates, in this example hourly and daily. The `args` argument makes it possible to pass data to the aggregations that is not readily available from the `token`; in this example, `graph-cli` would generate code for a `TokenData.add` method that takes a `Token` entity, a `BigDecimal` `priceUSD`, and a `BigDecimal` `amount`. The `TokenData.add` method then takes care of recording these data points in the time series and makes sure that hourly and daily aggregates are updated.
Each time series must have a field `timestamp`. Timestamps are taken from the block at which a handler executes (for Ethereum, `block.timestamp`). To be precise, we convert the raw value to seconds, and define the start of an hour as `timestamp mod 3600 == 0` and the start of a day as `timestamp mod 86400 == 0`. That way, we do not need to understand in what timezone the timestamp is expressed.
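As a minimal sketch of that rounding rule (the `bucketStart` helper is purely illustrative and not part of the proposed API; types are AssemblyScript-style, as in the mapping code below):
```typescript
// Illustrative only: round a block timestamp (in seconds) down to the
// start of its hourly or daily bucket.
const HOUR: i64 = 3600
const DAY: i64 = 86400

function bucketStart(timestamp: i64, interval: i64): i64 {
  return timestamp - (timestamp % interval)
}

// bucketStart(1700005000, HOUR) == 1700002800
// bucketStart(1700005000, DAY)  == 1699920000
```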
Any field in the time series that does not have an `@aggregate` annotation is a dimension (or label) that acts as a grouping key; in the example, we use `token` since we want to calculate separate aggregations for each token.
The actual aggregations are marked with `@aggregate` and must have a `fn` argument that indicates how to aggregate data points for the time series. They must also have an `arg` argument that specifies which of the `args` is to be used for the aggregation.
With these definitions, a handler that deals with token swaps can then update the time series like this:
```typescript
export function handleSwap(event: SwapEvent): void {
  let token = findTokenOut(event)
  let volume = event.amountOut
  let priceUSD = calculatePriceUsd(token)
  // Record one data point; hourly and daily aggregates are updated automatically
  TokenData.add(token, priceUSD, volume)
  token.save()
}
```
GraphQL queries can access time series values with queries like the following:
```graphql
query {
  tokenData(timestamp_gte: .., timestamp_lt: ..,
            interval: "day", token: "GRT") {
    timestamp
    priceUSD
    volume
  }
}
```
## Defining a time series and aggregations
A time series is declared by adding the `@timeseries` annotation to the type declaration in the subgraph schema. It accepts the following arguments:
- `intervals`: a list of time intervals for which to compute aggregates
- `args`: a list of additional arguments for computing aggregates
A time series must have a `timestamp` field of type `Int8!` (once we have a native `Timestamp` type we'll use that). The time stamps that queries return are always rounded down to the beginning of the time interval. Additional fields that do not have an `@aggregate` annotation define the dimensions of the time series.
Aggregations are defined through fields that carry the `@aggregate` annotation. These fields must have a type of `Int`, `Int8`, `BigInt`, or `BigDecimal`. The `@aggregate` annotation accepts the following arguments:
- `fn`: one of the predefined aggregation functions (see below)
- `arg`: the name of the argument from the `args` in `@timeseries` to aggregate over
### Available aggregation functions
The aggregations are always calculated over the time intervals defined for the time series; for an hourly aggregation, `last` in the table below uses the last value recorded during that hour.
| Name | Aggregation |
|--------|----------------------------------|
| first | first value |
| last | last value |
| min | smallest value |
| max | largest value |
| count | number of times `add` was called |
| sum | sum of values |
| avg | average of values |
| var | variance of values |
| stddev | standard deviation of values |
### The `TYPE.add` method
Given a time series definition, `graph-cli` will generate a class named the same as the time series with a single static `add` method. The method takes the following arguments, in the order described here:
- an argument for each dimension of the time series, in the order in which
they are listed in the schema
- an argument for each of the entries in the `args` argument. The types are
determined based on the type of the aggregated field where each `arg` is used
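For the `TokenData` example above, the generated class would look roughly like the following sketch (hypothetical output; the exact shape of the code `graph-cli` emits may differ):
```typescript
import { BigDecimal } from "@graphprotocol/graph-ts"
import { Token } from "../generated/schema" // illustrative import path

// Sketch only: one argument per dimension (token), then one argument per
// entry in `args` (priceUSD, amount), typed after the aggregate fields
// they feed into.
export class TokenData {
  static add(token: Token, priceUSD: BigDecimal, amount: BigDecimal): void {
    // records a data point with the current block's timestamp so that the
    // hourly and daily aggregates for this token get updated
  }
}
```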
### Querying time series
Users will need to be able to indicate the following to query time series:
- the range of timestamps for which data should be returned
- the granularity (interval) for the time series data (e.g., hourly)
- whether to include an entry for the currently open time interval
- the dimensions. To start with, all dimensions need to be specified with
equality predicates
For the granularity, we'll initially only allow getting data for the exact intervals defined for the time series; it would be possible to allow some limited query time aggregations, for example, to get aggregations over 3h time windows given an hourly time series. Whether that's a good idea will depend on the performance impact of such queries.
Given a time series `Data`, the GraphQL API schema will contain a collection `data` which accepts the following arguments in its `where` clause:
- `timestamp_(lt|lte|gt|gte)`: include data whose `timestamp` falls into the given range. Without such a constraint, the time range defaults to everything from the earliest to the latest available data
- `interval` (*required*): one of the intervals specified in the `@timeseries` declaration
- for each dimension `dim`, a `dim: VALUE` filter for equality, and a `dim_in: [VALUE,..]` filter to select several values at once.
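Putting this together, a query against the `TokenData` series could look like the sketch below; whether the filters are nested in a `where` clause exactly like this is an assumption of the sketch, not a settled part of the design:
```graphql
query {
  tokenData(where: {
    timestamp_gte: .., timestamp_lt: ..,
    interval: "hour",
    token_in: ["GRT", "USDC"]
  }) {
    timestamp
    priceUSD
    volume
  }
}
```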
## Implementation
___This section needs some major revisions___
The most important aspect of implementing all this is how the data is stored in the database. For each time series `Data`, we need several tables
- `data_ts`: stores the raw data points derived from calls to `add`
- `data_<interval>`: for each time series interval, one table to hold the
aggregates for that interval
In the `TokenData` example, we'd have tables `token_data_ts`, `token_data_1h`, and `token_data_1d`.
The `data_ts` table will have the following attributes:
- a `timestamp int8` attribute
- for each dimension, an attribute for that dimension in the same way that a normal `@entity` would have them
- for each aggregate `agg`, an attribute for that aggregate; if the aggregate uses the `avg` function, we create two attributes `agg_sum` and `agg_count`; if the aggregate uses the `var` or `stddev` function, we create three attributes `agg_sum`, `agg_sum_sq`, and `agg_count`, where `agg_sum_sq` contains the sum of the squares of the data points. (There is some opportunity for optimization if the time series already calculates sums or counts of the same fields.)
- a `block int4` attribute that indicates at which block the data was
recorded
The primary key for `data_ts` is the combination of `timestamp`, all dimensions and the `block` column.
The `data_ts` table does not need any indexes beyond the primary key, which should help speed up writes.
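For the `TokenData` example, `token_data_ts` could look roughly like this (a sketch only; the column types and the representation of entity references are assumptions, not the final layout):
```sql
-- Sketch of token_data_ts; names and types are illustrative
create table token_data_ts (
    timestamp  int8 not null,
    token      text not null,     -- dimension: reference to the Token entity
    price_usd  numeric not null,  -- data point feeding the "last" aggregate
    volume     numeric not null,  -- data point feeding the "sum" aggregate
    block      int4 not null,
    primary key (timestamp, token, block)
);
-- An "avg" aggregate would instead get agg_sum and agg_count columns,
-- "var"/"stddev" additionally agg_sum_sq; no indexes beyond the primary key.
```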
The `data_<interval>` tables will have a similar schema, except that `avg`, `var`, and `stddev` aggregates are calculated when a new entry is added from the `data_ts` table. The `block` attribute will indicate after which block the data is visible.
Data is entered into the `data_<interval>` tables whenever the current time crosses an interval boundary: for hourly time series, at the end of each hour, for daily time series at the end of each day. There are two strategies we could use to populate a `data_<interval>` table; which one we use will depend on performance considerations:
1. Use SQL aggregation queries at the end of each interval to enter new rows (see the sketch below). With this approach, `data_ts` holds the raw data points entered with calls to `add`
1. Keep running totals in `data_ts` and copy them over. Some finesse is needed so that we do not have to re-add all the data points for each aggregation interval
The first strategy is preferable and more flexible, assuming it can be made fast enough.
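As a sketch of the first strategy for the hourly `TokenData` aggregation, with `$start` standing in for the first second of the hour that just ended, and table and column names as in the sketch above:
```sql
-- Illustrative rollup for one hourly bucket of TokenData
insert into token_data_1h (timestamp, token, price_usd, volume, block)
select $start as timestamp,
       token,
       -- "last": the latest priceUSD data point within the hour
       (array_agg(price_usd order by timestamp desc, block desc))[1] as price_usd,
       -- "sum": total volume traded within the hour
       sum(volume) as volume,
       -- the aggregate becomes visible after the hour's last block
       max(block) as block
  from token_data_ts
 where timestamp >= $start
   and timestamp <  $start + 3600
 group by token;
```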
Only dimensions for which `data_ts` actually contains entries for a given interval will receive an entry in `data_<interval>`. Filling in missing entries (whose aggregates would all have default values, mostly `0`) could either be punted to users by simply omitting them from the GraphQL response, or be done automatically during query execution.
## Open questions
- Do we need the `block` attribute? If not, how do aggregations and block constraints interact? We could ignore block constraints for time series since we know the timestamps we are looking for from the query. For reverts, we could use the block's timestamp instead of the block number.
- GraphQL filtering right now is extremely limited. We will need some more filtering possibilities so that popular queries like 'list the top 5 tokens by trading volume' can be issued. As soon as we allow that, though, we will have to put indexes on each field of the `data_<interval>` tables.
- Can we get away with a fixed set of aggregation intervals (like `hourly` and `daily`) or should we support aggregation with small (second?) granularity?
- Subgraphs also keep aggregations over all time (e.g., total volume traded for a token). Should we try to model that as a time series, too? There's not much performance advantage to doing that.
- Pruning: we could prune data in the `data_ts` table to help keep it small. We need to keep at least enough data to cover non-final blocks, and things would still work if we pruned data outside of that window, except that it would then be almost impossible to graft onto such a subgraph, and it would not be possible to rewind the subgraph to a point outside the reorg window. The precise condition is that `data_ts` must contain all data from the start of all current intervals up to the graft block or the block we are rewinding to.
- Some desirable aggregates are not very amenable to preaggregation, in particular percentiles such as the median. There are ways to approximate them within a given error tolerance by bucketing data, but precise percentiles will require query-time aggregation.