## Overview
The bulk pricing for coretime depends on factors from several sources:
1. Hardcoded in the pallet (implementation) or configurable in the runtime
2. Parametrised configuration items (set through governance)
3. Market forces (the impact of which can be increased or decreased through the configuration set by governance)
I'll use Rococo code to demonstrate and reference everything; I don't anticipate any difference in implementation for Kusama and Polkadot. The only change in governance is that things will be configured via referenda instead of via sudo calls. The change in market forces is profound, though, as KSM and DOT have real-world value whereas ROC and WND do not (except in one case where apparently somebody tried to buy Rococo coretime for KSM 😅).
## 1. Implementation
### Pallet level
The main structure for the progression of prices is in the pallet [here](https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/broker/src/tick_impls.rs#L166-L190), inside `rotate_sale`, which is called at the end of each region.
The price calculation is:
```rust
let price = {
    let offered = old_sale.cores_offered;
    let ideal = old_sale.ideal_cores_sold;
    let sold = old_sale.cores_sold;
    let maybe_purchase_price = if offered == 0 {
        // No cores offered for sale - no purchase price.
        None
    } else if sold >= ideal {
        // Sold more than the ideal amount. We should look for the last purchase price
        // before the sell-out. If there was no purchase at all, then we avoid having a
        // price here so that we make no alterations to it (since otherwise we would
        // increase it).
        old_sale.sellout_price
    } else {
        // Sold less than the ideal - we fall back to the regular price.
        Some(old_sale.price)
    };
    if let Some(purchase_price) = maybe_purchase_price {
        T::PriceAdapter::adapt_price(sold.min(offered), ideal, offered)
            .saturating_mul_int(purchase_price)
    } else {
        old_sale.price
    }
};
```
So we have three paths:
1. If no cores were offered, we shortcut the price progression and the next sale just uses the old sale's base price.
2. If cores were offered and at least the target number of cores was sold (this target is specified via configuration), we take the price of the last core sold before the sell-out as the `purchase_price`. If there was no purchase at all, `sellout_price` is `None` and the price is left unaltered.
3. If cores were offered but fewer than the target were sold, we use the old sale's base price as the `purchase_price`.
In the two cases where a `purchase_price` exists, it is then multiplied by the factor returned by the `adapt_price` method of the `PriceAdapter` to give the next sale's base price.
The `PriceAdapter` is specified in the broker `Config` in the runtime. At the pallet level, not much is pinned down: prices could go up or down depending on how the pallet is configured in the runtime, the governance configuration, and market conditions.
The one thing that is defined at this level is that when no cores are offered in the previous sale, the price remains the same for the next sale.
For more insight we need more details. Onwards.
### Runtime level
The `PriceAdapter` is a type defined in the `broker::Config` in the runtime which implements the `AdaptPrice` trait.
The price adapter is defined as:
```rust
/// The algorithm to determine the next price on the basis of market performance.
type PriceAdapter: AdaptPrice;
```
The `AdaptPrice` trait has two methods: one which defines the lead-in factor (`leadin_factor_at`) and one which defines how the price changes from sale to sale (`adapt_price`).
`leadin_factor_at` changes the price during the lead-in period based on how far through the lead-in period we are.
`adapt_price` changes the base price for the next sale depending on the number of cores sold, the ideal number of cores sold and the number of cores for sale.
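For reference, the trait looks roughly like this (a sketch reconstructed from the two method signatures used by the `Linear` implementation below, not copied verbatim from the pallet):
```rust
use sp_arithmetic::FixedU64;

type CoreIndex = u16; // as in the broker pallet

/// Determines the next price on the basis of market performance (sketch).
pub trait AdaptPrice {
    /// The factor by which the base price is multiplied at a given point in the
    /// lead-in period; `when` runs from 0 (start) to 1 (end).
    fn leadin_factor_at(when: FixedU64) -> FixedU64;
    /// The factor by which the previous sale's purchase price is multiplied to
    /// give the next sale's base price.
    fn adapt_price(sold: CoreIndex, target: CoreIndex, limit: CoreIndex) -> FixedU64;
}
```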
#### Linear
There is one type that implements `AdaptPrice` so far - `Linear`. This is what we have set for Rococo (and likely will set something similar for Kusama and Polkadot), but this could change in the future for any of the chains with a runtime upgrade.
Linear just means that each unit of time that elapses changes the value by a fixed amount, i.e. plotting it on axes with evenly-spaced increments would give you a straight line. So for a hypothetical sale with a lead-in period 4 blocks long, the price would start at double the base price, then decrease by a quarter of the base price each block until it hits the base price itself at the end of the lead-in period.
##### `leadin_factor_at`
```rust
fn leadin_factor_at(when: FixedU64) -> FixedU64 {
    FixedU64::from(2).saturating_sub(when)
}
```
The `Linear` `leadin_factor_at` gives the factor which is multiplied by the base price to give the sale price at a given block in the lead-in period. In this implementation it starts at a maximum of 2, then reduces as we progress through the lead-in period until it reaches a factor of 1. The `when` parameter is a fraction between 0 and 1 representing how far through the lead-in period we are (0 at the start, 1 at the end, linearly interpolated in between).
For a sale starting at block 0, with an interlude of length 1, a lead-in length of 4, a base price of 100 DOT, and this `Linear` `PriceAdapter`:
```
block 0 is the interlude - no sales allowed, only renewals
block 1: 200 DOT
block 2: 175 DOT
block 3: 150 DOT
block 4: 125 DOT
block 5 onwards: 100 DOT (the base price, we're now out of the lead-in period)
```
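To make that progression concrete, here is a minimal standalone sketch that reproduces the table above (assuming `sp-arithmetic` as a dependency; the lengths and base price are the toy values from this example):
```rust
use sp_arithmetic::{traits::Saturating, FixedPointNumber, FixedU64};

fn main() {
    let base_price: u128 = 100; // DOT, toy units
    let interlude_length = 1u32;
    let leadin_length = 4u32;
    for block in 1..=5u32 {
        // Fraction of the way through the lead-in period, clamped to [0, 1].
        let progress = FixedU64::from_rational(
            (block - interlude_length).min(leadin_length).into(),
            leadin_length.into(),
        );
        // Linear::leadin_factor_at: starts at 2 and falls linearly to 1.
        let factor = FixedU64::from(2).saturating_sub(progress);
        println!("block {block}: {} DOT", factor.saturating_mul_int(base_price));
    }
}
```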
##### `adapt_price`
This is the function referred to in the overview for the bulk pricing progression; its result is multiplied by the `purchase_price` from the previous sale to get the new base price. The `Linear` implementation is shown below:
```rust
fn adapt_price(sold: CoreIndex, target: CoreIndex, limit: CoreIndex) -> FixedU64 {
    if sold <= target {
        FixedU64::from_rational(sold.into(), target.into())
    } else {
        FixedU64::one().saturating_add(FixedU64::from_rational(
            (sold - target).into(),
            (limit.saturating_sub(target)).into(),
        ))
    }
}
```
This is relatively simple, with two cases:
1. The same or fewer cores were sold than the target
2. More cores were sold than the target
In the first case, `adapt_price` returns a number less than or equal to one, since `sold <= target`: the purchase price is scaled down in proportion to the ratio of sold to target cores.
In the second case, `adapt_price` returns a number greater than one, as we are adding the ratio of two positive numbers to one: `sold > target`, and since `limit >= sold` (we cannot sell more cores than we offer for sale), `limit > target`.
To give a few key examples, suppose a sale offered 5 cores (the limit), the configuration specified 40% as the ideal ratio (so 2 cores is the target), and the `purchase_price` was 90 DOT.
```
0 cores are sold: 0 (the next price is zero!)
1 core was sold: 0.5 (the price is halved to 45 DOT)
2 cores were sold: 1 (the price is kept the same at 90 DOT)
4 cores were sold: 1 + (4-2)/(5-2) ~ 1.67 (the price increases by two thirds to 150 DOT)
5 cores were sold: 1 + (5-2)/(5-2) = 2 (the price is doubled to 180 DOT)
```
## 2. Governance
In the previous section I used toy numbers throughout, but with the real implementation of the equations. The actual numbers used as inputs depend on two broker pallet extrinsics used to set things up on the coretime chain:
1. `configure`
2. `start_sales`
### Configure
The `configure` extrinsic is called first and sets the following variables that are relevant to pricing:
```rust
/// The length in blocks of the Interlude Period for forthcoming sales.
pub interlude_length: BlockNumber,
```
`interludeLength`: We want to give people ample time to renew without shortening the sale too much. This has little impact on pricing but is relevant for renewals.
```rust
/// The length in blocks of the Leadin Period for forthcoming sales.
pub leadin_length: BlockNumber,
```
`leadinLength`: The number of blocks over which the price drops; after the lead-in, the price is fixed. At the start of this period the price will be double the base price for the sale. The base price is worked out using `adapt_price`, based on the previous sale, when the sale is rotated - see section 1.
```rust
/// The length in timeslices of Regions which are up for sale in forthcoming sales.
pub region_length: Timeslice,
```
`regionLength`: The length, in timeslices, of an off-the-shelf bulk region. Timeslices are 80 blocks long (set at the runtime level). This is what people buy and can partition, interlace, etc.
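As a worked example (illustrative numbers, not taken from this document): with 6-second blocks a timeslice is 80 × 6 s = 8 minutes, so a `regionLength` of 5040 timeslices comes out to 5040 × 8 min = 28 days, matching the 28-day cadence mentioned under `renewalBump` below.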
```rust
/// The proportion of cores available for sale which should be sold in order for the price
/// to remain the same in the next sale.
pub ideal_bulk_proportion: Perbill,
```
`idealBulkProportion`: A `Perbill` is just parts per billion. I haven't seen much discussion of what people want this to be, but it also depends on the choice of the number of cores offered in each sale. With a low number of cores offered per sale, I would expect this to be quite a high proportion.
```rust
/// An artificial limit to the number of cores which are allowed to be sold. If `Some` then
/// no more cores will be sold than this.
pub limit_cores_offered: Option<CoreIndex>,
```
`limitCoresOffered`: This is the limit on the number of cores offered in each sale. The actual number of cores offered in a given sale may be lower if there are not enough cores at that point in time.
```rust
/// The amount by which the renewal price increases each sale period.
pub renewal_bump: Perbill,
```
`renewalBump`: This bump is applied every 28 days, which means you need to apply it about 13 times (365.25/28) to get an annual figure. People have spoken about a 2% bump, which is roughly a 1.02^13 ~ 30% annual increase.
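A back-of-the-envelope sketch of that compounding (plain Rust, using the toy 2% figure; not pallet code):
```rust
fn main() {
    let bump = 1.02_f64; // a 2% renewal_bump per 28-day sale period
    let periods_per_year = 365.25 / 28.0; // ~13.04 sale periods per year
    let annual_increase = bump.powf(periods_per_year) - 1.0;
    println!("annual renewal increase: {:.1}%", annual_increase * 100.0); // ~29.5%
}
```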
### Start sales
This is how we bootstrap into the self-repeating sales.
We have two numbers to specify here:
`initial_price`: The price of Bulk Coretime in the first sale.
`core_count`: The number of cores which can be allocated.
From this point, the first sale is scheduled and the sale is rotated automatically at the start of every region.
## 3. Market forces
This part is more of a discussion of extreme examples, as most of the points about where market forces affect prices have already been covered.
The market forces are powerful, but the configuration can be adapted over time if the price corrections become too extreme. For example, changing the renewal bump, the limit on the number of cores offered, or the ideal bulk proportion will all serve to keep this in check.
Changing the cores offered would fix a situation where bulk prices increase rapidly across sales. If every sale sells out, the base price doubles at each `rotate_sale`; if on top of that all cores sell out immediately, the purchase price of the last sale also doubles, which means a 4x difference between base prices in adjacent sales. BUT if the same market forces persist in the next sale, that becomes a 16x increase over two sales... this very quickly spikes the sale price.
Changing the renewal bump would fix a scenario where everybody who buys a core just keeps renewing it, leading to maximal occupancy of the chain (no free cores to sell). This doesn't mean maximal usage: people sitting on cores might not actually be using them, and therefore should not be incentivised to stay. Inflation is at 10%, so the renewal bump should always be more than this annually; with the ~30% annual bump above, there is a ~20pp pressure to give up unused bulk cores.
Changing the bulk proportion could also fix a scenario where every sale sells out, as we could set it much closer to the number of cores actually sold and reduce the price impact of each sale period.
Some interesting reading:
[Learning to bid in sequential Dutch auctions](https://www.sciencedirect.com/science/article/abs/pii/S0165188914002516)
[Simplified Sequential Dutch Auctions](https://blog.oighty.com/simplified-sda)
[Bidding Behaviour in Dutch Auctions](https://www.jstor.org/stable/26771939)
The price of bulk coretime is a periodic Dutch auction and would thus be well modelled by a second-order underdamped system. The factors that we set in the price adapter essentially form a parametrised damping force. If we set factors that are much too high, we will actually take much longer to reach the ideal price (the equilibrium of the system), as the overshoot and undershoot will be larger and will then take more cycles to settle. This is contrary to what other people have been suggesting: that starting with a low price and pumping the factors really high is the best way to go.
I actually think that setting the most realistic starting price we can and keeping the adaption factors relatively low is the best way. I think this is especially the case with a low number of cores per month, as the percentage deviation of the cores sold from the target is the main input to the adaption factor, and with a low number of cores the difference between selling the target versus one over or one under could blast the price out of equilibrium again for multiple sales.
Obviously we don't want to go too far and over-damp the system to the point where the market has limited power to affect the price, which would instead be set mostly by the initial conditions and first sale; but going to the other extreme would lead to high volatility.
## Going deeper
### Zero price behaviour
At the minute, with the default implementation of `AdaptPrice` in the broker pallet (`Linear`), if nobody buys a core in a sale, the price drops to zero, and there is no way to come back from that in a Dutch auction model as implemented here, since all the corrections are purely multiplicative. This will need to be discussed and fixed before being deployed on production chains.
To demonstrate this issue, take the following scenario:
1. There's a sale which offers cores.
2. Nobody buys a core.
3. The price for the next sale is zero. The lead-in factor is 2, but 2 × 0 is still zero.
4. People "buy" cores for nothing in the next sale (cores are sold, but the last price is 0).
5. Those people can renew forever for 0 DOT.
6. The next sale rotation comes around and, even if all cores sell out, it calculates the next sale price as... 0.
7. Ad infinitum.
Plotting the current implementation of `adapt_price` for `Linear` gives us the following:
![current_adapt_price](https://hackmd.io/_uploads/SklvVsN3a.png)
Where the price doubles if the upper limit is sold, whereas if the lower limit (0) is sold we end up with a zero price, which, as discussed, the current implementation has no way to come back from.
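To see the trap in code, here is a minimal sketch (assuming `sp-arithmetic`; `adapt_price` is the `Linear` implementation reproduced from above, with `CoreIndex` as `u16`):
```rust
use sp_arithmetic::{
    traits::{One, Saturating},
    FixedPointNumber, FixedU64,
};

// The Linear implementation reproduced from above.
fn adapt_price(sold: u16, target: u16, limit: u16) -> FixedU64 {
    if sold <= target {
        FixedU64::from_rational(sold.into(), target.into())
    } else {
        FixedU64::one().saturating_add(FixedU64::from_rational(
            (sold - target).into(),
            (limit - target).into(),
        ))
    }
}

fn main() {
    let mut price: u128 = 100;
    // A sale in which nobody buys: sold = 0 gives a factor of 0.
    price = adapt_price(0, 2, 5).saturating_mul_int(price);
    assert_eq!(price, 0);
    // Even a complete sell-out afterwards cannot recover: 2 * 0 = 0.
    price = adapt_price(5, 2, 5).saturating_mul_int(price);
    assert_eq!(price, 0);
}
```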
### The fix
There are two distinct issues here:
1. The Linear implementation in the broker pallet has behaviour that does not seem like a sane default for the scenario where prices fall to 0
2. There is no way in the pallet to specify a minimum value
Issue 1 can be trivially solved for Kusama and Polkadot by just not using this implementation, and either setting a minimum value or going for a solution discussed in the next section (or both).
Issue 2 could be addressed with a new parameter configurable in the runtime through the `pallet_broker` `Config`: if one were able to set a minimum value below which the price could not fall, we would not have the zero-point issue. This could also be set in the implementation directly (feeding into point 1).
If instead we aim for a more (conceptually) "symmetrical" function, we can make it so that the price is halved if we sell the lower limit and doubled if we sell the upper limit, with linear interpolation between these extremes, while maintaining the property that `sold == target` keeps the base price the same.
This implementation looks like this:
```rust
fn adapt_price(sold: CoreIndex, target: CoreIndex, limit: CoreIndex) -> FixedU64 {
    if sold <= target {
        // Range of (0.5..=1)
        FixedU64::from_rational(1, 2)
            .saturating_add(FixedU64::from_rational(sold.into(), (2 * target).into()))
    } else {
        // Range of (1..=2)
        FixedU64::one().saturating_add(FixedU64::from_rational(
            (sold - target).into(),
            (limit - target).into(),
        ))
    }
}
```
The second branch of the `if` is unchanged, whereas the first branch is adapted such that the sold/target ratio (which previously had a range of 0..=1) is halved and then added to 0.5. This keeps the maximum at 1 while giving us the desired minimum of 0.5.
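Reusing the toy numbers from section 1 (5 cores offered, a target of 2, and a `purchase_price` of 90 DOT), the new factors would be:
```
0 cores are sold: 0.5 (the price is halved to 45 DOT)
1 core was sold: 0.5 + 1/4 = 0.75 (67.5 DOT)
2 cores were sold: 1 (the price is kept the same at 90 DOT)
4 cores were sold: 1 + (4-2)/(5-2) ~ 1.67 (150 DOT)
5 cores were sold: 2 (the price is doubled to 180 DOT)
```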
Plotting this looks like the following:
![updated_adapt_price](https://hackmd.io/_uploads/SJjmLsNnT.png)
### Deeper and deeper
If we are to suggest a general `PriceAdapter` to define the price of a bulk core at some time within a sale, making only minimal assumptions, we would separate it into three factors:
$$ p_{n}(t) = p_{\textrm{last}_{n-1}} \cdot A(c_{\textrm{sold}_{n-1}}, c_{\textrm{target}_{n-1}}, c_{\textrm{limit}_{n-1}}) \cdot L(t) $$
Where:
$p_{n}(t)$ is the price in sale $n$ at a given point in time $t$ within the sale,
$p_{\textrm{last}_{n}}$ is the price of the last core to be sold in the previous sale,
$L(t)$ is a function yielding the lead-in factor at time $t$ in the current sale,
$A(...)$ is the function which adapts the price between sale $n-1$ and sale $n$,
$c_{\textrm{sold}_n}$, $c_{\textrm{target}_n}$ and $c_{\textrm{limit}_n}$ are the cores sold, the ideal number of cores, and the number of cores offered respectively in sale $n$.
Currently there is an assumption in the `AdaptPrice` interface that the `leadin_factor_at` will be passed a fraction between 0 and 1 representing how far through the lead-in period we are, let's call it $f_{leadin}$.
We can define this in terms of t as:
$$ f_{leadin} = \frac{t - t_{sale\_start} - l_{interlude}}{l_{leadin}} $$
where $t$ is a block number and $l$ is the length of the given period.
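Plugging in the toy sale from section 1 (sale start at block 0, interlude length 1, lead-in length 4): at block 3 we get $f_{leadin} = (3 - 0 - 1)/4 = 0.5$, so the `Linear` lead-in factor is $2 - 0.5 = 1.5$, matching the 150 DOT price in the earlier table.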
I think the baseline is to think about the number of cores sold in terms of three boundary conditions, between which we can define an interpolation function (next discussion):
1. Ideal number sold (`sold == target`)
2. Maximally sold (`sold == limit` and all cores sold immediately at the start of the lead-in period)
3. Maximally undersold (`sold == 0`)
#### Constraint 1
I think that no matter what function we suggest, the price adapter at constraint 1 should give 1, maintaining the base price at the level of the last core sold in the previous sale when the target is hit; anything other than this would make the target less meaningful. The price in sale $n$ at time $t$, $p_n(t)$, thus reduces to:
$$ p_{n}(t) = (p_{\textrm{base}_{n-1}} \cdot L(t_{\textrm{last}})) \cdot L(t) $$
Where $t_{\textrm{last}}$ is the time in sale $n-1$ when the last core was sold.
#### Constraint 2
For constraint 2, the behaviour is defined by the lead-in price as well as the number of cores sold, so it's important to take that into consideration. If the cores all sell in the first block of the lead-in period, then we get the maximum lead-in factor from the previous sale:
$$ p_n(t) = (p_{\textrm{base}_{n-1}} \cdot L_{\textrm{max}}) \cdot A_{\textrm{max}} \cdot L(t) $$
#### Constraint 3
Here the lead-in factor of the previous sale is irrelevant (with nothing sold there is no sellout price to inherit it from) and the price adapter takes its minimum value:
$$ p_n(t) = (p_{\textrm{base}_{n-1}} \cdot L_{\textrm{min}}) \cdot A_{\textrm{min}} \cdot L(t) $$
where $p_{\textrm{base}_n}$ denotes the base price at sale $n$ and $L_{\textrm{min}}$ denotes the minimum value of the lead-in factor, which holds once the lead-in period has ended, i.e. throughout the fixed-price period (1 for `Linear`).
## My suggestion within the current model
[Kusama configuration](https://hackmd.io/LgD1J5OnSnKK5Rv4j2-lTg)
## Maximal depth (WIP)
We can go further and define the base price at sale $n$ after setting the price in `start_sales` with a general `PriceAdapter` implementation (look away now):
$$ p_{\textrm{base}_n} = p_{start} \cdot \prod_{k=2}^{n} L(t_{\textrm{last}_{k-1}}) \cdot A(c_{\textrm{sold}_{k-1}}, c_{\textrm{target}_{k-1}}, c_{\textrm{limit}_{k-1}}) $$