## Discussion notes 2021-07-26
### Transactions, Txids and wrapping
#### Should signatures be native or not?
Prior art: Ethereum has signature checks built-in. Bitcoin does not, but has "standard transactions" that are similar.
Both could conceivably change this; disregarding Bitcoin's very conservative stance, it could probably change more easily than Ethereum.
There are many issues wrt signatures:
- Signatures could be tied to L1 keys
- There are light clients that need to sign, and piggy-back servers that need to sign.
- We have L1 representations and L2 representations of the data we embed, and we need to decide what payload to sign.
Issue: having the signature as part of the transaction could be a problem if you need to sign a transaction that does not exist (yet).
### The case for non-native signatures
This is how it could potentially look:
In this example we have an incoming transaction tx0, likely coming from a piggy-back server.
The piggy-back server will put multiple CW transactions into its own transaction.
- `txN` (N=1..3) are simply DATA actions (no code) that contain the to-be-unwrapped transaction (code).
- `txN_sig` (N=1..3) contain signatures for the `txN` DATA actions. However, the XCO in `tx1` and `tx2` was issued by the "standard" `check_sig` contract, while the XCO spent by `tx3` is owned by a more expensive private ZK contract.
- 3 x send actions sending the pairs (`txN`, `txN_sig`) (N=1..3) to `check_sig_contract`
```mermaid
flowchart TB
subgraph tx0 [ Original Tx received from L1 ]
direction TB
foo[" "]
subgraph tx1 [ tx1 - $0 ]
%% direction LR
out1_1[Send: 2.99 XCO to 'check_sig'<br/>AGG: *]
info1_1[DATA: for Alice ]
utxo1_1[DATA: Spend 1 XCO from Bob]
utxo2_1[DATA: Spend 2 XCO from Anthony]
end
style tx1 fill:#f9f,stroke:#333,stroke-width:4px
subgraph tx1_sig [ tx1_sig - $1 ]
sig1_1[Bob signature for tx1]
sig2_1[Anthony signature for tx1]
end
tx1 --> sig1_1[Bob signature for tx1]
tx1 --> sig2_1[Anthony signature for tx1]
subgraph tx2 [ tx2 - $2 ]
%% direction LR
out1[Send: 2.99 XCO to 'check_sig']
info1_2[for Alice ]
utxo1_2[DATA: Spend 1 XCO, another, from Bob]
utxo2_2[DATA: Spend 2 XCO from Mark]
end
subgraph tx2_sig [ tx2_sig - $3 ]
sig1[Bob signature for tx2]
sig2[Mark signature for tx2]
end
subgraph tx3 [ tx3 - $4 ]
%% direction LR
out1_3[Send: 2.99 XCO to 'check_sig']
info1_3[for Alice ]
utxo1_3[DATA: Spend 1 XCO from Oscar]
utxo2_3[DATA: Spend 2 XCO from Helen]
end
subgraph tx3_sig [ tx3_sig - $5 ]
zero_knowledge_proof
end
tx3 --> tx3_sig
tx2 --> sig1[Bob signature for tx2]
tx2 --> sig2[Mark signature for tx2]
other1[send: to check_sig_contract with some gas<br/>agg: tx_to_unpack: $1<br/>tx_sig: $2<br/>is_lead: $row == 0 ]
other2[send: to check_sig_contract with some gas<br/>agg: ...]
other3[send: to check_private_contract with some gas<br/>agg: ...]
other4[send: to check_sig_contract with some gas<br/>agg: tx_to_unpack: $1<br/>tx_sig: $2<br/>is_lead: $row == 0 ]
xco_utxo[spend: XCO from piggyback server]
end
subgraph tx1_output
out1_out[Send: 2.99 XCO to check_sig]
info2_1[for Alice ]
utxo1_out[DATA: Spend 1 XCO from Bob]
utxo2_out[DATA: Spend 2 XCO from Mark]
end
subgraph tx2_output
out1_out2[Send: 2.99 XCO to check_sig]
info2_2[for Alice ]
utxo1_out2[DATA: Spend 1 XCO from Bob]
utxo2_out2[DATA: Spend 2 XCO from Mark]
end
subgraph tx3_output
out1_out3[Send: 2.99 XCO to check_sig]
info2_3[for Alice ]
utxo1_out3[DATA: Spend 1 XCO from Bob]
utxo2_out3[DATA: Spend 2 XCO from Mark]
end
l1_raw_data --> parsing_contract --> tx0
other1 --> check_sig_contract --> tx1_output --> check_sig_contract3
other2 --> check_sig_contract --> tx2_output --> check_sig_contract3
other3 --> check_private_contract --> tx3_output --> check_sig_contract3
check_sig_contract3[check_sig] --> store[Create claim Alice has 8.7 XCO]
%% three --> two
%% XXY --> xxx2
```
```mermaid
flowchart TB
subgraph tx0 [ Original Tx received from L1 ]
direction TB
tx1[Transfer<br/> -1.00 XCO from Bob<br/>-2.00 XCO from Anthony<br/>+2.99 Alice]
tx1_sig[Signatures for tx1<br/>Bob<br/>Anthony]
tx1 --> tx1_sig
tx2[Transfer<br/>-1.00 XCO from Bob<br/>-2.00 XCO from Mark<br/>+2.99 Alice]
tx2_sig[Signatures for tx2<br/>Bob<br/>Mark]
tx2 --> tx2_sig
subgraph tx3
%% direction LR
out1_3[Send: 2.99 XCO to 'check_sig']
info1_3[for Alice ]
utxo1_3[DATA: Spend 1 XCO from Oscar]
utxo2_3[DATA: Spend 2 XCO from Helen]
end
subgraph tx3_sig
zero_knowledge_proof
end
tx3 --> tx3_sig
tx2 --> sig1[Bob signature for tx2]
tx2 --> sig2[Mark signature for tx2]
other1[send: to check_sig_contract with some gas]
other2[send: to check_sig_contract with some gas]
other3[send: to check_private_contract with some gas]
xco_utxo[spend: XCO from piggyback server]
end
l1_parsed_raw_tree_or_even_raw_bytestring --> parsing_contract
parsing_contract --> tx0
subgraph tx1_output
out1_out[Send: 2.99 XCO to check_sig]
info2_1[for Alice ]
utxo1_out[DATA: Spend 1 XCO from Bob]
utxo2_out[DATA: Spend 2 XCO from Mark]
end
subgraph tx2_output
out1_out2[Send: 2.99 XCO to check_sig]
info2_2[for Alice ]
utxo1_out2[DATA: Spend 1 XCO from Bob]
utxo2_out2[DATA: Spend 2 XCO from Mark]
end
subgraph tx3_output
out1_out3[Send: 2.99 XCO to check_sig]
info2_3[for Alice ]
utxo1_out3[DATA: Spend 1 XCO from Bob]
utxo2_out3[DATA: Spend 2 XCO from Mark]
end
other1 --> check_sig_contract --> tx1_output --> check_sig_contract3
other2 --> check_sig_contract --> tx2_output --> check_sig_contract3
other3 --> check_private_contract --> tx3_output --> check_sig_contract3
check_sig_contract3[check_sig] --> store[Create claim Alice has 8.7 XCO]
%% three --> two
%% XXY --> xxx2
```
Problem: for complex signature checks (ZK and others), or actually for all signature checks, the "unwrapping" needs to happen before any gas payment happens, so we don't know beforehand whether there is enough gas.
We already have the rule that claims are owned by the code that creates them.
This means that the XCO spent by Oscar, Anthony, Bob, and Helen was created by the `check_sig_contract` contract, and thus can only be spent using the same contract.
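A minimal sketch (our own) of "claims are owned by the code that creates them": each claim records its creating contract, and spending is only allowed when invoked through that same contract.

```javascript
// A claim remembers which contract created it.
function createClaim(creatorContract, owner, amount) {
  return { creator: creatorContract, owner, amount };
}

// Spending is rejected unless invoked through the creating contract.
function spendClaim(claim, invokingContract) {
  if (claim.creator !== invokingContract) {
    throw new Error('claim is owned by ' + claim.creator);
  }
  return { spentBy: invokingContract, amount: claim.amount };
}
```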
We introduce a primitive smart contract called `bitcoin_xco`.
So if Oscar has XCO claims created by a ZK contract, those claims can only be spent through that ZK contract.
Quote: "claim owned by ZK-contract foo"
#### Compression
### Actions:
- Read actions will have one or more flags to indicate whether they should block on read. If there is no value to be returned, a blocking read will suspend the whole transaction until there is a value to return, or a maximum number of steps has passed. When there are several blocking reads, it would be possible to specify some sort of and-or-tree behaviour. The maximum number of steps to wait will be part of the action arguments (i.e., being willing to wait longer will cost more).
- Though the cost of each defined action can be determined statically, scan-like queries (like regex) will be allowed by specifying the maximum number of rows to scan and the offset where scanning starts.
- The read actions will be flexible enough to allow regex filters, combined with some sort of sorting (like sorting by the beginning of the key of the key-value pair).
- On the aggregation expression language would be able
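One hypothetical shape for a read action that combines the points above; every field name here is our own assumption, not a decided format.

```javascript
// Sketch of a read action descriptor: bounded regex scan plus blocking behaviour.
const readAction = {
  keyFilter: '^balance/',   // regex filter on keys
  maxRows: 100,             // bounds the scan so its cost is known up front
  offset: 0,                // where scanning starts
  sortBy: 'key_prefix',     // e.g. sort by the beginning of the key
  blocking: {
    mode: 'and',            // and-or-tree combinator over several blocking reads
    maxSteps: 50,           // being willing to wait longer costs more
  },
};
```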
### Governance:
#### Burning rate:
To update a parameter, a smart contract would need to burn:
$$
previous\_winning\_bid \cdot hysteresis \cdot depreciation^{current\_block - winning\_bid\_block}
$$
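A worked example of the formula (all parameter values below are made up for illustration):

```javascript
// burn = previous_winning_bid * hysteresis * depreciation^(current_block - winning_bid_block)
function requiredBurn(prevWinningBid, hysteresis, depreciation, blocksSinceWin) {
  return prevWinningBid * hysteresis * Math.pow(depreciation, blocksSinceWin);
}

// With a previous winning bid of 100 XCO, hysteresis 2 and depreciation
// 0.999 per block, the required burn restarts at 200 XCO and decays by
// ~0.1% per block, halving after roughly 693 blocks.
```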
The idea is that the campaigner for a governance change would create a contract that people send XCO to; after a time period, the contract will either reach the amount required to introduce the change (meaning the XCO will be burnt) or return the accumulated XCO to their owners.
Having relatively big values for the hysteresis parameter and for the inverse of the depreciation parameter (i.e. $1/depreciation$) would make the expected time required to change a value depend less on XCO price fluctuations. This means we can get as close as we want to having a periodic auction on governance parameter changes.
For economists we could explain this as a [Dutch auction](https://en.wikipedia.org/wiki/Dutch_auction) that gets restarted with a premium (i.e. hysteresis).
We can also use "target" and "accuracy (of target time)" as concepts. The whole process is like a Dutch auction, but where the price reductions are exponential. This means that "the real" Dutch auction happens approximately around the target time, but whether it is exactly at this time is governed by the accuracy of the target time.
#### Burning rationale:
(see quotes)
### Quotes
"we're not using staking when nothing is at stake".
"no representation without taxation^Wburning".
"prevents changes from being Twitter popularity contests".
"prevents power centralization".
#### Compilation
Assuming the following
```javascript
async function foo(arg1) {
console.log(1)
await undefined;
console.log(2)
await undefined;
console.log(3)
}
```
is converted into
```javascript
function foo(idx = 0, arg1) {
switch (idx) {
case 0:
console.log(1);
return Promise.resolve(undefined).then(() => foo(1, arg1))
case 1:
console.log(2);
return Promise.resolve(undefined).then(() => foo(2, arg1))
case 2:
console.log(3);
return Promise.resolve(undefined);
}
}
```
```javascript
async function foo(arg1) {
console.log(1)
await undefined;
console.log(2)
await undefined;
console.log(3)
}
```
```javascript
function foo() {
  console.log(1);
  return Promise.resolve().then(() => {
    console.log(2);
    return Promise.resolve().then(() => {
      console.log(3);
    });
  });
}
```
From https://babeljs.io/docs/en/babel-plugin-transform-regenerator
```javascript=
// Make 2 fetch calls from our API to get a user's messages
async function getUserMessages(username) {
// fetch to endpoint that has our users
const userResults = await fetch('/users');
// find the user's id from the username input
const userObject = userResults.find( user => user.username === username);
// Make another fetch call to get the user's messages
const userMessages = await fetch(`/messages/${userObject.id}`);
// Return that user's messages
return userMessages;
}
```
```javascript=
var userResults, userId, userMessages;
return regeneratorRuntime.wrap(
function _callee$(_context) {
while (1) {
switch ((_context.prev = _context.next)) {
case 0:
_context.next = 2;
return fetch("/users");
case 2:
userResults = _context.sent;
userId = userResults.find(function(user) {
return user.username === username;
});
_context.next = 6;
return fetch("/messages/" + userId);
case 6:
userMessages = _context.sent;
return _context.abrupt("return", userMessages);
case 8:
case "end":
return _context.stop();
}
}
},
_callee,
this
);
```
#### The library problem
1. How to deduplicate code used in two contracts
2. How to upgrade a library without paying too much
3. How to organize libraries (tree vs big blob)
4. How does the module system in WASM work (can you have multiple versions of the same symbol included from different modules (javascript-style), or a global symbol table where there are collisions if multiple versions are used)
#### Numbers you should know
100 EVM GAS = 0.01 EUR (40 GWei gas price) - 0.1 EUR (400 GWei gas price)
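A back-of-the-envelope check of the line above. The ETH price used (~2,500 EUR) is our own assumption to make the figures line up; it fluctuates a lot.

```javascript
// cost = gas * gas price (in Gwei) * 1e-9 ETH/Gwei * ETH price in EUR
function gasCostEur(gas, gasPriceGwei, ethPriceEur) {
  return gas * gasPriceGwei * 1e-9 * ethPriceEur;
}
// gasCostEur(100, 40, 2500)  ≈ 0.01 EUR
// gasCostEur(100, 400, 2500) ≈ 0.1 EUR
```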
#### What if we have a driver that gets the list of transactions?
(This flowchart is not finished, just a copy)
```mermaid
flowchart TB
subgraph tx0 [ Original Tx received from L1 ]
direction TB
tx1[Transfer<br/> -1.00 XCO from Bob<br/>-2.00 XCO from Anthony<br/>+2.99 Alice]
tx1_sig[Signatures for tx1<br/>Bob<br/>Anthony]
tx1 --> tx1_sig
tx2[Transfer<br/>-1.00 XCO from Bob<br/>-2.00 XCO from Mark<br/>+2.99 Alice]
tx2_sig[Signatures for tx2<br/>Bob<br/>Mark]
tx2 --> tx2_sig
subgraph tx3
%% direction LR
out1_3[Send: 2.99 XCO to 'check_sig']
info1_3[for Alice ]
utxo1_3[DATA: Spend 1 XCO from Oscar]
utxo2_3[DATA: Spend 2 XCO from Helen]
end
subgraph tx3_sig
zero_knowledge_proof
end
tx3 --> tx3_sig
tx2 --> sig1[Bob signature for tx1]
tx2 --> sig2[Mark signature for tx1]
other1[send: to check_sig_contract with some gas]
other2[send: to check_sig_contract with some gas]
other3[send: to check_private_contract with some gas]
xco_utxo[spend: XCO from piggyback server]
end
l1_parsed_raw_tree_or_even_raw_bytestring --> parsing_contract
parsing_contract --> tx0
subgraph tx1_output
out1_out[Send: 2.99 XCO to check_sig]
info2_1[for Alice ]
utxo1_out[DATA: Spend 1 XCO from Bob]
utxo2_out[DATA: Spend 2 XCO from Mark]
end
subgraph tx2_output
out1_out2[Send: 2.99 XCO to check_sig]
info2_2[for Alice ]
utxo1_out2[DATA: Spend 1 XCO from Bob]
utxo2_out2[DATA: Spend 2 XCO from Mark]
end
subgraph tx3_output
out1_out3[Send: 2.99 XCO to check_sig]
info2_3[for Alice ]
utxo1_out3[DATA: Spend 1 XCO from Bob]
utxo2_out3[DATA: Spend 2 XCO from Mark]
end
other1 --> check_sig_contract --> tx1_output --> check_sig_contract3
other2 --> check_sig_contract --> tx2_output --> check_sig_contract3
other3 --> check_private_contract --> tx3_output --> check_sig_contract3
check_sig_contract3[check_sig] --> store[Create claim Alice has 8.7 XCO]
%% three --> two
%% XXY --> xxx2
```
### Problem: When mapping over data, some stragglers might need many steps
Solution: Add a `wait` aggregator.
`wait[x](y, z)` will wait a maximum of x rounds while y is false, and then return z.
Imagine a map over 200 elements, where some take more time than others.
Some options:
```yaml
result: {{ array_agg(x) }}
thread_completed: {{ sum(1) }}
Iam_driver: min(row()) = row()
_: wait[12](count() = 33)
```
```python
identity[l]
check( True # False when row() != 0
, wait[l]( count() == 200
, array_agg(x)
)
)
```
### Problem: Aggregators that aggregate over all step results
When aggregations happen over all results from all steps, we cannot schedule a subset of the transactions; all of them have to be scheduled at the same time.
A GPU, for example, will schedule a pack of pixels at the same time, as wide as the hardware allows. However, if it had to process the first step of all pixels before moving to the next pipeline step, it would be slow. Our aggregators are like something that can aggregate over all pixels on the screen after the first graphics pipeline stage.
We need a way to limit the aggregation, and make the limit statically known. This way we can process a subset of the "pixels" and keep the working memory usage reasonably low.
#### Idea 1
Let steps be represented by symbols. A step is associated with a
smart contract invocation; it sticks to that invocation and is
inherited by any send_message until all "leaf invocations" emit that
symbol as a "Step symbol action".
A set of step symbols are initially used when a block starts, and
then processing proceeds until all active smart contract invocations
have emitted their "Step symbol action".
A step is "open" until all active smart contracts have emitted their "step symbol action". When a step is open, no aggregation can happen as it would be scheduler-dependent.
An aggregation expression specifies which steps it will aggregate
over.
This should remove the need for `wait` in the aggregation expression, but in principle requires the programmer to specify when an aggregation can begin.
The idea is to be able to schedule only code related to a
"step symbol", and then to proceed with aggregations
that only need those step symbols.
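An illustration (our own sketch, names invented) of Idea 1: an aggregation over a step symbol may only run once every active invocation has emitted that step's "Step symbol action".

```javascript
// A barrier per step symbol: open until every active invocation emits it.
class StepBarrier {
  constructor(symbol, activeInvocations) {
    this.symbol = symbol;
    this.pending = new Set(activeInvocations);
  }
  emit(invocationId) {
    // An invocation (or its send_message descendants) emits the step symbol.
    this.pending.delete(invocationId);
  }
  isOpen() {
    // While open, aggregating would be scheduler-dependent.
    return this.pending.size > 0;
  }
  aggregate(values, fn) {
    if (this.isOpen()) throw new Error('step ' + this.symbol + ' is still open');
    return fn(values);
  }
}
```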
Differences and similarities:
- This is quite similar to just putting all the code in one big smart contract
- However,