# Introduction
Bitcoin blockchain analysis is ubiquitous today because the data is easily
accessible to anyone. Blockchain data underpins the many websites that
provide statistics about the Bitcoin network, such as the network's hashrate,
transaction throughput, and fees.
For some investigations, however, the blockchain-based view can be too narrow.
For one thing, blockchain data lacks precise transaction and block timestamps,
rendering it useless for fine-grained analyses of transaction and block
propagation in the network. For another, it misses transactions that did not
make it into the blockchain, which provide crucial clues about the demand for
block space, fee dynamics, replace-by-fee usage, and more. Also missing are
invalid blocks and transactions sent in the network, which can be a basis for
anomaly detection.
Fortunately, comprehensive off-chain data sets can be extracted from Bitcoin
Core nodes. Unfortunately, there is currently no standardized way to collect
this data in a robust and automated fashion.
Multiple projects stand to benefit from the collection of off-chain data, and
the overlap between the data the individual projects are seeking is rather
high. Consequently, it makes sense to pool efforts and create a generic way to
collect off-chain data that allows users to easily customize what data to
collect. So far, the projects are:
- Monitoring the operational health of the Bitcoin network
- Creating a public mempool data set
Before starting with an actual implementation, this document attempts to reach
consensus on **what** data should be collectable and **how** best to collect it.
# Relevant data
In most cases the reconstruction of historical off-chain data is not possible,
so a philosophy of overcollection should be followed (i.e., erring on the side
of collecting too much rather than too little). The following tables record
what data people think should be collectable: each entry comprises a name, a
short description, and the project(s) that require the data; in line with the
overcollection philosophy, an explicit reason to collect a specific piece of
data is not required.
Note that the first table focuses on transactions, while the second covers
non-transaction data such as blocks. Feel free to extend the lists.
## Transaction data
| Name | Description | Required by |
|--------------|---------------------------------------|------------------------|
| timestamp | The timestamp at which the transaction event occurred. | OpHealth, PublicData |
| txid | The id of the transaction | OpHealth, PublicData |
| tx event | Added to mempool, removed from mempool (including reason for removal) | OpHealth, PublicData |
| tx event | Invalid transaction (including reason for invalidity) | OpHealth, P2P monitoring |
| raw data | The raw transaction data. Compared to storing deserialized transaction data, recording raw data has the advantage of allowing one to go back and fix errors caused by (infrequent!) failures of the witness detection heuristic. | OpHealth, PublicData |
| tx metadata | Ancestor, descendant, and RBF data provided by the `getrawmempool` API call (see the sketch after this table). In theory, this data can be reconstructed from a full historical data set; in practice, however, this is expensive and error-prone, so the pragmatic solution is to spend some extra storage and record the state data provided by Bitcoin Core. | OpHealth?, PublicData |
| tx fees | While transaction fees can be computed once all parent transactions make it into the blockchain, querying all previous outputs can be a time-consuming task. Directly storing the fee only takes up marginally more space. | OpHealth?, PublicData |
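As an illustration of the metadata and fee fields in question, here is a minimal sketch (untested) that reads them from the verbose form of `getrawmempool` via Bitcoin Core's JSON-RPC interface. The RPC endpoint, credentials, and exact field names are assumptions based on recent Bitcoin Core versions.

```python
#!/usr/bin/env python3
# Minimal sketch (untested): read ancestor/descendant/RBF metadata and fees from
# the verbose form of getrawmempool. Endpoint, credentials, and field names are
# assumptions based on recent Bitcoin Core versions.
import requests

RPC_URL = "http://127.0.0.1:8332"        # assumption: default mainnet RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")    # assumption: credentials from bitcoin.conf

payload = {"jsonrpc": "1.0", "id": "metadata", "method": "getrawmempool", "params": [True]}
mempool = requests.post(RPC_URL, json=payload, auth=RPC_AUTH).json()["result"]

for txid, entry in mempool.items():
    print(txid,
          "ancestors:", entry["ancestorcount"],
          "descendants:", entry["descendantcount"],
          "base fee (BTC):", entry["fees"]["base"],
          "replaceable:", entry["bip125-replaceable"])
```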
## Block Data
| Name | Description | Required by |
|--------------|---------------------------------------|------------------------|
| timestamp | The timestamp at which the block event occurred. | OpHealth |
| block hash | The hash of the block | OpHealth |
| block height | The height of the block. (Could also be extracted from the raw block) | OpHealth |
| block event | Valid block added to chain, chain reorg, invalid block (including reason for invalidity) | OpHealth |
| raw data | The raw block data of blocks not recorded on-chain. | OpHealth |
# Data collection approaches
So far, the following approaches to collect some or all of the [relevant
data](#relevant-data) have been identified.
## API-based data collection
This approach is based on taking snapshots of a node's mempool state at regular
intervals using the API call `getrawmempool`. Comparing successive snapshots
yields the lists of transactions added to and removed from the mempool in the
interval. Detailed information about newly added transactions can be obtained
using the `getrawtransaction` API call.
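The following is a minimal sketch (untested) of this snapshot-diff loop, assuming a local Bitcoin Core node with JSON-RPC enabled; the RPC URL, credentials, and one-second snapshot interval are placeholders.

```python
#!/usr/bin/env python3
# Minimal sketch (untested) of API-based collection: diff successive
# getrawmempool snapshots and fetch raw data for newly added transactions.
import time
import requests

RPC_URL = "http://127.0.0.1:8332"          # assumption: default mainnet RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")      # assumption: credentials from bitcoin.conf

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "collector", "method": method, "params": list(params)}
    response = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    response.raise_for_status()
    return response.json()["result"]

previous = set(rpc("getrawmempool"))
while True:
    time.sleep(1)                          # snapshot interval; see the timestamp caveat below
    timestamp = time.time()
    current = set(rpc("getrawmempool"))
    for txid in current - previous:
        # getrawtransaction works for mempool transactions even without -txindex
        raw = rpc("getrawtransaction", txid)
        print(timestamp, "added", txid, len(raw) // 2, "bytes")
    for txid in previous - current:
        print(timestamp, "removed", txid)  # the reason for removal is not available here
    previous = current
```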
Pros:
- The resulting data set will be self-healing in case of downtime
  - Information might be missed during downtime, but even if the time between
    two snapshots increases significantly, both snapshots still represent
    valid mempool states, and data derived from them will be consistent
Cons:
- The following relevant data cannot be collected with this approach:
  - Exact timestamps: timestamp accuracy is limited by the snapshotting
    frequency. Note: due to the limited performance of API calls and database
    operations (a database is recommended over, e.g., a plain file to
    guarantee consistency of snapshots), the shortest realistically achievable
    snapshot interval is in the range of hundreds of milliseconds or even
    seconds.
  - Invalid transactions and blocks are missing
  - The reasons for the removal of transactions from the mempool are missing
- All transactions that were added to and removed from the mempool in the time
between two successive snapshots will be missing. This includes some RBF
transactions; data on transactions that were removed because they were
included in a block can be reconstructed using blockchain data.
- Querying large mempools (several hundred MB) via `getrawmempool` can take a
  significant amount of time. Guaranteeing a minimum sampling rate might lead
  to undesirable hardware requirements
## ZMQ-based data collection
This approach collects data from the various ZeroMQ (ZMQ) notifications provided by Bitcoin Core.
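Below is a minimal sketch (untested) of such a subscriber; it assumes bitcoind was started with `-zmqpubrawtx` and `-zmqpubhashblock` pointing at the endpoint used here and simply records an arrival timestamp for each notification.

```python
#!/usr/bin/env python3
# Minimal sketch (untested): subscribe to Bitcoin Core's rawtx and hashblock
# ZMQ notifications. Assumes bitcoind was started with, e.g.,
#   -zmqpubrawtx=tcp://127.0.0.1:28332 -zmqpubhashblock=tcp://127.0.0.1:28332
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:28332")     # assumption: ZMQ endpoint from bitcoin.conf
socket.setsockopt(zmq.SUBSCRIBE, b"rawtx")
socket.setsockopt(zmq.SUBSCRIBE, b"hashblock")

while True:
    # Bitcoin Core sends three frames: topic, payload, and a sequence number.
    topic, body, seq = socket.recv_multipart()
    timestamp = time.time()                 # exact arrival time, unlike the polling approach
    if topic == b"rawtx":
        print(timestamp, "tx", len(body), "bytes")
    elif topic == b"hashblock":
        print(timestamp, "block", body.hex())
```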
Pros:
- Event notification is immediate
- Exact timestamps
- No missing data due to inadequate sampling as in the API-based approach
Cons:
- Data set can become inconsistent in case of downtime
  - Example: a transaction removed from the mempool during downtime will
    remain stuck in any reconstruction of the mempool. Removal times can be
    approximated for mined transactions using block timestamps, but not for
    transactions removed for other reasons.
  - For monitoring the operational health of the network, this should not be a
    problem, since a node that is down does not contribute any health
    information. For the potential creation of a public mempool data set, it
    should not be a problem either: data is collected in a decentralized way
    by multiple nodes, so as long as at least one node is running, the
    resulting data set should be complete.
- The following relevant data cannot be collected out of the box:
  - Invalid transactions and blocks, including the reason for invalidity
  - Reasons for the removal of transactions from the mempool
  - Information provided by `getrawmempool`
- In theory, it should be possible to make all data available via ZMQ by
adding the necessary functionality to Bitcoin Core. An existing [patch](https://github.com/0xB10C/bitcoin-zmq-mempool-chain-events/blob/v23.0-zmce/PATCH.md)
by 0xB10C already makes some of the relevant data available.
## USDT-and-eBPF-based data collection
This approach does not rely on an external tool collecting data from Bitcoin
Core via one of its interfaces. Instead, it is based on adding tracepoints
(USDT probes) to the mempool and other subsystems of Bitcoin Core, so that all
relevant data can be logged directly by hooking these tracepoints with eBPF.
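For illustration, here is a minimal bcc-based sketch (untested) that hooks the existing `validation:block_connected` tracepoint documented in doc/tracing.md; the bitcoind binary path and the argument positions used (2 = block height, 6 = connection time in microseconds) are assumptions based on that document, and mempool tracepoints would first have to be added as described above.

```python
#!/usr/bin/env python3
# Minimal sketch (untested): hook Bitcoin Core's validation:block_connected
# USDT tracepoint with bcc/eBPF. Requires a bitcoind built with tracepoint
# support and (typically) root privileges; path and argument positions are
# assumptions based on doc/tracing.md.
from bcc import BPF, USDT

BPF_PROGRAM = """
#include <uapi/linux/ptrace.h>

int trace_block_connected(struct pt_regs *ctx) {
    u64 height = 0;
    u64 duration = 0;
    // Assumed argument layout (see doc/tracing.md): 2 = block height,
    // 6 = time needed to connect the block in microseconds.
    bpf_usdt_readarg(2, ctx, &height);
    bpf_usdt_readarg(6, ctx, &duration);
    bpf_trace_printk("block %llu connected in %llu us\\n", height, duration);
    return 0;
}
"""

usdt = USDT(path="/usr/local/bin/bitcoind")  # assumption: path to the traced binary
usdt.enable_probe(probe="block_connected", fn_name="trace_block_connected")

bpf = BPF(text=BPF_PROGRAM, usdt_contexts=[usdt])
print("Hooked validation:block_connected; press Ctrl-C to stop.")
bpf.trace_print()
```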
Pros:
- Can collect all relevant data
- If the tracepoints get merged, data collection can occur using vanilla Bitcoin Core
Cons:
- Probably the most work
- Requires Linux hosts and does not play well with Docker and similar container setups
- Might require elevated privileges on the host to hook into the tracepoints via the kernel
- Expensive computations (e.g., serialization of transactions) should be avoided, as they are executed even when nothing hooks into the tracepoints
  - See https://github.com/bitcoin/bitcoin/blob/master/doc/tracing.md#no-expensive-computations-for-tracepoints
  - Could be guarded with a hidden `-expensivetracepoints` runtime argument
- Data set can become inconsistent in case of downtime
  - The same comments as for the ZMQ-based approach apply