# Issues with pagination on address page's items list

## Current pagination approach

Pagination of the address page's items lists (transactions, token transfers, logs, internal transactions) is slow in Blockscout because the underlying Postgres query is slow. Currently, keyset pagination is used for the address page.

*Note:* limit/offset pagination is not considered for the address page because of its main downside: the query becomes slower in proportion to the page number.

## Current pagination problems

The current page loading speed is ~900-1000 ms. An example: https://blockscout.com/xdai/mainnet/address/0x17574e5d87932224b665f7d03b86847fd27345bc/transactions?block_number=22108745&index=8&items_count=1200

There is also no way to access a random page. In the current implementation, pagination is based on the `block_number` and `index` params. Ideally, we should be able to load data by page number.

## Pagination performance requirements

- A random page of the transactions list of any address should be reachable.
- Each page of each address should open in < 500 ms, with no loading speed degradation on far pages.

## Related DB table structure

The pagination query involves a single table, `transactions`, which may contain hundreds of millions of rows. Key parameters of the `transactions` table:

```
\d transactions
                            Table "public.transactions"
              Column              |            Type             | Collation | Nullable | Default
----------------------------------+-----------------------------+-----------+----------+---------
 hash                             | bytea                       |           | not null |
 index                            | integer                     |           |          |
 block_number                     | integer                     |           |          |
 from_address_hash                | bytea                       |           | not null |
 to_address_hash                  | bytea                       |           |          |
 created_contract_address_hash    | bytea                       |           |          |
 ...
Indexes:
    ...
    "transactions_from_address_hash_recent_collated_index" btree (from_address_hash, block_number DESC, index DESC, hash)
    "transactions_to_address_hash_recent_collated_index" btree (to_address_hash, block_number DESC, index DESC, hash)
    "transactions_created_contract_address_hash_recent_collated_inde" btree (created_contract_address_hash, block_number DESC, index DESC, hash)
    ...
```

## Current pagination query

Three queries run in parallel to fetch all transactions of an address for the current page. After the slowest query finishes, the results are combined. The queries are split into three for performance reasons; otherwise a single query would need three `OR`ed conditions to combine incoming, outgoing, and contract-creation transactions for the given address.

Omitting the selected columns, the current queries with pagination (except for the 1st page, whose query is a bit simpler) look as follows:

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.block_number < $1) OR ((t0.block_number = $2) AND (t0.index < $3)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = $4)
ORDER BY t0.block_number DESC, t0.index DESC
LIMIT $5;
```

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.block_number < $1) OR ((t0.block_number = $2) AND (t0.index < $3)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.to_address_hash = $4)
ORDER BY t0.block_number DESC, t0.index DESC
LIMIT $5;
```

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.block_number < $1) OR ((t0.block_number = $2) AND (t0.index < $3)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.created_contract_address_hash = $4)
ORDER BY t0.block_number DESC, t0.index DESC
LIMIT $5;
```

## Research TimescaleDB

### Migration to time-series DB TimescaleDB

After the denormalization in https://github.com/blockscout/blockscout/pull/5322, the `transactions` table contains a `block_timestamp` column, which can be used as a key for time-series data.
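The conversion described next ends with a call to TimescaleDB's hypertable-creation function. A rough sketch, assuming TimescaleDB 2.x multi-node (the call below is illustrative, not the exact command we ran):

```
-- Illustrative sketch only (TimescaleDB 2.x multi-node assumed).
-- Requires block_timestamp to be NOT NULL and part of the primary key
-- and of every unique index, as described below.
SELECT create_distributed_hypertable('transactions', 'block_timestamp');
```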
We tried to migrate the `transactions` table to a TimescaleDB instance with a distributed hypertable, to check whether the pagination query is faster on TimescaleDB. Spoiler: it is not.

#### Minimal changes to convert the `transactions` table into a distributed hypertable

To create a distributed hypertable partitioned by the `block_timestamp` column, we have to change the table schema:

- The `block_timestamp` column of the `transactions` table must be set to NOT NULL.
- The `block_timestamp` column must be added to the table's primary key. Thus, the primary key becomes a composite of 2 columns: (hash, block_timestamp).
- The `block_timestamp` column must be added to all of the table's unique indices. Luckily, the `transactions` table has only one unique index. After the change it looks like this:

```
transactions_block_hash_index_index UNIQUE, btree (block_hash, block_timestamp, index)
```

- Foreign keys to the `addresses` and `blocks` tables have been removed.
- Additionally, a B-tree index was created to facilitate WHERE clauses on the `block_timestamp` and `index` columns:

```
CREATE INDEX transactions_recent_collated_index_new ON public.transactions(block_timestamp DESC, index DESC);
```

#### Changes to the pagination query

Instead of paginating by `block_number`, we can paginate by `block_timestamp`.
The base pagination query then looks like (note that all keyset conditions now use `block_timestamp`):

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.block_timestamp < $1) OR ((t0.block_timestamp = $2) AND (t0.index < $3)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = $4)
ORDER BY t0.block_timestamp DESC, t0.index DESC
LIMIT $5;
```

#### Results and comparison: Postgres vs TimescaleDB

A typical execution of the changed query on TimescaleDB:

```
EXPLAIN ANALYZE
SELECT t0.hash, t0.block_number, t0.block_consensus, t0.block_timestamp, t0.cumulative_gas_used,
       t0.earliest_processing_start, t0.error, t0.gas, t0.gas_price, t0.gas_used, t0.index,
       t0.created_contract_code_indexed_at, t0.input, t0.nonce, t0.r, t0.s, t0.status, t0.v,
       t0.value, t0.revert_reason, t0.max_priority_fee_per_gas, t0.max_fee_per_gas, t0.type,
       t0.has_error_in_internal_txs, t0.old_block_hash, t0.inserted_at, t0.updated_at,
       t0.block_hash, t0.from_address_hash, t0.to_address_hash, t0.created_contract_address_hash
FROM transactions AS t0
WHERE ((t0.block_timestamp < '2022-05-23T09:26:21.000000Z')
       OR ((t0.block_timestamp = '2022-05-23T09:26:21.000000Z') AND (t0.index < 116)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = '\xba4cd55471ff7254042d88bc2a3ee740def0a5cd')
ORDER BY t0.block_timestamp DESC, t0.index DESC
LIMIT 50;

                                  QUERY PLAN
-------------------------------------------------------------------------------------------------
 Limit  (cost=300.02..6091.44 rows=50 width=1294) (actual time=726.884..726.914 rows=50 loops=1)
   ->  Custom Scan (AsyncAppend)  (cost=300.02..393408962.98 rows=3396480 width=1294) (actual time=726.883..726.908 rows=50 loops=1)
         ->  Merge Append  (cost=300.02..393408962.98 rows=3396480 width=1294) (actual time=0.012..0.034 rows=50 loops=1)
               Sort Key: t0_1.block_timestamp DESC, t0_1.index DESC
               ->  Custom Scan (DataNodeScan) on transactions t0_1  (cost=100.00..124339536.93 rows=1073600 width=1294) (actual time=0.006..0.023 rows=50 loops=1)
               ->  Custom Scan (DataNodeScan) on transactions t0_2  (cost=100.00..141294914.69 rows=1220000 width=1294) (actual time=0.002..0.003 rows=1 loops=1)
               ->  Custom Scan (DataNodeScan) on transactions t0_3  (cost=100.00..127730612.48 rows=1102880 width=1294) (actual time=0.002..0.002 rows=1 loops=1)
 Planning Time: 32.401 ms
 Execution Time: 729.082 ms
```

The average execution time is **~700 ms on TimescaleDB**.

Whereas the "old" pagination query on the Postgres DB:

```
EXPLAIN ANALYZE
SELECT t0.hash, t0.block_number, t0.block_consensus, t0.block_timestamp, t0.cumulative_gas_used,
       t0.earliest_processing_start, t0.error, t0.gas, t0.gas_price, t0.gas_used, t0.index,
       t0.created_contract_code_indexed_at, t0.input, t0.nonce, t0.r, t0.s, t0.status, t0.v,
       t0.value, t0.revert_reason, t0.max_priority_fee_per_gas, t0.max_fee_per_gas, t0.type,
       t0.has_error_in_internal_txs, t0.old_block_hash, t0.inserted_at, t0.updated_at,
       t0.block_hash, t0.from_address_hash, t0.to_address_hash, t0.created_contract_address_hash
FROM transactions AS t0
WHERE ((t0.block_timestamp < '2022-05-23T09:26:21.000000Z')
       OR ((t0.block_timestamp = '2022-05-23T09:26:21.000000Z') AND (t0.index < 116)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = '\xba4cd55471ff7254042d88bc2a3ee740def0a5cd')
ORDER BY t0.block_number DESC, t0.index DESC
LIMIT 50;

                                  QUERY PLAN
-------------------------------------------------------------------------------------------------
 Limit  (cost=0.83..222.79 rows=50 width=485) (actual time=31.791..31.867 rows=50 loops=1)
   ->  Index Scan using transactions_from_address_hash_recent_collated_index on transactions t0  (cost=0.83..85464.40 rows=19252 width=485) (actual time=31.790..31.859 rows=50 loops=1)
         Index Cond: (from_address_hash = '\xba4cd55471ff7254042d88bc2a3ee740def0a5cd'::bytea)
         Filter: (((error IS NULL) OR ((error)::text <> 'dropped/replaced'::text)) AND ((block_timestamp < '2022-05-23 09:26:21'::timestamp without time zone) OR ((block_timestamp = '2022-05-23 09:26:21'::timestamp without time zone) AND (index < 116))))
         Rows Removed by Filter: 37841
 Planning time: 0.219 ms
 Execution time: 31.896 ms
```

The average execution time of the "old" query is **~30-50 ms on the Postgres DB**.

On the Postgres DB we use a dedicated index to speed up each pagination query. For instance, to speed up the query by `from_address_hash`:

```
CREATE INDEX transactions_from_address_hash_recent_collated_index_new ON public.transactions(from_address_hash, block_number DESC, index DESC, hash);
```

I created a similar index on TimescaleDB:

```
CREATE INDEX transactions_from_address_hash_recent_collated_index_new ON public.transactions(from_address_hash, block_timestamp DESC, index DESC, hash);
```

After its creation, **pagination query execution time on TimescaleDB dropped to the same level as on Postgres**:

```
EXPLAIN ANALYZE
SELECT t0.hash, t0.block_number, t0.block_consensus, t0.block_timestamp, t0.cumulative_gas_used,
       t0.earliest_processing_start, t0.error, t0.gas, t0.gas_price, t0.gas_used, t0.index,
       t0.created_contract_code_indexed_at, t0.input, t0.nonce, t0.r, t0.s, t0.status, t0.v,
       t0.value, t0.revert_reason, t0.max_priority_fee_per_gas, t0.max_fee_per_gas, t0.type,
       t0.has_error_in_internal_txs, t0.old_block_hash, t0.inserted_at, t0.updated_at,
       t0.block_hash, t0.from_address_hash, t0.to_address_hash, t0.created_contract_address_hash
FROM transactions AS t0
WHERE ((t0.block_timestamp < '2022-05-23T09:26:21.000000Z')
       OR ((t0.block_timestamp = '2022-05-23T09:26:21.000000Z') AND (t0.index < 116)))
  AND ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = '\xba4cd55471ff7254042d88bc2a3ee740def0a5cd')
ORDER BY t0.block_timestamp DESC, t0.index DESC
LIMIT 50;

                                  QUERY PLAN
-------------------------------------------------------------------------------------------------
 Limit  (cost=300.02..6091.44 rows=50 width=1294) (actual time=54.254..54.282 rows=50 loops=1)
   ->  Custom Scan (AsyncAppend)  (cost=300.02..393408962.98 rows=3396480 width=1294) (actual time=54.252..54.276 rows=50 loops=1)
         ->  Merge Append  (cost=300.02..393408962.98 rows=3396480 width=1294) (actual time=0.011..0.031 rows=50 loops=1)
               Sort Key: t0_1.block_timestamp DESC, t0_1.index DESC
               ->  Custom Scan (DataNodeScan) on transactions t0_1  (cost=100.00..124339536.93 rows=1073600 width=1294) (actual time=0.005..0.021 rows=50 loops=1)
               ->  Custom Scan (DataNodeScan) on transactions t0_2  (cost=100.00..141294914.69 rows=1220000 width=1294) (actual time=0.002..0.002 rows=1 loops=1)
               ->  Custom Scan (DataNodeScan) on transactions t0_3  (cost=100.00..127730612.48 rows=1102880 width=1294) (actual time=0.002..0.002 rows=1 loops=1)
 Planning Time: 32.004 ms
 Execution Time: 55.696 ms
```

### Challenges of migration to TimescaleDB

- The Blockscout DB schema has to be changed significantly to support time-series data: the time-series column must be added to the primary key and to all unique indexes in order to convert a regular table into a hypertable.
- The `timescaledb-parallel-copy` utility for parallelized data import has a bug which prevents importing data of the `bytea` type. We reported the issue: https://github.com/timescale/timescaledb-parallel-copy/issues/61.
- We cannot use distributed hypertables because of an unsupported feature: `** (Postgrex.Error) ERROR 0A000 (feature_not_supported) ON CONFLICT DO UPDATE not supported on distributed hypertables`. Blockscout heavily depends on the `ON CONFLICT DO UPDATE` upsert operation, so we cannot profit from parallelized requests in a multi-node TimescaleDB infrastructure.
  The related issue is https://github.com/timescale/timescaledb/issues/2963.
- After running the Blockscout Elixir application on TimescaleDB, it is impossible to query anything from the DB, even after the application has been switched off: `connection to server at "..." (...), port 5432 failed: FATAL: remaining connection slots are reserved for non-replication superuser connections`. This happens even with the maximum pool size of 500 on the cluster (the maximum TimescaleDB allows) and with the application-side pool size reduced from the initial 300 to 100. A restart of the cluster doesn't resolve the issue. Per the support response, this is an issue of `ecto` (the native Elixir DB wrapper), which does not close connections to the database. However, we don't observe such an issue on Postgres.

## Research AuroraDB

We made a Postgres 10 RDS snapshot of the ETH Mainnet instance, upgraded it to Postgres 13.7, and restored it into AuroraDB Serverless using the AWS API. The cluster specs are: a Serverless v2 cluster (32 - 74 ACUs), which is equivalent to 64 GB - 148 GB of memory, with a single instance inside.

The advantages:

- We don't need to change the application DB schema in order to run the explorer on AuroraDB.
- Automatic scaling of resources: storage, CPUs, memory.

### Challenges of migration to AuroraDB

- Migration from Postgres to Aurora doesn't work from the Console UI (the corresponding button is disabled). This was checked both without a snapshot upgrade and after upgrading to 11, 12, and 13. It is viable from the CLI, but not as a single-step command.

## Research YugabyteDB

We tried 2 clusters of YugabyteDB:

- The 1st one was for migrating one of the huge tables, `transactions` (plus `blocks` and `addresses` in order to satisfy FK constraints), and trying the problematic queries on it.
  Parameters of the cluster:

  | Parameter       | Value  |
  |-----------------|--------|
  | TOTAL NODES     | 4      |
  | TOTAL vCPU      | 32     |
  | TOTAL MEMORY    | 128 GB |
  | TOTAL DISK SIZE | 800 GB |

  We got stuck at some point because migration of the `transactions` table from the ETH Mainnet instance took more than 3 weeks and had not finished.

- The 2nd cluster was for trying to run the application on an empty DB. Parameters of the cluster:

  | Parameter       | Value   |
  |-----------------|---------|
  | TOTAL NODES     | 4       |
  | TOTAL vCPU      | 16      |
  | TOTAL MEMORY    | 32 GB   |
  | TOTAL DISK SIZE | 5600 GB |

  The application started successfully, but because of the instability of the cluster (reaching OOM) the application worked with huge downtimes. Multiple times it caused the cluster to fail, and the cluster stayed unresponsive until a YugabyteDB engineer manually restarted it. The cluster wasn't responding to restarts through the management dashboard at that time.

### Challenges of migration to YugabyteDB

- Because of the specific way YugabyteDB stores data on disk, the same data occupies much more space on YugabyteDB (up to ~2x) than on Postgres.
- Ecto DB migrations do not work: https://github.com/yugabyte/yugabyte-db/issues/8012.
- Pganalyze is not supported: https://github.com/yugabyte/yugabyte-db/issues/12715.
- Instability of the DB cluster when running the explorer on it.
- Very long data restores (longer than on other DBs using the Postgres-native pg_restore).

## Research CitusDB (Microsoft)

We tried to migrate Gnosis Chain Mainnet there.

### Challenges of migration to CitusDB

- In order to distribute a table between the nodes of the cluster, it must be converted to a distributed table; as a side effect, all tables it references (through foreign keys) must be converted to distributed tables or reference tables. A reference table is replicated to each node in the cluster. The side effect of this is an increase in DB size.
- Also, joins between distributed tables and regular tables (local tables in terms of the CitusDB documentation) are prohibited:

  ```
  ** (Postgrex.Error) ERROR 0A000 (feature_not_supported) direct joins between distributed and local tables are not supported
  ```

  Thus, local tables that are used in joins with distributed tables must be converted to reference tables.
- We ran into OOM when migrating a huge table like `internal_transactions`:

  ```
  > \copy internal_transactions FROM '/data/internal_transactions.csv' WITH (FORMAT CSV);
  ERROR:  out of memory
  DETAIL:  Failed on request of size 1048576 in memory context "AfterTriggerEvents".
  ```

  The issue persisted even after doubling the resources on each cluster node.

## Research AlloyDB (GCP)

(tbd)

### Challenges of migration to AlloyDB

(tbd)

### Queries execution time comparison AlloyDB vs RDS for ETH Mainnet

(tbd)

## Comparison of different queries between Postgres, Aurora, TimescaleDB, and Citus

*Note: Gnosis Chain Mainnet, not Ethereum Mainnet, was migrated to Citus, so it had ~16x fewer transactions in total. Please take this into account and treat the comparison numbers against the other DBs with caution. A more accurate comparison between Postgres AWS RDS and Citus on the same chain (Gnosis Chain Mainnet) is in the next chapter.*

*Note 2: the ETH Postgres instance on AWS RDS has the db.m5.4xlarge instance class. Specs for the other DBs are given in the related chapters above.*

1. Last 100,000 transactions sent from an address

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = $address_hash)
ORDER BY block_timestamp DESC -- in case of TimescaleDB
ORDER BY block_number DESC    -- in case of Postgres
LIMIT 100000;
```

where `$address_hash` is:

- `'\xba4cd55471ff7254042d88bc2a3ee740def0a5cd'` - Ethereum Mainnet address, in case of Postgres, TimescaleDB, Aurora
- `'\x10AaE121b3c62F3DAfec9cC46C27b4c1dfe4A835'` - Gnosis Chain Mainnet address, in case of Citus

| | Postgres* | TimescaleDB | Aurora** | Citus |
|----------------|------------|----------|-----------|-----------|
| Execution time | 116,072 ms | 1,116 ms | 15,275 ms | 26,496 ms |

*Postgres on Amazon RDS
**Amazon Aurora

2. Limit/offset pagination: the 38th page of transactions sent from an address

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = $address_hash)
ORDER BY block_timestamp DESC -- in case of TimescaleDB
ORDER BY block_number DESC    -- in case of Postgres
LIMIT 50 OFFSET 1900;
```

where `$address_hash` is:

- `'\xba4cd55471ff7254042d88bc2a3ee740def0a5cd'` - Ethereum Mainnet address, in case of Postgres, TimescaleDB, Aurora
- `'\x10AaE121b3c62F3DAfec9cC46C27b4c1dfe4A835'` - Gnosis Chain Mainnet address, in case of Citus

| | Postgres* | TimescaleDB | Aurora** | Citus |
|----------------|----------|--------|--------|---------|
| Execution time | 2,500 ms | 300 ms | 1.8 ms | 17.5 ms |

*Postgres on Amazon RDS
**Amazon Aurora

3. Count of transactions of an address with ~4M transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = '\x00192Fb10dF37c9FB26829eb2CC623cd1BF599E8';
```

| | Postgres* | TimescaleDB | Aurora** |
|----------------|---------|---------|---------|
| Execution time | > 3 min | > 3 min | > 3 min |

*Postgres on Amazon RDS
**Amazon Aurora

4. Count of transactions of an address with ~650K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = '\xA7EFAe728D2936e78BDA97dc267687568dD593f3';
```

| | Postgres* | TimescaleDB | Aurora** |
|----------------|---------|---------|--------|
| Execution time | > 3 min | 228 sec | 71 sec |

*Postgres on Amazon RDS
**Amazon Aurora

5. Count of transactions of an address with ~300K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = $address_hash;
```

where `$address_hash` is:

- `'\x25eAff5B179f209Cf186B1cdCbFa463A69Df4C45'` - Ethereum Mainnet address, in case of Postgres
- `'\xd7de65b79d01a94f8e0acc2c82c5520cc31bc927'` - Gnosis Chain Mainnet address, in case of Citus

| | Postgres* | Citus |
|----------------|---------|---------------|
| Execution time | > 3 min | 14,492.397 ms |

*Postgres on Amazon RDS

6. Count of transactions of an address with ~100K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = $address_hash;
```

where `$address_hash` is:

- `'\x25eAff5B179f209Cf186B1cdCbFa463A69Df4C45'` - Ethereum Mainnet address, in case of Postgres, TimescaleDB, Aurora
- `'\x1f4fe9D5E315c12004dFd0e5D628Abb270D84032'` - Gnosis Chain Mainnet address, in case of Citus

| | Postgres* | TimescaleDB | Aurora** | Citus |
|----------------|---------|--------------|------------|--------------|
| Execution time | > 3 min | 1,253.016 ms | 256.611 ms | 5,537.169 ms |

*Postgres on Amazon RDS
**Amazon Aurora

7. Count of transactions of an address with ~6K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = $address_hash;
```

where `$address_hash` is:

- `'\x4862733B5FdDFd35f35ea8CCf08F5045e57388B3'` - Ethereum Mainnet address, in case of Postgres, TimescaleDB, Aurora
- `'\x98DB3a41bF8bF4DeD2C92A84ec0705689DdEEF8B'` - Gnosis Chain Mainnet address, in case of Citus

| | Postgres* | TimescaleDB | Aurora** | Citus |
|----------------|---------------|--------------|--------------|------------|
| Execution time | 12,193.969 ms | 2,268.463 ms | 1,551.802 ms | 298.884 ms |

*Postgres on Amazon RDS
**Amazon Aurora

## Comparison of different queries between Postgres AWS RDS and Citus

The comparison is based on Gnosis Chain Mainnet data*.

*The Postgres DB was loaded by the production application at the time of the comparison.

Postgres RDS instance: db.m6g.2xlarge (**8 vCPU, 32 GiB**)
Citus instance: **16 vCPU across 3 worker nodes and 1 coordinator** (4 vCPU per node)

1. Last 100,000 transactions sent from an address

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = '\x10AaE121b3c62F3DAfec9cC46C27b4c1dfe4A835')
ORDER BY block_number DESC
LIMIT 100000;
```

| | Postgres* | Citus |
|----------------|---------------|-----------|
| Execution time | 55,670.967 ms | 26,496 ms |

*Postgres on Amazon RDS

2. Limit/offset pagination: the 38th page of transactions sent from an address

```
SELECT t0.*
FROM transactions AS t0
WHERE ((t0.error IS NULL) OR (t0.error != 'dropped/replaced'))
  AND (t0.from_address_hash = '\x10AaE121b3c62F3DAfec9cC46C27b4c1dfe4A835')
ORDER BY block_number DESC
LIMIT 50 OFFSET 1900;
```

| | Postgres* | Citus |
|----------------|------------|---------|
| Execution time | 924.026 ms | 17.5 ms |

*Postgres on Amazon RDS

3. Count of transactions of an address with ~300K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = '\xd7de65b79d01a94f8e0acc2c82c5520cc31bc927';
```

| | Postgres* | Citus |
|----------------|----------------|---------------|
| Execution time | 167,237.334 ms | 14,492.397 ms |

*Postgres on Amazon RDS

4. Count of transactions of an address with ~100K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = '\x1f4fe9D5E315c12004dFd0e5D628Abb270D84032';
```

| | Postgres* | Citus |
|----------------|---------------|--------------|
| Execution time | 55,100.493 ms | 5,537.169 ms |

*Postgres on Amazon RDS

5. Count of transactions of an address with ~6K transactions

```
SELECT COUNT(t0.*) FROM transactions AS t0 WHERE t0.from_address_hash = '\x98db3a41bf8bf4ded2c92a84ec0705689ddeef8b';
```

| | Postgres* | Citus |
|----------------|--------------|------------|
| Execution time | 3,111.033 ms | 298.884 ms |

*Postgres on Amazon RDS
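As a closing illustration of the keyset technique the queries above rely on, here is a minimal, self-contained sketch. It uses SQLite purely for demonstration (not the Blockscout code); the table mimics a subset of the `transactions` schema, with the column named `idx` because `index` is a reserved word in SQLite. It checks that keyset pagination over `(block_number DESC, idx DESC)` yields exactly the same pages as limit/offset pagination, while only ever scanning forward from the cursor:

```python
import sqlite3

# In-memory stand-in for the `transactions` table (illustrative schema subset).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (block_number INTEGER, idx INTEGER)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(b, i) for b in range(100) for i in range(10)],  # 1000 synthetic rows
)

PAGE = 50

def page_offset(page_no):
    # Limit/offset pagination: the DB must skip page_no * PAGE rows,
    # so the cost grows with the page number.
    return conn.execute(
        "SELECT block_number, idx FROM transactions "
        "ORDER BY block_number DESC, idx DESC LIMIT ? OFFSET ?",
        (PAGE, page_no * PAGE),
    ).fetchall()

def page_keyset(cursor):
    # Keyset pagination: resume strictly after the (block_number, idx) cursor,
    # mirroring the (block_number < $1) OR (block_number = $2 AND index < $3)
    # condition used by Blockscout.
    if cursor is None:  # first page has no cursor
        return conn.execute(
            "SELECT block_number, idx FROM transactions "
            "ORDER BY block_number DESC, idx DESC LIMIT ?",
            (PAGE,),
        ).fetchall()
    b, i = cursor
    return conn.execute(
        "SELECT block_number, idx FROM transactions "
        "WHERE block_number < ? OR (block_number = ? AND idx < ?) "
        "ORDER BY block_number DESC, idx DESC LIMIT ?",
        (b, b, i, PAGE),
    ).fetchall()

# Walk both pagination styles and check they agree page by page.
cursor = None
for page_no in range(5):
    keyset_rows = page_keyset(cursor)
    assert keyset_rows == page_offset(page_no)
    cursor = keyset_rows[-1]  # the last row becomes the next page's cursor
print("keyset and offset pages match")
```

With a composite index on `(block_number DESC, idx DESC)` the keyset variant stays cheap on any page, which is exactly why the address-page indexes listed at the top of this document include `(address, block_number DESC, index DESC, hash)`.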