See our docs for more details.
While all the core functionality you expect from Dune is available in Dune v2, the new version brings changes to how queries and refreshing work, both on the surface and under the hood.
By default, new queries are not saved, so you can play around and test various ideas quickly. If you want to keep your work, remember to hit the Save button and give your query a name.
This means that if you navigate away from a page without saving, your work is lost. If you allow notifications from duneanalytics.com in your browser, the app will notify you when you navigate away from unsaved content.
The same applies to forks. A fork is not saved to your queries before you click the Save button and give your fork a name.
We've merged the old save and execute buttons into a single button, Run. When you click Run on a saved query, any changes you've made to the query text are saved and the query is queued for execution.
Useful tip: press CMD / CTRL + Enter to run your query.
Running query snippets
If you select part of your query, the button changes to Run selection and executes only the part you've selected. This updates the results and visualizations in the process, but does not save the query.
Useful tip: if you want to execute the whole query without saving it, select all of the query (CMD / CTRL + A) and then hit Run selection.
There are also changes to how results are refreshed. Over the last few months we've been redesigning our entire query execution backend to bring large improvements to how Dune works.
When a visualization is viewed, either on a query page or on a dashboard, the Dune backend will inspect the age of the most recent result. If the result is stale (currently defined as >3 hours old), Dune will automatically queue an execution for this query and run it in the background.
This means that your dashboards will always be kept up to date when they are being viewed.
Query creators no longer need to set a refresh schedule, and the scheduling feature has been deprecated as it is no longer needed to keep results fresh.
We're also bringing changes to how queueing works in Dune. When you manually click Run on a query an execution is queued. Every user on Dune can queue 3 queries at the same time (pro users have more, see our pricing page for more info).
Automatically refreshed results (as described above) use neither the query creator's queuing credits nor the viewing user's.
- A faster and better looking Dune - the whole Dune app, including all the charts, has gotten a big facelift. The app is now also many times faster.
- Query parameters - probably Dune's most underrated feature has gotten a major upgrade. It's now super easy to filter any query or dashboard for a token, an address or anything really, meaning you can turn your queries and dashboards into interactive apps for onchain data (see the sketch after this list).
- See all content without having to log in - simply copy and share the link from the page you are on. The exception is paid users who make their content private; more details on our pricing page.
- Errors show on the relevant line - when running your queries you will see error messages directly on the relevant line in your code.
- Click a query's title on a dashboard to go straight to that query.
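Here is a rough sketch of what a parameterised query can look like. The parameter name token_symbol is purely an example, and depending on the parameter type you configure you may not need the surrounding quotes:
-- {{token_symbol}} is a query parameter defined in the editor; viewers can change it
-- from the query page or a dashboard without editing the SQL
SELECT minute, price
FROM prices.usd
WHERE symbol = '{{token_symbol}}'
ORDER BY minute DESC
LIMIT 1000;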
We’d love to get your feedback on your experience with Dune v2 in the #feedback channel in our Discord. You can also contact support@duneanalytics.com if you face any issues.
Happy querying!
PS. There are a few visualisation types that are currently not supported in Dune v2: pivot, cohort, funnel, heatmap and bubble. We've emailed the users that are affected. If you have not been emailed, all your queries and visualisations should work seamlessly in v2.
We've changed price providers from Coincap to Coinpaprika, and in turn now have prices for 230+ assets in the prices.usd and prices.layer1_usd tables! A slight caveat is that while previously we had prices for all minutes up to the current time, we now have prices up to 5 minutes before the current time.
prices.layer1_usd
We've moved all assets that are not tokens on Ethereum to their own table prices.layer1_usd. This table is partitioned on symbol and has (symbol, minute) as its primary key, as before.
Column │ Type
────────┼──────────────────────────
minute │ timestamp with time zone
price │ double precision
symbol │ text
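As a quick sketch of how the new table can be used (assuming BTC is among the listed symbols), the daily average Bitcoin price:
-- Daily average BTC price from prices.layer1_usd
SELECT date_trunc('day', minute) AS day, avg(price) AS price
FROM prices.layer1_usd
WHERE symbol = 'BTC'
GROUP BY 1
ORDER BY 1;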
decimals on prices.usd
We've added a new column decimals to prices.usd so that you can avoid the additional join with erc20.tokens when calculating correct volumes.
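As an illustrative sketch (GNO is just an example token, and the query assumes the evt_block_time column described further down in this changelog), USD transfer volume can now be computed without the erc20.tokens join:
-- Daily USD transfer volume for GNO using the new decimals column on prices.usd
SELECT date_trunc('day', tr.evt_block_time) AS day,
       sum(tr.value / 10 ^ p.decimals * p.price) AS usd_volume
FROM erc20."ERC20_evt_Transfer" tr
JOIN prices.usd p
  ON p.contract_address = tr.contract_address
 AND p.minute = date_trunc('minute', tr.evt_block_time)
WHERE tr.contract_address = '\x6810e776880c02933d47db1b9fc05908e5386b96' -- GNO
GROUP BY 1;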
prices.usd
Previously prices.usd was partitioned on the token symbol, and its primary key was (symbol, minute). In this release we've changed this scheme to allow multiple contract_addresses to map to the same symbol. This can happen, for example, while a token migration is in progress. prices.usd is now partitioned on contract_address and its primary key is (contract_address, minute).
Note that this might yield some weird results for queries that rely on SELECT minute, price FROM prices.usd WHERE symbol='x' in cases where two contracts map to the symbol x within a time window: you may then receive several rows per minute. It is better to filter and join on the primary key contract_address instead.
Column │ Type
──────────────────┼─────────────────────────
minute │ timestamp with time zone
price │ double precision
decimals │ smallint
contract_address │ bytea
symbol │ text
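A minimal sketch of filtering on the new primary key instead of the symbol (GNO's address is used purely as an example):
-- Minute-level GNO price, filtered on contract_address rather than symbol
SELECT minute, price
FROM prices.usd
WHERE contract_address = '\x6810e776880c02933d47db1b9fc05908e5386b96' -- GNO
ORDER BY minute DESC
LIMIT 100;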
Big release on the data front this month.
block_time and block_number
We've added block_time and block_number to all decoded events and calls, with the names evt_block_time, evt_block_number and call_block_time, call_block_number respectively. This eliminates the most painful joins on the platform: joining with ethereum.transactions in order to get a time dimension into your queries.
Where previously you would need to do
SELECT date_trunc('day', tx.block_time), count(*)
FROM uniswap."Exchange_evt_TokenPurchase" tp
INNER JOIN ethereum.transactions tx ON tp.evt_tx_hash = tx.hash
GROUP BY 1;
in order to get the number of daily TokenPurchase events on Uniswap, you can now simply do
SELECT date_trunc('day', evt_block_time), count(*)
FROM uniswap."Exchange_evt_TokenPurchase"
GROUP BY 1;
eliminating a painful and costly join. Joining across two decoded tables can also be done without involving any of the ethereum tables by using evt_block_number or call_block_number.
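For instance, here is a sketch that pairs the two Uniswap purchase directions block by block, assuming a decoded Exchange_evt_EthPurchase table exists alongside Exchange_evt_TokenPurchase:
-- Transactions with TokenPurchase vs EthPurchase events in each block, joining two
-- decoded tables directly on evt_block_number without touching ethereum.transactions
SELECT tp.evt_block_number,
       count(DISTINCT tp.evt_tx_hash) AS token_purchase_txs,
       count(DISTINCT ep.evt_tx_hash) AS eth_purchase_txs
FROM uniswap."Exchange_evt_TokenPurchase" tp
INNER JOIN uniswap."Exchange_evt_EthPurchase" ep
  ON tp.evt_block_number = ep.evt_block_number
GROUP BY 1
LIMIT 100;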
We're also "decoding" events that are emitted without parameters. These events have their own tables, but only include Dune-added fields i.e.:
Column | Type |
---|---|
contract_address | bytea |
evt_tx_hash | bytea |
evt_index | bigint |
evt_block_time | timestamptz |
evt_block_number | bigint |
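A hypothetical sketch (the table name someproject."Contract_evt_Paused" is made up purely for illustration): even without parameters, the Dune-added fields are enough to count and time-bucket occurrences.
-- Daily count of a parameterless event using only the Dune-added fields
SELECT date_trunc('day', evt_block_time) AS day, count(*) AS events
FROM someproject."Contract_evt_Paused"
GROUP BY 1
ORDER BY 1;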
We've added function return values to decoded call tables. Whenever a function has a named return value it will be decoded into output_{{name}}, and when it is not named it will be decoded into output_{{i}}, where i is a counter starting at 0. Consider the following case, counting success and error codes for calls to the Compound cErc20 mint function:
SELECT output_0, count(*)
FROM compound_v2."cErc20_call_mint"
WHERE call_success
GROUP BY 1;
TLDR: we've added a success field to ethereum.traces that you can use to figure out whether an exact call was successful. Useful for e.g. calculating balances. Here's an example:
SELECT sum(amount)
FROM (
SELECT address, sum(amount) / 1e18 AS amount
FROM (
-- ETH sent out of the address, only counting calls that actually succeeded
SELECT "from" AS address, -value AS amount
FROM ethereum.traces
WHERE "from" = '\x390435245F2f95f7443eBb045357DA743E9A65a4'
AND success = true
UNION ALL
-- ETH received by the address
SELECT "to" AS address, value AS amount
FROM ethereum.traces
WHERE "to" = '\x390435245F2f95f7443eBb045357DA743E9A65a4'
AND success = true
UNION ALL
-- gas paid by the address for the transactions it sent
SELECT "from" AS address, -gas_used * gas_price AS amount
FROM ethereum.transactions
WHERE "from" = '\x390435245F2f95f7443eBb045357DA743E9A65a4'
) t
GROUP BY 1
) a
;
Longer story: Dune ingests transaction traces from Parity OpenEthereum. OpenEthereum returns a tree-like data structure of function call traces, where some traces can have a non-null error field. Where previously we ingested the traces more or less as is, today we've added a success field to ethereum.traces. This success field is true for a given trace if no trace above it in the trace hierarchy has a non-null error field.
It came to our attention that if a parent trace entry has a non-null error field, the child call is considered failed in the EVM as well. Previously in Dune, in order to correctly assess whether a given function call's state change was included in the blockchain, you would have needed to write a slightly complex query checking whether any traces in the same branch of the trace tree had an error. With the addition of the success field today, this has become much easier.
Note that the tx_success field denotes the success of the transaction as a whole, and that a true tx_success field does not necessarily mean that every function call in the transaction has a true success field. Here are the possible combinations:
tx_success | success |
---|---|
true | true |
true | false |
false | false |
As you can see, a call can be unsuccessful within a successful transaction, but a call cannot be successful within an unsuccessful transaction.
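A small sketch of how the two fields combine in practice:
-- Calls that reverted inside otherwise successful transactions
SELECT count(*)
FROM ethereum.traces
WHERE tx_success = true
  AND success = false;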
Also note that where previously the call_success field on decoded tables was calculated as traces.tx_success && !traces.error, it is now copied directly from traces.success.
Upgraded the databases to PostgreSQL 12.2; see the PostgreSQL changelog for details.
Renamed some curvefi contracts:
schema | name | address |
---|---|---|
curvefi | compound | 0xe5fdbab9ad428bbb469dee4cb6608c0a8895cba5 |
curvefi | usdt | 0x52ea46506b9cc5ef470c5bf89f17dc28bb35d85c |
curvefi | y | 0x45f783cce6b7ff23b2ab2d70e416cdb7d6055f51 |
curvefi | compound_v2 | 0x2e60cf74d81ac34eb21eeff58db4d385920ef419 |
curvefi | busd | 0x79a8c46dea5ada233abaffd40f3a0a2b1e5a4f27 |
curvefi | compound_v3 | 0xa2b47e3d5c44877cca798226b7b8118f9bfb7a56 |
Moved onesplit contracts to their own schema.
schema | name | address |
---|---|---|
onesplit | OneSplit | 0x2ad672fda8a042c4c78c411bf9d4f1b320aa915a |
onesplit | OneSplit | 0xdff2aa5689fcbc7f479d8c84ac857563798436dd |
onesplit | OneSplit | 0x0a236b41add0073df05eac5fc8505ad745c******7859d |
We've upgraded the database from postgres 11 to postgres 12.1. This should result in performance improvements across the board.
You can now query the erc20."ERC20_evt_Transfer" and erc20."ERC20_evt_Approval" tables to get decoded token transfers and approvals. These tables should include all events that could be decoded using the ABI of the ERC20 standard. This means that all transfers from all tokens can be queried through this table. The table is large (240M rows at the time of writing), so please let us know how you find the query performance.
erc20."ERC20_evt_Transfer" schema:
Column | Type |
---|---|
"from" | bytea |
"to" | bytea |
value | numeric |
contract_address | bytea |
evt_tx_hash | bytea |
evt_index | bigint |
erc20."ERC20_evt_Approval" schema:
Column | Type |
---|---|
owner | bytea |
spender | bytea |
value | numeric |
contract_address | bytea |
evt_tx_hash | bytea |
evt_index | bigint |
Here contract_address refers to the contract emitting the event, i.e. the token address, while evt_tx_hash and evt_index conveniently let you join with ethereum.logs using a query like
SELECT *
FROM erc20."ERC20_evt_Approval" apps
INNER JOIN ethereum.logs
ON apps.evt_tx_hash = logs.tx_hash AND apps.evt_index = logs.index
LIMIT 100;
Also note that you can join these tables with erc20.tokens to get human readable token symbols and the number of decimals, like
SELECT value/10^decimals, tr.*
FROM erc20."ERC20_evt_Transfer" tr
LEFT JOIN erc20.tokens t
ON t.contract_address = tr.contract_address
WHERE symbol = 'MKR'
LIMIT 10
Note though that value is the raw amount in the token's smallest unit, so dividing by 1e18 is only correct for tokens with 18 decimals. For example, to list the largest GNO holders:
WITH transfers AS (
SELECT
evt_tx_hash AS tx_hash,
tr."from" AS address,
-tr.value AS amount
FROM erc20."ERC20_evt_Transfer" tr
WHERE contract_address = '\x6810e776880c02933d47db1b9fc05908e5386b96' --GNO
UNION ALL
SELECT
evt_tx_hash AS tx_hash,
tr."to" AS address,
tr.value AS amount
FROM erc20."ERC20_evt_Transfer" tr
WHERE contract_address = '\x6810e776880c02933d47db1b9fc05908e5386b96' --GNO
)
SELECT address, sum(amount/1e18) as balance
FROM transfers
GROUP BY 1
ORDER BY 2 desc
LIMIT 10
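Or, similarly, the token balances held by a single address across all tokens (again dividing by 1e18, so strictly correct only for tokens with 18 decimals):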
WITH transfers AS (
SELECT
evt_tx_hash AS tx_hash,
contract_address AS token_address,
-tr.value AS amount
FROM erc20."ERC20_evt_Transfer" tr
WHERE tr."from" = '\xdeadbeef' -- insert real address here
UNION ALL
SELECT
evt_tx_hash AS tx_hash,
contract_address AS token_address,
tr.value AS amount
FROM erc20."ERC20_evt_Transfer" tr
WHERE tr."to" = '\xdeadbeef' -- insert real address here
)
SELECT token_address, sum(amount/1e18) as balance
FROM transfers
GROUP BY 1
ORDER BY 2 desc;
The above tables are generated with a new feature we call "fallback decoding". Essentially, it boils down to being able to decode logs regardless of the event's contract address or contract bytecode. If you know of other cases where this kind of decoding could be useful, feel free to let us know at hello@duneanalytics.com.
- Gnosis Sight, Safe and dFusion contracts now live in the gnosis_sight, gnosis_safe and gnosis_dfusion schemas respectively.
- Synthetix contracts are found in the synthetix schema.
- prices.usd_dai has been renamed to prices.usd_sai, and the symbol changed to SAI - we still don't have DAI prices unfortunately, as our price provider has not listed it.
- prices.usd_rep now has the correct token address for REP for entries after the migration. Previously all entries had the old address.

If you are forking old queries you need to be aware of these changes so you can replace old table and column names with the new ones.
Previously the USD price table was named coincap."tokens/usd". This has now changed to prices.usd with the fields
symbol -- unchanged
contract_address -- previously "asset"
price -- previously "average"
minute -- previously "time"
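For example, a query against the new table now looks like this (MKR is used purely as an illustration):
-- New table and column names: "time" is now minute, "average" is now price
SELECT minute, price
FROM prices.usd
WHERE symbol = 'MKR'
ORDER BY minute DESC
LIMIT 10;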
We add some data fields to all decoded tables for calls and events. These are now named evt_index, evt_tx_hash and contract_address. Previously these were simply named index, tx_hash and address, but we wanted to make it clear that these are added by Dune and not extracted directly from the blockchain.
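For instance, the renamed columns on a decoded event table can be selected like this:
-- Dune-added columns, previously named index, tx_hash and address
SELECT evt_index, evt_tx_hash, contract_address
FROM erc20."ERC20_evt_Transfer"
LIMIT 10;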