Develop a DeFi-focused dataset for fine-tuning language models. If integrated, the resulting model will deliver millions of dollars of additional value through more accurate Risk estimates and improved Governance Communications.
Build the industry's first DeFi-focused dataset. The data will be used to fine-tune a Large Language Model (BLOOM). A fine-tuned model integrated into Maker's pipelines will help gradually relax the current Risk CU modelling assumptions, thereby capturing millions of dollars of currently missed value for the DAO and Maker users. A fine-tuned model may also provide opportunities to expand and improve the written work performed by the GovComms CU. Furthermore, a DeFi-focused dataset offers extensibility across other DAOs.
This SPF will fund the creation of the industry's first DeFi-focused dataset, which will be used to fine-tune general-purpose Large Language Models. The resulting model will be used for the benefit of Maker's Risk modelling and Governance Communications.
I. The new model will augment and extend the current Risk CU Model.
The goal is to turn the Risk CU Model's conservative CONSTs into variables (see the Technical appendix for details).
Risk Premium estimates could be reduced by relaxing the current conservative CONSTs, replacing them with input values learnt from real-life data =>
For instance, if ETH-A Jump Severity goes from 50% to 45% and Jump Frequency from 2 to 1 per year:
II. A Large Language Model fine-tuned on this dataset for DeFi sentiment analysis could also be integrated into the GovComms CU's pipeline for improved topic identification and research (more on the need covered here), as well as social sentiment analysis (see the sketch after this list).
III. As the industry's first DeFi-focused language model, it opens up a number of collaboration opportunities with other DAOs and DeFi protocols, which can benefit from better risk identification, improved marketing techniques, and preserved brand reputation.
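By way of illustration, here is a minimal sketch of how a DeFi-tuned sentiment classifier could sit in the GovComms pipeline. The checkpoint name is a placeholder for the model this SPF would produce, and the standard Hugging Face `pipeline` interface is assumed:

```python
# Hypothetical example: scoring governance-forum posts with a DeFi-tuned
# sentiment classifier. The checkpoint name is a placeholder, not a real model.
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="makerdao/defi-sentiment",  # placeholder for the fine-tuned checkpoint
)

posts = [
    "The proposed Stability Fee hike for ETH-A feels premature given current vault health.",
    "Great writeup -- the Surplus Buffer change looks well reasoned.",
]

for post, result in zip(posts, sentiment(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post[:60]}")
```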
This project is a continuation of the work on web3 native intelligence.
In the previous proof-of-concept step I showed that the UST stablecoin crash in May could have been predicted from the significant downward movement in UST sentiment beginning in mid-April, which resulted in a gradual loss of confidence => panic => bank run => death spiral supported by the mechanics of the protocol.
However, DeFi conversation on Twitter/Discord/Discourse is full of domain-specific slang, nuance, and deep context. A general-purpose model must natively understand these details to produce adequate results. Moreover, the SemEval-2017-4A dataset, used for fine-tuning in the previous step, contains only 50k tweets.
A bigger and more native fine-tuning dataset will improve the accuracy of our model on sentiment detection and summarisation tasks.
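For concreteness, a hedged sketch of how the collected dataset might be used for the sentiment-classification part of the fine-tuning, assuming a Hugging Face Trainer setup, a small BLOOM checkpoint, and a three-class (negative/neutral/positive) labelling scheme; file names and hyperparameters are illustrative only:

```python
# Illustrative fine-tuning sketch: paths, checkpoint size and hyperparameters
# are placeholders, not the proposal's deliverables.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "bigscience/bloom-560m"  # small BLOOM variant used only for the sketch
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

# Hypothetical DeFi dataset with "text" and integer "label" columns.
ds = load_dataset("json", data_files={"train": "defi_sentiment_train.jsonl",
                                      "eval": "defi_sentiment_eval.jsonl"})
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
            batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-defi-sentiment",
                           per_device_train_batch_size=8,
                           num_train_epochs=3),
    train_dataset=ds["train"],
    eval_dataset=ds["eval"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```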
Pioneer DeFi-focused language dataset for the benefit of Risk modelling & GovComms
This SPF will fund the next step of the web3 native intelligence program as follows:
Funding for the steps above is requested as follows:
Technical appendix
The Risk Premium estimate is at the center of the Maker Risk CU's Collateral Risk Model.
Currently the Model's "Jump Frequency" and "Jump Severity" inputs for a given period are conservative CONSTs: two jumps per year, with severity of 45% for WBTC, 50% for ETH, and 60% for all other volatile assets.
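Written down as a simple configuration, the current constants are:

```python
# Current conservative CONSTs of the Collateral Risk Model, as described above
# (jump frequency per year, jump severity as a price drawdown).
JUMP_CONSTS = {
    "WBTC":  {"jump_frequency_per_year": 2, "jump_severity": 0.45},
    "ETH":   {"jump_frequency_per_year": 2, "jump_severity": 0.50},
    "OTHER": {"jump_frequency_per_year": 2, "jump_severity": 0.60},  # other volatile assets
}
```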
The Model's output, the Risk Premium (plus some other estimates), is in turn used to propose key Maker parameters: the vault's Stability Fee, Debt Ceiling, and the System Surplus Buffer (via the protocol-wide Capital at Risk metric).
The goal of the project is to offer a tool that learns the Model's "Jump Severity" and "Jump Frequency" inputs for a given period as variables, derived from real-life crypto community sentiment towards collateral types.
Since the current CONSTs are conservative by design, input values learnt from real-life data will generally allow Risk Premium estimates to be lowered =>
For instance, if ETH-A Jump Severity goes from 50% to 45% and Jump Frequency from 2 to 1 per year:
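As a back-of-the-envelope illustration of the direction of this change (not the Risk CU's actual calculation), consider a crude proxy in which the premium scales with jump frequency times jump severity:

```python
# Toy illustration only: the real Risk CU model simulates vault distributions,
# auction dynamics, etc. Here the Risk Premium is proxied as
# jump_frequency * jump_severity, purely to show the direction of the effect.
def premium_proxy(jump_freq_per_year: float, jump_severity: float) -> float:
    return jump_freq_per_year * jump_severity  # relative units, not a real premium

const_inputs  = premium_proxy(2, 0.50)  # current conservative CONSTs for ETH-A
learnt_inputs = premium_proxy(1, 0.45)  # example values learnt from real-life data

reduction = 1 - learnt_inputs / const_inputs
print(f"proxy falls from {const_inputs:.2f} to {learnt_inputs:.2f} (~{reduction:.0%} lower)")
```

In this toy proxy the relaxed inputs roughly halve the value; the actual Risk Premium impact depends on vault distributions and the full simulation.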
At the first stage, sentiment dynamics will be wrapped into a dashboard as a tool for Computer-Aided Governance: a qualitative factor to be considered, allowing assumptions to be relaxed when proposing protocol parameters for the MKR vote.
Then it will gradually be integrated as an input into the quantitative Collateral Risk Model. One approach splits price dynamics into two components, EMH and non-EMH: the EMH component is modelled as a GBM, augmented by a non-EMH (tail volatility) component modelled with a Sentiment Model (sketched below).
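A minimal sketch of that decomposition, with all parameter values as placeholders: the EMH component is simulated as a GBM and overlaid with Poisson-arriving downward jumps whose frequency and severity would be supplied by the Sentiment Model rather than fixed CONSTs.

```python
# Sketch of the two-component idea: GBM (EMH) log-returns plus a jump overlay
# (non-EMH / tail volatility). All numbers below are placeholders.
import numpy as np

def simulate_path(s0=1800.0, mu=0.0, sigma=0.8, days=365,
                  jump_freq_per_year=1.0, jump_severity=0.45, seed=0):
    rng = np.random.default_rng(seed)
    dt = 1 / 365
    # EMH component: geometric Brownian motion log-returns.
    gbm = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(days)
    # Non-EMH component: Poisson-arriving downward jumps of fixed severity.
    jumps = rng.random(days) < jump_freq_per_year * dt
    log_returns = gbm + jumps * np.log(1 - jump_severity)
    return s0 * np.exp(np.cumsum(log_returns))

# jump_freq_per_year and jump_severity become variables fed by the sentiment signal
path = simulate_path(jump_freq_per_year=1.0, jump_severity=0.45)
print(f"min over the year: {path.min():.0f}, final: {path[-1]:.0f}")
```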
Once enough time-series data has accrued, a regression model could be learnt with sentiment + vault activity as input and Risk Premium (price volatility => liquidation) prediction as output.
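A hedged sketch of that regression step, assuming a historical feature table (sentiment and vault-activity aggregates) with realised Risk Premium labels already exists; the column names and file path are hypothetical:

```python
# Sketch only: assumes an accrued time series of sentiment + vault-activity
# features and a Risk Premium target. Column names and path are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("risk_features.csv")
features = ["sentiment_mean_7d", "sentiment_slope_7d",
            "new_vaults_7d", "collateral_inflow_7d"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["risk_premium"], shuffle=False, test_size=0.2)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on the held-out (most recent) period:", model.score(X_test, y_test))
```

Splitting without shuffling keeps the time ordering, so the evaluation mimics forecasting the most recent period from earlier data.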
Finally, a sentiment + vault activity model could be augmented with a Maker Vault Liquidation ML Model into a single cause => effect model.
The eventual goal could be for protocol parameters to become fully autonomously derived from onchain and offchain (crypto community) data.