---
tags: G&R
---
# Episode 176: January 27th, 2022
## Agenda
- [00:03](https://youtu.be/SDd6ndcIy9A?t=3): Introduction
- [02:02](https://youtu.be/SDd6ndcIy9A?t=122): Votes and Polls
- [03:37](https://youtu.be/SDd6ndcIy9A?t=217): MIPs Update
- [07:08](https://youtu.be/SDd6ndcIy9A?t=428): Forum at a Glance
- [14:24](https://youtu.be/SDd6ndcIy9A?t=864): Presentation - Initiative Tracking and Cross-CU Collaboration
- [50:03](https://youtu.be/SDd6ndcIy9A?t=3003): Discussion - Upgrading Surplus Actions (FLAPS)
- [1:24:32](https://youtu.be/SDd6ndcIy9A?t=5072): Conclusion
## Video
<https://youtu.be/SDd6ndcIy9A>
## Introduction
### Agenda and Preamble
#### David Utrobin
[00:03](https://youtu.be/SDd6ndcIy9A?t=3)
- Welcome to today's Governance and Risk call, number 176. A quick PSA: LongForWisdom is not hosting today, and Prose is on vacation this week.
- This is a public call at MakerDAO where we bring together internal and external stakeholders and go through a handful of different things.
- The call's contents involve the governance roundup, where we go through the votes, MIPs, and the forum update for the week, plus a new segment called the initiative updates. I will be covering that shortly; we will cover three initiatives where we have been tracking progress and coordinating stakeholder alignment, including Layer 2 and on- and off-chain collateral management.
- We will discuss upgrading the surplus auctions (FLAPs). We will talk about options in order of resource investment, other considerations, and timelines. Lastly, there will be an open discussion if there is time at the end of the call. Anybody can bring up what they want, and we have an anonymous question box.
## General Updates
### Votes
#### David Utrobin
[02:02](https://youtu.be/SDd6ndcIy9A?t=122)
*Polls:*
- 2 ongoing Greenlight Polls, ending on Monday, Feb 7th.
- STABLE-TBILL, STABLE-NOTE.
- 11 Ratification Polls Concluded on Monday
- To be detailed in our MIPs segment
*Executive:*
- Last Week’s Executive - Passed and Executed - MOMC Parameter Changes
- Monday’s Out-of-Schedule Executive - Passed and Executed - Raise system surplus buffer to temporarily disable surplus auctions, and MKR burn.
- Tomorrow’s Executive Contents:
- New Core Unit Budgets (including oracle gas budget)
- Revoking Content Production Budget
### MIPs
#### Pablo
[03:37](https://youtu.be/SDd6ndcIy9A?t=217)
[Weekly MIPs Update #71](https://forum.makerdao.com/t/weekly-mips-update-71/12826)

RFC and FS dates for reference and existing proposals can be seen [here.](https://docs.google.com/spreadsheets/d/10qxaRjm9T5LCgnYwwYqPqEjKu9yl2F1YHqdDD5SktJc/edit?usp=sharing)
The beginning of week four marks the closure of the Ratification Polls for January, which went live two weeks ago on Monday, January 10th. All the proposals passed. *Visit the MIPs Update post for a breakdown of the votes for each proposal.*
### Forum at a Glance
#### Artem Gordon
[07:08](https://youtu.be/SDd6ndcIy9A?t=428)
Post: [Forum at a Glance: January 20th - January 27th, 2022](https://forum.makerdao.com/t/forum-at-a-glance-september-10th-16th/10344)
Video: [Forum at a Glance](https://youtu.be/SDd6ndcIy9A?t=416)
> PDF version available [here.](https://pdfhost.io/v/lCrpbFWUs_Forum_at_a_Glance__January_20_27_2022)
## Initiative Tracking and Cross-CU Collaboration
### Initiative Update
#### David Utrobin
[14:24](https://youtu.be/SDd6ndcIy9A?t=864)
- For this next segment, we will talk about the initiative updates, drawn from the stakeholder alignment calls; two of today's updates come from those calls. This is a recurring segment; we are still figuring out how often to update on the various initiatives. We need to balance giving too much information against keeping you updated. There is much information and many initiatives, as you will see in one of my coming slides. In general, we are iterating on and improving the reporting format. This is the first pilot run of the format, and we are open to suggestions.
- This initiative update section is powered by the Initiative Tracking and Reporting project and the Cross-Core Unit collaboration involved. Our team GovComms has been taking on a brand new project to track all of the various initiatives at Maker to coordinate the different stakeholders involved in these initiatives, with the modular framework that we are developing in partnership with Wouter from SES.
- Why is stakeholder alignment and initiative tracking necessary? We are in a DAO; it is a headless organization. There are many reasons to do this, but the main ones are: it prevents stagnating initiatives, produces natural accountability, produces feedback loops, improves the accuracy of projected timelines, identifies existing resources and potential needs, and creates efficiency in execution. It reduces responsibility gaps for people who do not know who is responsible for what. It syncs the globally diverse cross-team planning that is involved in any of these initiatives. We improve clarity by focusing on initiative output rather than team output because we recognize that initiatives span a vast ecosystem with overlapping CUs.
- Unless you keep them accountable, it does not make sense to follow Core Unit updates individually. We believe this is important for stakeholders thinking about MakerDAO from a more holistic roadmap perspective, focusing on initiative output. Also, it creates confidence for external stakeholders and clarifies the work we are doing, not just who is doing it. In general, decentralized organizations need a modular shared framework for work planning, execution, and reporting; individual Core Units need to collaborate to deliver results in a DAO. This is a natural part of that process. It also reduces the bus factor: when you bring together all of these different cross-functional teams, cross-functional knowledge sharing and cross-team training happen.
- Moreover, it also produces shared roadmaps, which, as I already mentioned, are essential for clarity. There are 11 buckets that we have identified, such as L2, collateral management (on- and off-chain), Oracles, technical upgrades, organizational improvements, growth, stakeholder content, special purpose funds, risk, and security. All of these have projects underneath.
- There are some process requirements: all these Core Units should self-organize, identify stakeholders, identify common goals, identify the solutions that need to be implemented, and agree on the work breakdown of tasks associated with those milestones. Dependencies have to be mapped, understood, and agreed upon. At the same time, timelines have to be estimated correctly and orchestrated between all these different teams doing all this different work. Autonomous execution is what we are going for, and it requires well-understood best practices to avoid amateurism and stagnation of initiatives due to responsibility gaps. This whole process can also be leveraged to create a lot more transparency through reporting. This is how the framework is structured; every initiative has four stages that it can be in.

[20:00](https://youtu.be/SDd6ndcIy9A?t=1200)
- There is an ideation workshop stage for an initiative, where a lot of thought goes into producing the right game plan for it. Eventually, that results in a solution design. Solution design is relevant to technical projects with multiple dependencies and considerations. Every initiative then goes through a work breakdown format. This is where all the stakeholders of an initiative decide on the main milestones that have to happen over the next quarter to a year.
- They take the initiative and map out all of the individual work to be done. After that, statuses and owners are assigned, and the stakeholder alignment call moves into the production coordination phase. This is where all the deliverables are laid out, and it becomes formulaic for people: before each call, participants update the status of everything that happened.
- The call is used to surface blockers, discuss key points to move forward on various milestones, and share information on the Governance and Risk status updates. We are already doing stakeholder alignment calls and project tracking on our collateral management on-chain initiative. The off-chain version of that same initiative is separate, but there is a nuance between the two. The third one is L2, which we first piloted the stakeholder alignment calls on. Those are the three that we will bring you today. Some next ones that are planned are the security/emergency response planning initiative and the new education and onboarding initiative.
#### L2 Stakeholders Coordination Update
[22:26](https://youtu.be/SDd6ndcIy9A?t=1346)
- Derek: As David said, this is about bringing together cross-functional stakeholders, defining milestones, ensuring a work breakdown to align tasks, and understanding the next steps around dependencies and blockers. It is a weekly process on the L2 side to understand what those blockers are, talk about the accomplishments, and provide GovComms and others with a weekly progress status update. Moreover, that helps us keep an eye on expectations, understand the tasks, their relative priority, and the relationships between those various types of work. As you will see on the next slide, there are a lot of different stakeholders involved.
- I have not captured all of them because they are updated regularly, as people noted in the sidebar. As the following slide shows, ensuring that we have engagement from different CUs is critical. David mentioned L2. I will be talking about collateral onboarding and offboarding. Then there are two coming shortly, for security stakeholders and Maker Academy; we have already been talking to the Maker Academy guys about how Protocol Engineering can contribute and work with other Core Units. We have a coordinated response through these dedicated topic meetings; we still maintain the mandated actor meeting, as that is an essential roll-up for coordination across facilitators in the different groups. I will not spend more time on that. I will move to the next one, as David covered everything here.
#### L2 Progress
[24:57](https://youtu.be/SDd6ndcIy9A?t=1497)
- First, let us talk about the scope of L2 progression specifically. We are talking about all efforts related to scaling Dai adoption across L2 roll-ups and side chains. You will notice I am not discriminating between roll-ups and side chains; it is about scaling Dai adoption. Anything contributing to that is fair game for this focus and this meeting of stakeholders. You will see the snapshot here on the right side of the page. It lays out the entire product lifecycle: inception, architecture, development, testing, auditing, and working with third parties.

- Third parties can be external to Maker or internal CUs; then there is deployment and maintenance once live. A couple of the key points from the snapshot here are working on technical specification and documentation so our work can be scaled to other domains. By working with Growth on understanding domain liquidity, we can work with third parties and get them more involved in building the ecosystem in these different domains. Also, Oracle support has been working on a proof of concept, which is functional, and ensuring that it integrates with the work we are doing on the protocol engineering side. We have upcoming discussions with the risk team on parameters and on integrating with Growth and GovComms.
- We ensure integration with other teams and that communication to the forum, Twitter, etc., is all included. Also, we are working with keepers, tech ops, and third-party keeper networks to ensure they are engaged in these new ecosystems. Moreover, that includes L2 user interfaces. In terms of working with third parties, there is a lot of testing and auditing, in this case with ChainSecurity, making sure that the contracts are as we expect them to be. This all rolls up to the milestones, which I will break out on the next slide. We split these into three groups.
- Our multi-chain strategy referenced slow withdrawals, fast withdrawals, and MCD on L2 as the main milestones with their respective domains. Before I go into plotting these on a roadmap: we have had many discussions around risk considerations. This is a multifaceted and ongoing discussion about roll-ups versus side chains.
- What are the implications of wrapped Dai versus canonical Dai? What is the behavior of Dai in the wormhole and across domains? Then, what are the attack vectors present as a result of that? These security implications and discussions steer our approach for side chains, such as Polygon and others. This is not restricted to roll-ups but also includes side chains. It is a broad discussion that introduces many risk considerations and requires a broader view. There will be a community call next week that I will schedule as soon as I talk to David after this call. That call will dive into precisely these topics and help illustrate that the roadmap on the next slide is not the complete end-to-end roadmap.

- Here, we talk about milestones for three domains: Optimism, Arbitrum, and StarkNet in blue. The technical work that is enabling them is in beige. There will be others, and over time this will expand as we look at the various liquidity pools that exist and the ecosystems we build. You will note that the Optimism and Arbitrum bridges have been delivered. You can click through with me, David, slowly as I go through these.
[29:16](https://youtu.be/SDd6ndcIy9A?t=1756)
- The StarkNet Dai bridge is currently targeting the end of February; we have medium confidence in that. There are some dependencies here. We are working on it with the StarkNet Core Unit team, predominantly led by Louis and Maciek. We have some dependencies on the StarkNet UI. Audits should be completed mid-February, and there is also work to integrate Oracles. We have medium confidence there because of those dependencies.
- The technical work enabling the fast withdrawals and Dai wormhole is ongoing. I alluded to some of the previous points: liquidity requirements, technical specifications, working with risk, and ensuring keeper third party involvement. We target the end of Q1 for the fast withdrawals space (Optimism and Arbitrum). We have high confidence in this one.
- Some testnet reviews need to be done internally, and some audit responses are our dependencies here. We have medium confidence for the StarkNet fast withdrawals because there are still audits, and quite a bit of healthy discussion has to happen around the approver trust assumptions. They do not have the slow withdrawals that the optimistic roll-ups have, so further discussion is needed in this space. Following that, we have technical work for enabling MCD on L2. There is quite a lot of work here. This includes technical documentation, governance processes, and the documentation and understanding of message relaying. We need governance engagement on L2 liquidations and integration that ensures third-party participation. There is not much point in being in the space without the liquidity.
- Working with third-party market makers will be a key part of leading us to the MCD L2 implementations for Optimism and Arbitrum. Again, we have medium confidence here for the end of Q2; dependencies include audits and MCD deployment. There is quite a lot of work across the Core Units, hence the medium confidence. StarkNet has a slightly different implementation, with the exact date TBD. I am sure Louis and the team will provide input on that as we get closer, have those audits, and understand the trust assumptions of the approver.
[31:48](https://youtu.be/SDd6ndcIy9A?t=1908)
- That is the high-level roadmap we are working towards. All of these steps, in the tiny thumbnail that I showed earlier, have a breakdown of individual tasks that we work through. On the next slide, I wanted to reiterate that the Protocol Engineering L2 team will be deploying the Optimism Dai Wormhole to the Kovan testnet next week, which is awesome. Stay tuned for more updates on this and opportunities to test. We will have more news on this soon, as it is a massive move towards the milestone overall, so keep an eye out for this one. Just a reminder: next week, Bartek and the protocol engineering team will be holding a multi-chain strategy meeting. We will discuss those risk considerations I mentioned on the previous slide and scaling Dai beyond roll-ups; the key participants of this meeting will be protocol engineering, risk, and growth, to continue the discussions we have had already. I also see value in including the community and delegates. We mentioned the Core Units on one of the first slides, but it is broader than that; it is community and delegates. This will be a good opportunity for that broader communication.
#### Collateral Management On-Chain (CMON) Status Update
[33:48](https://youtu.be/SDd6ndcIy9A?t=2028)
- David Utrobin: On-chain collateral management has a handful of involved stakeholders, mainly the GovComms, Risk, Protocol Engineering, CES, SES, Growth, and Oracles CUs. Many people are involved in on-chain collateral management, a key part of the MakerDAO business. In the work breakdown phase of our stakeholder alignment calls, we identified six significant milestones; the fifth one was dropped after the last call, so forgive us for not updating the milestone numbers. The first is to deliver the next major revision of the collateral onboarding process. The current process is a little bit outdated; it does not scale as well as we would like it to.
- Milestone two is liquidations infrastructure cleanup, which involves sanity-checking ourselves on the existing auctions infrastructure, ensuring the handoff to SES, and more. I will cover it in a little more detail.
- Milestone three is to onboard Curve stETH (crvstETH) through this renewed process that maps all the touchpoints of the different CUs involved in launching collateral; we want to integrate our public communications around collateral onboarding, etc.
- Milestone four is to fully define the criteria for collateral offboarding. This is important for regular business practice. Just because a vault exists and has Dai drawn does not mean that it is necessarily profitable. For example, overhead costs, like Oracles, may make it not worth it. We are skipping milestone five and going straight into six and seven. These are more bonus milestones, but the goal is to fully clean out the stablecoin vaults in the next two quarters: not just TUSD, but the old USDC vaults and more. Lastly, there is a desire and milestone for gas optimizations for the PSM.
- For the first milestone, I created a little legend at the bottom right: a yellow dot means in progress, a blue dot means pending, and green means done.

- In general, these are the big components, each with mapped-out work breakdowns underneath. I will go through it briefly. One is to provide the renewed criteria for onboarding admittance and prioritization for the next six applications. There is also a review of the diagram and process currently hosted at the Collateral Maker.com site. Once the new processes are drafted, we must communicate them to the existing greenlit application authors, and there must be complete process documentation. Then we have to determine the formal role of SES in liquidations and auctions; this is ongoing. According to these new criteria and process, we also need to prioritize collateral types with a two-quarter time horizon.
- Lastly, we need to attach the parallel work that has to happen on the engineering side, like crafting spells and partner relations. That is the general overview. The second milestone is the liquidations infrastructure cleanup. First of all, we need to collect relevant information to determine the pain points; this is currently in progress. We also have to ensure that SES has full ownership over the process of extending auction services; that is also in progress. We also have to ensure relevant infrastructure is up to date and risk-minimized, which was a big part of our work this past week due to Flappy Friday.
- Lastly, we need to clarify and document the auctions and liquidations aspects in the different lifecycle steps. The third milestone is to onboard crvstETH. These components are part of every single onboarding. There is a MIP6 application and a Greenlight Poll; the collateral is chosen as the next priority by the teams. Then there are collateral assessments that have to happen: the Oracle, risk, and smart contract assessments.
- Then, after that happens, there is a governance poll. After the governance poll passes, various implementations have to happen, and that is where there are many cross-CU touchpoints. About a month before the projected launch of collateral, there needs to be an established partner initiative, where we reach out to the collateral partner to organize all of the needed comms and any special promotions. Lastly, there is the final deployment and activation. This is similar to the previous milestones in that there are a lot of overlapping steps; however, this one is a lot more sequential. Maybe it is easier to follow than the other ones.
[40:04](https://youtu.be/SDd6ndcIy9A?t=2404)
- As you can see, we are close. We are in the implementation stage, and it will happen soon, likely at the end of February.
- Milestone four is to define the criteria for collateral offboarding fully. CES's Robert Jordan is facilitating the initial discussions, which are in progress. Other CUs involved are Risk, Oracles, and Growth. There is also an initial draft document in progress to define these criteria. After that, there will be a few feedback rounds that the document goes through, and access to it will be expanded to other stakeholders. Finally, there will be a forum post for public feedback before the final documentation is published.
- Ignore milestone five.
- These are some bonus milestones that we added last week after reviewing the major milestones. Milestone six will fully clean out the stable coin vaults as I covered. Milestone seven is the gas optimizations for the PSM. It is in the backlog, but the goal is to get it done by the end of Q2.
#### Collateral Management Off-Chain (CMOFF) Status Update
[41:30](https://youtu.be/SDd6ndcIy9A?t=2489)
- I also happen to be the coordinator of the collateral management off-chain stakeholder alignment calls, so here is the status update for those. The involved CUs include Real-World Finance, SES, CES, Growth, StarkNet, Risk, and us, GovComms. Here is the overview of the milestones. The goal for the next two quarters is to lead the existing deals through the due diligence pipeline; not all the deals, just the most feasible ones. Milestone two is to graduate the different real-world-asset-related CUs from the SES incubation program.
- Milestone three is to develop shared documentation around leading these deals: what are the common touchpoints? What are the things that we could standardize around these deals? What are the things that are essentially different for each deal?
- Milestone four is a revamp of MIP6 in partnership with CES. A revamp is being planned, in general, for MIP6 applications to do more pre-filtering of applicants so that they know the criteria ahead of time. If they do not fit it, they should not be applying. They should fulfill the criteria before the application hits the forum.
- Milestone five is the infrastructure solution design. This is the technical and user experience side of interacting with these deals. When we set up a real-world asset deal, what is the client experience? What are the UIs they use? What is the backend stuff that has to happen? Let us move into the details.
- To lead existing deals through the due diligence pipeline: the timelines for these deal completions will be withheld until they go through. This is important because significant partners do not want timelines published; that would expose them to insider trading. There are real-world considerations for major partners around being public about timelines for these kinds of things. As a result, they will be withheld.
[44:30](https://youtu.be/SDd6ndcIy9A?t=2670)
- Big partners and deals, especially public companies with sensitive timelines, find it important to withhold timelines. Timelines change, and people trade accordingly. Until a deal is done, MakerDAO will not know about it, but rest assured, progress is happening.
- The two main components of this milestone are the work breakdowns. We have very high granularity on the two major deals we are focusing on, and on all the others to varying degrees. Additionally, much progress has been made on defining all the touchpoints for CUs and other stakeholders involved in each deal. The two primary deals focused on for this milestone are SocGen and Monetalis. There is a parallel set of components for each of these deals. First of all, the business side of the partnership has to be solved; there has to be a point of contact for the partner, and multiple point people depending on the part of the deal. An agreement on the term sheet has to happen. Legal engagement with the partner's counterparties has to happen. Components of the transaction need to be mapped out and documented. After all that, the IT infrastructure integration has to be planned and mapped out.
- While this is happening, usually after the term sheet agreement, the legal engagement with counterparties, and the mapping of the components of the transactions, the risk work can happen. All of these things are in progress for both SocGen and Monetalis. That is the basic overview. The purpose is to educate our stakeholders on the complexity of the work. The second milestone is to graduate the different real-world-asset-related CUs from the SES incubation program; this will be a key step for Maker succeeding in the real-world asset space. We need to upgrade our operations regarding handling these deals. Three main real-world-asset-related Core Units are in the works. The first one is the Real Asset Portfolio Core Unit incubator with Will Remor as facilitator. We also have the Real-World Asset Legal Review Core Unit; its facilitator, TBD, works with Christian Peterson. Lastly, we have a Lending Compliance CU, whose facilitator is still to be confirmed.
- Each of these is in various stages. So the first one is to get the starter agreements in place. This is part of the incubation program itself.
- After the incubation program OKRs are set up, the bootstrapping OKRs are set up. After that, finally, the team publishes their MIP set: the MIP39, MIP40, and MIP41 subproposals. MIP39 is the mandate, MIP40 is the budget, and MIP41 covers the facilitator. Lastly, graduation is marked by the reception of the first payment from the protocol.
- That means that they are ratified, sailing full speed ahead, and no longer a part of the incubation program. Likewise, the Real-World Asset Legal Review Core Unit is at an earlier stage than the previous Core Unit I mentioned; it is in progress, looking for a facilitator candidate. In the meantime, SES has to implement a temporary budget to ensure that it can fund this particular Core Unit. After that, all of the other components have to happen. Finally, there is the Lending Compliance Core Unit with a would-be facilitator; the starter agreements are in place, and it is currently in its incubation program OKRs and bootstrapping OKRs phase. Those are all happening, and the MIP set has not been published yet. Stay tuned for that.
## Team-Led Discussion
### Upgrading Surplus Auctions (FLAPs)
[50:40](https://youtu.be/SDd6ndcIy9A?t=3040)
- David Utrobin: This is a pertinent topic because of the liquidations on Friday. Surplus auctions underperformed and incurred, according to Prose's analysis, an average of 20% slippage overall. The community acted quickly through governance to pause them so that the DAO could take a step back and consider its next steps.
- The next lowest-hanging fruit is the rate limiter. Sam has already built some code for it, which seems to be in the pipeline. The real open question for us is whether Dutch auctions are a high priority. This is what Liquidations 2.0 did for collateral auctions: it converted them from English auctions into Dutch auctions. That same consideration is happening for FLAP auctions. I will pause and give it over to anybody who wants to comment.
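For intuition, here is a minimal sketch of the Dutch-auction mechanic that Liquidations 2.0 introduced for collateral auctions, applied to a hypothetical FLAP. This is illustrative only, not Maker's contract code; the function name, starting price, and decay schedule are all assumptions. The key property is that the ask price starts high and decays until the first keeper accepts, so the auction settles in a single transaction rather than through competitive English bidding.

```python
# Illustrative sketch (not Maker's implementation) of a Dutch auction:
# the Dai-per-MKR ask starts above a reference price and decays each
# step until some keeper is willing to take it.

def dutch_auction_settle(start_price, decay_per_step, keeper_bids):
    """Return (step, price) at which the first keeper accepts.

    start_price    -- initial Dai-per-MKR ask (hypothetical)
    decay_per_step -- multiplicative decay applied each step
    keeper_bids    -- max prices individual keepers will pay
    """
    price = start_price
    best_bid = max(keeper_bids)  # the most aggressive keeper wins
    step = 0
    while price > best_bid:
        price *= decay_per_step
        step += 1
    return step, price

# With a 1% decay per step and a best keeper bid of 90,
# the auction clears after 11 steps at just under 90.
step, price = dutch_auction_settle(100.0, 0.99, [90.0, 85.0])
```

Note that settlement speed and slippage here depend entirely on the decay curve chosen, which is exactly the kind of parameter the actual design discussion would need to resolve.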
#### Options for Resource Investment
[52:35](https://youtu.be/SDd6ndcIy9A?t=3155)

- UltraSchuppi: Over the past couple of months, community sentiment strongly favored doing some burning. I have been advocating for no burn for quite some time, but at some point, I realized there is no majority for doing that. From there, the whole idea of looping back the surplus buffer was born. If we do not have any majority on reverting this plan to something different, like no burning at all, we should default to coming back to the looping option. I do not like this option, but it is the current sentiment. Therefore, I put up the signal to bring back the burn. I would personally like to see a majority within the signal request not to do this. However, there are good things about occasionally doing at least a bit of burn. If we want a bit of burn, we should prepare ourselves, following the rate limiting that Sam put up.
- We have two main questions. Do we stray from our path of the past months and decide to have an insanely high surplus buffer? This would ensure that we do not trigger surplus auctions again. Or do we stick to our current path with some burn? We would then need to prepare ourselves by committing protocol engineers who are working on different things. In the end, there are priority questions. What are our opportunity costs of distracting protocol engineers from their roadmap? We must ask ourselves that question.
[54:49](https://youtu.be/SDd6ndcIy9A?t=3169)
- David Utrobin: Kurt Barry spoke a lot about this in the forum. How do you increase the connection between what MKR holders want, what is wise, and what PE and other engineering teams need to prioritize? Many potentially higher priorities are on the table, even from the Layer 2 initiative update. Over the last few quarters, the risk team has identified this risk with the FLAPs. This was not unexpected; everybody knew this would happen in the event of a large liquidation event. It is an interesting operational and governance question: how do we balance that need for prioritization? Also, how do Core Units push back with reasoning? This is a big aspect of this conversation that potentially scales to other future issues.
- Wouter frames it nicely: the better question is, should we prioritize X, and if we do, what Y should we deprioritize? Should we prioritize Dutch auctions? There is much debate around Dutch auctions. One part of it is the prioritization piece. Another part asks whether it makes sense to do Dutch auctions at all, given that they introduce Oracle costs to the FLAPs.
- Niklas Kunkel: Currently, there are no Oracle costs, and there is less risk because the auctions have no Oracle dependency; that is almost protection against an Oracle attack. With Dutch auctions, though, we would need an MKR Oracle, so there are Oracle costs. Let us factor in at least 300k annually for an MKR Oracle. This can be a bit expensive. However, we can think of this in terms of slippage. Say we have five percent slippage now and expect better performance through Dutch auctions, so instead of five percent, we have two and a half percent slippage. If you take the Oracle cost and divide it by the two-and-a-half-percentage-point slippage improvement, the inflection point is about 12 million. This nets out to an economic advantage if you think you will do burns greater than that scale.
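Niklas's inflection point can be sanity-checked with simple arithmetic, using the figures as quoted on the call (roughly 300k per year for an MKR Oracle, slippage improving from 5% to 2.5%):

```python
# Back-of-the-envelope check of the FLAP inflection point discussed on
# the call. Figures are the ones quoted: ~300k/year MKR Oracle cost,
# slippage improving from 5% to 2.5% under Dutch auctions.

def breakeven_burn_volume(oracle_cost_per_year, slippage_current, slippage_dutch):
    """Annual burn volume above which Dutch auctions pay for the Oracle."""
    savings_per_dai = slippage_current - slippage_dutch  # fraction saved per Dai burned
    return oracle_cost_per_year / savings_per_dai

# 300_000 / (0.05 - 0.025) = 12_000_000 Dai burned per year
volume = breakeven_burn_volume(300_000, 0.05, 0.025)
```

So Dutch auctions only net out economically if annual burn volume exceeds about 12 million Dai, which is the comparison Niklas draws against the current burning scale.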
[58:42](https://youtu.be/SDd6ndcIy9A?t=3522)
- UltraSchuppi: I have been asking for the cap for quite some time already. It is not about the time frame (2-3 months) of when this can come; it is about how many resources we are going to bind with this initiative. If only Derek must do the extra work, I am fine with that. However, I am not okay with taking three engineers out of his team; I am exaggerating again, Derek. The concern is the time frame, the resources we bind, and recognizing the delay. Honestly, I do not think we should burn at all right now. Rather than optimizing for perfect burning, should we rethink our attitude about the burning amount in the first place? One of the questions should be whether to allow some burning to take place at all. Derek said in some forum posts that it would take a couple of weeks until we have the optimized rate-limiting Flapper in place. I am fine with not having a couple of weeks of burning. Is this going to bind engineers full-time? Or is this just waiting for some audits and then spending a couple of days on the results? How much does it hurt Derek to engage in this process? He probably supports it if he says it is reasonable and makes sense. I will wait for his advice.
[1:00:33](https://youtu.be/SDd6ndcIy9A?t=3633)
- Mark: We should not divert any resources to improving the FLAP design but instead be 100% focused on making Maker and Dai a better product. Redesigning it is a waste of time. Figuring out a way to optimize MKR burning does not do anything for our customers and is 100% selfish for Maker holders. It will drive significantly less value long term, especially when we can better allocate that capital by hiring more resources, particularly developers. I am firmly against allocating any significant amount of resources (really any resources at all) to building a more robust burning mechanism.
- David Utrobin: For context, Mark Phillips is the new facilitator of the Strategic Clients Core Unit.
- Mark: I am speaking in a personal capacity, outside of my Core Unit role.
[1:01:46](https://youtu.be/SDd6ndcIy9A?t=3706)
- Niklas Kunkel: Rune will publish his new version of the previous tokenomics revamp, and having looked at it, it does not have a burning mechanism.
- David Utrobin: I have conflicting information there, Nik. Even in the governance chat room, I thought Rune said it preserves burn, but the goal is to alter that mechanism over five to seven years.
- Niklas Kunkel: Interesting, things must have changed. Still, until we know how we are going to change the tokenomics, it does not make sense to invest a bunch of resources into a mechanism that may not even exist anymore in six months to a year. If you invest resources into building something, it should be around for at least a couple of years; it would not make sense to create something that you will discard soon. We can do a band-aid type fix. Rate limiting seems like a good idea for avoiding the 15-20% or worse slippage days, and it completely solves the zero-bid problem. However, it is not worth the resources to completely redesign the Flapper just to get the five percent slippage down to something less. The rate limiter is the low-hanging fruit here: a small resource investment that fixes the core problem to a certain degree.
[1:03:46](https://youtu.be/SDd6ndcIy9A?t=3826)
- Brian McMichael: This was also the first time this had happened. Since Friday, several community members have upgraded existing tools and built new tools to participate in these auctions. So, the real problem was that we had too many auctions go out at once. We showed the community a 20% discount available here, which is essentially offering a coupon to keepers.
#### Related Considerations
[1:04:47](https://youtu.be/SDd6ndcIy9A?t=3887)

- David Utrobin: I hear a common sentiment from the engineering teams and a few key people. Do recognized delegates on the call have any considerations towards this prioritization question? There are also some related considerations. For example, what can improve with the FLAPs besides rate-limiting? Are there parameters that can be adjusted? There is also a topic in the governance channel and other discussions: hiring an external third party not just for OTC market-making but also for managing surplus auctions, perhaps completely off protocol. That is another interesting thread of conversation around this whole topic.
- I wanted to pause there and read what Brian was saying. His point was that new tooling has existed since Friday to manage FLAPs better. After each event, the system battle-hardens itself by bringing in new participants who want to close the alpha gap. This is very true; it is the beauty of an anti-fragile system.
[1:06:22](https://youtu.be/SDd6ndcIy9A?t=3982)
- Unknown: I am in favor of keeping a high surplus buffer. If we want to go back to some burning activity, sending it to a centralized market-making party may sound weird, but it is not a bad idea. They can do more efficient market-making; that is their whole thing. They also charge smaller fees than the three to four percent we lose on the Flapper. This is only if we want to go that route. Preferably, we keep the high surplus buffer and do not worry about it. It is not an issue, and we have higher priorities.
[1:07:45](https://youtu.be/SDd6ndcIy9A?t=4065)
- Kirk: There is a third position between paying some centralized entity and continuing the FLAPs: solving market-making by establishing protocol-owned liquidity. You would buy back MKR through some mechanism, then create Dai-MKR LP positions owned by the protocol on its balance sheet. This has a lot of beneficial effects across the whole ecosystem by making MKR a more liquid asset. That is important because MKR is part of the tokenomics of the Maker system, so subsidizing that liquidity in some way is essential; when the protocol owns it, it can be trustless, with no third party to interface with. It would be less capital efficient than a centralized market maker and does not touch centralized exchanges. However, there is an active arbitrage community looking to extract value from the chain, so arbitrage against centralized exchanges will take care of itself if there is deep on-chain liquidity. People should keep that in mind as a third option on the market-making front: using the surplus buffer for MKR liquidity.
[1:09:35](https://youtu.be/SDd6ndcIy9A?t=4175)
- Chris Mooney: Some of the mandated actors have heard my arguments about this. There are no dumpster fires to put out, but there are a lot of comments to address here. In this case, we will be better off staying focused on the actual problem, too many auctions going off at once, and fixing it. How do we fix that? We rate-limit. There is almost no downside to rate-limiting because, unlike a liquidation, delaying a surplus auction does not take losses. It is the lowest-hanging fruit to implement, and we also do not drag in the additional burning questions. I would put up a passionate defense of some amount of burn because our game theory is based on it, but I do not want to drag in that additional argument.
- The mandated actors heard me make this argument in response to Jasu’s comments. I think even Rune and Sam are in favor of this. We should not pick a direction where we take protocol elements and put them on more humans. That breeds centralization and creates weakness. What we want to do is keep pushing towards automation and decentralization. We are so close on this; just a couple of tweaks and it is okay. It does not make sense to farm this out to some human. I like to think of the protocol as being like the Hoover Dam. Some neutron bomb goes off, and all the humans die; how long after that does the Hoover Dam continue to operate? Likewise, after some attack, how long can the protocol continue to operate with complete automation? I like thinking about that as a way to drive future development. This also frees our time to work on new stuff like L2 expansion.
[1:11:59](https://youtu.be/SDd6ndcIy9A?t=4319)
- Unknown: I will take the other side, the case for a centralized market maker. Even with this fix, we are taking three to four percent slippage. That does not sound like a lot, but it adds up to millions in losses. If we want to turn on some amount of burn, ideological purity about avoiding centralized parties in all aspects can be damaging; that is the bottom line. The FLAP is not about trustlessness and decentralization; the goal of it is to burn the most MKR. If shipping the surplus to a centralized party in a streaming fashion will burn more MKR, I think that is a good thing. We must look at what our risk exposure is. In the worst-case scenario, they start stealing Dai. Then we turn it off. We can do that with a two-day delay, so the most they can steal is about 200k. We are losing that amount already just through the FLAPs' inefficiency.
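The worst-case exposure argument above is simple arithmetic: with a streamed payment, the counterparty can only take what accrues during the governance delay. The daily streaming amount below is a hypothetical figure chosen to match the roughly 200k two-day exposure mentioned on the call, not a proposed parameter:

```python
# Worst-case exposure when streaming surplus Dai to an external market maker.
# The stream rate is a hypothetical illustration, not a real protocol value.
daily_stream = 100_000       # assumed Dai streamed to the market maker per day
governance_delay_days = 2    # time needed for a spell to cut off the stream

# If the counterparty turns malicious, the most they can take before
# governance shuts off the stream is bounded by:
max_exposure = daily_stream * governance_delay_days
print(max_exposure)  # 200,000 Dai
```

The design point is that the governance delay, not the counterparty's honesty, bounds the loss, which is why the speaker compares it to losses already incurred through auction slippage.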
[1:13:54](https://youtu.be/SDd6ndcIy9A?t=4434)
- Chris: I support what Chris Mooney said and will add a clarifying point. The more nuanced problem is that too many auctions hit the market simultaneously, and there was not enough time to react to them. Something is not being discussed here: increasing the auction duration would have given us (and the market) more time to react. In addition to the rate-limiting, that is a small tweak requiring essentially zero engineering effort beyond writing the spell, and it would help harden this aspect. Then we can focus on efficiency, the surplus buffer, and using our resources for growth outside of this one-off emergency, while alleviating the problems that caused this particular incident.
- Kirk: I need to jump in and provide a counterpoint. It was not just a problem that too many auctions went out at once. The English auction mechanism for FLAPs is fundamentally sub-optimal for the blockchain environment. It has broken game theory that is centralizing and encourages winner-take-all dynamics, even if you limit the rate at which auctions go out. That is a rather technical discussion for a call like this, so I am posting a thread in the chat from the forum. I encourage everyone who wants to have an opinion on this to read every single reply under that thread and understand the entire discussion of how the FLAP auction mechanism is fundamentally centralizing, anti-competitive, and broken. A compromise band-aid that trickles auctions out is okay, because some people insist on burning and others do not. However, I push back on the idea that the only problem is that too many auctions went out at once. Too many went out at once, and we also do not have a scalable or effective long-term mechanism.
[1:16:22](https://youtu.be/SDd6ndcIy9A?t=4582)
- Mark: I have several points to make on this subject. First, with Rune likely releasing the new tokenomics, I do not see any rush to get back to burning immediately; is this even still relevant? If the community loves the new tokenomics, we will need some time to assess which direction to go. We also need to take a step back and think about what drives value to Maker. To me, it is our long-term cash flow generation. In the stock market, you will see cyclical companies like auto, steel, and other commodities with extremely low PE ratios because they go through boom-and-bust cycles, just like crypto today. The income fluctuates significantly. You may see Ford with a PE ratio of six at the peak of the cyclical market. Simultaneously, you have businesses not affected by market cycles, such as Procter & Gamble and Costco, with a PE ratio of 25 or 30 in normal times.
- Part of the issue with Maker today is that our income relies on these boom-and-bust cycles in crypto. This may or may not be changing. Regardless, one of the biggest things we need to prioritize is investing in real-world assets, which will require a lot of capital. Firstly, real-world assets have a total addressable market in the hundreds of trillions. Second, they are entirely uncorrelated with crypto markets. The valuation will increase when you have income streams that are not correlated; there will be more stability in the income that drives the protocol. I hate to see us burn MKR today when there are so many different things we can reinvest in. Chris and Kurt made some comments in the forum about items they would love to do today, but we do not have the resources on the technical side. I favor increasing the surplus buffer because that will allow us to invest more in real-world assets and take on more risk when onboarding them.
- Luca: I wanted to double down on your two points. Firstly, what you said on the corporate finance side is essential. The simplest way for non-experts in corporate finance to think about it: by paying out capital outside the protocol, we implicitly admit that we do not have better ways to use that capital internally. This is not the case for a protocol with growth potential, such as Maker. As Mark suggested, we could use that capital in many ways to expand the protocol's revenues. Now I move on to my other point. In mature markets, you can buy back and expect a positive impact on price; crypto markets are not there yet. Our current effort is insufficient to sustain the price and is not a good use of capital at this stage of the crypto markets.
- I agree with Mark's second point regarding real-world finance. Firstly, we have huge potential to grow in several avenues that will require capital, so it is better to keep that capital within the protocol. Secondly, we should not forget that we are running a currency, and we have seen what happens to a currency that keeps a very thin buffer, like Abracadabra, for example. It is much better to sacrifice some yield for MKR holders and keep some extra buffer inside; it is not even a real sacrifice yet, because the markets are not mature enough to appreciate us buying back the token. Our credibility as a protocol is our number-one asset. All the elements favor keeping these resources within the protocol: it ensures a healthier buffer and ultimately benefits MKR holders.
[1:21:38](https://youtu.be/SDd6ndcIy9A?t=4898)
- Makerman: I want to echo Luca here. Companies buy back their stock because they have nothing better to invest in. This is particularly true of dividend companies because they will pay themselves dividends, loosely speaking. However, it is also about earnings: you buy back when your PE is low and sell when your PE is high. Buying back MKR is not helping the price. This is a political issue more than anything else. To Chris's point, building the surplus buffer here is good for Maker holders because it means we are covering ourselves in a loss event, in the loosest sense; we are building cash as an asset against dilution related to our loss risk profile. As Luca and Mark said, we have better investments that can net us higher returns than even the excellent PE that Maker has right now. We need to do this when times are good and bank some cash for when times are bad. It is a real financial thing, but it is becoming a political issue; these two framings are battling quite strongly, and I would like to settle it. I am also sensitive to the comments about how we do not have the engineering capacity. Fine, then let us not burn and build cash instead. If we accumulate lots of cash, we can decide what to do with it when we have the engineering time. If the environment turns deficient, we will not have a lot of new things to work on, and we can fix stuff in the protocol then.
- Chris: Growing the surplus buffer helps us grow revenue. This is long-term thinking; burning MKR is short-sighted thinking. Based on this discussion, I feel there is a need for a shared long-term plan for the whole DAO. Our burn-versus-surplus-buffer strategy should be set for the next year, not two months, and all the teams need to plan appropriately so this does not change too often. We are missing something like this: why would you work on better FLAP auctions if they will not be used next year?
## Conclusion
### David Utrobin
[1:24:32](https://youtu.be/SDd6ndcIy9A?t=5072)
- I want to note the time; it is 30 minutes past the top of the hour. I want to thank everybody who came to the call and those who participated in the discussion. Thanks to Derek for putting together the awesome update. I will stop the recording here, but I will keep the room open for maybe another 10-15 minutes for people wanting to continue the conversation.
[Suggestion Box](https://app.suggestionox.com/r/GovCallQs)
## Common Abbreviated Terms
`CR`: Collateralization Ratio
`DC`: Debt Ceiling
`ES`: Emergency Shutdown
`SF`: Stability Fee
`DSR`: Dai Savings Rate
`MIP`: Maker Improvement Proposal
`OSM`: Oracle Security Module
`LR`: Liquidation Ratio
`RWA`: Real-World Asset
`RWF`: Real-World Finance
`SC`: Smart Contracts
`Liq`: Liquidations
`CU`: Core Unit
## Credits
- Andrea Suarez produced this summary.
- Artem Gordon produced this summary.
- Larry Wu produced this summary.
- Everyone who spoke and presented on the call listed in the headers.